[
{
"msg_contents": "We had a discussion in October about adding more optimizer items to the\nrelease notes:\n\n\thttps://www.postgresql.org/message-id/flat/20181010220601.GA7807%40momjian.us#11d805ea0b0fcd0552dfa99251417cc1\n\nThere was no agreement on a change, but if people want to propose a\nchange, please post here and we can discuss it.\n\n-- \n Bruce Momjian <bruce@momjian.us> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n+ As you are, so once was I. As I am, so you will be. +\n+ Ancient Roman grave inscription +\n\n\n",
"msg_date": "Mon, 22 Apr 2019 12:54:51 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": true,
"msg_subject": "Optimizer items in the release notes"
},
{
"msg_contents": "On 4/22/19 6:54 PM, Bruce Momjian wrote:\n> We had a discussion in October about adding more optimizer items to the\n> release notes:\n> \n> \thttps://www.postgresql.org/message-id/flat/20181010220601.GA7807%40momjian.us#11d805ea0b0fcd0552dfa99251417cc1\n> \n> There was no agreement on a change, but if people want to propose a\n> change, please post here and we can discuss it.\n> \n\nHello,\n\nThanks, Bruce, for starting this thread.\n\nI still think it is useful to mention changes in the optimizer, for \nseveral reasons:\n\n- it helps to understand why a plan can change between different major \nversions. I don't have an example, but an improvement for some users \ncould be a regression for others.\n- knowing about optimizer improvements can motivate users to upgrade to newer \nversions\n- it helps to find bugs between major versions\n\n\nRegards,\n\n\n",
"msg_date": "Wed, 24 Apr 2019 13:08:53 +0200",
"msg_from": "Adrien NAYRAT <adrien.nayrat@anayrat.info>",
"msg_from_op": false,
"msg_subject": "Re: Optimizer items in the release notes"
},
{
"msg_contents": "On Tue, 23 Apr 2019 at 04:54, Bruce Momjian <bruce@momjian.us> wrote:\n>\n> We had a discussion in October about adding more optimizer items to the\n> release notes:\n>\n> https://www.postgresql.org/message-id/flat/20181010220601.GA7807%40momjian.us#11d805ea0b0fcd0552dfa99251417cc1\n>\n> There was no agreement on a change, but if people want to propose a\n> change, please post here and we can discuss it.\n\nI'd say these sorts of changes are important. TBH, these days, I think\nquery planner smarts are arguably one of our weakest areas when\ncompared to the big commercial databases. The more we can throw in\nthere about this sort of thing the better. The strange thing about\nmaking improvements to the planner is often that the benefits of doing\nso can range massively, e.g. from zero effect all the way up to perhaps\nthousands or even millions of times faster. The users that get the 1\nmillion times speedup will likely want to know that more than some\nexecutor speedup that gets them 5% across the board. I believe these\nare useful to keep in the release notes to catch the eye of all those\npeople blocked from upgrading from <commercial database> due to us not\noptimising their queries the same way as they and their applications\nare accustomed to.\n\nI see from the v11 release notes that we have \"E.3.3.1.4. Optimizer\";\nit seems fairly simple for someone to skip this if they happen not to\nbe interested in what's been changed in that area.\n\n-- \n David Rowley http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n\n",
"msg_date": "Thu, 25 Apr 2019 00:43:31 +1200",
"msg_from": "David Rowley <david.rowley@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Optimizer items in the release notes"
},
{
"msg_contents": "As a user, I am interested in the optimizer changes for sure, and I\nactually had wished they were highlighted more in previous releases.\n\n> I think planner smarts are arguably one of our weakest areas when\n> compared to the big commercial databases. The more we can throw in\n> there about this sort of thing the better.\n\nCompletely agree on both fronts. I have run into numerous optimizations I\nhad taken for granted when I worked primarily with SQL Server that were not\npresent in Postgres.\nWork being done to make the Postgres optimizer smarter is great, as is\nhighlighting that work in the release notes IMO.",
"msg_date": "Wed, 24 Apr 2019 14:46:15 -0400",
"msg_from": "Adam Brusselback <adambrusselback@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Optimizer items in the release notes"
},
{
"msg_contents": "On Wed, Apr 24, 2019 at 02:46:15PM -0400, Adam Brusselback wrote:\n> As a user, I am interested in the optimizer changes for sure, and I\n> actually had wished they were highlighted more in previous releases.\n>\n> > I think planner smarts are arguably one of our weakest areas when\n> > compared to the big commercial databases. The more we can throw in\n> > there about this sort of thing the better.\n>\n> Completely agree on both fronts. I have run into numerous\n> optimizations I had taken for granted when I worked primarily with SQL\n> Server and were not present in Postgres. Work being done to make the\n> Postgres optimizer smarter is great, as is highlighting that work in\n> the release notes IMO.\n\nThis thread highlights the challenges of having optimizer items in the\nrelease notes:\n\n* They often apply to only a small percentage of queries\n\n* They are hard to explain\n\nI see the argument as wanting vague warm and fuzzy feelings that we are\nimproving the optimizer, which we are. I will see what I can do to get\nthose ideas into the PG 12 release notes in as concrete a way as\npossible.\n\n-- \n Bruce Momjian <bruce@momjian.us> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n+ As you are, so once was I. As I am, so you will be. +\n+ Ancient Roman grave inscription +\n\n\n",
"msg_date": "Fri, 26 Apr 2019 19:49:14 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": true,
"msg_subject": "Re: Optimizer items in the release notes"
},
{
"msg_contents": "On Sat, 27 Apr 2019 at 11:49, Bruce Momjian <bruce@momjian.us> wrote:\n>\n> On Wed, Apr 24, 2019 at 02:46:15PM -0400, Adam Brusselback wrote:\n> > As a user, I am interested in the optimizer changes for sure, and I\n> > actually had wished they were highlighted more in previous releases.\n> >\n> > > I think planner smarts are arguably one of our weakest areas when\n> > > compared to the big commercial databases. The more we can throw in\n> > > there about this sort of thing the better.\n> >\n> > Completely agree on both fronts. I have run into numerous\n> > optimizations I had taken for granted when I worked primarily with SQL\n> > Server and were not present in Postgres. Work being done to make the\n> > Postgres optimizer smarter is great, as is highlighting that work in\n> > the release notes IMO.\n>\n> This thread highlights the challenges of having optimizer items in the\n> release notes:\n>\n> * They often apply to only a small percentage of queries\n\nThat can often be true, but as I mentioned, it's quite common that\nthe queries that the changes do affect see massive performance\nimprovements. Some patch that gave us 5% across the board is unlikely\nto unblock someone from migrating to PostgreSQL, but some query\nplanner smarts that reduce query times by orders of magnitude could\nunblock someone.\n\n> * They are hard to explain\n\nThat can be true, but we generally get there if not the first time\nthen after a few iterations. Authors and committers of the\nimprovements are likely to be able to help find suitable wording.\n\n> I see the argument as wanting vague warm and fuzzy feelings that we are\n> improving the optimizer, which we are.\n\nNot sure where the warm and fuzzy argument is from. The point I tried\nto make was that if we're making changes to PostgreSQL that are likely\ngoing to be useful to people, then we likely should put them in the\nrelease notes. It was my understanding that this was why major version\nrelease notes were useful.\n\n> I will see what I can do to get\n> those ideas into the PG 12 release notes in as concrete a way as\n> possible.\n\nI think the current process is really good. You take on the hard task\nof drafting them up, for which everyone is very grateful, as it's a\npretty tedious job. Various people that might have been closer to the\nactual work done for certain items then suggest improvements.\n\n-- \n David Rowley http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n\n",
"msg_date": "Sat, 27 Apr 2019 14:04:33 +1200",
"msg_from": "David Rowley <david.rowley@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Optimizer items in the release notes"
},
{
"msg_contents": "On Sat, Apr 27, 2019 at 02:04:33PM +1200, David Rowley wrote:\n> On Sat, 27 Apr 2019 at 11:49, Bruce Momjian <bruce@momjian.us> wrote:\n> >\n> > On Wed, Apr 24, 2019 at 02:46:15PM -0400, Adam Brusselback wrote:\n> > > As a user, I am interested in the optimizer changes for sure, and I\n> > > actually had wished they were highlighted more in previous releases.\n> > >\n> > > > I think planner smarts are arguably one of our weakest areas when\n> > > > compared to the big commercial databases. The more we can throw in\n> > > > there about this sort of thing the better.\n> > >\n> > > Completely agree on both fronts. I have run into numerous\n> > > optimizations I had taken for granted when I worked primarily with SQL\n> > > Server and were not present in Postgres. Work being done to make the\n> > > Postgres optimizer smarter is great, as is highlighting that work in\n> > > the release notes IMO.\n> >\n> > This thread highlights the challenges of having optimizer items in the\n> > release notes:\n> >\n> > * They often apply to only a small percentage of queries\n> \n> That can often be true, but as I mentioned, it's quite common that\n> the queries that the changes do affect see massive performance\n> improvements. Some patch that gave us 5% across the board is unlikely\n> to unblock someone from migrating to PostgreSQL, but some query\n> planner smarts that reduce query times by orders of magnitude could\n> unblock someone.\n\nThe problem is that it is rare, and hard to explain, so it is almost\nimpossible for an average user to have any hope of guessing whether it will\nhelp them.\n\n> > * They are hard to explain\n> \n> That can be true, but we generally get there if not the first time\n> then after a few iterations. Authors and committers of the\n> improvements are likely to be able to help find suitable wording.\n\nIt is not the text that is hard, but explaining the concept in a\nway that relates to anything a normal user is familiar with.\n\n> > I see the argument as wanting vague warm and fuzzy feelings that we are\n> > improving the optimizer, which we are.\n> \n> Not sure where the warm and fuzzy argument is from. The point I tried\n> to make was that if we're making changes to PostgreSQL that are likely\n> going to be useful to people, then we likely should put them in the\n> release notes. It was my understanding that this was why major version\n> release notes were useful.\n\nIt is for generally-common user behavior changes. If we just repeat the\ncommit logs, it will be much less readable.\n\n> > I will see what I can do to get\n> > those ideas into the PG 12 release notes in as concrete a way as\n> > possible.\n> \n> I think the current process is really good. You take on the hard task\n> of drafting them up, for which everyone is very grateful, as it's a\n> pretty tedious job. Various people that might have been closer to the\n> actual work done for certain items then suggest improvements.\n\nYes, that has been the plan.\n\n-- \n Bruce Momjian <bruce@momjian.us> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n+ As you are, so once was I. As I am, so you will be. +\n+ Ancient Roman grave inscription +\n\n\n",
"msg_date": "Fri, 26 Apr 2019 22:22:26 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": true,
"msg_subject": "Re: Optimizer items in the release notes"
},
{
"msg_contents": "On Sat, 27 Apr 2019 at 14:22, Bruce Momjian <bruce@momjian.us> wrote:\n> > > * They are hard to explain\n> >\n> > That can be true, but we generally get there if not the first time\n> > then after a few iterations. Authors and committers of the\n> > improvements are likely to be able to help find suitable wording.\n>\n> It is not the text that is hard, but hard to explain the concept in a\n> way that relates to anything a normal user is familiar with.\n\nYeah, that's no doubt often going to be a struggle, but we can't\nexpect every person who reads the release notes to understand\neverything. You could probably say the same for any internal\nimplementation change we make though. I don't think the planner is\nunique in this regard, so I don't think it needs any special\ntreatment. I also don't think we need to go into great detail. The\nitem could be as light on detail as:\n\n* Improve query planner's ability to push LIMIT through Sort nodes.\n\nIf you don't know what LIMIT is or what a Sort node is, then you're\nprobably not going to care about the change. They can keep reading on\nthe next line, but if the reader happens to have suffered some pain on\nthat during their migration attempt, then they might be quite happy to\nsee those words. If they want more details then they might be savvy\nenough to hunt those down, or perhaps they'll come asking.\n\n-- \n David Rowley http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n\n",
"msg_date": "Sat, 27 Apr 2019 14:47:44 +1200",
"msg_from": "David Rowley <david.rowley@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Optimizer items in the release notes"
},
{
"msg_contents": "On Sat, Apr 27, 2019 at 02:47:44PM +1200, David Rowley wrote:\n> On Sat, 27 Apr 2019 at 14:22, Bruce Momjian <bruce@momjian.us> wrote:\n> > > > * They are hard to explain\n> > >\n> > > That can be true, but we generally get there if not the first time\n> > > then after a few iterations. Authors and committers of the\n> > > improvements are likely to be able to help find suitable wording.\n> >\n> > It is not the text that is hard, but hard to explain the concept in a\n> > way that relates to anything a normal user is familiar with.\n> \n> Yeah, that's no doubt often going to be a struggle, but we can't\n> expect every person who reads the release notes to understand\n> everything. You could probably say the same for any internal\n> implementation change we make though. I don't think the planner is\n> unique in this regard, so I don't think it needs any special\n\nI do believe the planner is unique in this regard in the sense that the\nchanges can make 100x difference in performance, and it is often unclear\nfrom the user interface exactly what is happening.\n\n> treatment. I also don't think we need to go into great detail. The\n> item could be as light on detail as:\n> \n> * Improve query planner's ability to push LIMIT through Sort nodes.\n> \n> If you don't know what LIMIT is or what a Sort node is, then you're\n> probably not going to care about the change. They can keep reading on\n> the next line, but if the reader happens to have suffered some pain on\n> that during their migration attempt, then they might be quite happy to\n> see those words. If they want more details then they might be savvy\n> enough to hunt those down, or perhaps they'll come asking.\n\nUh, that is not clear to me, so I am not sure the average user would know\nabout it. I would probably try to explain that one in a way that can be\nunderstood, like LIMIT in subqueries affecting the outer query sort\nperformance.\n\n-- \n Bruce Momjian <bruce@momjian.us> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n+ As you are, so once was I. As I am, so you will be. +\n+ Ancient Roman grave inscription +\n\n\n",
"msg_date": "Sat, 27 Apr 2019 07:40:49 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": true,
"msg_subject": "Re: Optimizer items in the release notes"
}
]
[
{
"msg_contents": " Hi,\n\nWhen FETCH_COUNT is set, queries combined in a single request don't work\nas expected:\n\n \\set FETCH_COUNT 10\n select pg_sleep(2) \\; select 1;\n\nNo result is displayed, the pg_sleep(2) is not run, and no error\nis shown. That's disconcerting.\n\nThe sequence that is sent under the hood is:\n\n#1 BEGIN\n#2 DECLARE _psql_cursor NO SCROLL CURSOR FOR\n\tselect pg_sleep(2) ; select 1;\n#3 CLOSE _psql_cursor\n#4 ROLLBACK\n\nThe root problem is in deciding that a statement can be run\nthrough a cursor if the query text starts with \"select\" or \"values\"\n(in is_select_command() in common.c), but not knowing about multiple\nqueries in the buffer, which are not compatible with the cursor approach.\n\nWhen sending #2, psql expects the PQexec(\"DECLARE...\") to yield a\nPGRES_COMMAND_OK, but it gets a PGRES_TUPLES_OK instead. Given\nthat, it abandons the cursor, rolls back the transaction (if\nit opened it), and clears out the results of the second select\nwithout displaying them.\n\nIf there was already a transaction open, the problem is worse because\nit doesn't roll back and we're silently missing an SQL statement that\nwas possibly meant to change the state of the data, as in\n BEGIN; SELECT compute_something() \\; select get_results(); END;\n\nDoes anyone have thoughts about how to fix this?\nATM I don't see a plausible fix that does not involve having the parser\nstore the information that it's a multiple-query command and pass\nit down somehow to is_select_command().\nOr a more modern approach could be to give up on the\ncursor-based method in favor of PQsetSingleRowMode().\nThat might be too big a change for a bug fix though.\n\n\nBest regards,\n-- \nDaniel Vérité\nPostgreSQL-powered mailer: http://www.manitou-mail.org\nTwitter: @DanielVerite\n\n\n",
"msg_date": "Mon, 22 Apr 2019 19:03:37 +0200",
"msg_from": "\"Daniel Verite\" <daniel@manitou-mail.org>",
"msg_from_op": true,
"msg_subject": "Trouble with FETCH_COUNT and combined queries in psql"
},
{
"msg_contents": "\nBonjour Daniel,\n\n> When FETCH_COUNT is set, queries combined in a single request don't work\n> as expected:\n>\n> \\set FETCH_COUNT 10\n> select pg_sleep(2) \\; select 1;\n>\n> No result is displayed, the pg_sleep(2) is not run, and no error\n> is shown. That's disconcerting.\n\nIndeed.\n\n> Does anyone have thoughts about how to fix this?\n\n> ATM I don't see a plausible fix that does not involve the parser\n> to store the information that it's a multiple-query command and pass\n> it down somehow to is_select_command().\n\nThe lexer (not parser) is called by psql to know where the query stops \n(i.e. waiting for \";\"), so it could indeed know whether there are several \nqueries.\n\nI added some stuff to extract embedded \"\\;\" for pgbench \"\\cset\", which has \nbeen removed though, but it is easy to add back a detection of \"\\;\", and \nalso to detect select. If the position of the last select is known, the \ncursor can be declared in the right place, which would also solve the \nproblem.\n\n> Or a more modern approach could be to give up on the cursor-based method \n> in favor of PQsetSingleRowMode().\n\nHmmm. I'm not sure that row count is available under this mode? ISTM that \nthe FETCH_COUNT stuff should really batch fetching result by this amount.\nI'm not sure of the 1 by 1 row approach.\n\n-- \nFabien.\n\n\n",
"msg_date": "Mon, 22 Apr 2019 19:42:38 +0200 (CEST)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": false,
"msg_subject": "Re: Trouble with FETCH_COUNT and combined queries in psql"
},
{
"msg_contents": "\tFabien COELHO wrote:\n\n> I added some stuff to extract embedded \"\\;\" for pgbench \"\\cset\", which has \n> been removed though, but it is easy to add back a detection of \"\\;\", and \n> also to detect select. If the position of the last select is known, the \n> cursor can be declared in the right place, which would also solve the \n> problem.\n\nThanks, I'll extract the necessary bits from your patch.\nI don't plan to go as far as injecting a DECLARE CURSOR inside\nthe query, but rather just forbid the use of the cursor in\nthe combined-queries case.\n\n\nBest regards,\n-- \nDaniel Vérité\nPostgreSQL-powered mailer: http://www.manitou-mail.org\nTwitter: @DanielVerite\n\n\n",
"msg_date": "Tue, 23 Apr 2019 11:46:44 +0200",
"msg_from": "\"Daniel Verite\" <daniel@manitou-mail.org>",
"msg_from_op": true,
"msg_subject": "Re: Trouble with FETCH_COUNT and combined queries in psql"
},
{
"msg_contents": "\"Daniel Verite\" <daniel@manitou-mail.org> writes:\n> \tFabien COELHO wrote:\n>> I added some stuff to extract embedded \"\\;\" for pgbench \"\\cset\", which has \n>> been removed though, but it is easy to add back a detection of \"\\;\", and \n>> also to detect select. If the position of the last select is known, the \n>> cursor can be declared in the right place, which would also solve the \n>> problem.\n\n> Thanks, I'll extract the necessary bits from your patch.\n> I don't plan to go as far as injecting a DECLARE CURSOR inside\n> the query, but rather just forbid the use of the cursor in\n> the combined-queries case.\n\nKeep in mind that a large part of the reason why the \\cset patch got\nbounced was exactly that its detection of \\; was impossibly ugly\nand broken. Don't expect another patch using the same logic to\nget looked on more favorably.\n\nI'm not really sure how far we should go to try to make this case \"work\".\nTo my mind, use of \\; in this way represents an intentional defeat of\npsql's algorithms for deciding where end-of-query is. If that ends up\nin behavior you don't like, ISTM that's your fault not psql's.\n\nHaving said that, I did like the idea of maybe going over to\nPQsetSingleRowMode instead of using an explicit cursor. That\nwould represent a net decrease of cruftiness here, instead of\nlayering more cruft on top of what's already a mighty ugly hack.\n\nHowever ... that'd require using PQsendQuery, which means that the\ncase at hand with \\; would result in a server error rather than\nsurprising client-side behavior. Is that an acceptable outcome?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 23 Apr 2019 09:56:39 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Trouble with FETCH_COUNT and combined queries in psql"
},
{
"msg_contents": "\nHello Tom,\n\n> Keep in mind that a large part of the reason why the \\cset patch got\n> bounced was exactly that its detection of \\; was impossibly ugly\n> and broken. Don't expect another patch using the same logic to\n> get looked on more favorably.\n\nAlthough I do not claim that my implementation was very good, I must admit \nthat I'm not sure why there would be an issue if the lexer API is made \naware of the position of embedded \\;, if this information is useful for a \nfeature.\n\n> I'm not really sure how far we should go to try to make this case \"work\".\n> To my mind, use of \\; in this way represents an intentional defeat of\n> psql's algorithms for deciding where end-of-query is.\n\nI do not understand: the lexer knows where all queries end; it just does \nnot say so currently?\n\n> If that ends up in behavior you don't like, ISTM that's your fault not \n> psql's.\n\nISTM that the user is not responsible for non-orthogonal features provided \nby psql or pgbench (we implemented this great feature, but not in all \ncases).\n\n> Having said that, I did like the idea of maybe going over to\n> PQsetSingleRowMode instead of using an explicit cursor. That\n> would represent a net decrease of cruftiness here, instead of\n> layering more cruft on top of what's already a mighty ugly hack.\n\nPossibly, but, without having looked precisely at the implementation, I'm \nafraid that it would result in more messages. Maybe I'm wrong. Also, I'm \nunsure how returning new PGresults for each row (as I understood the mode \nfrom the doc) would interact with \\;, i.e. how to know whether the current \nquery has changed. Maybe all this has a simple answer.\n\n> However ... that'd require using PQsendQuery, which means that the\n> case at hand with \\; would result in a server error rather than\n> surprising client-side behavior. Is that an acceptable outcome?\n\nI've sent a patch to replace the existing PQexec by PQsendQuery for a not \ndirectly related feature, see https://commitfest.postgresql.org/23/2096/\n\n-- \nFabien.\n\n\n",
"msg_date": "Tue, 23 Apr 2019 19:30:38 +0200 (CEST)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": false,
"msg_subject": "Re: Trouble with FETCH_COUNT and combined queries in psql"
},
{
"msg_contents": "Tom Lane wrote:\n\n> Keep in mind that a large part of the reason why the \\cset patch got\n> bounced was exactly that its detection of \\; was impossibly ugly\n> and broken. Don't expect another patch using the same logic to\n> get looked on more favorably.\n\nLooking at the end of the discussion about \\cset, it seems what\nyou were against was not so much how the detection was done as how\nand why it was used thereafter.\n\nIn the case of the present bug, we just need to know whether there\nare any \\; query separators in the command string.\nIf yes, then SendQuery() doesn't get to use the cursor technique to\navoid any risk with that command string, despite FETCH_COUNT>0.\n\nPFA a simple POC patch implementing this.\n\n> Having said that, I did like the idea of maybe going over to\n> PQsetSingleRowMode instead of using an explicit cursor. That\n> would represent a net decrease of cruftiness here, instead of\n> layering more cruft on top of what's already a mighty ugly hack.\n\nIt would also work with queries that start with a CTE, and queries\nlike (UPDATE/INSERT/DELETE.. RETURNING), that the current way\nwith the cursor cannot handle. But that looks like a project for PG13,\nwhereas a fix like the attached could be backpatched.\n\n\nBest regards,\n-- \nDaniel Vérité\nPostgreSQL-powered mailer: http://www.manitou-mail.org\nTwitter: @DanielVerite",
"msg_date": "Tue, 23 Apr 2019 20:58:05 +0200",
"msg_from": "\"Daniel Verite\" <daniel@manitou-mail.org>",
"msg_from_op": true,
"msg_subject": "Re: Trouble with FETCH_COUNT and combined queries in psql"
},
{
"msg_contents": "\n>> Keep in mind that a large part of the reason why the \\cset patch got\n>> bounced was exactly that its detection of \\; was impossibly ugly\n>> and broken. Don't expect another patch using the same logic to\n>> get looked on more favorably.\n>\n> Looking at the end of the discussion about \\cset, it seems what\n> you were against was not much how the detection was done rather\n> than how and why it was used thereafter.\n>\n> In the case of the present bug, we just need to know whether there\n> are any \\; query separators in the command string.\n> If yes, then SendQuery() doesn't get to use the cursor technique to\n> avoid any risk with that command string, despite FETCH_COUNT>0.\n>\n> PFA a simple POC patch implementing this.\n\nIndeed it does not look that bad.\n\nNote a side effect of simply counting: \"SELECT 1 \\; ;\" is detected as \ncompound, but an internal optimization would result in only one \nresult, as the empty query is removed, so the cursor trick would work.\n\nIn some earlier version, not sure whether I sent it, I tried to keep their \npositions with an int array and detect empty queries, which was a lot of \n(ugly) effort.\n\n-- \nFabien.\n\n\n",
"msg_date": "Tue, 23 Apr 2019 22:07:05 +0200 (CEST)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": false,
"msg_subject": "Re: Trouble with FETCH_COUNT and combined queries in psql"
}
]
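Daniel's proposed fix boils down to one question: does the command buffer contain any embedded `\;` separators outside of quoted text? The sketch below is a toy approximation of that check (the function name and the simplified quoting rules are illustrative assumptions, not psql's actual lexer, which also handles comments, dollar quoting, doubled quotes, and more):

```python
# Toy sketch, NOT psql's real lexer (psqlscan.l): count backslash-semicolon
# separators in a command buffer, ignoring any that appear inside single- or
# double-quoted text. If the count is nonzero, the cursor-based FETCH_COUNT
# technique is unsafe for this buffer.
def count_separators(buf: str) -> int:
    count = 0
    in_squote = in_dquote = False
    i = 0
    while i < len(buf):
        c = buf[i]
        if in_squote:
            in_squote = c != "'"      # leave single-quoted region on '
        elif in_dquote:
            in_dquote = c != '"'      # leave double-quoted region on "
        elif c == "'":
            in_squote = True
        elif c == '"':
            in_dquote = True
        elif c == "\\" and i + 1 < len(buf) and buf[i + 1] == ";":
            count += 1
            i += 1                    # skip the ';' so it isn't rescanned
        i += 1
    return count

# The buggy case from the report: one embedded separator, so a SendQuery()
# built on this check would skip the DECLARE-cursor path.
print(count_separators(r"select pg_sleep(2) \; select 1"))  # 1
print(count_separators(r"select '\;'"))                     # 0 (quoted)
```

With a count like this available, the client could simply refuse the cursor technique whenever it is nonzero, which is the spirit of the POC patch discussed above.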
[
{
"msg_contents": "Here's a small number of translatability tweaks to the error messages in\nthe backend. I found these while updating the Spanish translation over\nthe past weekend, a task I had neglected for two or three years, so they\nmight involve some older messages. However, I won't backpatch this --\nonly apply to pg12 tomorrow or so.\n\n(I haven't gone over the full backend message catalog yet, so I\nmight propose more fixes later on.)\n\n-- \nÁlvaro Herrera http://www.twitter.com/alvherre",
"msg_date": "Mon, 22 Apr 2019 16:15:08 -0400",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "translatability tweaks"
}
]
[
{
"msg_contents": "Hi,\nPrepareTempTablespaces is called by most callers of BufFileCreateTemp, so I\nwas\nwondering if there is a reason not to call it inside BufFileCreateTemp.\n\nAs a developer using BufFileCreateTemp to write code that will create spill\nfiles, it was easy to forget the extra step of checking the temp_tablespaces\nGUC to ensure I create the spill files there if it is set.\n\nThanks,\nMelanie Plageman",
"msg_date": "Mon, 22 Apr 2019 15:12:21 -0700",
"msg_from": "Melanie Plageman <melanieplageman@gmail.com>",
"msg_from_op": true,
"msg_subject": "Calling PrepareTempTablespaces in BufFileCreateTemp"
},
{
"msg_contents": "On Mon, Apr 22, 2019 at 3:12 PM Melanie Plageman\n<melanieplageman@gmail.com> wrote:\n> PrepareTempTablespaces is called by most callers of BufFileCreateTemp, so I was\n> wondering if there is a reason not to call it inside BufFileCreateTemp.\n\nThe best answer I can think of is that a BufFileCreateTemp() caller\nmight not want to do catalog access. Perhaps the contortions within\nassign_temp_tablespaces() are something that callers ought to opt in\nto explicitly.\n\nThat doesn't seem like a particularly good or complete answer, though.\nPerhaps it should simply be called within BufFileCreateTemp(). The\nBufFile/fd.c layering is confusing in a number of ways IMV.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Mon, 22 Apr 2019 15:44:48 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: Calling PrepareTempTablespaces in BufFileCreateTemp"
},
{
"msg_contents": "Peter Geoghegan <pg@bowt.ie> writes:\n> On Mon, Apr 22, 2019 at 3:12 PM Melanie Plageman\n> <melanieplageman@gmail.com> wrote:\n>> PrepareTempTablespaces is called by most callers of BufFileCreateTemp, so I was\n>> wondering if there is a reason not to call it inside BufFileCreateTemp.\n\n> The best answer I can think of is that a BufFileCreateTemp() caller\n> might not want to do catalog access. Perhaps the contortions within\n> assign_temp_tablespaces() are something that callers ought to opt in\n> to explicitly.\n\nIt's kind of hard to see a reason to call it outside a transaction,\nand even if we did, there are provisions for it not to go boom.\n\n> That doesn't seem like a particularly good or complete answer, though.\n> Perhaps it should simply be called within BufFileCreateTemp(). The\n> BufFile/fd.c layering is confusing in a number of ways IMV.\n\nI don't actually see why BufFileCreateTemp should do it; if\nwe're to add a call, seems like OpenTemporaryFile is the place,\nas that's what is really concerned with the temp tablespace(s).\n\nI'm in favor of doing this, I think, as it sure looks to me like\ngistInitBuildBuffers() is calling BufFileCreateTemp without any\nclosely preceding PrepareTempTablespaces. So we already have an\ninstance of Melanie's bug in core. It'd be difficult to notice\nbecause of the silent-fallback-to-default-tablespace behavior.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 22 Apr 2019 19:07:28 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Calling PrepareTempTablespaces in BufFileCreateTemp"
},
{
"msg_contents": "I wrote:\n> Peter Geoghegan <pg@bowt.ie> writes:\n>> That doesn't seem like a particularly good or complete answer, though.\n>> Perhaps it should simply be called within BufFileCreateTemp(). The\n>> BufFile/fd.c layering is confusing in a number of ways IMV.\n\n> I don't actually see why BufFileCreateTemp should do it; if\n> we're to add a call, seems like OpenTemporaryFile is the place,\n> as that's what is really concerned with the temp tablespace(s).\n> I'm in favor of doing this, I think, as it sure looks to me like\n> gistInitBuildBuffers() is calling BufFileCreateTemp without any\n> closely preceding PrepareTempTablespaces. So we already have an\n> instance of Melanie's bug in core. It'd be difficult to notice\n> because of the silent-fallback-to-default-tablespace behavior.\n\nHere's a draft patch for that.\n\nIt's slightly ugly that this adds a dependency on commands/tablespace\nto fd.c, which is a pretty low-level module. I think wanting to avoid\nthat layering violation might've been the reason for doing things the\nway they are. However, this gets rid of tablespace dependencies in\nsome other files that are only marginally higher-level, like\ntuplesort.c, so I'm not sure how strong that objection is.\n\nThere are three functions in fd.c that have a dependency on the\ntemp tablespace info having been set up:\n\tOpenTemporaryFile\n\tGetTempTablespaces\n\tGetNextTempTableSpace\nThis patch makes the first of those automatically set up the info\nif it's not done yet. The second one has always had an assertion\nthat the caller did it already, and now the third one does too.\nAn about equally plausible change would be to make all three\ncall PrepareTempTablespaces, but there are so few callers of the\nsecond and third that I'm not sure that'd be better. Thoughts?\n\n\t\t\tregards, tom lane",
"msg_date": "Tue, 23 Apr 2019 16:05:59 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Calling PrepareTempTablespaces in BufFileCreateTemp"
},
{
"msg_contents": "On Tue, Apr 23, 2019 at 1:06 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> There are three functions in fd.c that have a dependency on the\n> temp tablespace info having been set up:\n> OpenTemporaryFile\n> GetTempTablespaces\n> GetNextTempTableSpace\n> This patch makes the first of those automatically set up the info\n> if it's not done yet. The second one has always had an assertion\n> that the caller did it already, and now the third one does too.\n> An about equally plausible change would be to make all three\n> call PrepareTempTablespaces, but there are so few callers of the\n> second and third that I'm not sure that'd be better. Thoughts?\n>\n>\nI think an assertion is sufficiently clear for GetNextTempTableSpace based\non\nwhat it does and its current callers. The same is probably true for\nGetTempTableSpaces.\n\n-- \nMelanie Plageman\n\nOn Tue, Apr 23, 2019 at 1:06 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\nThere are three functions in fd.c that have a dependency on the\ntemp tablespace info having been set up:\n OpenTemporaryFile\n GetTempTablespaces\n GetNextTempTableSpace\nThis patch makes the first of those automatically set up the info\nif it's not done yet. The second one has always had an assertion\nthat the caller did it already, and now the third one does too.\nAn about equally plausible change would be to make all three\ncall PrepareTempTablespaces, but there are so few callers of the\nsecond and third that I'm not sure that'd be better. Thoughts?\n\nI think an assertion is sufficiently clear for GetNextTempTableSpace based onwhat it does and its current callers. The same is probably true forGetTempTableSpaces.-- Melanie Plageman",
"msg_date": "Wed, 24 Apr 2019 12:08:24 -0700",
"msg_from": "Melanie Plageman <melanieplageman@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Calling PrepareTempTablespaces in BufFileCreateTemp"
},
{
"msg_contents": "I wrote:\n> Here's a draft patch for that.\n>\n> It's slightly ugly that this adds a dependency on commands/tablespace\n> to fd.c, which is a pretty low-level module. I think wanting to avoid\n> that layering violation might've been the reason for doing things the\n> way they are. However, this gets rid of tablespace dependencies in\n> some other files that are only marginally higher-level, like\n> tuplesort.c, so I'm not sure how strong that objection is.\n>\n> There are three functions in fd.c that have a dependency on the\n> temp tablespace info having been set up:\n> \tOpenTemporaryFile\n> \tGetTempTablespaces\n> \tGetNextTempTableSpace\n> This patch makes the first of those automatically set up the info\n> if it's not done yet. The second one has always had an assertion\n> that the caller did it already, and now the third one does too.\n\nAfter a bit more thought it seemed like another answer would be to\nmake all three of those functions assert that the caller did the\nright thing, as per attached. This addresses the layering-violation\ncomplaint, but might be more of a pain in the rear for developers.\n\nNot really sure which way I like better.\n\n\t\t\tregards, tom lane",
"msg_date": "Wed, 24 Apr 2019 15:17:06 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Calling PrepareTempTablespaces in BufFileCreateTemp"
},
{
"msg_contents": "On Wed, Apr 24, 2019 at 12:17 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> After a bit more thought it seemed like another answer would be to\n> make all three of those functions assert that the caller did the\n> right thing, as per attached. This addresses the layering-violation\n> complaint, but might be more of a pain in the rear for developers.\n\nIn what sense is it not already a layering violation to call\nPrepareTempTablespaces() as often as we do? PrepareTempTablespaces()\nparses and validates the GUC variable and passes it to fd.c, but to me\nthat seems almost the same as calling the fd.c function\nSetTempTablespaces() directly. PrepareTempTablespaces() allocates\nmemory that it won't free itself within TopTransactionContext. I'm not\nseeing why the context that the PrepareTempTablespaces() catalog\naccess occurs in actually matters.\n\nLike you, I find it hard to prefer one of the approaches over the\nother, though I don't really know how to assess this layering\nbusiness. I'm glad that either approach will prevent oversights,\nthough.\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Wed, 24 Apr 2019 17:47:43 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: Calling PrepareTempTablespaces in BufFileCreateTemp"
},
{
"msg_contents": "Peter Geoghegan <pg@bowt.ie> writes:\n> ... I'm not\n> seeing why the context that the PrepareTempTablespaces() catalog\n> access occurs in actually matters.\n\nThe point there is that a catalog access might leak some amount of\nmemory. Probably not enough to be a big deal, but ...\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 24 Apr 2019 20:55:01 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Calling PrepareTempTablespaces in BufFileCreateTemp"
},
{
"msg_contents": "On Wed, Apr 24, 2019 at 5:48 PM Peter Geoghegan <pg@bowt.ie> wrote:\n\n> On Wed, Apr 24, 2019 at 12:17 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > After a bit more thought it seemed like another answer would be to\n> > make all three of those functions assert that the caller did the\n> > right thing, as per attached. This addresses the layering-violation\n> > complaint, but might be more of a pain in the rear for developers.\n>\n> In what sense is it not already a layering violation to call\n> PrepareTempTablespaces() as often as we do? PrepareTempTablespaces()\n> parses and validates the GUC variable and passes it to fd.c, but to me\n> that seems almost the same as calling the fd.c function\n> SetTempTablespaces() directly. PrepareTempTablespaces() allocates\n> memory that it won't free itself within TopTransactionContext. I'm not\n> seeing why the context that the PrepareTempTablespaces() catalog\n> access occurs in actually matters.\n>\n> Like you, I find it hard to prefer one of the approaches over the\n> other, though I don't really know how to assess this layering\n> business. I'm glad that either approach will prevent oversights,\n> though.\n>\n\nJust to provide my opinion, since we are at intersection and can go\neither way on this. Second approach (just adding assert) only helps\nif the code path for ALL future callers gets excersied and test exist for\nthe\nsame, to expose potential breakage. But with first approach fixes the issue\nfor current and future users, plus excersicing the same just with a single\ntest\nalready tests it for future callers as well. 
So, that way first approach\nsounds\nmore promising if we are fetch between the two.\n\nOn Wed, Apr 24, 2019 at 5:48 PM Peter Geoghegan <pg@bowt.ie> wrote:On Wed, Apr 24, 2019 at 12:17 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> After a bit more thought it seemed like another answer would be to\n> make all three of those functions assert that the caller did the\n> right thing, as per attached. This addresses the layering-violation\n> complaint, but might be more of a pain in the rear for developers.\n\nIn what sense is it not already a layering violation to call\nPrepareTempTablespaces() as often as we do? PrepareTempTablespaces()\nparses and validates the GUC variable and passes it to fd.c, but to me\nthat seems almost the same as calling the fd.c function\nSetTempTablespaces() directly. PrepareTempTablespaces() allocates\nmemory that it won't free itself within TopTransactionContext. I'm not\nseeing why the context that the PrepareTempTablespaces() catalog\naccess occurs in actually matters.\n\nLike you, I find it hard to prefer one of the approaches over the\nother, though I don't really know how to assess this layering\nbusiness. I'm glad that either approach will prevent oversights,\nthough.Just to provide my opinion, since we are at intersection and can go either way on this. Second approach (just adding assert) only helps if the code path for ALL future callers gets excersied and test exist for thesame, to expose potential breakage. But with first approach fixes the issuefor current and future users, plus excersicing the same just with a single testalready tests it for future callers as well. So, that way first approach soundsmore promising if we are fetch between the two.",
"msg_date": "Thu, 25 Apr 2019 09:19:41 -0700",
"msg_from": "Ashwin Agrawal <aagrawal@pivotal.io>",
"msg_from_op": false,
"msg_subject": "Re: Calling PrepareTempTablespaces in BufFileCreateTemp"
},
{
"msg_contents": "Ashwin Agrawal <aagrawal@pivotal.io> writes:\n> On Wed, Apr 24, 2019 at 5:48 PM Peter Geoghegan <pg@bowt.ie> wrote:\n>> Like you, I find it hard to prefer one of the approaches over the\n>> other, though I don't really know how to assess this layering\n>> business. I'm glad that either approach will prevent oversights,\n>> though.\n\n> Just to provide my opinion, since we are at intersection and can go\n> either way on this. Second approach (just adding assert) only helps\n> if the code path for ALL future callers gets excersied and test exist for\n> the same, to expose potential breakage.\n\nIn view of the fact that the existing regression tests fail to expose the\nneed for gistInitBuildBuffers to worry about this [1], that's a rather\nstrong point. It's hard to believe that somebody writing new code would\nfail to notice such an assertion, but it's more plausible that later\nrearrangements could break things and not notice due to lack of coverage.\n\nHowever, by that argument we should change all 3 of these functions to\nset up the data. If we're eating the layering violation to the extent\nof letting OpenTemporaryFile call into commands/tablespace, then there's\nlittle reason for the other 2 not to do likewise.\n\nI still remain concerned that invoking catalog lookups from fd.c is a darn\nbad idea, even if we have a fallback for it to work (for some value of\n\"work\") in non-transactional states. It's not really hard to envision\nthat kind of thing leading to infinite recursion. I think it's safe\nright now, because catalog fetches shouldn't lead to any temp-file\naccess, but that's sort of a rickety assumption isn't it?\n\n\t\t\tregards, tom lane\n\n[1] https://postgr.es/m/24954.1556130678@sss.pgh.pa.us\n\n\n",
"msg_date": "Thu, 25 Apr 2019 12:45:03 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Calling PrepareTempTablespaces in BufFileCreateTemp"
},
{
"msg_contents": "On Thu, Apr 25, 2019 at 9:19 AM Ashwin Agrawal <aagrawal@pivotal.io> wrote:\n\n> Just to provide my opinion, since we are at intersection and can go\n> either way on this. Second approach (just adding assert) only helps\n> if the code path for ALL future callers gets excersied and test exist for\n> the\n> same, to expose potential breakage. But with first approach fixes the issue\n> for current and future users, plus excersicing the same just with a single\n> test\n> already tests it for future callers as well. So, that way first approach\n> sounds\n> more promising if we are fetch between the two.\n>\n> Would an existing test cover the code after moving PrepareTempTablespaces\ninto\nOpenTemporaryFile?\n\n\n-- \nMelanie Plageman\n\nOn Thu, Apr 25, 2019 at 9:19 AM Ashwin Agrawal <aagrawal@pivotal.io> wrote:Just to provide my opinion, since we are at intersection and can go either way on this. Second approach (just adding assert) only helps if the code path for ALL future callers gets excersied and test exist for thesame, to expose potential breakage. But with first approach fixes the issuefor current and future users, plus excersicing the same just with a single testalready tests it for future callers as well. So, that way first approach soundsmore promising if we are fetch between the two.Would an existing test cover the code after moving PrepareTempTablespaces intoOpenTemporaryFile? -- Melanie Plageman",
"msg_date": "Thu, 25 Apr 2019 10:27:05 -0700",
"msg_from": "Melanie Plageman <melanieplageman@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Calling PrepareTempTablespaces in BufFileCreateTemp"
},
{
"msg_contents": "On Thu, Apr 25, 2019 at 9:45 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> However, by that argument we should change all 3 of these functions to\n> set up the data. If we're eating the layering violation to the extent\n> of letting OpenTemporaryFile call into commands/tablespace, then there's\n> little reason for the other 2 not to do likewise.\n>\n\nI agree to that point, same logic should be used for all three calls\nirrespective of the approach we pick.\n\nI still remain concerned that invoking catalog lookups from fd.c is a darn\n> bad idea, even if we have a fallback for it to work (for some value of\n> \"work\") in non-transactional states. It's not really hard to envision\n> that kind of thing leading to infinite recursion. I think it's safe\n> right now, because catalog fetches shouldn't lead to any temp-file\n> access, but that's sort of a rickety assumption isn't it?\n>\n\nIs there (easy) way to assert for that assumption? If yes, then can add the\nsame and make it not rickety.\n\nThough I agree any exceptions/violations coded generally bites in long run\nsomewhere later.\n\nOn Thu, Apr 25, 2019 at 9:45 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\nHowever, by that argument we should change all 3 of these functions to\nset up the data. If we're eating the layering violation to the extent\nof letting OpenTemporaryFile call into commands/tablespace, then there's\nlittle reason for the other 2 not to do likewise.I agree to that point, same logic should be used for all three calls irrespective of the approach we pick.\nI still remain concerned that invoking catalog lookups from fd.c is a darn\nbad idea, even if we have a fallback for it to work (for some value of\n\"work\") in non-transactional states. It's not really hard to envision\nthat kind of thing leading to infinite recursion. 
I think it's safe\nright now, because catalog fetches shouldn't lead to any temp-file\naccess, but that's sort of a rickety assumption isn't it?Is there (easy) way to assert for that assumption? If yes, then can add the same and make it not rickety.Though I agree any exceptions/violations coded generally bites in long run somewhere later.",
"msg_date": "Thu, 25 Apr 2019 10:40:01 -0700",
"msg_from": "Ashwin Agrawal <aagrawal@pivotal.io>",
"msg_from_op": false,
"msg_subject": "Re: Calling PrepareTempTablespaces in BufFileCreateTemp"
},
{
"msg_contents": "On Thu, Apr 25, 2019 at 12:45:03PM -0400, Tom Lane wrote:\n> I still remain concerned that invoking catalog lookups from fd.c is a darn\n> bad idea, even if we have a fallback for it to work (for some value of\n> \"work\") in non-transactional states. It's not really hard to envision\n> that kind of thing leading to infinite recursion. I think it's safe\n> right now, because catalog fetches shouldn't lead to any temp-file\n> access, but that's sort of a rickety assumption isn't it?\n\nIntroducing catalog lookups into fd.c which is not a layer designed\nfor that is a choice that I find strange, and I fear that it may bite\nin the future. I think that the choice proposed upthread to add\nan assertion on TempTablespacesAreSet() when calling a function\nworking on temporary data is just but fine, and that we should just\nmake sure that the gist code calls PrepareTempTablespaces()\ncorrectly. So [1] is a proposal I find much more acceptable than the\nother one.\n\nI think that one piece is missing from the patch. Wouldn't it be\nbetter to add an assertion at the beginning of OpenTemporaryFile() to\nmake sure that PrepareTempTablespaces() has been called when interXact\nis true? We could just go with that:\nAssert(!interXact || TempTablespacesAreSet());\n\nAnd this gives me the attached.\n\n[1]: https://postgr.es/m/11777.1556133426@sss.pgh.pa.us\n--\nMichael",
"msg_date": "Fri, 26 Apr 2019 15:53:30 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Calling PrepareTempTablespaces in BufFileCreateTemp"
},
{
"msg_contents": "On Thu, Apr 25, 2019 at 10:40:01AM -0700, Ashwin Agrawal wrote:\n> Is there (easy) way to assert for that assumption? If yes, then can add the\n> same and make it not rickety.\n\nIsTransactionState() would be enough?\n--\nMichael",
"msg_date": "Fri, 26 Apr 2019 15:54:46 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Calling PrepareTempTablespaces in BufFileCreateTemp"
},
{
"msg_contents": "Michael Paquier <michael@paquier.xyz> writes:\n> I think that one piece is missing from the patch. Wouldn't it be\n> better to add an assertion at the beginning of OpenTemporaryFile() to\n> make sure that PrepareTempTablespaces() has been called when interXact\n> is true? We could just go with that:\n> Assert(!interXact || TempTablespacesAreSet());\n\nThe version that I posted left it to GetNextTempTableSpace to assert\nthat. That seemed cleaner to me than an Assert that has to depend\non interXact.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 26 Apr 2019 11:05:11 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Calling PrepareTempTablespaces in BufFileCreateTemp"
},
{
"msg_contents": "On Fri, Apr 26, 2019 at 11:05:11AM -0400, Tom Lane wrote:\n> Michael Paquier <michael@paquier.xyz> writes:\n> The version that I posted left it to GetNextTempTableSpace to assert\n> that. That seemed cleaner to me than an Assert that has to depend\n> on interXact.\n\nOkay, no objections for that approach as well. Are you planning to do\nsomething about this thread for v12? It seems like the direction to take\nis pretty clear, at least from my perspective.\n--\nMichael",
"msg_date": "Sat, 27 Apr 2019 09:50:49 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Calling PrepareTempTablespaces in BufFileCreateTemp"
},
{
"msg_contents": "On Thu, Apr 25, 2019 at 11:53 PM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Thu, Apr 25, 2019 at 12:45:03PM -0400, Tom Lane wrote:\n> > I still remain concerned that invoking catalog lookups from fd.c is a darn\n> > bad idea, even if we have a fallback for it to work (for some value of\n> > \"work\") in non-transactional states. It's not really hard to envision\n> > that kind of thing leading to infinite recursion. I think it's safe\n> > right now, because catalog fetches shouldn't lead to any temp-file\n> > access, but that's sort of a rickety assumption isn't it?\n>\n> Introducing catalog lookups into fd.c which is not a layer designed\n> for that is a choice that I find strange, and I fear that it may bite\n> in the future. I think that the choice proposed upthread to add\n> an assertion on TempTablespacesAreSet() when calling a function\n> working on temporary data is just but fine, and that we should just\n> make sure that the gist code calls PrepareTempTablespaces()\n> correctly. So [1] is a proposal I find much more acceptable than the\n> other one.\n\nWell the one thing I wish to point out explicitly is just taking fd.c\nchanges from [1], and running make check hits no assertions and\ndoesn't flag issue exist for gistbuildbuffers.c. Means its missing\ncoverage and in future same can happen as well.\n\n[1]: https://postgr.es/m/11777.1556133426@sss.pgh.pa.us\n\n\n",
"msg_date": "Mon, 29 Apr 2019 12:31:38 -0700",
"msg_from": "Ashwin Agrawal <aagrawal@pivotal.io>",
"msg_from_op": false,
"msg_subject": "Re: Calling PrepareTempTablespaces in BufFileCreateTemp"
},
{
"msg_contents": "On Mon, Apr 29, 2019 at 12:31 PM Ashwin Agrawal <aagrawal@pivotal.io> wrote:\n> Well the one thing I wish to point out explicitly is just taking fd.c\n> changes from [1], and running make check hits no assertions and\n> doesn't flag issue exist for gistbuildbuffers.c. Means its missing\n> coverage and in future same can happen as well.\n\nI believe that the test coverage of GiST index builds is something\nthat is being actively worked on right now. It's a recognized problem\n[1].\n\n[1] https://postgr.es/m/24954.1556130678@sss.pgh.pa.us\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Mon, 29 Apr 2019 12:35:22 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: Calling PrepareTempTablespaces in BufFileCreateTemp"
},
{
"msg_contents": "On Fri, Apr 26, 2019 at 8:05 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> The version that I posted left it to GetNextTempTableSpace to assert\n> that. That seemed cleaner to me than an Assert that has to depend\n> on interXact.\n>\n> Running `make check` with [1] applied and one of the calls to\nPrepareTempTablespaces commented out, I felt like I deserved more as a\ndeveloper than the assertion in this case.\n\nAssertions are especially good to protect against regressions, but, in this\ncase, I'm just trying to use an API that is being provided.\n\nAssertions don't give me a nice, easy-to-understand test failure. I see that\nthere was a crash halfway through make check and now I have to figure out\nwhy.\n\nIf that is the default way for developers to find out that they are missing\nsomething when using the API, it would be nice if it gave me some sort of\nunderstandable diff or error message.\n\nI also think that if there is a step that a caller should always take before\ncalling a function, then there needs to be a very compelling reason not to\nmove\nthat step into the function itself.\n\nSo, just to make sure I understand this case:\n\nPrepareTempTablespaces should not be called in BufFileCreateTemp because it\nis\nnot concerned with temp tablespaces.\n\nOpenTemporaryFile is concerned with temp tablespaces, so any reference to\nthose\nshould be there.\n\nHowever, PrepareTempTablespaces should not be called in OpenTemporaryFile\nbecause it is in fd.c and no functions that make up part of the file\ndescriptor\nAPI should do catalog lookups.\nIs this correct?\n\n[1] https://www.postgresql.org/message-id/11777.1556133426%40sss.pgh.pa.us\n\n-- \nMelanie Plageman\n\nOn Fri, Apr 26, 2019 at 8:05 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\nThe version that I posted left it to GetNextTempTableSpace to assert\nthat. 
That seemed cleaner to me than an Assert that has to depend\non interXact.\n\nRunning `make check` with [1] applied and one of the calls toPrepareTempTablespaces commented out, I felt like I deserved more as adeveloper than the assertion in this case.Assertions are especially good to protect against regressions, but, in thiscase, I'm just trying to use an API that is being provided.Assertions don't give me a nice, easy-to-understand test failure. I see thatthere was a crash halfway through make check and now I have to figure out why.If that is the default way for developers to find out that they are missingsomething when using the API, it would be nice if it gave me some sort ofunderstandable diff or error message.I also think that if there is a step that a caller should always take beforecalling a function, then there needs to be a very compelling reason not to movethat step into the function itself.So, just to make sure I understand this case:PrepareTempTablespaces should not be called in BufFileCreateTemp because it isnot concerned with temp tablespaces.OpenTemporaryFile is concerned with temp tablespaces, so any reference to thoseshould be there.However, PrepareTempTablespaces should not be called in OpenTemporaryFilebecause it is in fd.c and no functions that make up part of the file descriptorAPI should do catalog lookups.Is this correct?[1] https://www.postgresql.org/message-id/11777.1556133426%40sss.pgh.pa.us-- Melanie Plageman",
"msg_date": "Tue, 30 Apr 2019 14:25:41 -0700",
"msg_from": "Melanie Plageman <melanieplageman@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Calling PrepareTempTablespaces in BufFileCreateTemp"
},
{
"msg_contents": "Melanie Plageman <melanieplageman@gmail.com> writes:\n> I also think that if there is a step that a caller should always take before\n> calling a function, then there needs to be a very compelling reason not to\n> move that step into the function itself.\n\nFair complaint.\n\n> PrepareTempTablespaces should not be called in BufFileCreateTemp because\n> it is not concerned with temp tablespaces.\n\nActually, my reason for thinking that was mostly \"that won't fix the\nproblem, because what about other callers of OpenTemporaryFile?\"\n\nHowever, looking around, there aren't any others --- buffile.c is it.\n\nSo maybe a reasonable compromise is to add the Assert(s) in fd.c as\nper previous patch, but *also* add PrepareTempTablespaces in\nBufFileCreateTemp, so that at least users of buffile.c are insulated\nfrom the issue. buffile.c is still kind of low-level, but it's not\npart of core infrastructure in the same way as fd.c, so probably I could\nhold my nose for this solution from the system-structural standpoint.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 30 Apr 2019 17:36:55 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Calling PrepareTempTablespaces in BufFileCreateTemp"
},
{
"msg_contents": "I wrote:\n> So maybe a reasonable compromise is to add the Assert(s) in fd.c as\n> per previous patch, but *also* add PrepareTempTablespaces in\n> BufFileCreateTemp, so that at least users of buffile.c are insulated\n> from the issue. buffile.c is still kind of low-level, but it's not\n> part of core infrastructure in the same way as fd.c, so probably I could\n> hold my nose for this solution from the system-structural standpoint.\n\nActually, after digging around in the related code some more, I'm having\nsecond thoughts about those Asserts. PrepareTempTablespaces is pretty\nclear about what it thinks the contract is:\n\n /*\n * Can't do catalog access unless within a transaction. This is just a\n * safety check in case this function is called by low-level code that\n * could conceivably execute outside a transaction. Note that in such a\n * scenario, fd.c will fall back to using the current database's default\n * tablespace, which should always be OK.\n */\n if (!IsTransactionState())\n return;\n\nIf we just add the discussed assertions and leave this bit alone,\nthe net effect would be that any tempfile usage outside a transaction\nwould suffer an assertion failure, *even if* it had called\nPrepareTempTablespaces. There doesn't seem to be any such usage in\nthe core code, but do we really want to forbid the case? It seems\nlike fd.c shouldn't be imposing such a restriction, if it never has\nbefore.\n\nSo now I'm feeling more favorable about the idea of adding a\nPrepareTempTablespaces call to BufFileCreateTemp, and just stopping\nwith that. If we want to do more, I feel like it requires a\nsignificant amount of rethinking about what the expectations are for\nfd.c, and some rejiggering of PrepareTempTablespaces's API too.\nI'm not sufficiently excited about this issue to do that.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 30 Apr 2019 18:22:09 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Calling PrepareTempTablespaces in BufFileCreateTemp"
},
{
"msg_contents": "On Tue, Apr 30, 2019 at 3:22 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> So now I'm feeling more favorable about the idea of adding a\n> PrepareTempTablespaces call to BufFileCreateTemp, and just stopping\n> with that. If we want to do more, I feel like it requires a\n> significant amount of rethinking about what the expectations are for\n> fd.c, and some rejiggering of PrepareTempTablespaces's API too.\n> I'm not sufficiently excited about this issue to do that.\n\n+1. Let's close this one out.\n\n--\nPeter Geoghegan\n\n\n",
"msg_date": "Fri, 17 May 2019 17:52:18 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: Calling PrepareTempTablespaces in BufFileCreateTemp"
},
{
"msg_contents": "Peter Geoghegan <pg@bowt.ie> writes:\n> On Tue, Apr 30, 2019 at 3:22 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> So now I'm feeling more favorable about the idea of adding a\n>> PrepareTempTablespaces call to BufFileCreateTemp, and just stopping\n>> with that. If we want to do more, I feel like it requires a\n>> significant amount of rethinking about what the expectations are for\n>> fd.c, and some rejiggering of PrepareTempTablespaces's API too.\n>> I'm not sufficiently excited about this issue to do that.\n\n> +1. Let's close this one out.\n\nWill do so tomorrow. Should we back-patch this?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 17 May 2019 21:36:29 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Calling PrepareTempTablespaces in BufFileCreateTemp"
},
{
"msg_contents": "On Fri, May 17, 2019 at 6:36 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Will do so tomorrow. Should we back-patch this?\n\nI wouldn't, because I see no reason to. Somebody else might.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Fri, 17 May 2019 18:37:22 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: Calling PrepareTempTablespaces in BufFileCreateTemp"
},
{
"msg_contents": "On Fri, May 17, 2019 at 06:37:22PM -0700, Peter Geoghegan wrote:\n> On Fri, May 17, 2019 at 6:36 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> Will do so tomorrow. Should we back-patch this?\n> \n> I wouldn't, because I see no reason to. Somebody else might.\n\nFWIW, I see no reason either for a back-patch.\n--\nMichael",
"msg_date": "Mon, 20 May 2019 10:41:46 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Calling PrepareTempTablespaces in BufFileCreateTemp"
}
] |
[
{
"msg_contents": "Hello PostgreSQL Hackers,\n\nWhat is the standard memory leak checking policy for the PostgreSQL\ncodebase? I know there is some support for valgrind -- is the test suite\nbeing run continuously with valgrind on the build farm?\n\nIs there any plan to support clang's AddressSanitizer?\n\nI've seen a thread that memory leaks are allowed in initdb, because it is a\nshort-lived process. Obviously they are not allowed in the database server.\nAre memory leaks allowed in the psql tool?\n\nRegards,\nMikhail",
"msg_date": "Mon, 22 Apr 2019 16:50:25 -0700",
"msg_from": "Mikhail Bautin <mbautinpgsql@gmail.com>",
"msg_from_op": true,
"msg_subject": "memory leak checking"
},
{
"msg_contents": "Hi,\n\nOn 2019-04-22 16:50:25 -0700, Mikhail Bautin wrote:\n> What is the standard memory leak checking policy for the PostgreSQL\n> codebase? I know there is some support for valgrind -- is the test suite\n> being run continuously with valgrind on the build farm?\n\nThere's continuous use of valgrind on the buildfarm - but those animals\nhave leak checking disabled. Postgres for nearly all server allocations\nuses memory contexts, which allows to bulk-free memory. There's also\nplenty memory that's intentionally allocated till the end of the backend\nlifetime (e.g. the various caches over the system catalogs). Due to\nthat checks like valgrinds are not particularly meaningful.\n\n\n> Is there any plan to support clang's AddressSanitizer?\n\nNot for the leak portion. I use asan against the backend, after\ndisabling the leak check, and that's useful. Should probably set up a\nproper buildfarm animal for that.\n\n\n> I've seen a thread that memory leaks are allowed in initdb, because it is a\n> short-lived process. Obviously they are not allowed in the database server.\n> Are memory leaks allowed in the psql tool?\n\nLeaks are allowed if they are once-per-backend type things. There's no\npoint in e.g. freeing information for timezone metadata, given that\nit'll be used for the whole server lifetime. And there's such things in\npsql too, IIRC.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 22 Apr 2019 17:05:24 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: memory leak checking"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2019-04-22 16:50:25 -0700, Mikhail Bautin wrote:\n>> What is the standard memory leak checking policy for the PostgreSQL\n>> codebase? I know there is some support for valgrind -- is the test suite\n>> being run continuously with valgrind on the build farm?\n\n> Leaks are allowed if they are once-per-backend type things. There's no\n> point in e.g. freeing information for timezone metadata, given that\n> it'll be used for the whole server lifetime. And there's such things in\n> psql too, IIRC.\n\nI would not call the timezone data a \"leak\", since it's still useful, and\naccessible from static pointers, right up to exit. A true leak for this\npurpose is memory that's allocated but not usefully accessible, and I'd\nsay we discourage that; though small one-time leaks may not be worth the\ntrouble to get rid of.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 22 Apr 2019 20:29:17 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: memory leak checking"
},
{
"msg_contents": "Hi,\n\nOn 2019-04-22 20:29:17 -0400, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > On 2019-04-22 16:50:25 -0700, Mikhail Bautin wrote:\n> >> What is the standard memory leak checking policy for the PostgreSQL\n> >> codebase? I know there is some support for valgrind -- is the test suite\n> >> being run continuously with valgrind on the build farm?\n> \n> > Leaks are allowed if they are once-per-backend type things. There's no\n> > point in e.g. freeing information for timezone metadata, given that\n> > it'll be used for the whole server lifetime. And there's such things in\n> > psql too, IIRC.\n> \n> I would not call the timezone data a \"leak\", since it's still useful, and\n> accessible from static pointers, right up to exit. A true leak for this\n> purpose is memory that's allocated but not usefully accessible, and I'd\n> say we discourage that; though small one-time leaks may not be worth the\n> trouble to get rid of.\n\nRight. I was only referring to it that way because the various leak\nchecking tools do, should've been more careful in wording...\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 22 Apr 2019 18:30:05 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: memory leak checking"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2019-04-22 20:29:17 -0400, Tom Lane wrote:\n>> I would not call the timezone data a \"leak\", since it's still useful, and\n>> accessible from static pointers, right up to exit. A true leak for this\n>> purpose is memory that's allocated but not usefully accessible, and I'd\n>> say we discourage that; though small one-time leaks may not be worth the\n>> trouble to get rid of.\n\n> Right. I was only referring to it that way because the various leak\n> checking tools do, should've been more careful in wording...\n\nFWIW, I just did a simple test with valgrind's --leak-check=full,\nand I can't find any clear evidence of *any* real leaks in a normal\nbackend run. The things that valgrind thinks are leaks seem to be\nmostly that it doesn't understand what we're doing. For example,\n(1) it seems to be fooled by pass-by-reference Datums, probably\nbecause the underlying type declaration is an integer type not void*.\n(2) it doesn't seem to understand how we manage the element arrays\nfor dynahash tables, because it claims they're all possibly leaked;\n(3) it claims strings passed to putenv() have been leaked;\n... etc etc.\n\nAdmittedly, this is with RHEL6's valgrind which isn't too modern,\nbut the net result doesn't really motivate me to spend more time here.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 22 Apr 2019 21:38:22 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: memory leak checking"
}
] |
[
{
"msg_contents": "Folks,\n\nI noticed that there wasn't a bulk way to see table logged-ness in\npsql, so I made it part of \\dt+.\n\nWhat say?\n\nBest,\nDavid.\n-- \nDavid Fetter <david(at)fetter(dot)org> http://fetter.org/\nPhone: +1 415 235 3778\n\nRemember to vote!\nConsider donating to Postgres: http://www.postgresql.org/about/donate",
"msg_date": "Tue, 23 Apr 2019 02:56:42 +0200",
"msg_from": "David Fetter <david@fetter.org>",
"msg_from_op": true,
"msg_subject": "[PATCH v1] Show whether tables are logged in \\dt+"
},
{
"msg_contents": "\nHello David,\n\n> I noticed that there wasn't a bulk way to see table logged-ness in psql, \n> so I made it part of \\dt+.\n\nApplies, compiles, works for me.\n\nISTM That temporary-ness is not shown either. Maybe the persistence column \nshould be shown as is?\n\nAlso I'd suggest that the column should be displayed before the \n\"description\" column to keep the length-varying one last?\n\n> What say?\n\nTests? Doc?\n\n-- \nFabien.\n\n\n",
"msg_date": "Tue, 23 Apr 2019 07:03:58 +0200 (CEST)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH v1] Show whether tables are logged in \\dt+"
},
{
"msg_contents": "On Tue, Apr 23, 2019 at 07:03:58AM +0200, Fabien COELHO wrote:\n> \n> Hello David,\n> \n> > I noticed that there wasn't a bulk way to see table logged-ness in psql,\n> > so I made it part of \\dt+.\n> \n> Applies, compiles, works for me.\n> \n> ISTM That temporary-ness is not shown either. Maybe the persistence column\n> should be shown as is?\n\nTemporariness added, but not raw.\n\n> Also I'd suggest that the column should be displayed before the\n> \"description\" column to keep the length-varying one last?\n\nDone.\n\n> > What say?\n> \n> Tests?\n\nIncluded, but they're not stable for temp tables. I'm a little stumped\nas to how to either stabilize them or test some other way.\n\n> Doc?\n\nWhat further documentation does it need?\n\nBest,\nDavid.\n-- \nDavid Fetter <david(at)fetter(dot)org> http://fetter.org/\nPhone: +1 415 235 3778\n\nRemember to vote!\nConsider donating to Postgres: http://www.postgresql.org/about/donate",
"msg_date": "Wed, 24 Apr 2019 08:26:23 +0200",
"msg_from": "David Fetter <david@fetter.org>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH v1] Show whether tables are logged in \\dt+"
},
{
"msg_contents": "\nHello David,\n\n>>> I noticed that there wasn't a bulk way to see table logged-ness in psql,\n>>> so I made it part of \\dt+.\n>>\n>> Applies, compiles, works for me.\n>>\n>> ISTM That temporary-ness is not shown either. Maybe the persistence column\n>> should be shown as is?\n>\n> Temporariness added, but not raw.\n\nOk, it is better like this way.\n\n>> Tests?\n>\n> Included, but they're not stable for temp tables. I'm a little stumped\n> as to how to either stabilize them or test some other way.\n\nHmmm. First there is the username which appears, so there should be a \ndedicated user for the test.\n\nI'm unsure how to work around the temporary schema number, which is \nundeterministic with parallel execution it. I'm afraid the only viable \napproach is not to show temporary tables, too bad:-(\n\n>> Doc?\n>\n> What further documentation does it need?\n\nIndeed, there is no precise doc, so nothing to update :-)/:-(\n\n\nMaybe you could consider adding a case for prior 9.1 version, something \nlike:\n ... case c.relistemp then 'temporary' else 'permanent' end as ...\n\n-- \nFabien.\n\n\n",
"msg_date": "Wed, 24 Apr 2019 10:29:41 +0200 (CEST)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH v1] Show whether tables are logged in \\dt+"
},
{
"msg_contents": "On Wed, 24 Apr 2019 at 10:30, Fabien COELHO <coelho@cri.ensmp.fr> wrote:\n>\n>\n> Hello David,\n>\n> >>> I noticed that there wasn't a bulk way to see table logged-ness in psql,\n> >>> so I made it part of \\dt+.\n> >>\n> >> Applies, compiles, works for me.\n> >>\n> >> ISTM That temporary-ness is not shown either. Maybe the persistence column\n> >> should be shown as is?\n> >\n> > Temporariness added, but not raw.\n>\n> Ok, it is better like this way.\n>\n> >> Tests?\n> >\n> > Included, but they're not stable for temp tables. I'm a little stumped\n> > as to how to either stabilize them or test some other way.\n>\n> Hmmm. First there is the username which appears, so there should be a\n> dedicated user for the test.\n>\n> I'm unsure how to work around the temporary schema number, which is\n> undeterministic with parallel execution it. I'm afraid the only viable\n> approach is not to show temporary tables, too bad:-(\n>\n> >> Doc?\n> >\n> > What further documentation does it need?\n>\n> Indeed, there is no precise doc, so nothing to update :-)/:-(\n>\n>\n> Maybe you could consider adding a case for prior 9.1 version, something\n> like:\n> ... case c.relistemp then 'temporary' else 'permanent' end as ...\n>\n>\nI was reviewing this patch and found a bug,\n\ncreate table t (i int);\ncreate index idx on t(i);\n\\di+\npsql: print.c:3452: printQuery: Assertion `opt->translate_columns ==\n((void *)0) || opt->n_translate_columns >= cont.ncolumns' failed.\n\n-- \nRegards,\nRafia Sabih\n\n\n",
"msg_date": "Fri, 26 Apr 2019 14:49:46 +0200",
"msg_from": "Rafia Sabih <rafia.pghackers@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH v1] Show whether tables are logged in \\dt+"
},
{
"msg_contents": "On Fri, 26 Apr 2019 at 14:49, Rafia Sabih <rafia.pghackers@gmail.com> wrote:\n>\n> On Wed, 24 Apr 2019 at 10:30, Fabien COELHO <coelho@cri.ensmp.fr> wrote:\n> >\n> >\n> > Hello David,\n> >\n> > >>> I noticed that there wasn't a bulk way to see table logged-ness in psql,\n> > >>> so I made it part of \\dt+.\n> > >>\n> > >> Applies, compiles, works for me.\n> > >>\n> > >> ISTM That temporary-ness is not shown either. Maybe the persistence column\n> > >> should be shown as is?\n> > >\n> > > Temporariness added, but not raw.\n> >\n> > Ok, it is better like this way.\n> >\n> > >> Tests?\n> > >\n> > > Included, but they're not stable for temp tables. I'm a little stumped\n> > > as to how to either stabilize them or test some other way.\n> >\n> > Hmmm. First there is the username which appears, so there should be a\n> > dedicated user for the test.\n> >\n> > I'm unsure how to work around the temporary schema number, which is\n> > undeterministic with parallel execution it. I'm afraid the only viable\n> > approach is not to show temporary tables, too bad:-(\n> >\n> > >> Doc?\n> > >\n> > > What further documentation does it need?\n> >\n> > Indeed, there is no precise doc, so nothing to update :-)/:-(\n> >\n> >\n> > Maybe you could consider adding a case for prior 9.1 version, something\n> > like:\n> > ... 
case c.relistemp then 'temporary' else 'permanent' end as ...\n> >\n> >\n> I was reviewing this patch and found a bug,\n>\n> create table t (i int);\n> create index idx on t(i);\n> \\di+\n> psql: print.c:3452: printQuery: Assertion `opt->translate_columns ==\n> ((void *)0) || opt->n_translate_columns >= cont.ncolumns' failed.\n\nLooking into this further, apparently the position of\n\n if (verbose)\n {\n+ /*\n+ * Show whether the table is permanent, temporary, or unlogged.\n+ */\n+ if (pset.sversion >= 91000)\n+ appendPQExpBuffer(&buf,\n+ \",\\n case c.relpersistence when 'p' then 'permanent' when 't'\nthen 'temporary' when 'u' then 'unlogged' else 'unknown' end as\n\\\"%s\\\"\",\n+ gettext_noop(\"Persistence\"));\n\nis not right, it is being called for indexes with verbose option also.\nThere should be an extra check for it being not called for index case.\nSomething like,\nif (verbose)\n{\n/*\n* Show whether the table is permanent, temporary, or unlogged.\n*/\n if (!showIndexes)\nif (pset.sversion >= 91000)\nappendPQExpBuffer(&buf,\n \",\\n case c.relpersistence when 'p' then 'permanent' when 't' then\n'temporary' when 'u' then 'unlogged' else 'unknown' end as \\\"%s\\\"\",\n gettext_noop(\"Persistence\"));\n\nNot sure, how do modify it in a more neat way.\n\n-- \nRegards,\nRafia Sabih\n\n\n",
"msg_date": "Fri, 26 Apr 2019 16:22:18 +0200",
"msg_from": "Rafia Sabih <rafia.pghackers@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH v1] Show whether tables are logged in \\dt+"
},
{
"msg_contents": "On Fri, Apr 26, 2019 at 04:22:18PM +0200, Rafia Sabih wrote:\n> On Fri, 26 Apr 2019 at 14:49, Rafia Sabih <rafia.pghackers@gmail.com> wrote:\n> >\n> > On Wed, 24 Apr 2019 at 10:30, Fabien COELHO <coelho@cri.ensmp.fr> wrote:\n> > >\n> > >\n> > > Hello David,\n> > >\n> > > >>> I noticed that there wasn't a bulk way to see table logged-ness in psql,\n> > > >>> so I made it part of \\dt+.\n> > > >>\n> > > >> Applies, compiles, works for me.\n> > > >>\n> > > >> ISTM That temporary-ness is not shown either. Maybe the persistence column\n> > > >> should be shown as is?\n> > > >\n> > > > Temporariness added, but not raw.\n> > >\n> > > Ok, it is better like this way.\n> > >\n> > > >> Tests?\n> > > >\n> > > > Included, but they're not stable for temp tables. I'm a little stumped\n> > > > as to how to either stabilize them or test some other way.\n> > >\n> > > Hmmm. First there is the username which appears, so there should be a\n> > > dedicated user for the test.\n> > >\n> > > I'm unsure how to work around the temporary schema number, which is\n> > > undeterministic with parallel execution it. I'm afraid the only viable\n> > > approach is not to show temporary tables, too bad:-(\n> > >\n> > > >> Doc?\n> > > >\n> > > > What further documentation does it need?\n> > >\n> > > Indeed, there is no precise doc, so nothing to update :-)/:-(\n> > >\n> > >\n> > > Maybe you could consider adding a case for prior 9.1 version, something\n> > > like:\n> > > ... 
case c.relistemp then 'temporary' else 'permanent' end as ...\n> > >\n> > >\n> > I was reviewing this patch and found a bug,\n> >\n> > create table t (i int);\n> > create index idx on t(i);\n> > \\di+\n> > psql: print.c:3452: printQuery: Assertion `opt->translate_columns ==\n> > ((void *)0) || opt->n_translate_columns >= cont.ncolumns' failed.\n> \n> Looking into this further, apparently the position of\n> \n> if (verbose)\n> {\n> + /*\n> + * Show whether the table is permanent, temporary, or unlogged.\n> + */\n> + if (pset.sversion >= 91000)\n> + appendPQExpBuffer(&buf,\n> + \",\\n case c.relpersistence when 'p' then 'permanent' when 't'\n> then 'temporary' when 'u' then 'unlogged' else 'unknown' end as\n> \\\"%s\\\"\",\n> + gettext_noop(\"Persistence\"));\n> \n> is not right, it is being called for indexes with verbose option also.\n> There should be an extra check for it being not called for index case.\n> Something like,\n> if (verbose)\n> {\n> /*\n> * Show whether the table is permanent, temporary, or unlogged.\n> */\n> if (!showIndexes)\n> if (pset.sversion >= 91000)\n> appendPQExpBuffer(&buf,\n> \",\\n case c.relpersistence when 'p' then 'permanent' when 't' then\n> 'temporary' when 'u' then 'unlogged' else 'unknown' end as \\\"%s\\\"\",\n> gettext_noop(\"Persistence\"));\n> \n> Not sure, how do modify it in a more neat way.\n\nI suspect that as this may get a little messier, but I've made it\nfairly neat short of a major refactor.\n\nBest,\nDavid.\n-- \nDavid Fetter <david(at)fetter(dot)org> http://fetter.org/\nPhone: +1 415 235 3778\n\nRemember to vote!\nConsider donating to Postgres: http://www.postgresql.org/about/donate",
"msg_date": "Sat, 27 Apr 2019 06:18:49 +0200",
"msg_from": "David Fetter <david@fetter.org>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH v1] Show whether tables are logged in \\dt+"
},
{
"msg_contents": "Hello David,\n\nPatch v3 applies, but compiles for me with a warning because the \nindentation of the following size block has been changed:\n\ndescribe.c: In function ‘listTables’:\ndescribe.c:3705:7: warning: this ‘if’ clause does not guard... \n[-Wmisleading-indentation]\n else if (pset.sversion >= 80100)\n ^~\ndescribe.c:3710:3: note: ...this statement, but the latter is misleadingly \nindented as if it were guarded by the ‘if’\n appendPQExpBuffer(&buf,\n ^~~~~~~~~~~~~~~~~\n\nMake check fails because of my temp schema was numbered 4 instead of 3, \nand I'm \"fabien\" rather than \"shackle\".\n\n>>>>> Included, but they're not stable for temp tables. I'm a little stumped\n>>>>> as to how to either stabilize them or test some other way.\n>>>>\n>>>> Hmmm. First there is the username which appears, so there should be a\n>>>> dedicated user for the test.\n>>>>\n>>>> I'm unsure how to work around the temporary schema number, which is\n>>>> undeterministic with parallel execution it. I'm afraid the only viable\n>>>> approach is not to show temporary tables, too bad:-(\n\nThe tests have not been fixed.\n\nI think that they need a dedicated user to replace \"shackle\", and I'm \nafraid that there temporary test schema instability cannot be easily fixed \nat the \"psql\" level, but would require some kind of TAP tests instead if \nit is to be checked. In the short term, do not.\n\nI checked that the \\di+ works, though. I've played with temporary views \nand \\dv as well.\n\nI discovered that you cannot have temporary unlogged objects, nor \ntemporary or unlogged materialized views. Intuitively I'd have thought \nthat these features would be orthogonal, but they are not. Also I created \nan unlogged table with a SERIAL which created a sequence. The table is \nunlogged but the sequence is permanent, which is probably ok.\n\nI only have packages down to pg 9.3, so I could not test prior 9.1. 
By \nlooking at the online documentation, is seems that relistemp appears in pg \n8.4, so the corresponding extraction should be guarded by this version. \nBefore that, temporary objects existed but were identified indirectly, \npossibly because they were stored in a temporary schema. I suggest not to \ntry to address cases prior 8.4.\n\n-- \nFabien.",
"msg_date": "Sat, 27 Apr 2019 09:19:57 +0200 (CEST)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": false,
"msg_subject": "nRe: [PATCH v1] Show whether tables are logged in \\dt+"
},
{
"msg_contents": "On Sat, Apr 27, 2019 at 09:19:57AM +0200, Fabien COELHO wrote:\n> \n> Hello David,\n> \n> Patch v3 applies, but compiles for me with a warning because the indentation\n> of the following size block has been changed:\n> \n> describe.c: In function ‘listTables’:\n> describe.c:3705:7: warning: this ‘if’ clause does not guard...\n> [-Wmisleading-indentation]\n> else if (pset.sversion >= 80100)\n> ^~\n> describe.c:3710:3: note: ...this statement, but the latter is misleadingly\n> indented as if it were guarded by the ‘if’\n> appendPQExpBuffer(&buf,\n> ^~~~~~~~~~~~~~~~~\n\nFixed.\n\n> Make check fails because of my temp schema was numbered 4 instead of 3, and\n> I'm \"fabien\" rather than \"shackle\".\n\nI think the way forward is to test this with TAP rather than the\nfixed-string method.\n\n> > > > > > Included, but they're not stable for temp tables. I'm a little stumped\n> > > > > > as to how to either stabilize them or test some other way.\n> > > > > \n> > > > > Hmmm. First there is the username which appears, so there should be a\n> > > > > dedicated user for the test.\n> > > > > \n> > > > > I'm unsure how to work around the temporary schema number, which is\n> > > > > undeterministic with parallel execution it. I'm afraid the only viable\n> > > > > approach is not to show temporary tables, too bad:-(\n> \n> The tests have not been fixed.\n> \n> I think that they need a dedicated user to replace \"shackle\", and I'm afraid\n> that there temporary test schema instability cannot be easily fixed at the\n> \"psql\" level, but would require some kind of TAP tests instead if it is to\n> be checked. In the short term, do not.\n\nChecks removed while I figure out a new TAP test.\n\n> I checked that the \\di+ works, though. I've played with temporary views and\n> \\dv as well.\n\nGreat!\n\n> I discovered that you cannot have temporary unlogged objects, nor\n> temporary or unlogged materialized views. 
Intuitively I'd have\n> thought that these features would be orthogonal, but they are not.\n\nThis seems like material for a different patch.\n\n> Also I created an unlogged table with a SERIAL which created a\n> sequence. The table is unlogged but the sequence is permanent, which\n> is probably ok.\n\n> I only have packages down to pg 9.3, so I could not test prior 9.1.\n> By looking at the online documentation, is seems that relistemp\n> appears in pg 8.4, so the corresponding extraction should be guarded\n> by this version. Before that, temporary objects existed but were\n> identified indirectly, possibly because they were stored in a\n> temporary schema. I suggest not to try to address cases prior 8.4.\n\nDone.\n\nBest,\nDavid.\n-- \nDavid Fetter <david(at)fetter(dot)org> http://fetter.org/\nPhone: +1 415 235 3778\n\nRemember to vote!\nConsider donating to Postgres: http://www.postgresql.org/about/donate",
"msg_date": "Sat, 27 Apr 2019 19:43:31 +0200",
"msg_from": "David Fetter <david@fetter.org>",
"msg_from_op": true,
"msg_subject": "Re: nRe: [PATCH v1] Show whether tables are logged in \\dt+"
},
{
"msg_contents": "Hello David,\n\nPatch applies. There seems to be a compilation issue:\n\n describe.c:5974:1: error: expected declaration or statement at end of\n input\n }\n\nAlso there is an added indentation problem: the size & description stuff \nhave been moved left but it should still in the verbose case, and a } is \nmissing after them, which fixes the compilation.\n\n>> Make check fails because of my temp schema was numbered 4 instead of 3, and\n>> I'm \"fabien\" rather than \"shackle\".\n>\n> I think the way forward is to test this with TAP rather than the\n> fixed-string method.\n\nOk.\n\n> Checks removed while I figure out a new TAP test.\n\n>> I only have packages down to pg 9.3, so I could not test prior 9.1.\n>> By looking at the online documentation, is seems that relistemp\n>> appears in pg 8.4, so the corresponding extraction should be guarded\n>> by this version. Before that, temporary objects existed but were\n>> identified indirectly, possibly because they were stored in a\n>> temporary schema. I suggest not to try to address cases prior 8.4.\n>\n> Done.\n\nAfter some checking, I think that there is an issue with the version \nnumbers:\n - 9.1 is 90100, not 91000\n - 8.4 is 80400, not 84000\n\nAlso, it seems that describes builds queries with uppercase keywords, so \nprobably the new additions should follow that style: case -> CASE (and \nalso when then else end as…)\n\n-- \nFabien.",
"msg_date": "Sat, 27 Apr 2019 22:38:50 +0200 (CEST)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": false,
"msg_subject": "Re: nRe: [PATCH v1] Show whether tables are logged in \\dt+"
},
{
"msg_contents": "On Sat, Apr 27, 2019 at 10:38:50PM +0200, Fabien COELHO wrote:\n> \n> Hello David,\n> \n> Patch applies. There seems to be a compilation issue:\n> \n> describe.c:5974:1: error: expected declaration or statement at end of\n> input\n> }\n\nThis is in brown paper bag territory. Fixed.\n\n> > I think the way forward is to test this with TAP rather than the\n> > fixed-string method.\n> \n> Ok.\n\nI've sent a separate patch extracted from the one you sent which adds\nstdin to our TAP testing infrastructure. I hope it lands so it'll be\nsimpler to add these tests in a future version of the patch.\n\n> > Checks removed while I figure out a new TAP test.\n> \n> > > I only have packages down to pg 9.3, so I could not test prior 9.1.\n> > > By looking at the online documentation, is seems that relistemp\n> > > appears in pg 8.4, so the corresponding extraction should be guarded\n> > > by this version. Before that, temporary objects existed but were\n> > > identified indirectly, possibly because they were stored in a\n> > > temporary schema. I suggest not to try to address cases prior 8.4.\n> > \n> > Done.\n> \n> After some checking, I think that there is an issue with the version\n> numbers:\n> - 9.1 is 90100, not 91000\n> - 8.4 is 80400, not 84000\n\nAnother brown paper bag, now fixed.\n\n> Also, it seems that describes builds queries with uppercase\n> keywords, so probably the new additions should follow that style:\n> case -> CASE (and also when then else end as…)\n\nDone.\n\nBest,\nDavid.\n-- \nDavid Fetter <david(at)fetter(dot)org> http://fetter.org/\nPhone: +1 415 235 3778\n\nRemember to vote!\nConsider donating to Postgres: http://www.postgresql.org/about/donate",
"msg_date": "Sun, 28 Apr 2019 17:15:41 +0200",
"msg_from": "David Fetter <david@fetter.org>",
"msg_from_op": true,
"msg_subject": "[PATCH v5] Show detailed table persistence in \\dt+"
},
{
"msg_contents": "Not particularly on topic, but: including a patch version number in your\nsubject headings is pretty unfriendly IMO, because it breaks threading\nfor people whose MUAs do threading by matching up subject lines.\n\nI don't actually see the point of the [PATCH] annotation at all, because\nthe thread is soon going to contain lots of messages with the same subject\nline but no embedded patch. Like this one. So it's just noise with no\ninformation content worth noticing.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 28 Apr 2019 13:14:01 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH v5] Show detailed table persistence in \\dt+"
},
{
"msg_contents": "\nHello David,\n\n>> Patch applies. There seems to be a compilation issue:\n>>\n>> describe.c:5974:1: error: expected declaration or statement at end of\n>> input\n>> }\n>\n> This is in brown paper bag territory. Fixed.\n\nI do not understand why you move both size and description out of the \nverbose mode, it should be there only when under verbose?\n\n> I've sent a separate patch extracted from the one you sent which adds\n> stdin to our TAP testing infrastructure. I hope it lands so it'll be\n> simpler to add these tests in a future version of the patch.\n\nWhy not. As I'm the one who wrote the modified function, probably I could \nhave thought of providing an input. I'm not sure it is worth a dedicated \nsubmission, could go together with any commit that would use it.\n\n-- \nFabien.\n\n\n",
"msg_date": "Sun, 28 Apr 2019 19:26:55 +0200 (CEST)",
"msg_from": "Fabien COELHO <fabien.coelho@mines-paristech.fr>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH v5] Show detailed table persistence in \\dt+"
},
{
"msg_contents": "On Sun, Apr 28, 2019 at 01:14:01PM -0400, Tom Lane wrote:\n> Not particularly on topic, but: including a patch version number in your\n> subject headings is pretty unfriendly IMO, because it breaks threading\n> for people whose MUAs do threading by matching up subject lines.\n\nThanks for letting me know about those MUAs.\n\n> I don't actually see the point of the [PATCH] annotation at all,\n> because the thread is soon going to contain lots of messages with\n> the same subject line but no embedded patch. Like this one. So\n> it's just noise with no information content worth noticing.\n\nOK\n\nBest,\nDavid.\n-- \nDavid Fetter <david(at)fetter(dot)org> http://fetter.org/\nPhone: +1 415 235 3778\n\nRemember to vote!\nConsider donating to Postgres: http://www.postgresql.org/about/donate\n\n\n",
"msg_date": "Mon, 29 Apr 2019 05:58:47 +0200",
"msg_from": "David Fetter <david@fetter.org>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH v5] Show detailed table persistence in \\dt+"
},
{
"msg_contents": "On Sun, Apr 28, 2019 at 07:26:55PM +0200, Fabien COELHO wrote:\n> \n> Hello David,\n> \n> > > Patch applies. There seems to be a compilation issue:\n> > > \n> > > describe.c:5974:1: error: expected declaration or statement at end of\n> > > input\n> > > }\n> > \n> > This is in brown paper bag territory. Fixed.\n> \n> I do not understand why you move both size and description out of the\n> verbose mode, it should be there only when under verbose?\n\nMy mistake. Fixed.\n\n> > I've sent a separate patch extracted from the one you sent which adds\n> > stdin to our TAP testing infrastructure. I hope it lands so it'll be\n> > simpler to add these tests in a future version of the patch.\n> \n> Why not. As I'm the one who wrote the modified function, probably I could\n> have thought of providing an input. I'm not sure it is worth a dedicated\n> submission, could go together with any commit that would use it.\n\nMy hope is that this is seen as a bug fix and gets back-patched.\n\nBest,\nDavid.\n-- \nDavid Fetter <david(at)fetter(dot)org> http://fetter.org/\nPhone: +1 415 235 3778\n\nRemember to vote!\nConsider donating to Postgres: http://www.postgresql.org/about/donate",
"msg_date": "Mon, 29 Apr 2019 06:19:02 +0200",
"msg_from": "David Fetter <david@fetter.org>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH v5] Show detailed table persistence in \\dt+"
},
{
"msg_contents": "Hello David,\n\n> My mistake. Fixed.\n\nAbout v6: applies, compiles, make check ok.\n\nCode is ok.\n\nMaybe there could be a comment to tell that prior version are not \naddressed, something like:\n\n ...\n }\n /* else do not bother guessing the temporary status on old version */\n\nNo tests, pending an added TAP test infrastructure for psql.\n\nI have a question a out the index stuff: indexes seem to appear as entries \nin pg_class, and ISTM that they can be temporary/unlogged/permanent as \nattached to corresponding objects. So the guard is not very useful and it \ncould make sense to show the information on indexes as well.\n\nAfter removing the guard:\n\n postgres=# \\dtmv+ *foo*\n List of relations\n ┌───────────┬──────┬───────────────────┬────────┬─────────────┬─────────┬─────────────┐\n │  Schema   │ Name │       Type        │ Owner  │ Persistence │  Size   │ Description │\n ├───────────┼──────┼───────────────────┼────────┼─────────────┼─────────┼─────────────┤\n │ pg_temp_3 │ foo  │ table             │ fabien │ temporary   │ 0 bytes │             │\n │ public    │ mfoo │ materialized view │ fabien │ permanent   │ 0 bytes │             │\n │ public    │ ufoo │ table             │ fabien │ unlogged    │ 0 bytes │             │\n └───────────┴──────┴───────────────────┴────────┴─────────────┴─────────┴─────────────┘\n (3 rows)\n\n postgres=# \\di+ *foo*\n List of relations\n ┌───────────┬───────────┬───────┬────────┬───────┬─────────────┬────────────┬─────────────┐\n │  Schema   │   Name    │ Type  │ Owner  │ Table │ Persistence │    Size    │ Description │\n ├───────────┼───────────┼───────┼────────┼───────┼─────────────┼────────────┼─────────────┤\n │ pg_temp_3 │ foo_pkey  │ index │ fabien │ foo   │ temporary   │ 8192 bytes │             │\n │ public    │ ufoo_pkey │ index │ fabien │ ufoo  │ unlogged    │ 16 kB      │             │\n │ public    │ ufoou     │ index │ fabien │ ufoo  │ unlogged    │ 16 kB      │             │\n └───────────┴───────────┴───────┴────────┴───────┴─────────────┴────────────┴─────────────┘\n (3 rows)\n\nIs there a special reason not to show it?\n\n-- \nFabien.",
"msg_date": "Mon, 29 Apr 2019 08:48:17 +0200 (CEST)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH v5] Show detailed table persistence in \\dt+"
},
{
"msg_contents": "On Mon, Apr 29, 2019 at 08:48:17AM +0200, Fabien COELHO wrote:\n> \n> Hello David,\n> \n> > My mistake. Fixed.\n> \n> About v6: applies, compiles, make check ok.\n> \n> Code is ok.\n> \n> Maybe there could be a comment to tell that prior version are not addressed,\n> something like:\n> \n> ...\n> }\n> /* else do not bother guessing the temporary status on old version */\n\nDid something like this.\n\n> No tests, pending an added TAP test infrastructure for psql.\n\nRight.\n\n> I have a question a out the index stuff: indexes seem to appear as entries\n> in pg_class, and ISTM that they can be temporary/unlogged/permanent as\n> attached to corresponding objects. So the guard is not very useful and it\n> could make sense to show the information on indexes as well.\n\nDone.\n\nBest,\nDavid.\n-- \nDavid Fetter <david(at)fetter(dot)org> http://fetter.org/\nPhone: +1 415 235 3778\n\nRemember to vote!\nConsider donating to Postgres: http://www.postgresql.org/about/donate",
"msg_date": "Mon, 29 Apr 2019 16:23:58 +0200",
"msg_from": "David Fetter <david@fetter.org>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH v5] Show detailed table persistence in \\dt+"
},
{
"msg_contents": "\nHello David,\n\nPatch v7 applies, compiles, make check ok. No docs needed.\nNo tests, pending some TAP infrastructure.\n\nI could not test with a version between 8.4 & 9.1.\n\nNo further comments. Marked as ready.\n\n-- \nFabien.\n\n\n",
"msg_date": "Mon, 29 Apr 2019 22:03:56 +0200 (CEST)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH v5] Show detailed table persistence in \\dt+"
},
{
"msg_contents": "On Sat, 27 Apr 2019 at 06:18, David Fetter <david@fetter.org> wrote:\n>\n> On Fri, Apr 26, 2019 at 04:22:18PM +0200, Rafia Sabih wrote:\n> > On Fri, 26 Apr 2019 at 14:49, Rafia Sabih <rafia.pghackers@gmail.com> wrote:\n> > >\n> > > On Wed, 24 Apr 2019 at 10:30, Fabien COELHO <coelho@cri.ensmp.fr> wrote:\n> > > >\n> > > >\n> > > > Hello David,\n> > > >\n> > > > >>> I noticed that there wasn't a bulk way to see table logged-ness in psql,\n> > > > >>> so I made it part of \\dt+.\n> > > > >>\n> > > > >> Applies, compiles, works for me.\n> > > > >>\n> > > > >> ISTM That temporary-ness is not shown either. Maybe the persistence column\n> > > > >> should be shown as is?\n> > > > >\n> > > > > Temporariness added, but not raw.\n> > > >\n> > > > Ok, it is better like this way.\n> > > >\n> > > > >> Tests?\n> > > > >\n> > > > > Included, but they're not stable for temp tables. I'm a little stumped\n> > > > > as to how to either stabilize them or test some other way.\n> > > >\n> > > > Hmmm. First there is the username which appears, so there should be a\n> > > > dedicated user for the test.\n> > > >\n> > > > I'm unsure how to work around the temporary schema number, which is\n> > > > undeterministic with parallel execution it. I'm afraid the only viable\n> > > > approach is not to show temporary tables, too bad:-(\n> > > >\n> > > > >> Doc?\n> > > > >\n> > > > > What further documentation does it need?\n> > > >\n> > > > Indeed, there is no precise doc, so nothing to update :-)/:-(\n> > > >\n> > > >\n> > > > Maybe you could consider adding a case for prior 9.1 version, something\n> > > > like:\n> > > > ... 
case c.relistemp then 'temporary' else 'permanent' end as ...\n> > > >\n> > > >\n> > > I was reviewing this patch and found a bug,\n> > >\n> > > create table t (i int);\n> > > create index idx on t(i);\n> > > \\di+\n> > > psql: print.c:3452: printQuery: Assertion `opt->translate_columns ==\n> > > ((void *)0) || opt->n_translate_columns >= cont.ncolumns' failed.\n> >\n> > Looking into this further, apparently the position of\n> >\n> > if (verbose)\n> > {\n> > + /*\n> > + * Show whether the table is permanent, temporary, or unlogged.\n> > + */\n> > + if (pset.sversion >= 91000)\n> > + appendPQExpBuffer(&buf,\n> > + \",\\n case c.relpersistence when 'p' then 'permanent' when 't'\n> > then 'temporary' when 'u' then 'unlogged' else 'unknown' end as\n> > \\\"%s\\\"\",\n> > + gettext_noop(\"Persistence\"));\n> >\n> > is not right, it is being called for indexes with verbose option also.\n> > There should be an extra check for it being not called for index case.\n> > Something like,\n> > if (verbose)\n> > {\n> > /*\n> > * Show whether the table is permanent, temporary, or unlogged.\n> > */\n> > if (!showIndexes)\n> > if (pset.sversion >= 91000)\n> > appendPQExpBuffer(&buf,\n> > \",\\n case c.relpersistence when 'p' then 'permanent' when 't' then\n> > 'temporary' when 'u' then 'unlogged' else 'unknown' end as \\\"%s\\\"\",\n> > gettext_noop(\"Persistence\"));\n> >\n> > Not sure, how do modify it in a more neat way.\n>\n> I suspect that as this may get a little messier, but I've made it\n> fairly neat short of a major refactor.\n>\nI found the following warning on the compilation,\ndescribe.c: In function ‘listTables’:\ndescribe.c:3705:7: warning: this ‘if’ clause does not guard...\n[-Wmisleading-indentation]\n else if (pset.sversion >= 80100)\n ^~\ndescribe.c:3710:3: note: ...this statement, but the latter is\nmisleadingly indented as if it were guarded by the ‘if’\n appendPQExpBuffer(&buf,\n\nTalking of indentation, you might want to run pgindent once. 
Other\nthan that the patch looks good to me.\n-- \nRegards,\nRafia Sabih\n\n\n",
"msg_date": "Fri, 3 May 2019 09:44:54 +0200",
"msg_from": "Rafia Sabih <rafia.pghackers@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH v1] Show whether tables are logged in \\dt+"
},
{
"msg_contents": "David Fetter <david@fetter.org> writes:\n> [ v7-0001-Show-detailed-relation-persistence-in-dt.patch ]\n\nI looked this over and had a few suggestions, as per attached v8:\n\n* The persistence description values ought to be translatable, as\nis the usual practice in describe.c. This is slightly painful\nbecause it requires tweaking the translate_columns[] values in a\nnon-constant way, but it's not that bad.\n\n* I dropped the \"ELSE 'unknown'\" bit in favor of just emitting NULL\nif the persistence isn't recognized. This is the same way that the\ntable-type CASE just above does it, and I see no reason to be different.\nMoreover, there are message-style-guidelines issues with what to print\nif you do want to print something; \"unknown\" doesn't cut it.\n\n* I also dropped the logic for pre-9.1 servers, because the existing\nprecedent in describeOneTableDetails() is that we only consider\nrelpersistence for >= 9.1, and I don't see a real good reason to\ndeviate from that. 9.0 and before are long out of support anyway.\n\nIf there aren't objections, I think v8 is committable.\n\n\t\t\tregards, tom lane",
"msg_date": "Tue, 02 Jul 2019 15:10:37 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH v5] Show detailed table persistence in \\dt+"
},
{
"msg_contents": "On 2019-Jul-02, Tom Lane wrote:\n\n> * The persistence description values ought to be translatable, as\n> is the usual practice in describe.c. This is slightly painful\n> because it requires tweaking the translate_columns[] values in a\n> non-constant way, but it's not that bad.\n\nLGTM. I only fear that the cols_so_far thing is easy to break, and the\nbreakage will be easy to miss.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Tue, 2 Jul 2019 15:56:18 -0400",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH v5] Show detailed table persistence in \\dt+"
},
{
"msg_contents": "Alvaro Herrera <alvherre@2ndquadrant.com> writes:\n> On 2019-Jul-02, Tom Lane wrote:\n>> * The persistence description values ought to be translatable, as\n>> is the usual practice in describe.c. This is slightly painful\n>> because it requires tweaking the translate_columns[] values in a\n>> non-constant way, but it's not that bad.\n\n> LGTM. I only fear that the cols_so_far thing is easy to break, and the\n> breakage will be easy to miss.\n\nYeah, but that's pretty true of all the translatability stuff in\ndescribe.c. I wonder if there's any way to set up tests for that.\nThe fact that the .po files lag so far behind the source code seems\nlike an impediment --- even if we made a test case that presumed\n--enable-nls and tried to exercise this, the lack of translations\nfor the new words would get in the way for a long while.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 02 Jul 2019 16:16:27 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH v5] Show detailed table persistence in \\dt+"
},
{
"msg_contents": "> On 2 Jul 2019, at 22:16, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> even if we made a test case that presumed\n> --enable-nls and tried to exercise this, the lack of translations\n> for the new words would get in the way for a long while.\n\nFor testing though, couldn’t we have an autogenerated .po which has a unique\nand predictable dummy value translation for every string (the string backwards\nor something), which can be used for testing? This is all hand-wavy since I\nhaven’t tried actually doing it, but it seems a better option than waiting for\n.po files to be available. Or am I missing the point of the value of the\ndiscussed test?\n\ncheers ./daniel\n\n",
"msg_date": "Tue, 2 Jul 2019 22:29:10 +0200",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH v5] Show detailed table persistence in \\dt+"
},
{
"msg_contents": "On 2019-Jul-02, Daniel Gustafsson wrote:\n\n> > On 2 Jul 2019, at 22:16, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> \n> > even if we made a test case that presumed\n> > --enable-nls and tried to exercise this, the lack of translations\n> > for the new words would get in the way for a long while.\n> \n> For testing though, couldn’t we have an autogenerated .po which has a unique\n> and predictable dummy value translation for every string (the string backwards\n> or something), which can be used for testing? This is all hand-wavy since I\n> haven’t tried actually doing it, but it seems a better option than waiting for\n> .po files to be available. Or am I missing the point of the value of the\n> discussed test?\n\nHmm, no, I think that's precisely it, and that sounds like a pretty good\nstarter idea ... but I wouldn't want to be the one to have to set this\nup -- it seems pretty laborious.\n\nAnyway I'm not objecting to the patch -- I agree that we're already not\ntesting translatability and that this patch shouldn't be forced to start\ndoing it.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Tue, 2 Jul 2019 16:35:26 -0400",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH v5] Show detailed table persistence in \\dt+"
},
{
"msg_contents": "> On 2 Jul 2019, at 22:35, Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\n\n> Anyway I'm not objecting to the patch -- I agree that we're already not\n> testing translatability and that this patch shouldn't be forced to start\n> doing it.\n\nI forgot to add that to my previous email, the patch as it stands in v8 looks\ngood to me. I’ve missed having this on many occasions.\n\ncheers ./daniel\n\n",
"msg_date": "Tue, 2 Jul 2019 22:38:32 +0200",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH v5] Show detailed table persistence in \\dt+"
},
{
"msg_contents": "Daniel Gustafsson <daniel@yesql.se> writes:\n>> On 2 Jul 2019, at 22:35, Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\n>> Anyway I'm not objecting to the patch -- I agree that we're already not\n>> testing translatability and that this patch shouldn't be forced to start\n>> doing it.\n\n> I forgot to add that to my previous email, the patch as it stands in v8 looks\n> good to me. I’ve missed having this on many occasions.\n\nOK, pushed.\n\nFor the record, I did verify that the translatability logic worked\nby adding some bogus entries to psql/po/es.po and seeing that the\ndisplay changed to match.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 03 Jul 2019 11:49:30 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH v5] Show detailed table persistence in \\dt+"
}
] |
[
{
"msg_contents": "Hi,\r\n\r\nWhen I compile PostgreSQL-11.2 on SmartOS, I find the following errors:\r\n\r\nUndefined first referenced\r\n symbol in file\r\nper_MultiFuncCall adminpack.o\r\nend_MultiFuncCall adminpack.o\r\nBuildTupleFromCStrings adminpack.o\r\nDecodeDateTime adminpack.o\r\nTupleDescGetAttInMetadata adminpack.o\r\npath_is_prefix_of_path adminpack.o\r\ncanonicalize_path adminpack.o\r\ntext_to_cstring adminpack.o\r\nerrmsg adminpack.o\r\nsuperuser adminpack.o\r\nerrcode_for_file_access adminpack.o\r\npalloc adminpack.o\r\nCurrentMemoryContext adminpack.o\r\npstrdup adminpack.o\r\nReadDir adminpack.o\r\nFreeFile adminpack.o\r\nerrfinish adminpack.o\r\ninit_MultiFuncCall adminpack.o\r\nerrstart adminpack.o\r\nAllocateDir adminpack.o\r\nGetUserId adminpack.o\r\nis_member_of_role adminpack.o\r\npsprintf adminpack.o\r\nDataDir adminpack.o\r\nLog_filename adminpack.o\r\nLog_directory adminpack.o\r\nAllocateFile adminpack.o\r\npath_is_relative_and_below_cwd adminpack.o\r\nHeapTupleHeaderGetDatum adminpack.o\r\nerrcode adminpack.o\r\nFreeDir adminpack.o\r\nParseDateTime adminpack.o\r\npath_contains_parent_reference adminpack.o\r\npg_detoast_datum_packed adminpack.o\r\nCreateTemplateTupleDesc adminpack.o\r\nTupleDescInitEntry adminpack.o\r\nld: warning: symbol referencing errors\r\nmake[1]: Leaving directory \r\n'/home/postgres/postgresql-11.2/contrib/adminpack'\r\n\r\n\r\nMy environment is:\r\n\r\n# cat /etc/release\r\n SmartOS x86_64\r\n Copyright 2010 Sun Microsystems, Inc. All Rights Reserved.\r\n Copyright 2015 Joyent, Inc. 
All Rights Reserved.\r\n Use is subject to license terms.\r\n See joyent_20161108T160947Z for assembly date and time.\r\n# $ pg_config\r\nBINDIR = /home/postgres/pg11.2/bin\r\nDOCDIR = /home/postgres/pg11.2/share/doc\r\nHTMLDIR = /home/postgres/pg11.2/share/doc\r\nINCLUDEDIR = /home/postgres/pg11.2/include\r\nPKGINCLUDEDIR = /home/postgres/pg11.2/include\r\nINCLUDEDIR-SERVER = /home/postgres/pg11.2/include/server\r\nLIBDIR = /home/postgres/pg11.2/lib\r\nPKGLIBDIR = /home/postgres/pg11.2/lib\r\nLOCALEDIR = /home/postgres/pg11.2/share/locale\r\nMANDIR = /home/postgres/pg11.2/share/man\r\nSHAREDIR = /home/postgres/pg11.2/share\r\nSYSCONFDIR = /home/postgres/pg11.2/etc\r\nPGXS = /home/postgres/pg11.2/lib/pgxs/src/makefiles/pgxs.mk\r\nCONFIGURE = '--prefix=/home/postgres/pg11.2' 'CFLAGS=-g -O0'\r\nCC = gcc\r\nCPPFLAGS =\r\nCFLAGS = -Wall -Wmissing-prototypes -Wpointer-arith \r\n-Wdeclaration-after-statement -Wendif-labels -Wmissing-format-attribute \r\n-Wformat-security -fno-strict-aliasing -fwrapv \r\n-fexcess-precision=standard -g -O0\r\nCFLAGS_SL = -fPIC\r\nLDFLAGS = -Wl,-R'/home/postgres/pg11.2/lib'\r\nLDFLAGS_EX =\r\nLDFLAGS_SL =\r\nLIBS = -lpgcommon -lpgport -lz -lreadline -lnsl -lsocket -lm\r\nVERSION = PostgreSQL 11.2\r\n\r\nCan anyone help me out? Thanks!\r\n\r\nBest regards!\r\n\r\nJapin Li\r\n",
"msg_date": "Tue, 23 Apr 2019 03:56:06 +0000",
"msg_from": "Li Japin <japinli@hotmail.com>",
"msg_from_op": true,
"msg_subject": "Symbol referencing errors"
},
{
"msg_contents": "Li Japin <japinli@hotmail.com> writes:\n> When I compile PostgreSQL-11.2 on SmartOS, I find the following errors:\n> ...\n> ld: warning: symbol referencing errors\n\nYeah, our SmartOS buildfarm members show those warnings too, eg\n\nhttps://buildfarm.postgresql.org/cgi-bin/show_stage_log.pl?nm=damselfly&dt=2019-04-22%2010%3A00%3A15&stg=make-contrib\n\nAFAICT they're harmless, so my advice is just ignore them.\n\nIf you're sufficiently annoyed by them to find the cause\nand try to fix it, go ahead, but I haven't heard anyone\nelse worried about it. It might be that SmartOS wants\nsomething like what we have to do on macOS and AIX,\nie provide the core postgres executable in some sort of\nlinker switch while linking shlibs that will be loaded\nby that executable.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 23 Apr 2019 00:09:52 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Symbol referencing errors"
},
{
"msg_contents": "\r\n\r\nOn 4/23/19 12:09 PM, Tom Lane wrote:\r\n> AFAICT they're harmless, so my advice is just ignore them.\r\n>\r\n> If you're sufficiently annoyed by them to find the cause\r\n> and try to fix it, go ahead, but I haven't heard anyone\r\n> else worried about it. It might be that SmartOS wants\r\n> something like what we have to do on macOS and AIX,\r\n> ie provide the core postgres executable in some sort of\r\n> linker switch while linking shlibs that will be loaded\r\n> by that executable.\r\nYes, those errors does not impact the postgresql, but when\r\nI use oracle_fdw extension, I couldn't startup the postgresql,\r\nand I find that the dlopen throw an error which lead postmaster\r\nexit, and there is not more information.\r\n\r\n\r\nregards,\r\n\r\nJapin Li\r\n\r\n",
"msg_date": "Tue, 23 Apr 2019 04:26:12 +0000",
"msg_from": "Li Japin <japinli@hotmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Symbol referencing errors"
},
{
"msg_contents": ">>>>> \"Tom\" == Tom Lane <tgl@sss.pgh.pa.us> writes:\n\n >> When I compile PostgreSQL-11.2 on SmartOS, I find the following errors:\n >> ...\n >> ld: warning: symbol referencing errors\n\n Tom> Yeah, our SmartOS buildfarm members show those warnings too, eg\n\n Tom> https://buildfarm.postgresql.org/cgi-bin/show_stage_log.pl?nm=damselfly&dt=2019-04-22%2010%3A00%3A15&stg=make-contrib\n\n Tom> AFAICT they're harmless, so my advice is just ignore them.\n\n Tom> If you're sufficiently annoyed by them to find the cause\n Tom> and try to fix it, go ahead, but I haven't heard anyone\n Tom> else worried about it. It might be that SmartOS wants\n Tom> something like what we have to do on macOS and AIX,\n Tom> ie provide the core postgres executable in some sort of\n Tom> linker switch while linking shlibs that will be loaded\n Tom> by that executable.\n\nI wonder if it's the use of -Bsymbolic that causes this (buildfarm logs\ndon't seem to go back far enough to check). (Note to original poster:\n-Bsymbolic is there for a reason, you can't just remove it - but see\nbelow.)\n\nSince this is an ELF platform - arguably the closest thing to the\noriginal reference ELF platform, at least by descent - it should not\nrequire the kinds of tricks used on macOS and AIX; but we haven't done\nthe work needed to test using version scripts in place of -Bsymbolic for\nfixing the symbol conflict problems. That ought to be a relatively\nstraightforward project for someone with access to a system to test on\n(and I'm happy to advise on it).\n\nThe thing to do would be to try and copy the changes made to the *BSD\nports in commit e3d77ea6b instead of the change made in 4fa3741d1. The\ncontrib/postgres_fdw tests should show whether it worked or not.\n\n-- \nAndrew (irc:RhodiumToad)\n\n\n",
"msg_date": "Tue, 23 Apr 2019 06:23:13 +0100",
"msg_from": "Andrew Gierth <andrew@tao11.riddles.org.uk>",
"msg_from_op": false,
"msg_subject": "Re: Symbol referencing errors"
},
{
"msg_contents": "On Tue, 2019-04-23 at 04:26 +0000, Li Japin wrote:\n> Yes, those errors does not impact the postgresql, but when\n> I use oracle_fdw extension, I couldn't startup the postgresql,\n> and I find that the dlopen throw an error which lead postmaster\n> exit, and there is not more information.\n\nThat may well be a bug in oracle_fdw, since I have no reports of\nanybody running it on that operating system.\n\nMaybe you should open an oracle_fdw issue, but I don't know how\nmuch I can help you, since this is the first time I have heard\nof SmartOS.\n\nYours,\nLaurenz Albe\n\n\n\n",
"msg_date": "Tue, 23 Apr 2019 09:09:46 +0200",
"msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>",
"msg_from_op": false,
"msg_subject": "Re: Symbol referencing errors"
},
{
"msg_contents": "On 2019-Apr-23, Laurenz Albe wrote:\n\n> Maybe you should open an oracle_fdw issue, but I don't know how\n> much I can help you, since this is the first time I have heard\n> of SmartOS.\n\nSmartOS is just the continuation of OpenSolaris, AFAIU.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Tue, 23 Apr 2019 13:23:45 -0400",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Symbol referencing errors"
},
{
"msg_contents": "Alvaro Herrera <alvherre@2ndquadrant.com> writes:\n> On 2019-Apr-23, Laurenz Albe wrote:\n>> Maybe you should open an oracle_fdw issue, but I don't know how\n>> much I can help you, since this is the first time I have heard\n>> of SmartOS.\n\n> SmartOS is just the continuation of OpenSolaris, AFAIU.\n\nYeah. You can see these same link warnings on castoroides and\nprotosciurus, though they're no longer building HEAD for\nlack of C99-compliant system headers.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 23 Apr 2019 13:27:19 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Symbol referencing errors"
},
{
"msg_contents": "Hi,\r\n\r\nFinally, I find this crash is caused by shmget_osm, which does not support SmartOS (maybe,\r\nI am not sure). When I install Oracle Instant Client 12.2.0.1.0, it works.\r\n\r\nhttps://github.com/laurenz/oracle_fdw/issues/313\r\n\r\n\r\nOn 4/23/19 3:09 PM, Laurenz Albe wrote:\r\n\r\nOn Tue, 2019-04-23 at 04:26 +0000, Li Japin wrote:\r\n\r\n\r\nYes, those errors does not impact the postgresql, but when\r\nI use oracle_fdw extension, I couldn't startup the postgresql,\r\nand I find that the dlopen throw an error which lead postmaster\r\nexit, and there is not more information.\r\n\r\n\r\nThat may wall be a bug in oracle_fdw, since I have no reports of\r\nanybody running it on that operating system.\r\n\r\nMaybe you should open an oracle_fdw issue, but I don't know how\r\nmuch I can help you, since this is the first time I have heard\r\nof SmartOS.",
"msg_date": "Wed, 24 Apr 2019 07:33:44 +0000",
"msg_from": "Li Japin <japinli@hotmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Symbol referencing errors"
},
{
"msg_contents": "Hi,\n\nOn 2019-04-23 06:23:13 +0100, Andrew Gierth wrote:\n> I wonder if it's the use of -Bsymbolic that causes this (buildfarm logs\n> don't seem to go back far enough to check). (Note to original poster:\n> -Bsymbolic is there for a reason, you can't just remove it - but see\n> below.)\n\nFor the record, yes, the \"ld: warning: symbol referencing errors\" warnings are\ndue to -Bsymbolic while linking extensions. The man page says:\n\"The link-editor issues warnings for undefined symbols unless -z defs overrides\"\n\n\n> Since this is an ELF platform - arguably the closest thing to the\n> original reference ELF platform, at least by descent - it should not\n> require the kinds of tricks used on macOS and AIX; but we haven't done\n> the work needed to test using version scripts in place of -Bsymbolic for\n> fixing the symbol conflict problems. That ought to be a relatively\n> straightforward project for someone with access to a system to test on\n> (and I'm happy to advise on it).\n\nIt's indeed trivial - the only change needed from linux is to replace\n-Wl,--version-script=... with -Wl,-M...\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 23 Aug 2022 01:34:36 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Symbol referencing errors"
},
{
"msg_contents": "Hi,\n\nOn 2022-08-23 01:34:36 -0700, Andres Freund wrote:\n> On 2019-04-23 06:23:13 +0100, Andrew Gierth wrote:\n> > I wonder if it's the use of -Bsymbolic that causes this (buildfarm logs\n> > don't seem to go back far enough to check). (Note to original poster:\n> > -Bsymbolic is there for a reason, you can't just remove it - but see\n> > below.)\n> \n> For the record, yes, the \"ld: warning: symbol referencing errors\" warnings are\n> due to -Bsymbolic while linking extensions. The man page says:\n> \"The link-editor issues warnings for undefined symbols unless -z defs overrides\"\n> \n> \n> > Since this is an ELF platform - arguably the closest thing to the\n> > original reference ELF platform, at least by descent - it should not\n> > require the kinds of tricks used on macOS and AIX; but we haven't done\n> > the work needed to test using version scripts in place of -Bsymbolic for\n> > fixing the symbol conflict problems. That ought to be a relatively\n> > straightforward project for someone with access to a system to test on\n> > (and I'm happy to advise on it).\n> \n> It's indeed trivial - the only change needed from linux is to replace\n> -Wl,--version-script=... with -Wl,-M...\n\nPatch attached. Passed check-world (without tap tests, didn't install the perl\nmods) on solaris. Does anybody see a reason not to apply? Even just having\nless noisy build logs seem like an advantage.\n\nGreetings,\n\nAndres Freund",
"msg_date": "Tue, 23 Aug 2022 19:04:57 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Symbol referencing errors"
}
] |
[
{
"msg_contents": "Hello All,\n\nPlease find attached a patch that explains Log-Shipping standby server\nmajor upgrades.\nWe agreed to do this in\nhttps://www.postgresql.org/message-id/CAA4eK1%2Bo6ErVAh484VtE91wow1-uOysohSvb0TS52Ei76PzOKg%40mail.gmail.com\n\nThanks,\nKonstantin Evteev",
"msg_date": "Tue, 23 Apr 2019 21:31:39 +0300",
"msg_from": "Konstantin Evteev <konst583@gmail.com>",
"msg_from_op": true,
"msg_subject": "patch that explains Log-Shipping standby server major upgrades"
}
] |
[
{
"msg_contents": "Per my comment at https://postgr.es/m/20190422225129.GA6126@alvherre.pgsql\nI think that pg_dump can possibly cause bogus partition definitions,\nwhen the users explicitly decide to join tables as partitions that have\ndifferent column ordering than the parent table. Any COPY or INSERT\ncommand without an explicit column list that tries to put tuples in the\ntable will fail after the restore.\n\nTom Lane said:\n\n> I haven't looked at the partitioning code, but I am quite sure that that's\n> always happened for old-style inheritance children, and I imagine pg_dump\n> is just duplicating that old behavior.\n\nActually, the new code is unrelated to the old one; for legacy\ninheritance, the children are always created exactly as they were\ncreated at definition time. If you use ALTER TABLE ... INHERITS\n(attach a table as a child after creation) then obviously the child\ntable cannot be modified to match its new parent; and pg_dump reproduces\nthe exact column ordering that the table originally had. If you use\n\"CREATE TABLE ... INHERITS (parent)\" then the child columns are reordered\n*at that point* (creation time); the dump will, again, reproduce the\nexact same definition.\n\n\nI think failing to reproduce the exact same definition is a pg_dump bug\nthat should be fixed and backpatched to pg10. It's just sheer luck that\nnobody has complained of being bitten by it.\n\n-- \nÁlvaro Herrera http://www.twitter.com/alvherre\n\n\n",
"msg_date": "Tue, 23 Apr 2019 14:50:07 -0400",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "pg_dump partitions can lead to inconsistent state after restore"
},
{
"msg_contents": "On Wed, 24 Apr 2019 at 06:50, Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\n> Per my comment at https://postgr.es/m/20190422225129.GA6126@alvherre.pgsql\n> I think that pg_dump can possibly cause bogus partition definitions,\n> when the users explicitly decide to join tables as partitions that have\n> different column ordering than the parent table. Any COPY or INSERT\n> command without an explicit column list that tries to put tuples in the\n> table will fail after the restore.\n\nYeah, pg_dump itself is broken here, never mind dreaming up some other\nuser command.\n\nWe do use a column list when doing COPY, but with --inserts (not\n--column-inserts) we don't include a column list.\n\nAll it takes is:\n\npostgres=# create table listp (a int, b text) partition by list(a);\nCREATE TABLE\npostgres=# create table listp1 (b text, a int);\nCREATE TABLE\npostgres=# alter table listp attach partition listp1 for values in(1);\nALTER TABLE\npostgres=# insert into listp values(1,'One');\nINSERT 0 1\npostgres=# \\q\n\n$ createdb test1\n$ pg_dump --inserts postgres | psql test1\n...\nERROR: invalid input syntax for type integer: \"One\"\nLINE 1: INSERT INTO public.listp1 VALUES ('One', 1);\n\nThat settles the debate on the other thread...\n\n-- \n David Rowley http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n\n",
"msg_date": "Wed, 24 Apr 2019 13:19:03 +1200",
"msg_from": "David Rowley <david.rowley@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_dump partitions can lead to inconsistent state after restore"
},
{
"msg_contents": "On 2019/04/24 10:19, David Rowley wrote:\n> On Wed, 24 Apr 2019 at 06:50, Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\n>> Per my comment at https://postgr.es/m/20190422225129.GA6126@alvherre.pgsql\n>> I think that pg_dump can possibly cause bogus partition definitions,\n>> when the users explicitly decide to join tables as partitions that have\n>> different column ordering than the parent table. Any COPY or INSERT\n>> command without an explicit column list that tries to put tuples in the\n>> table will fail after the restore.\n> \n> Yeah, pg_dump itself is broken here, never mind dreaming up some other\n> user command.\n> \n> We do use a column list when doing COPY, but with --inserts (not\n> --column-inserts) we don't include a column list.\n> \n> All it takes is:\n> \n> postgres=# create table listp (a int, b text) partition by list(a);\n> CREATE TABLE\n> postgres=# create table listp1 (b text, a int);\n> CREATE TABLE\n> postgres=# alter table listp attach partition listp1 for values in(1);\n> ALTER TABLE\n> postgres=# insert into listp values(1,'One');\n> INSERT 0 1\n> postgres=# \\q\n> \n> $ createdb test1\n> $ pg_dump --inserts postgres | psql test1\n> ...\n> ERROR: invalid input syntax for type integer: \"One\"\n> LINE 1: INSERT INTO public.listp1 VALUES ('One', 1);\n> \n> That settles the debate on the other thread...\n\n+1 to fixing this, although +0.5 to back-patching.\n\nThe reason no one has complained so far of being bitten by this may be\nthat, as each one of us has said at least once on the other thread, users\nare not very likely to create partitions with different column orders to\nbegin with. Maybe, that isn't a reason to leave it as is though.\n\nThanks,\nAmit\n\n\n\n",
"msg_date": "Wed, 24 Apr 2019 11:53:20 +0900",
"msg_from": "Amit Langote <Langote_Amit_f8@lab.ntt.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: pg_dump partitions can lead to inconsistent state after restore"
},
{
"msg_contents": "On Wed, 24 Apr 2019 at 14:53, Amit Langote\n<Langote_Amit_f8@lab.ntt.co.jp> wrote:\n>\n> On 2019/04/24 10:19, David Rowley wrote:\n> > ERROR: invalid input syntax for type integer: \"One\"\n> > LINE 1: INSERT INTO public.listp1 VALUES ('One', 1);\n> >\n> > That settles the debate on the other thread...\n>\n> +1 to fixing this, although +0.5 to back-patching.\n>\n> The reason no one has complained so far of being bitten by this may be\n> that, as each of one us has said at least once on the other thread, users\n> are not very likely to create partitions with different column orders to\n> begin with. Maybe, that isn't a reason to leave it as is though.\n\nWell, you could probably class most of the bugs that make their way\nthrough feature freeze, alpha and beta as unlikely. I don't think\nthat gives us an excuse to leave them as bugs. If someone reported it\nwe'd most likely go and fix it then anyway, so I really don't see the\npoint in waiting until then.\n\n-- \n David Rowley http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n\n",
"msg_date": "Wed, 24 Apr 2019 19:36:27 +1200",
"msg_from": "David Rowley <david.rowley@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_dump partitions can lead to inconsistent state after restore"
},
{
"msg_contents": "So, while testing this I noticed that pg_restore fails with deadlocks if\nyou do a parallel restore if the --load-via-partition-root switch was\ngiven to pg_dump. Is that a known bug?\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Wed, 24 Apr 2019 15:01:18 -0400",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: pg_dump partitions can lead to inconsistent state after restore"
},
{
"msg_contents": "On Thu, Apr 25, 2019 at 4:01 AM Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\n> So, while testing this I noticed that pg_restore fails with deadlocks if\n> you do a parallel restore if the --load-via-partition-root switch was\n> given to pg_dump. Is that a known bug?\n\nWas investigating --load-via-partition-root with a coworker and came\nacross the following note in the documentation:\n\nhttps://www.postgresql.org/docs/11/app-pgdump.html\n\n\"It is best not to use parallelism when restoring from an archive made\nwith this option, because pg_restore will not know exactly which\npartition(s) a given archive data item will load data into. This could\nresult in inefficiency due to lock conflicts between parallel jobs, or\nperhaps even reload failures due to foreign key constraints being set\nup before all the relevant data is loaded.\"\n\nApparently, this note was added as a result of the following discussion:\n\nhttps://www.postgresql.org/message-id/flat/13624.1535486019%40sss.pgh.pa.us\n\nSo, while the documentation doesn't explicitly list deadlocks as\npossible risk, Tom hinted in the first email that it's possible.\n\nI set out to reproduce one and was able to, although I'm not sure if\nit's the same deadlock as seen by Alvaro. 
Steps I used to reproduce:\n\n# in the source database\n\ncreate table foo (a int primary key);\ninsert into foo select generate_series(1, 1000000);\ncreate table ht (a int) partition by hash (a);\nselect 'create table ht' || i || ' partition of ht for values with\n(modulus 100, remainder ' || i -1 || ');' from generate_series(1, 100)\ni;\n\\gexec\ninsert into ht select generate_series(1, 1000000);\nalter table ht add foreign key (a) references foo (a);\n\n# in shell\npg_dump --load-via-partition-root -Fd -f /tmp/dump\ncreatedb targetdb\npg_restore -d targetdb -j 2 /tmp/dump\n\nThe last step reports deadlocks; in the server log:\n\nERROR: deadlock detected\nDETAIL: Process 14213 waits for RowExclusiveLock on relation 17447 of\ndatabase 17443; blocked by process 14212.\n Process 14212 waits for ShareRowExclusiveLock on relation\n17507 of database 17443; blocked by process 14213.\n Process 14213: COPY public.ht (a) FROM stdin;\n\n Process 14212: ALTER TABLE public.ht\n ADD CONSTRAINT ht_a_fkey FOREIGN KEY (a) REFERENCES public.foo(a);\n\nHere, the process adding the foreign key has got the lock on the\nparent and trying to lock a partition to add the foreign key to it.\nThe process doing COPY (via root) has apperently locked the partition\nand waiting for the lock on the parent to do actual copying. Looking\ninto why the latter had got a lock on the partition at all if it\nhasn't started the copying yet, I noticed that it was locked when\nTRUNCATE was executed on it earlier in the same transaction as part of\nsome WAL-related optimization, which is something that only happens in\nthe parallel restore mode. I was under the impression that the TABLE\nDATA archive item (its TocEntry) would have no trace of the partition\nif it was dumped with --load-via-partition-root, but that's not the\ncase. 
--load-via-partition-root only dictates that the command that\nwill be dumped for the item will use the root parent as COPY target,\neven though the TocEntry itself is owned by the partition.\n\nMaybe, a way to prevent the deadlock would be for the process that\nwill do copy-into-given-partition-via-root-parent to do a `LOCK TABLE\nroot_parent` before `TRUNCATE the_partition`, but we'll need to get\nhold of the root parent from the TocEntry somehow. Turns out it's\nonly present in the TocEntry.copyStmt, from where it will have to\nparsed out. Maybe that's the only thing we could do without breaking\nthe archive format though.\n\nThoughts?\n\nBy the way, I couldn't think of ways to reproduce any of the hazards\nmentioned in the documentations of using parallel mode to restore an\narchive written with pg_dump --load-via-root-parent, but maybe I just\nhaven't tried hard enough.\n\nThanks,\nAmit\n\n\n",
"msg_date": "Fri, 21 Jun 2019 14:54:16 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_dump partitions can lead to inconsistent state after restore"
}
] |
[
{
"msg_contents": "In pursuit of the problem with standby servers sometimes not responding\nto fast shutdowns [1], I spent awhile staring at the postmaster's\nstate-machine logic. I have not found a cause for that problem,\nbut I have identified some other things that seem like bugs:\n\n\n1. sigusr1_handler ignores PMSIGNAL_ADVANCE_STATE_MACHINE unless the\ncurrent state is PM_WAIT_BACKUP or PM_WAIT_BACKENDS. This restriction\nseems useless and shortsighted: PostmasterStateMachine should behave\nsanely regardless of our state, and sigusr1_handler really has no\nbusiness assuming anything about why a child is asking for a state\nmachine reconsideration. But it's not just not future-proof, it's a\nlive bug even for the one existing use-case, which is that a new\nwalsender sends this signal after it's re-marked itself as being a\nwalsender rather than a normal backend. Consider this sequence of\nevents:\n* System is running as a hot standby and allowing cascaded replication.\nThere are no live backends.\n* New replication connection request is received and forked off.\n(At this point the postmaster thinks this child is a normal session\nbackend.)\n* SIGTERM (Smart Shutdown) is received. Postmaster will transition\nto PM_WAIT_READONLY. I don't think it would have autovac or bgworker\nor bgwriter or walwriter children, but if so, assume they all exit\nbefore the next step. Postmaster will continue to sleep, waiting for\nits one \"normal\" child backend to finish.\n* Replication connection request completes, so child re-marks itself\nas a walsender and sends PMSIGNAL_ADVANCE_STATE_MACHINE.\n* Postmaster ignores signal because it's in the \"wrong\" state, so it\ndoesn't realize it now has no normal backend children.\n* Postmaster waits forever, or at least till DBA loses patience and\nsends a stronger signal.\n\nThis scenario doesn't explain the buildfarm failures since those don't\ninvolve smart shutdowns (and I think they don't involve cascaded\nreplication either). 
Still, it's clearly a bug, which I think\nwe should fix by removing the pointless restriction on whether\nPostmasterStateMachine can be called.\n\nAlso, I'm inclined to think that that should be the *last* step in\nsigusr1_handler, not randomly somewhere in the middle. As coded,\nit's basically assuming that no later action in sigusr1_handler\ncould affect anything that PostmasterStateMachine cares about, which\neven if it's true today is another highly not-future-proof assumption.\n\n\n2. MaybeStartWalReceiver will clear the WalReceiverRequested flag\neven if it fails to launch a child process for some reason. This\nis just dumb; it should leave the flag set so that we'll try again\nnext time through the postmaster's idle loop.\n\n\n3. PostmasterStateMachine's handling of PM_SHUTDOWN_2 is:\n\n if (pmState == PM_SHUTDOWN_2)\n {\n /*\n * PM_SHUTDOWN_2 state ends when there's no other children than\n * dead_end children left. There shouldn't be any regular backends\n * left by now anyway; what we're really waiting for is walsenders and\n * archiver.\n *\n * Walreceiver should normally be dead by now, but not when a fast\n * shutdown is performed during recovery.\n */\n if (PgArchPID == 0 && CountChildren(BACKEND_TYPE_ALL) == 0 &&\n WalReceiverPID == 0)\n {\n pmState = PM_WAIT_DEAD_END;\n }\n }\n\nThe comment about walreceivers is confusing, and it's also wrong. Maybe\nit was valid when written, but today it's easy to trace the logic and see\nthat we can only get to PM_SHUTDOWN_2 state from PM_SHUTDOWN state, and\nwe can only get to PM_SHUTDOWN state when there is no live walreceiver\n(cf processing of PM_WAIT_BACKENDS state), and we won't attempt to launch\na new walreceiver while in PM_SHUTDOWN or PM_SHUTDOWN_2 state, so it's\nimpossible for there to be any walreceiver here. I think we should just\nremove that comment and the WalReceiverPID == 0 test.\n\n\nComments? 
I think at least the first two points need to be back-patched.\n\n\t\t\tregards, tom lane\n\n[1] https://www.postgresql.org/message-id/20190416070119.GK2673@paquier.xyz\n\n\n",
"msg_date": "Tue, 23 Apr 2019 15:11:21 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Minor postmaster state machine bugs"
},
{
"msg_contents": "Hi,\n\nI am working on an FDW where the database does not support any operator\nother than \"=\" in JOIN condition. Some queries are genrating the plan with\nJOIN having \"<\" operator. How and at what stage I can stop FDW to not make\nsuch a plan. Here is my sample query.\n\n\n\ntpch=# select\n\n l_orderkey,\n\n sum(l_extendedprice * (1 - l_discount)) as revenue,\n\n o_orderdate,\n\n o_shippriority\n\nfrom\n\n customer,\n\n orders,\n\n lineitem\n\nwhere\n\n c_mktsegment = 'BUILDING'\n\n and c_custkey = o_custkey\n\n and l_orderkey = o_orderkey\n\n and o_orderdate < date '1995-03-22'\n\n and l_shipdate > date '1995-03-22'\n\ngroup by\n\n l_orderkey,\n\n o_orderdate,\n\n o_shippriority\n\norder by\n\n revenue,\n\n o_orderdate\n\nLIMIT 10;\n\n\n\n QUERY PLAN\n\n\n...\n\nMerge Cond: (orders.o_orderkey = lineitem.l_orderkey)\n\n-> Foreign Scan (cost=1.00..-1.00 rows=1000 width=50)\n\nOutput: orders.o_orderdate, orders.o_shippriority, orders.o_orderkey\n\nRelations: (customer) INNER JOIN (orders)\n\nRemote SQL: SELECT r2.o_orderdate, r2.o_shippriority, r2.o_orderkey\nFROM db.customer\nr1 ALL INNER JOIN db.orders r2 ON (((r1.c_custkey = r2.o_custkey)) AND\n((r2.o_orderdate < '1995-03-22')) AND ((r1.c_mktsegment = 'BUILDING')))\nORDER BY r2.o_orderkey, r2.o_orderdate, r2.o_shippriority\n\n...\n\n\n--\n\nIbrar Ahmed\n\n\nHi,I am working on an FDW where the database does not support any operator other than \"=\" in JOIN condition. Some queries are genrating the plan with JOIN having \"<\" operator. How and at what stage I can stop FDW to not make such a plan. 
Here is my sample query.tpch=# select\n l_orderkey,\n sum(l_extendedprice * (1 - l_discount)) as revenue,\n o_orderdate,\n o_shippriority\nfrom\n customer,\n orders,\n lineitem\nwhere\n c_mktsegment = 'BUILDING'\n and c_custkey = o_custkey\n and l_orderkey = o_orderkey\n and o_orderdate < date '1995-03-22'\n and l_shipdate > date '1995-03-22'\ngroup by\n l_orderkey,\n o_orderdate,\n o_shippriority\norder by\n revenue,\n o_orderdate\nLIMIT 10; QUERY PLAN ...Merge Cond: (orders.o_orderkey = lineitem.l_orderkey)-> Foreign Scan (cost=1.00..-1.00 rows=1000 width=50)Output: orders.o_orderdate, orders.o_shippriority, orders.o_orderkeyRelations: (customer) INNER JOIN (orders)Remote SQL: SELECT r2.o_orderdate, r2.o_shippriority, r2.o_orderkey FROM db.customer r1 ALL INNER JOIN db.orders r2 ON (((r1.c_custkey = r2.o_custkey)) AND ((r2.o_orderdate < '1995-03-22')) AND ((r1.c_mktsegment = 'BUILDING'))) ORDER BY r2.o_orderkey, r2.o_orderdate, r2.o_shippriority...--Ibrar Ahmed",
"msg_date": "Wed, 24 Apr 2019 00:22:25 +0500",
"msg_from": "Ibrar Ahmed <ibrar.ahmad@gmail.com>",
"msg_from_op": false,
"msg_subject": "How and at what stage to stop FDW to generate plan with JOIN."
},
{
"msg_contents": "Ibrar Ahmed <ibrar.ahmad@gmail.com> writes:\n> I am working on an FDW where the database does not support any operator\n> other than \"=\" in JOIN condition. Some queries are genrating the plan with\n> JOIN having \"<\" operator. How and at what stage I can stop FDW to not make\n> such a plan. Here is my sample query.\n\nWhat exactly do you think should happen instead? You can't just tell\nusers not to ask such a query. (Well, you can try, but they'll probably\ngo looking for a less broken FDW.)\n\nIf what you really mean is you don't want to generate pushed-down\nforeign join paths containing non-equality conditions, the answer is\nto just not do that. That'd be the FDW's own fault, not that of\nthe core planner, if it creates a path representing a join it\ncan't actually implement. You'll end up running the join locally,\nwhich might not be great, but if you have no other alternative\nthen that's what you gotta do.\n\nIf what you mean is you don't know how to inspect the join quals\nto see if you can implement them, take a look at postgres_fdw\nto see how it handles the same issue.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 23 Apr 2019 16:15:11 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: How and at what stage to stop FDW to generate plan with JOIN."
},
{
"msg_contents": "On Wed, Apr 24, 2019 at 1:15 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Ibrar Ahmed <ibrar.ahmad@gmail.com> writes:\n> > I am working on an FDW where the database does not support any operator\n> > other than \"=\" in JOIN condition. Some queries are genrating the plan\n> with\n> > JOIN having \"<\" operator. How and at what stage I can stop FDW to not\n> make\n> > such a plan. Here is my sample query.\n>\n> What exactly do you think should happen instead? You can't just tell\n> users not to ask such a query. (Well, you can try, but they'll probably\n> go looking for a less broken FDW.)\n>\n> I know that.\n\n\n> If what you really mean is you don't want to generate pushed-down\n> foreign join paths containing non-equality conditions, the answer is\n> to just not do that. That'd be the FDW's own fault, not that of\n> the core planner, if it creates a path representing a join it\n> can't actually implement. You'll end up running the join locally,\n> which might not be great, but if you have no other alternative\n> then that's what you gotta do.\n>\n> Yes, that's what I am thinking. In case of non-equality condition join\nthem locally is\nthe only solution. I was just confirming.\n\n\n> If what you mean is you don't know how to inspect the join quals\n> to see if you can implement them, take a look at postgres_fdw\n> to see how it handles the same issue.\n>\n> I really don't know postgres_fdw have the same issue, but yes postgres_fdw\nis always my starting point.\n\n\n> regards, tom lane\n>\n\n\n-- \nIbrar Ahmed\n\nOn Wed, Apr 24, 2019 at 1:15 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:Ibrar Ahmed <ibrar.ahmad@gmail.com> writes:\n> I am working on an FDW where the database does not support any operator\n> other than \"=\" in JOIN condition. Some queries are genrating the plan with\n> JOIN having \"<\" operator. How and at what stage I can stop FDW to not make\n> such a plan. Here is my sample query.\n\nWhat exactly do you think should happen instead? 
You can't just tell\nusers not to ask such a query. (Well, you can try, but they'll probably\ngo looking for a less broken FDW.)\nI know that. \nIf what you really mean is you don't want to generate pushed-down\nforeign join paths containing non-equality conditions, the answer is\nto just not do that. That'd be the FDW's own fault, not that of\nthe core planner, if it creates a path representing a join it\ncan't actually implement. You'll end up running the join locally,\nwhich might not be great, but if you have no other alternative\nthen that's what you gotta do.\nYes, that's what I am thinking. In case of non-equality condition join them locally isthe only solution. I was just confirming. \nIf what you mean is you don't know how to inspect the join quals\nto see if you can implement them, take a look at postgres_fdw\nto see how it handles the same issue.\nI really don't know postgres_fdw have the same issue, but yes postgres_fdw is always my starting point. \n regards, tom lane\n-- Ibrar Ahmed",
"msg_date": "Wed, 24 Apr 2019 01:22:27 +0500",
"msg_from": "Ibrar Ahmed <ibrar.ahmad@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: How and at what stage to stop FDW to generate plan with JOIN."
}
] |
[
{
"msg_contents": "Hello, Postgres hackers.\n\nI happened to see a hang issue when running a simple copy query. The root\ncause and repro way are quite simple.\n\nmkfifo /tmp/a\n\nrun sql:\ncopy (select generate_series(1, 10)) to '/tmp/a';\n\nIt hangs at AllocateFile()->fopen() because that file was previously\ncreated as a fifo file, and it is not ctrl+c cancellable (on Linux).\n\n#0 0x00007f52df1c45a0 in __open_nocancel () at\n../sysdeps/unix/syscall-template.S:81\n#1 0x00007f52df151f20 in _IO_file_open (is32not64=4, read_write=4,\nprot=438, posix_mode=<optimized out>, filename=0x7ffe64199a10\n\"\\360\\303[\\001\", fp=0x1649c40) at fileops.c:229\n#2 _IO_new_file_fopen (fp=fp@entry=0x1649c40,\nfilename=filename@entry=0x15bc3f0\n\"/tmp/a\", mode=<optimized out>, mode@entry=0xaa0bb7 \"w\",\nis32not64=is32not64@entry=1) at fileops.c:339\n#3 0x00007f52df1465e4 in __fopen_internal (filename=0x15bc3f0 \"/tmp/a\",\nmode=0xaa0bb7 \"w\", is32=1) at iofopen.c:90\n#4 0x00000000007a0e90 in AllocateFile (name=0x15bc3f0 \"/tmp/a\",\nmode=mode@entry=0xaa0bb7 \"w\") at fd.c:2229\n#5 0x00000000005c51b4 in BeginCopyTo (pstate=pstate@entry=0x15b95f0,\nrel=rel@entry=0x0, query=<optimized out>, queryRelId=queryRelId@entry=0,\nfilename=<optimized out>, is_program=<optimized out>, attnamelist=0x0,\noptions=0x0) at copy.c:1919\n#6 0x00000000005c8999 in DoCopy (pstate=pstate@entry=0x15b95f0,\nstmt=stmt@entry=0x1596b60, stmt_location=0, stmt_len=48,\nprocessed=processed@entry=0x7ffe64199cd8) at copy.c:1078\n#7 0x00000000007d717a in standard_ProcessUtility (pstmt=0x1596ea0,\nqueryString=0x1595dc0 \"copy (select generate_series(1, 10)) to '/tmp/a';\",\ncontext=PROCESS_UTILITY_TOPLEVEL, params=0x0, queryEnv=0x0, dest=0x1596f80,\ncompletionTag=0x7ffe64199f90 \"\") at utility.c:551\n\nThis is, in theory, not a 100% bug, but it is probably not unusual to see\nconflicts of files between postgresql sqls and other applications on the\nsame node so I think the fix is needed. 
I checked all code that calls\nAllocateFile() and wrote a simple patch to do sanity check (if the file\nexists it must be a regular file) for those files which are probably out of\nthe postgres data directories which we probably want to ignore. This is\nactually not a perfect fix since it is not atomic (check and open), but it\nshould fix most of the scenarios. To be perfect, we might want to refactor\nAllocateFile() to allow atomic check&create using either 'x' in fopen()\nor O_EXCL in open(), also it seems that we might not want to create temp\nfile for AllocateFile() with fixed filenames. This is beyond of this patch\nof course.\n\nThanks.",
"msg_date": "Wed, 24 Apr 2019 12:46:15 +0800",
"msg_from": "Paul Guo <pguo@pivotal.io>",
"msg_from_op": true,
"msg_subject": "[Patch] Check file type before calling AllocateFile() for files out\n of pg data directory to avoid potential issues (e.g. hang)."
},
{
"msg_contents": "Hi,\n\nOn 2019-04-24 12:46:15 +0800, Paul Guo wrote:\n> This is, in theory, not a 100% bug, but it is probably not unusual to see\n> conflicts of files between postgresql sqls and other applications on the\n> same node so I think the fix is needed. I checked all code that calls\n> AllocateFile() and wrote a simple patch to do sanity check (if the file\n> exists it must be a regular file) for those files which are probably out of\n> the postgres data directories which we probably want to ignore. This is\n> actually not a perfect fix since it is not atomic (check and open), but it\n> should fix most of the scenarios. To be perfect, we might want to refactor\n> AllocateFile() to allow atomic check&create using either 'x' in fopen()\n> or O_EXCL in open(), also it seems that we might not want to create temp\n> file for AllocateFile() with fixed filenames. This is beyond of this patch\n> of course.\n\nThis seems like a bad idea to me. IMO we want to support using a pipe\netc here. If the admin creates a fifo like this without attaching a\nprogram it seems like it's their fault.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 23 Apr 2019 21:49:31 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: [Patch] Check file type before calling AllocateFile() for files\n out of pg data directory to avoid potential issues (e.g. hang)."
},
{
"msg_contents": "On Wed, Apr 24, 2019 at 12:49 PM Andres Freund <andres@anarazel.de> wrote:\n\n> Hi,\n>\n> On 2019-04-24 12:46:15 +0800, Paul Guo wrote:\n> > This is, in theory, not a 100% bug, but it is probably not unusual to see\n> > conflicts of files between postgresql sqls and other applications on the\n> > same node so I think the fix is needed. I checked all code that calls\n> > AllocateFile() and wrote a simple patch to do sanity check (if the file\n> > exists it must be a regular file) for those files which are probably out\n> of\n> > the postgres data directories which we probably want to ignore. This is\n> > actually not a perfect fix since it is not atomic (check and open), but\n> it\n> > should fix most of the scenarios. To be perfect, we might want to\n> refactor\n> > AllocateFile() to allow atomic check&create using either 'x' in fopen()\n> > or O_EXCL in open(), also it seems that we might not want to create temp\n> > file for AllocateFile() with fixed filenames. This is beyond of this\n> patch\n> > of course.\n>\n> This seems like a bad idea to me. IMO we want to support using a pipe\n> etc here. If the admin creates a fifo like this without attaching a\n> program it seems like it's their fault.\n>\n\nOh, I never know this application scenario before. So yes, for this, we\nneed to keep the current code logic in copy code.\n\n\n>\n> Greetings,\n>\n> Andres Freund\n>\n\nOn Wed, Apr 24, 2019 at 12:49 PM Andres Freund <andres@anarazel.de> wrote:Hi,\n\nOn 2019-04-24 12:46:15 +0800, Paul Guo wrote:\n> This is, in theory, not a 100% bug, but it is probably not unusual to see\n> conflicts of files between postgresql sqls and other applications on the\n> same node so I think the fix is needed. I checked all code that calls\n> AllocateFile() and wrote a simple patch to do sanity check (if the file\n> exists it must be a regular file) for those files which are probably out of\n> the postgres data directories which we probably want to ignore. 
This is\n> actually not a perfect fix since it is not atomic (check and open), but it\n> should fix most of the scenarios. To be perfect, we might want to refactor\n> AllocateFile() to allow atomic check&create using either 'x' in fopen()\n> or O_EXCL in open(), also it seems that we might not want to create temp\n> file for AllocateFile() with fixed filenames. This is beyond of this patch\n> of course.\n\nThis seems like a bad idea to me. IMO we want to support using a pipe\netc here. If the admin creates a fifo like this without attaching a\nprogram it seems like it's their fault.Oh, I never know this application scenario before. So yes, for this, we need to keep the current code logic in copy code. \n\nGreetings,\n\nAndres Freund",
"msg_date": "Wed, 24 Apr 2019 13:11:55 +0800",
"msg_from": "Paul Guo <pguo@pivotal.io>",
"msg_from_op": true,
"msg_subject": "Re: [Patch] Check file type before calling AllocateFile() for files\n out of pg data directory to avoid potential issues (e.g. hang)."
},
{
"msg_contents": "On 2019-Apr-24, Paul Guo wrote:\n\n> On Wed, Apr 24, 2019 at 12:49 PM Andres Freund <andres@anarazel.de> wrote:\n\n> > This seems like a bad idea to me. IMO we want to support using a pipe\n> > etc here. If the admin creates a fifo like this without attaching a\n> > program it seems like it's their fault.\n> \n> Oh, I never know this application scenario before. So yes, for this, we\n> need to keep the current code logic in copy code.\n\nBut the pgstat.c patch seems reasonable to me.\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Wed, 24 Apr 2019 08:54:13 -0400",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: [Patch] Check file type before calling AllocateFile() for files\n out of pg data directory to avoid potential issues (e.g. hang)."
},
{
"msg_contents": "Alvaro Herrera <alvherre@2ndquadrant.com> writes:\n> On 2019-Apr-24, Paul Guo wrote:\n>> On Wed, Apr 24, 2019 at 12:49 PM Andres Freund <andres@anarazel.de> wrote:\n>>> This seems like a bad idea to me. IMO we want to support using a pipe\n>>> etc here. If the admin creates a fifo like this without attaching a\n>>> program it seems like it's their fault.\n\n>> Oh, I never know this application scenario before. So yes, for this, we\n>> need to keep the current code logic in copy code.\n\n> But the pgstat.c patch seems reasonable to me.\n\nNah, I don't buy that one either. Nobody has any business creating any\nnon-Postgres files in the stats directory ... and if somebody does want\nto stick a FIFO in there, perhaps for debug purposes, why should we stop\nthem?\n\nThe case with COPY is a bit different, since there it's reasonable to be\nworried about collisions with other users' files --- but I agree with\nAndres that this change would eliminate too many valid use-cases.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 24 Apr 2019 10:36:03 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [Patch] Check file type before calling AllocateFile() for files\n out of pg data directory to avoid potential issues (e.g. hang)."
},
{
"msg_contents": "On Wed, Apr 24, 2019 at 10:36:03AM -0400, Tom Lane wrote:\n> Nah, I don't buy that one either. Nobody has any business creating any\n> non-Postgres files in the stats directory ... and if somebody does want\n> to stick a FIFO in there, perhaps for debug purposes, why should we stop\n> them?\n\nI have never used a FIFO in Postgres for debugging purposes, but that\nsounds plausible. I am not sure either the changes proposed in the\npatch are a good idea.\n--\nMichael",
"msg_date": "Thu, 25 Apr 2019 09:20:52 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: [Patch] Check file type before calling AllocateFile() for files\n out of pg data directory to avoid potential issues (e.g. hang)."
},
{
"msg_contents": "On Wed, Apr 24, 2019 at 10:36 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Alvaro Herrera <alvherre@2ndquadrant.com> writes:\n> > On 2019-Apr-24, Paul Guo wrote:\n> >> On Wed, Apr 24, 2019 at 12:49 PM Andres Freund <andres@anarazel.de>\n> wrote:\n> >>> This seems like a bad idea to me. IMO we want to support using a pipe\n> >>> etc here. If the admin creates a fifo like this without attaching a\n> >>> program it seems like it's their fault.\n>\n> >> Oh, I never know this application scenario before. So yes, for this, we\n> >> need to keep the current code logic in copy code.\n>\n> > But the pgstat.c patch seems reasonable to me.\n>\n> Nah, I don't buy that one either. Nobody has any business creating any\n> non-Postgres files in the stats directory ... and if somebody does want\n> to stick a FIFO in there, perhaps for debug purposes, why should we stop\n> them?\n>\n\nFor the pgstat case, the files for AllocateFile() are actually temp files\nwhich\nare soon renamed to other file names. Users might not want to set them as\nfifo files.\nFor developers 'tail -f' might be sufficient for debugging purpose.\n\n const char *tmpfile = permanent ? 
PGSTAT_STAT_PERMANENT_TMPFILE :\npgstat_stat_tmpname;\n fpout = AllocateFile(tmpfile, PG_BINARY_W);\n fwrite(fpout, ...);\n rename(tmpfile, statfile);\n\nI'm not sure if those hardcoded temp filenames (not just those in pgstat)\nare used across postgres reboot.\nIf no, we should instead call glibc function to create unique temp files\nand also remove those hardcode temp\nfilename variables, else we also might want them to be regular files.\n\n\n> The case with COPY is a bit different, since there it's reasonable to be\n> worried about collisions with other users' files --- but I agree with\n> Andres that this change would eliminate too many valid use-cases.\n>\n> regards, tom lane\n>\n\nOn Wed, Apr 24, 2019 at 10:36 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:Alvaro Herrera <alvherre@2ndquadrant.com> writes:\n> On 2019-Apr-24, Paul Guo wrote:\n>> On Wed, Apr 24, 2019 at 12:49 PM Andres Freund <andres@anarazel.de> wrote:\n>>> This seems like a bad idea to me. IMO we want to support using a pipe\n>>> etc here. If the admin creates a fifo like this without attaching a\n>>> program it seems like it's their fault.\n\n>> Oh, I never know this application scenario before. So yes, for this, we\n>> need to keep the current code logic in copy code.\n\n> But the pgstat.c patch seems reasonable to me.\n\nNah, I don't buy that one either. Nobody has any business creating any\nnon-Postgres files in the stats directory ... and if somebody does want\nto stick a FIFO in there, perhaps for debug purposes, why should we stop\nthem?For the pgstat case, the files for AllocateFile() are actually temp files whichare soon renamed to other file names. Users might not want to set them as fifo files.For developers 'tail -f' might be sufficient for debugging purpose. const char *tmpfile = permanent ? 
PGSTAT_STAT_PERMANENT_TMPFILE : pgstat_stat_tmpname; fpout = AllocateFile(tmpfile, PG_BINARY_W); fwrite(fpout, ...); rename(tmpfile, statfile);I'm not sure if those hardcoded temp filenames (not just those in pgstat) are used across postgres reboot.If no, we should instead call glibc function to create unique temp files and also remove those hardcode tempfilename variables, else we also might want them to be regular files.\n\nThe case with COPY is a bit different, since there it's reasonable to be\nworried about collisions with other users' files --- but I agree with\nAndres that this change would eliminate too many valid use-cases.\n\n regards, tom lane",
"msg_date": "Thu, 25 Apr 2019 10:41:31 +0800",
"msg_from": "Paul Guo <pguo@pivotal.io>",
"msg_from_op": true,
"msg_subject": "Re: [Patch] Check file type before calling AllocateFile() for files\n out of pg data directory to avoid potential issues (e.g. hang)."
},
{
"msg_contents": "Paul Guo <pguo@pivotal.io> writes:\n> On Wed, Apr 24, 2019 at 10:36 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> Alvaro Herrera <alvherre@2ndquadrant.com> writes:\n>>> But the pgstat.c patch seems reasonable to me.\n\n>> Nah, I don't buy that one either. Nobody has any business creating any\n>> non-Postgres files in the stats directory ... and if somebody does want\n>> to stick a FIFO in there, perhaps for debug purposes, why should we stop\n>> them?\n\n> I'm not sure if those hardcoded temp filenames (not just those in pgstat)\n> are used across postgres reboot.\n> If no, we should instead call glibc function to create unique temp files\n> and also remove those hardcode temp\n> filename variables, else we also might want them to be regular files.\n\nI do not see any actual need to change anything here.\n\nNote that the whole business might look quite different by next year or\nso, if the shmem-based stats collector patch gets merged. So I'm hesitant\nto throw unnecessary changes into that code right now anyway --- it'd just\nbreak that WIP patch. But in any case, the stats directory is a PG\nprivate directory, and just like everything else inside $PGDATA, it is\nvery much \"no user-serviceable parts inside\". Anybody sticking a FIFO\n(or anything else) in there had better be a developer with some quite\nspecific debugging purpose in mind. So I don't see a reason for file\ntype checks in pgstat, any more than we have them for, say, relation\ndata files.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 25 Apr 2019 09:49:22 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [Patch] Check file type before calling AllocateFile() for files\n out of pg data directory to avoid potential issues (e.g. hang)."
}
] |
[
{
"msg_contents": "The cursor means something like declare c cursor for select * from t;\nThe holdable cursor means declare c cursor WITH HOLD for select * from t;\n\nHoldable cursor is good at transaction, user can still access it after the\ntransaction is commit. But it is bad at it have to save all the record to\ntuple store before we fetch 1 row.\n\nwhat I want is:\n1. The cursor is still be able to fetch after the transaction is\ncommitted.\n2. the cursor will not fetch the data when fetch statement is issue (just\nlike non-holdable cursor).\n\nI called this as with X cursor..\n\nI check the current implementation and think it would be possible with the\nfollowing methods:\n1. allocate the memory in a {LongerMemoryContext}, like EState to\nprevent they are\n2. allocate a more bigger resource owner to prevent the LockReleaseAll\nduring CommitTransaction.\n3. add the \"with X\" option to cursor so that Precommit_portals will not\ndrop it during CommitTransaction.\n\nBefore I implement it, could you give some suggestions?\n\nThanks!\n\nThe cursor means something like declare c cursor for select * from t;The holdable cursor means declare c cursor WITH HOLD for select * from t;Holdable cursor is good at transaction, user can still access it after the transaction is commit. But it is bad at it have to save all the record to tuple store before we fetch 1 row.what I want is:1. The cursor is still be able to fetch after the transaction is committed. 2. the cursor will not fetch the data when fetch statement is issue (just like non-holdable cursor).I called this as with X cursor.. I check the current implementation and think it would be possible with the following methods:1. allocate the memory in a {LongerMemoryContext}, like EState to prevent they are 2. allocate a more bigger resource owner to prevent the LockReleaseAll during CommitTransaction.3. 
add the \"with X\" option to cursor so that Precommit_portals will not drop it during CommitTransaction.Before I implement it, could you give some suggestions? Thanks!",
"msg_date": "Wed, 24 Apr 2019 21:26:41 +0800",
"msg_from": "alex lock <alock303@gmail.com>",
"msg_from_op": true,
"msg_subject": "Help to review the with X cursor option."
},
{
"msg_contents": "alex lock <alock303@gmail.com> writes:\n> The cursor means something like declare c cursor for select * from t;\n> The holdable cursor means declare c cursor WITH HOLD for select * from t;\n\n> Holdable cursor is good at transaction, user can still access it after the\n> transaction is commit. But it is bad at it have to save all the record to\n> tuple store before we fetch 1 row.\n\n> what I want is:\n> 1. The cursor is still be able to fetch after the transaction is\n> committed.\n> 2. the cursor will not fetch the data when fetch statement is issue (just\n> like non-holdable cursor).\n\n> I called this as with X cursor..\n\n> I check the current implementation and think it would be possible with the\n> following methods:\n> 1. allocate the memory in a {LongerMemoryContext}, like EState to\n> prevent they are\n> 2. allocate a more bigger resource owner to prevent the LockReleaseAll\n> during CommitTransaction.\n> 3. add the \"with X\" option to cursor so that Precommit_portals will not\n> drop it during CommitTransaction.\n\n> Before I implement it, could you give some suggestions?\n\nYou don't actually understand the problem.\n\nThe reason a holdable cursor forcibly reads all the data before commit is\nthat the data might not be there to read any later than that. Once we end\nthe transaction and release its snapshot (specifically, advance the\nbackend's advertised global xmin), it's possible and indeed desirable for\nobsoleted row versions to be vacuumed. The only way to avoid that would\nbe to not advance xmin, which is pretty much just as bad as not committing\nthe transaction. 
Not releasing the transaction's locks is also bad.\nSo it doesn't seem like there's anything to be gained here that you don't\nhave today by just not committing yet.\n\nIf you're concerned about not losing work due to possible errors later in\nthe transaction, you could prevent those from causing problems through\nsubtransactions (savepoints).\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 24 Apr 2019 11:30:06 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Help to review the with X cursor option."
},
{
"msg_contents": "On Wed, Apr 24, 2019 at 11:30 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> alex lock <alock303@gmail.com> writes:\n> > The cursor means something like declare c cursor for select * from t;\n> > The holdable cursor means declare c cursor WITH HOLD for select * from t;\n>\n> > Holdable cursor is good at transaction, user can still access it after\n> the\n> > transaction is commit. But it is bad at it have to save all the record\n> to\n> > tuple store before we fetch 1 row.\n>\n> > what I want is:\n> > 1. The cursor is still be able to fetch after the transaction is\n> > committed.\n> > 2. the cursor will not fetch the data when fetch statement is issue\n> (just\n> > like non-holdable cursor).\n>\n> > I called this as with X cursor..\n>\n> > I check the current implementation and think it would be possible with\n> the\n> > following methods:\n> > 1. allocate the memory in a {LongerMemoryContext}, like EState to\n> > prevent they are\n> > 2. allocate a more bigger resource owner to prevent the LockReleaseAll\n> > during CommitTransaction.\n> > 3. add the \"with X\" option to cursor so that Precommit_portals will not\n> > drop it during CommitTransaction.\n>\n> > Before I implement it, could you give some suggestions?\n>\n> You don't actually understand the problem.\n\n\n>\nThanks tones. I know that and that's just something I want to change.\n\n\n> The reason a holdable cursor forcibly reads all the data before commit is\n> that the data might not be there to read any later than that.\n\n\nI think this can be done with snapshot read, like we want the data at time\n1, even the data is not there at time 2, we provide the snapshot, we can\nread the data. 
Oracle has a similar function called flashback query\nhttps://docs.oracle.com/cd/B14117_01/appdev.101/b10795/adfns_fl.htm#1008580\n .\n\n\n> Once we end\n> the transaction and release its snapshot (specifically, advance the\n> backend's advertised global xmin), it's possible and indeed desirable for\n> obsoleted row versions to be vacuumed.\n\n\nthat's something I want to change, as I said at the beginning. include\navoid some memory release (like the EState and so on), snapshot release.\n\n\n\n> The only way to avoid that would\n> be to not advance xmin, which is pretty much just as bad as not committing\n> the transaction.\n\n\nthere is something different between \"not advance xmin\" or \"not committing\nthe transaction\" for me. \"not commit the transaction\" will take up the\nconnection, but \"not advance xmin\" one not. without this reason,\nnon-holdable cursor is good for me.\n\n\n> Not releasing the transaction's locks is also bad.\n\n\nAssume that if the table was dropped among the fetches, we can just raise\nerror, we can releasing the lock? I am still not sure about this part,\nbut keep the lock is still acceptable for me since it will not take up the\nconnection already(my purpose). but releasing the lock can be better.\n\n\n> So it doesn't seem like there's anything to be gained here that you don't\n> have today by just not committing yet.\n>\n\nit is connection:) I want to run dml or other stuff on the current\nconnection.\n\n\n>\n> If you're concerned about not losing work due to possible errors later in\n> the transaction, you could prevent those from causing problems through\n> subtransactions (savepoints).\n>\n> Thanks for your tip, I have thought the possibility but I can think\nmore. 
the business model is a bit of complex and I don't want to talk more\nhere.\n\n\n> regards, tom lane\n>\n",
"msg_date": "Thu, 25 Apr 2019 09:53:11 +0800",
"msg_from": "alex lock <alock303@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Help to review the with X cursor option."
},
{
"msg_contents": "On Thu, Apr 25, 2019 at 9:53 AM alex lock <alock303@gmail.com> wrote:\n\n>\n>\n> that's something I want to change, as I said at the beginning. include\n> avoid some memory release (like the EState and so on), snapshot release.\n>\n>\n\nI check my original statement, I found \"snapshot release\" was missed, that\nobviously is a key point..\n\nOn Thu, Apr 25, 2019 at 9:53 AM alex lock <alock303@gmail.com> wrote:that's something I want to change, as I said at the beginning. include avoid some memory release (like the EState and so on), snapshot release. I check my original statement, I found \"snapshot release\" was missed, that obviously is a key point..",
"msg_date": "Thu, 25 Apr 2019 09:57:04 +0800",
"msg_from": "alex lock <alock303@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Help to review the with X cursor option."
}
] |
[
{
"msg_contents": "Hi,\n\nIt seems that DefineIndex() is forgetting to update_relispartition()\non a partition's index when it's attached to an index being added to\nthe parent. That results in unexpected behavior when adding a foreign\nkey referencing the parent.\n\ncreate table foo (a int) partition by list (a);\ncreate table foo1 partition of foo for values in (1);\nalter table foo1 add primary key (a);\nalter table foo add primary key (a);\nselect relname, relispartition from pg_class where relname = 'foo1_pkey';\n relname | relispartition\n-----------+----------------\n foo1_pkey | f\n(1 row)\n\ncreate table bar (a int references foo);\nERROR: index for 24683 not found in partition foo1\n\nAttached patch fixes that, but I haven't added any new tests.\n\nPS: Came to know that that's the case when reading this blog on the\nnew foreign key feature:\nhttps://www.depesz.com/2019/04/24/waiting-for-postgresql-12-support-foreign-keys-that-reference-partitioned-tables/\n\nThanks,\nAmit",
"msg_date": "Thu, 25 Apr 2019 00:31:03 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": true,
"msg_subject": "set relispartition when attaching child index"
},
{
"msg_contents": "On 2019-Apr-25, Amit Langote wrote:\n\n> It seems that DefineIndex() is forgetting to update_relispartition()\n> on a partition's index when it's attached to an index being added to\n> the parent. That results in unexpected behavior when adding a foreign\n> key referencing the parent.\n\nAh, thanks for fixing. I also read Depesz's post this morning and was\nto see what was going on after I push the pg_dump fix.\n\nI'll get this pushed later.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Wed, 24 Apr 2019 11:33:41 -0400",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: set relispartition when attaching child index"
},
{
"msg_contents": "On 2019-Apr-25, Amit Langote wrote:\n\n> It seems that DefineIndex() is forgetting to update_relispartition()\n> on a partition's index when it's attached to an index being added to\n> the parent. That results in unexpected behavior when adding a foreign\n> key referencing the parent.\n\nBTW, maybe IndexSetParentIndex ought to be the one calling\nupdate_relispartition() ...\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Wed, 24 Apr 2019 11:35:38 -0400",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: set relispartition when attaching child index"
},
{
"msg_contents": "On Thu, Apr 25, 2019 at 12:35 AM Alvaro Herrera\n<alvherre@2ndquadrant.com> wrote:\n> On 2019-Apr-25, Amit Langote wrote:\n>\n> > It seems that DefineIndex() is forgetting to update_relispartition()\n> > on a partition's index when it's attached to an index being added to\n> > the parent. That results in unexpected behavior when adding a foreign\n> > key referencing the parent.\n>\n> BTW, maybe IndexSetParentIndex ought to be the one calling\n> update_relispartition() ...\n\nI thought so too, but other sites are doing what I did in the patch.\n\nThanks,\nAmit\n\n\n",
"msg_date": "Thu, 25 Apr 2019 00:38:02 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: set relispartition when attaching child index"
},
{
"msg_contents": "On Thu, Apr 25, 2019 at 12:38 AM Amit Langote <amitlangote09@gmail.com> wrote:\n> On Thu, Apr 25, 2019 at 12:35 AM Alvaro Herrera\n> <alvherre@2ndquadrant.com> wrote:\n> > On 2019-Apr-25, Amit Langote wrote:\n> >\n> > > It seems that DefineIndex() is forgetting to update_relispartition()\n> > > on a partition's index when it's attached to an index being added to\n> > > the parent. That results in unexpected behavior when adding a foreign\n> > > key referencing the parent.\n> >\n> > BTW, maybe IndexSetParentIndex ought to be the one calling\n> > update_relispartition() ...\n>\n> I thought so too, but other sites are doing what I did in the patch.\n\nAlthough, we wouldn't have this bug if it was IndexSetParentIndex\ncalling it. Maybe a good idea to do that now.\n\nThanks,\nAmit\n\n\n",
"msg_date": "Thu, 25 Apr 2019 00:39:52 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: set relispartition when attaching child index"
},
{
"msg_contents": "On Thu, Apr 25, 2019 at 12:39 AM Amit Langote <amitlangote09@gmail.com> wrote:\n> On Thu, Apr 25, 2019 at 12:38 AM Amit Langote <amitlangote09@gmail.com> wrote:\n> > On Thu, Apr 25, 2019 at 12:35 AM Alvaro Herrera\n> > <alvherre@2ndquadrant.com> wrote:\n> > > On 2019-Apr-25, Amit Langote wrote:\n> > >\n> > > > It seems that DefineIndex() is forgetting to update_relispartition()\n> > > > on a partition's index when it's attached to an index being added to\n> > > > the parent. That results in unexpected behavior when adding a foreign\n> > > > key referencing the parent.\n> > >\n> > > BTW, maybe IndexSetParentIndex ought to be the one calling\n> > > update_relispartition() ...\n> >\n> > I thought so too, but other sites are doing what I did in the patch.\n>\n> Although, we wouldn't have this bug if it was IndexSetParentIndex\n> calling it. Maybe a good idea to do that now.\n\nI tried that in the attached.\n\nThanks,\nAmit",
"msg_date": "Thu, 25 Apr 2019 00:55:31 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: set relispartition when attaching child index"
},
{
"msg_contents": "On 2019/04/25 0:55, Amit Langote wrote:\n> On Thu, Apr 25, 2019 at 12:39 AM Amit Langote <amitlangote09@gmail.com> wrote:\n>> On Thu, Apr 25, 2019 at 12:38 AM Amit Langote <amitlangote09@gmail.com> wrote:\n>>> On Thu, Apr 25, 2019 at 12:35 AM Alvaro Herrera\n>>> <alvherre@2ndquadrant.com> wrote:\n>>>> On 2019-Apr-25, Amit Langote wrote:\n>>>>\n>>>>> It seems that DefineIndex() is forgetting to update_relispartition()\n>>>>> on a partition's index when it's attached to an index being added to\n>>>>> the parent. That results in unexpected behavior when adding a foreign\n>>>>> key referencing the parent.\n>>>>\n>>>> BTW, maybe IndexSetParentIndex ought to be the one calling\n>>>> update_relispartition() ...\n>>>\n>>> I thought so too, but other sites are doing what I did in the patch.\n>>\n>> Although, we wouldn't have this bug if it was IndexSetParentIndex\n>> calling it. Maybe a good idea to do that now.\n> \n> I tried that in the attached.\n\nBTW, this will need to be back-patched to 11.\n\nThanks,\nAmit\n\n\n\n",
"msg_date": "Thu, 25 Apr 2019 10:11:04 +0900",
"msg_from": "Amit Langote <Langote_Amit_f8@lab.ntt.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: set relispartition when attaching child index"
},
{
"msg_contents": "On 2019-Apr-25, Amit Langote wrote:\n\n> BTW, this will need to be back-patched to 11.\n\nDone, thanks for the patch. I added the test in master, but obviously\nit doesn't work in pg11, so I just verified manually that relispartition\nis set correctly. I don't think it's worth doing more, though there are\nother things that are affected by a bogus relispartition marking for an\nindex (example: creating the index in the last partition that didn't\nhave it, should mark the index on parent valid; I think that would fail\nto propagate to upper levels correctly.)\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Fri, 26 Apr 2019 10:12:29 -0400",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: set relispartition when attaching child index"
},
{
"msg_contents": "On 2019/04/26 23:12, Alvaro Herrera wrote:\n> On 2019-Apr-25, Amit Langote wrote:\n> \n>> BTW, this will need to be back-patched to 11.\n> \n> Done, thanks for the patch. I added the test in master, but obviously\n> it doesn't work in pg11, so I just verified manually that relispartition\n> is set correctly.\n\nThank you.\n\n> I don't think it's worth doing more, though there are\n> other things that are affected by a bogus relispartition marking for an\n> index (example: creating the index in the last partition that didn't\n> have it, should mark the index on parent valid; I think that would fail\n> to propagate to upper levels correctly.)\n\nHmm, I couldn't see any misbehavior for this example:\n\ncreate table p (a int, b int) partition by list (a);\ncreate table p1 partition of p for values in (1) partition by list (b);\ncreate table p11 partition of p1 for values in (1);\ncreate index on only p (a);\ncreate index on only p1 (a);\nalter index p_a_idx attach partition p1_a_idx ;\n\nselect relname, relispartition from pg_class where relname like 'p%idx';\n relname │ relispartition\n──────────┼────────────────\n p_a_idx │ f\n p1_a_idx │ t\n(2 rows)\n\n\n\\d p\n Table \"public.p\"\n Column │ Type │ Collation │ Nullable │ Default\n────────┼─────────┼───────────┼──────────┼─────────\n a │ integer │ │ │\n b │ integer │ │ │\nPartition key: LIST (a)\nIndexes:\n \"p_a_idx\" btree (a) INVALID\nNumber of partitions: 1 (Use \\d+ to list them.)\n\n\ncreate index on p11 (a);\nalter index p1_a_idx attach partition p11_a_idx ;\nselect relname, relispartition from pg_class where relname like 'p%idx';\n relname │ relispartition\n───────────┼────────────────\n p_a_idx │ f\n p1_a_idx │ t\n p11_a_idx │ t\n(3 rows)\n\n\\d p\n Table \"public.p\"\n Column │ Type │ Collation │ Nullable │ Default\n────────┼─────────┼───────────┼──────────┼─────────\n a │ integer │ │ │\n b │ integer │ │ │\nPartition key: LIST (a)\nIndexes:\n \"p_a_idx\" btree (a)\nNumber of partitions: 1 
(Use \\d+ to list them.)\n\nMaybe, because the code path we fixed has nothing to do with this case?\n\nThanks,\nAmit\n\n\n\n",
"msg_date": "Tue, 7 May 2019 17:57:12 +0900",
"msg_from": "Amit Langote <Langote_Amit_f8@lab.ntt.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: set relispartition when attaching child index"
}
] |
[
{
"msg_contents": "This tells a pretty scary story:\n\nhttps://coverage.postgresql.org/src/backend/access/gist/index.html\n\nIn particular, gistbuildbuffers.c is not entered *at all*, and\ngistbuild.c is only 21% covered.\n\nI noticed this after adding an assertion that I expected\ngistInitBuildBuffers to fail on, and nonetheless getting\nthrough check-world just fine.\n\nWhy is this so bad? It's not like the gist regression test isn't\nridiculously expensive already; I'd have expected it to provide\ndarn near 100% coverage for what it's costing in runtime.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 24 Apr 2019 14:31:18 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Regression test coverage of GiST index build is awful"
},
{
"msg_contents": "On Wed, Apr 24, 2019 at 9:31 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> This tells a pretty scary story:\n>\n> https://coverage.postgresql.org/src/backend/access/gist/index.html\n>\n> In particular, gistbuildbuffers.c is not entered *at all*, and\n> gistbuild.c is only 21% covered.\n>\n> I noticed this after adding an assertion that I expected\n> gistInitBuildBuffers to fail on, and nonetheless getting\n> through check-world just fine.\n>\n> Why is this so bad? It's not like the gist regression test isn't\n> ridiculously expensive already; I'd have expected it to provide\n> darn near 100% coverage for what it's costing in runtime.\n\nI don't think there is any idea behind this. Seems to be just oversight.\n\nDo you like me to write a patch improving coverage here?\n\n------\nAlexander Korotkov\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company\n\n\n",
"msg_date": "Wed, 24 Apr 2019 21:38:56 +0300",
"msg_from": "Alexander Korotkov <a.korotkov@postgrespro.ru>",
"msg_from_op": false,
"msg_subject": "Re: Regression test coverage of GiST index build is awful"
},
{
"msg_contents": "Alexander Korotkov <a.korotkov@postgrespro.ru> writes:\n> On Wed, Apr 24, 2019 at 9:31 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> Why is this so bad? It's not like the gist regression test isn't\n>> ridiculously expensive already; I'd have expected it to provide\n>> darn near 100% coverage for what it's costing in runtime.\n\n> I don't think there is any idea behind this. Seems to be just oversight.\n\nAfter poking at it a bit, the answer seems to be that the gist buffering\ncode isn't invoked till we get to an index size of effective_cache_size/4,\nwhich by default would be way too much for any regression test index.\n\n> Do you like me to write a patch improving coverage here?\n\nSomebody needs to... that's an awful lot of code to not be testing.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 24 Apr 2019 15:23:15 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Regression test coverage of GiST index build is awful"
}
] |
[
{
"msg_contents": "I just noticed that TRACE_SORT is defined by default (since 2005\napparently). It seems odd since it is the only debugging code enabled by\ndefault.\n\nJoe\n\n-- \nCrunchy Data - http://crunchydata.com\nPostgreSQL Support for Secure Enterprises\nConsulting, Training, & Open Source Development",
"msg_date": "Wed, 24 Apr 2019 17:07:13 -0400",
"msg_from": "Joe Conway <mail@joeconway.com>",
"msg_from_op": true,
"msg_subject": "TRACE_SORT defined by default"
},
{
"msg_contents": "On Wed, Apr 24, 2019 at 2:07 PM Joe Conway <mail@joeconway.com> wrote:\n> I just noticed that TRACE_SORT is defined by default (since 2005\n> apparently). It seems odd since it is the only debugging code enabled by\n> default.\n\nI think that we should get rid of the #ifdef stuff, so that it isn't\npossible to disable the trace_sort instrumentation my commenting out\nthe TRACE_SORT entry in pg_config_manual.h. I recall being opposed on\nthis point by Robert Haas. Possibly because he just didn't want to\ndeal with it at the time.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Wed, 24 Apr 2019 14:10:44 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: TRACE_SORT defined by default"
},
{
"msg_contents": "On 4/24/19 5:10 PM, Peter Geoghegan wrote:\n> On Wed, Apr 24, 2019 at 2:07 PM Joe Conway <mail@joeconway.com> wrote:\n>> I just noticed that TRACE_SORT is defined by default (since 2005\n>> apparently). It seems odd since it is the only debugging code enabled by\n>> default.\n> \n> I think that we should get rid of the #ifdef stuff, so that it isn't\n> possible to disable the trace_sort instrumentation my commenting out\n> the TRACE_SORT entry in pg_config_manual.h. I recall being opposed on\n> this point by Robert Haas. Possibly because he just didn't want to\n> deal with it at the time.\n\n\nHas anyone ever (or recently) measured the impact on performance to have\nthis enabled? Is it that generically useful for debugging of production\ninstances of Postgres that we really want it always enabled despite the\nperformance impact?\n\nMaybe the answer to both is \"yes\", but if so I would agree that we ought\nto remove the define and ifdef's and just bake it in.\n\nJoe\n\n-- \nCrunchy Data - http://crunchydata.com\nPostgreSQL Support for Secure Enterprises\nConsulting, Training, & Open Source Development",
"msg_date": "Wed, 24 Apr 2019 17:15:25 -0400",
"msg_from": "Joe Conway <mail@joeconway.com>",
"msg_from_op": true,
"msg_subject": "Re: TRACE_SORT defined by default"
},
{
"msg_contents": "On Wed, Apr 24, 2019 at 2:15 PM Joe Conway <mail@joeconway.com> wrote:\n> Has anyone ever (or recently) measured the impact on performance to have\n> this enabled? Is it that generically useful for debugging of production\n> instances of Postgres that we really want it always enabled despite the\n> performance impact?\n\nIt is disabled by default, in the sense that the trace_sort GUC\ndefaults to off. I believe that the overhead of building in the\ninstrumentation without enabling it is indistinguishable from zero. In\nany case the current status quo is that it's built by default. I have\nused it in production, though not very often. It's easy to turn it on\nand off.\n\n> Maybe the answer to both is \"yes\", but if so I would agree that we ought\n> to remove the define and ifdef's and just bake it in.\n\nWe're only talking about removing the option of including the\ninstrumentation in binaries when Postgres is built. I'm not aware that\nanyone is doing that. It nobody was doing that, then nobody could be\naffected by removing the #ifdef crud.\n\nI suspect that the reason that this hasn't happened already is because\nit leaves trace_sort/TRACE_SORT in the slightly awkward position of no\nlonger quite meeting the traditional definition of a \"developer\noption\".\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Wed, 24 Apr 2019 14:23:31 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: TRACE_SORT defined by default"
},
{
"msg_contents": "On 2019-Apr-24, Peter Geoghegan wrote:\n\n> I suspect that the reason that this hasn't happened already is because\n> it leaves trace_sort/TRACE_SORT in the slightly awkward position of no\n> longer quite meeting the traditional definition of a \"developer\noption\".\n\nThis is a really strange argument. You're saying that somebody thought\nabout it: \"Hmm, well, I can remove this preprocessor symbol but then\ntrace_sort would no longer resemble a developer option. So I'm going to\nleave the symbol alone\". I don't think that's what happened. It seems\nmore likely to me that nobody has gone to the trouble of deciding that\nthe symbol is worth removing, let alone actually doing it.\n\nIf the instrumentation is good, and you seem to be saying that it is, I\nthink we should just remove the symbol and be done with it.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Wed, 24 Apr 2019 17:29:08 -0400",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: TRACE_SORT defined by default"
},
{
"msg_contents": "On Wed, Apr 24, 2019 at 2:29 PM Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\n> This is a really strange argument. You're saying that somebody thought\n> about it: \"Hmm, well, I can remove this preprocessor symbol but then\n> trace_sort would no longer resemble a developer option. So I'm going to\n> leave the symbol alone\". I don't think that's what happened. It seems\n> more likely to me that nobody has gone to the trouble of deciding that\n> the symbol is worth removing, let alone actually doing it.\n\nIt doesn't seem very important now.\n\n> If the instrumentation is good, and you seem to be saying that it is, I\n> think we should just remove the symbol and be done with it.\n\nSounds like a plan. Do you want to take care of it, Joe?\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Wed, 24 Apr 2019 14:31:36 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: TRACE_SORT defined by default"
},
{
"msg_contents": "Peter Geoghegan <pg@bowt.ie> writes:\n> On Wed, Apr 24, 2019 at 2:15 PM Joe Conway <mail@joeconway.com> wrote:\n>> Has anyone ever (or recently) measured the impact on performance to have\n>> this enabled? Is it that generically useful for debugging of production\n>> instances of Postgres that we really want it always enabled despite the\n>> performance impact?\n\n> It is disabled by default, in the sense that the trace_sort GUC\n> defaults to off. I believe that the overhead of building in the\n> instrumentation without enabling it is indistinguishable from zero.\n\nIt would probably be useful to actually prove that rather than just\nassuming it. I do see some code under the symbol that is executed\neven when !trace_sort, and in any case Andres keeps complaining that\neven always-taken branches are expensive ...\n\n> In\n> any case the current status quo is that it's built by default. I have\n> used it in production, though not very often. It's easy to turn it on\n> and off.\n\nWould any non-wizard really have a use for it?\n\nIt seems like we should either make this really a developer option\n(and hence not enabled by default) or else move it into some other\ncategory than DEVELOPER_OPTIONS.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 24 Apr 2019 18:04:41 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: TRACE_SORT defined by default"
},
{
"msg_contents": "On Wed, Apr 24, 2019 at 3:04 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > It is disabled by default, in the sense that the trace_sort GUC\n> > defaults to off. I believe that the overhead of building in the\n> > instrumentation without enabling it is indistinguishable from zero.\n>\n> It would probably be useful to actually prove that rather than just\n> assuming it.\n\nThe number of individual trace_sort LOG messages is proportionate to\nthe number of runs produced.\n\n> I do see some code under the symbol that is executed\n> even when !trace_sort, and in any case Andres keeps complaining that\n> even always-taken branches are expensive ...\n\nI think that you're referring to the stuff needed for the D-Trace\nprobes. It's a pity that there isn't better support for that, since\nLinux has a lot for options around static userspace probes these days\n(SystemTap is very much out of favor, and never was very popular).\nThere seems to be a recognition among the Linux people that the\ndistinction between users and backend experts is blurred. The kernel\nsupport for things like eBPF and BCC is still patchy, but that will\nchange.\n\n> Would any non-wizard really have a use for it?\n\nThat depends on what the cut-off point is for wizard. I recognize that\nthere is a need to draw the line somewhere. I suspect that a fair\nnumber of people could intuit problems in a real-world scenario using\ntrace_sort, without having any grounding in the theory, and without\nmuch knowledge of tuplesort.c specifically.\n\n> It seems like we should either make this really a developer option\n> (and hence not enabled by default) or else move it into some other\n> category than DEVELOPER_OPTIONS.\n\nThe information that it makes available is approximately the same as\nthe information made available by the new\npg_stat_progress_create_index view, but with getrusage() stats.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Wed, 24 Apr 2019 15:40:44 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: TRACE_SORT defined by default"
},
{
"msg_contents": "On Wed, Apr 24, 2019 at 6:04 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Peter Geoghegan <pg@bowt.ie> writes:\n>\n> > In\n> > any case the current status quo is that it's built by default. I have\n> > used it in production, though not very often. It's easy to turn it on\n> > and off.\n>\n> Would any non-wizard really have a use for it?\n>\n\nI've had people use it to get some insight into the operation and memory\nusage of Aggregate nodes, since those nodes offer nothing useful via\nEXPLAIN ANALYZE. It would be a shame to lose that ability on\npackage-installed PostgreSQL unless we fix Aggregate node reporting first.\n\nCheers,\n\nJeff\n\n",
"msg_date": "Thu, 25 Apr 2019 11:53:16 -0400",
"msg_from": "Jeff Janes <jeff.janes@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: TRACE_SORT defined by default"
},
{
"msg_contents": "On 2019-Apr-25, Jeff Janes wrote:\n\n> I've had people use it to get some insight into the operation and memory\n> usage of Aggregate nodes, since those nodes offer nothing useful via\n> EXPLAIN ANALYZE. It would be a shame to lose that ability on\n> package-installed PostgreSQL unless we fix Aggregate node reporting first.\n\nBut the proposal is not to remove the _code_. The proposal is just to\nremove that \"#ifdef\" lines that would make it conditionally compilable,\n*if* the symbol that they test weren't always enabled. In other words,\nturn it from \"always compiled, but you can turn it off although nobody\ndoes\" into \"always compiled\".\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Thu, 25 Apr 2019 16:52:03 -0400",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: TRACE_SORT defined by default"
},
{
"msg_contents": "On Thu, Apr 25, 2019 at 1:52 PM Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\n> But the proposal is not to remove the _code_. The proposal is just to\n> remove that \"#ifdef\" lines that would make it conditionally compilable,\n> *if* the symbol that they test weren't always enabled. In other words,\n> turn it from \"always compiled, but you can turn it off although nobody\n> does\" into \"always compiled\".\n\nTom suggested that we might want to remove the code as an alternative.\nWe have two almost-opposite proposals here.\n\nAs I said, I think that the way that we think about or define\ndeveloper options frames this discussion.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Thu, 25 Apr 2019 13:54:58 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: TRACE_SORT defined by default"
},
{
"msg_contents": "Alvaro Herrera <alvherre@2ndquadrant.com> writes:\n> On 2019-Apr-25, Jeff Janes wrote:\n>> I've had people use it to get some insight into the operation and memory\n>> usage of Aggregate nodes, since those nodes offer nothing useful via\n>> EXPLAIN ANALYZE. It would be a shame to lose that ability on\n>> package-installed PostgreSQL unless we fix Aggregate node reporting first.\n\n> But the proposal is not to remove the _code_. The proposal is just to\n> remove that \"#ifdef\" lines that would make it conditionally compilable,\n> *if* the symbol that they test weren't always enabled. In other words,\n> turn it from \"always compiled, but you can turn it off although nobody\n> does\" into \"always compiled\".\n\nWell, I was suggesting that we ought to consider the alternative of\nmaking it *not* always compiled, and Jeff was pushing back on that.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 25 Apr 2019 16:56:33 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: TRACE_SORT defined by default"
},
{
"msg_contents": "On Thu, Apr 25, 2019 at 1:56 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Well, I was suggesting that we ought to consider the alternative of\n> making it *not* always compiled, and Jeff was pushing back on that.\n\nRight. Sorry.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Thu, 25 Apr 2019 13:56:56 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: TRACE_SORT defined by default"
},
{
"msg_contents": "On Wed, Apr 24, 2019 at 06:04:41PM -0400, Tom Lane wrote:\n>Peter Geoghegan <pg@bowt.ie> writes:\n>> On Wed, Apr 24, 2019 at 2:15 PM Joe Conway <mail@joeconway.com> wrote:\n>>> Has anyone ever (or recently) measured the impact on performance to have\n>>> this enabled? Is it that generically useful for debugging of production\n>>> instances of Postgres that we really want it always enabled despite the\n>>> performance impact?\n>\n>> It is disabled by default, in the sense that the trace_sort GUC\n>> defaults to off. I believe that the overhead of building in the\n>> instrumentation without enabling it is indistinguishable from zero.\n>\n>It would probably be useful to actually prove that rather than just\n>assuming it. I do see some code under the symbol that is executed\n>even when !trace_sort, and in any case Andres keeps complaining that\n>even always-taken branches are expensive ...\n>\n\nDid I hear the magical word \"benchmark\" over here?\n\nI suppose it'd be useful to have some actual numbers showing what\noverhead this actually has, and whether disabling it would make any\ndifference. I can't run anything right away, but I could get us some\nnumbers in a couple of days, assuming there is some agreement on which\ncases we need to test.\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services \n\n\n",
"msg_date": "Thu, 25 Apr 2019 23:49:09 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: TRACE_SORT defined by default"
},
{
"msg_contents": "On Thu, 25 Apr 2019 at 06:41, Peter Geoghegan <pg@bowt.ie> wrote:\n>\n> On Wed, Apr 24, 2019 at 3:04 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > > It is disabled by default, in the sense that the trace_sort GUC\n> > > defaults to off. I believe that the overhead of building in the\n> > > instrumentation without enabling it is indistinguishable from zero.\n> >\n> > It would probably be useful to actually prove that rather than just\n> > assuming it.\n>\n> The number of individual trace_sort LOG messages is proportionate to\n> the number of runs produced.\n>\n> > I do see some code under the symbol that is executed\n> > even when !trace_sort, and in any case Andres keeps complaining that\n> > even always-taken branches are expensive ...\n>\n> I think that you're referring to the stuff needed for the D-Trace\n> probes. It's a pity that there isn't better support for that, since\n> Linux has a lot for options around static userspace probes these days\n> (SystemTap is very much out of favor, and never was very popular).\n\nWhich is a real shame. I got into it last week and I cannot believe\nI've wasted time and effort trying to get anything useful out of perf\nwhen it comes to tracing. There's just no comparison.\n\nAs usual, the incoming replacement tools are broken, incompatible and\nincomplete, especially for userspace. Because really, bah, users, who\ncares about users? But yet issues with systemtap are being dismissed\nwith variants of \"that's obsolete, use perf or eBPF-tools\". Right,\njust like a CnC router can be easily replaced with a chisel.\n\nI expect eBPF-tools will be pretty amazing once it matures. But for\nthose of us stuck in \"working with actual user applications\" land, on\nRHEL6 and RHEL7 and the like, it won't be doing so in a hurry.\n\nWith that said, static probes are useful but frustrating. The\ndtrace-alike model SystemTap adopted, and that we use in Pg via\nSystemTap's 'dtrace' script, generates quite dtrace-alike static probe\npoints, complete with the frustrating deficiencies of those probe\npoints. Most importantly, they don't record the probe argument names\nor data types, and if you want to get a string value you need to\nhandle each string probe argument individually.\n\nStatic probes are fantastic as labels and they let you see the program\nstate in places that are often impractical for debuginfo-based DWARF\nprobing since they give you a stable, consistent way to see something\nother than function args and return values. But they're frustratingly\ndeficient compared to DWARF-based probes in other ways.\n\n> There seems to be a recognition among the Linux people that the\n> distinction between users and backend experts is blurred. The kernel\n> support for things like eBPF and BCC is still patchy, but that will\n> change.\n\nJust in time for it to be deemed obsolete and replaced with another\ntool that only works with the kernel for the first couple of years,\nprobably. Like perf was.\n\n> > > Would any non-wizard really have a use for it?\n\nExtremely strongly yes.\n\nIf nothing else you can wrap these tools up into toolsets and scripts\nthat give people insights into running systems just by running the\nscript. Non-invasively, without code changes, on an existing running\nsystem.\n\nI wrote a systemtap script last week that tracks each\ntransaction in a Pg backend from xid allocation through to\ncommit/rollback/prepare/commitprepared/rollbackprepared and takes\nstats on xid allocations, txn durations, etc. Then figures out if the\ncommitted txn needs logical decoding by any existing slots and tracks\nhow long each txn takes between commit until a given logical walsender\nfinishes decoding it and sending it. Collects stats on\ninserts/updates/deletes etc in each txn, txn size, etc. Then follows\nthe txn and observes it being received and applied by a logical\nreplication client (pglogical). So it can report latencies and\nthroughputs in every step through the logical replication pipeline.\n\nRight now that script isn't pretty, and it's not something easily\nreused. But I could wrap it up in a script that turned on/off parts\nbased on what a user needed, wrote the stats to csv for\npostprocessing, etc.\n\nThe underlying tool isn't for end users. But the end result sure can\nbe. After all, we don't expect users to mess with xact.c and\ntransam.c, we just expect them to run SQL, but we don't say PostgreSQL\nis only for wizards not end users.\n\nFor that reason I'm trying to find time to add a large pile more probe\npoints to PostgreSQL. Informed in part by what I learned writing the\nabove script. I want probes for WaitEventSetWait, transam activities\n(especially xid allocation, commit, rollback), 2pc, walsender\nactivity, reorderbuffer, walreceiver, slots, global xmin/catalog_xmin\nchanges, writes, file flushes, buffer access, and lots more. (Pg's\nexisting probes on transaction start and finish are almost totally\nuseless as you can't tell if the txn then gets an xid allocated,\nwhether the commit generates an xlog record or not, etc).\n\n-- \n Craig Ringer http://www.2ndQuadrant.com/\n 2ndQuadrant - PostgreSQL Solutions for the Enterprise\n\n\n",
"msg_date": "Tue, 29 Oct 2019 16:54:47 +0800",
"msg_from": "Craig Ringer <craig@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: TRACE_SORT defined by default"
},
{
"msg_contents": "On Fri, 26 Apr 2019 at 05:49, Tomas Vondra <tomas.vondra@2ndquadrant.com> wrote:\n>\n> On Wed, Apr 24, 2019 at 06:04:41PM -0400, Tom Lane wrote:\n> >Peter Geoghegan <pg@bowt.ie> writes:\n> >> On Wed, Apr 24, 2019 at 2:15 PM Joe Conway <mail@joeconway.com> wrote:\n> >>> Has anyone ever (or recently) measured the impact on performance to have\n> >>> this enabled? Is it that generically useful for debugging of production\n> >>> instances of Postgres that we really want it always enabled despite the\n> >>> performance impact?\n> >\n> >> It is disabled by default, in the sense that the trace_sort GUC\n> >> defaults to off. I believe that the overhead of building in the\n> >> instrumentation without enabling it is indistinguishable from zero.\n> >\n> >It would probably be useful to actually prove that rather than just\n> >assuming it. I do see some code under the symbol that is executed\n> >even when !trace_sort, and in any case Andres keeps complaining that\n> >even always-taken branches are expensive ...\n> >\n>\n> Did I hear the magical word \"benchmark\" over here?\n>\n> I suppose it'd be useful to have some actual numbers showing what\n> overhead this actually has, and whether disabling it would make any\n> difference. I can't run anything right away, but I could get us some\n> numbers in a couple of days, assuming there is some agreement on which\n> cases we need to test.\n\n\nIf you're worried about overheads of dtrace-style probes, you can\n(with systemtap ones like we use) generate a set of semaphores as a\nseparate .so that you link into the final build. Then you can test for\nTRACE_POSTGRESQL_FOO_BAR_ENABLED() and only do any work required to\ngenerate input for the trace call if that returns true. You can\ngenerally unlikely() it since you don't care about the cost of it with\ntracing enabled nearly as much.\n\n-- \n Craig Ringer http://www.2ndQuadrant.com/\n 2ndQuadrant - PostgreSQL Solutions for the Enterprise\n\n\n",
"msg_date": "Tue, 29 Oct 2019 17:00:21 +0800",
"msg_from": "Craig Ringer <craig@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: TRACE_SORT defined by default"
}
] |
[
{
"msg_contents": "Hi,\n\nTo me it seems like a noticable layering violation for dbsize.c to\ndirectly stat() files in the filesystem, without going through\nsmgr.c. And now tableam.\n\nThis means there's code knowing about file segments outside of md.c -\nwhich imo shouldn't be the case. We also stat a lot more than\nnecessary, if the relation is already open - when going through smgr.c\nwe'd only need to stat the last segment, rather than all previous\nsegments.\n\nAlways going through smgr would have the disadvantage that we'd probably\nneed a small bit of logic to close the relation's smgr references if it\npreviously wasn't open, to avoid increasing memory usage unnecessarily.\n\n\ndbsize.c directly stat()ing files, also means that if somebody were to\nwrite a tableam that doesn't store data in postgres compatible segmented\nfiles, the size can't be displayed.\n\n\nI think we should change dbsize.c to call\nRelationGetNumberOfBlocksInFork() for relkinds != TABLE/TOAST/MATVIEW,\nand a new AM callback for those. Probably with the aforementioned\nadditional logic of closing smgr references if they weren't open before\nthe size computation.\n\nImo this pretty clearly is v13 work.\n\nI'd assume that pg_database_size*() would continue the same way it does\nright now.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 24 Apr 2019 16:09:56 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Why is it OK for dbsize.c to look at relation files directly?"
},
{
"msg_contents": "On Wed, Apr 24, 2019 at 04:09:56PM -0700, Andres Freund wrote:\n> I think we should change dbsize.c to call\n> RelationGetNumberOfBlocksInFork() for relkinds != TABLE/TOAST/MATVIEW,\n> and a new AM callback for those. Probably with the aforementioned\n> additional logic of closing smgr references if they weren't open before\n> the size computation.\n> \n> Imo this pretty clearly is v13 work.\n\nAgreed that this is out of 12's scope. Perhaps you should add a TODO\nitem for that?\n--\nMichael",
"msg_date": "Thu, 25 Apr 2019 09:16:50 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Why is it OK for dbsize.c to look at relation files directly?"
},
{
"msg_contents": "On 2019-04-25 09:16:50 +0900, Michael Paquier wrote:\n> Perhaps you should add a TODO item for that?\n\nJust so it's guaranteed that it'll never happen? :)\n\n\n",
"msg_date": "Wed, 24 Apr 2019 17:18:08 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: Why is it OK for dbsize.c to look at relation files directly?"
},
{
"msg_contents": "On Wed, Apr 24, 2019 at 05:18:08PM -0700, Andres Freund wrote:\n> Just so it's guaranteed that it'll never happen? :)\n\nItems get done, from time to time... b0eaa4c is one rare example :p\n\n/me runs and hides.\n--\nMichael",
"msg_date": "Thu, 25 Apr 2019 09:25:03 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Why is it OK for dbsize.c to look at relation files directly?"
},
{
"msg_contents": "On Thu, Apr 25, 2019 at 09:25:03AM +0900, Michael Paquier wrote:\n>On Wed, Apr 24, 2019 at 05:18:08PM -0700, Andres Freund wrote:\n>> Just so it's guaranteed that it'll never happen? :)\n>\n>Items get done, from time to time... b0eaa4c is one rare example :p\n>\n>/me runs and hides.\n\nIMO this is one of the rare examples of a TODO item that is actually\ndoable and not \"I have no idea how to do this so I'll stick it into the\nTODO list\". And it seems like a fairly well isolated stuff, so it might\nbe a nice work for someone new.\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services \n\n\n",
"msg_date": "Thu, 25 Apr 2019 23:38:47 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Why is it OK for dbsize.c to look at relation files directly?"
}
] |
[
{
"msg_contents": "Hello.\n\nI happened to find that several symbols renamed in 3eb77eba5a are\nleft in comments. Please find the attached.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center",
"msg_date": "Thu, 25 Apr 2019 18:50:28 +0900 (Tokyo Standard Time)",
"msg_from": "Kyotaro HORIGUCHI <horiguchi.kyotaro@lab.ntt.co.jp>",
"msg_from_op": true,
"msg_subject": "Comment fix for renamed functions"
},
{
"msg_contents": "On Thu, Apr 25, 2019 at 6:51 PM Kyotaro HORIGUCHI\n<horiguchi.kyotaro@lab.ntt.co.jp> wrote:\n>\n> Hello.\n>\n> I happened to find that several symbols renamed in 3eb77eba5a are\n> left in comments. Please find the attached.\n\nThanks for the patch! Committed.\n\nRegards,\n\n-- \nFujii Masao\n\n\n",
"msg_date": "Thu, 25 Apr 2019 23:51:24 +0900",
"msg_from": "Fujii Masao <masao.fujii@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Comment fix for renamed functions"
}
] |
[
{
"msg_contents": "The following bug has been logged on the website:\n\nBug reference: 15781\nLogged by: Sean Johnston\nEmail address: sean.johnston@edgeintelligence.com\nPostgreSQL version: 11.2\nOperating system: docker image postgres:11.2\nDescription: \n\nExample Query:\r\n\r\nselect exists(select c1 from ft4), avg(c1) from ft4 where c1 = (select\nmax(c1) from ft4);\r\n\r\nFull Steps (modified from the postgres_fdw regression tests):\r\n\r\nCREATE EXTENSION postgres_fdw;\r\n\r\nCREATE SERVER testserver1 FOREIGN DATA WRAPPER postgres_fdw;\r\n DO $d$\r\n\tBEGIN\r\n\tEXECUTE $$CREATE SERVER loopback FOREIGN DATA WRAPPER postgres_fdw\r\n\tOPTIONS (dbname '$$||current_database()||$$',\r\n\t\tport '$$||current_setting('port')||$$'\r\n\t)$$;\r\n\tEND;\r\n$d$;\r\nCREATE USER MAPPING FOR CURRENT_USER SERVER loopback;\r\n\r\nCREATE SCHEMA \"S 1\";\r\n\r\nCREATE TABLE \"S 1\".\"T 3\" (\r\n c1 int NOT NULL,\r\n c2 int NOT NULL,\r\n c3 text,\r\n CONSTRAINT t3_pkey PRIMARY KEY (c1)\r\n);\r\n\r\nCREATE FOREIGN TABLE ft4 (\r\n c1 int NOT NULL,\r\n c2 int NOT NULL,\r\n c3 text\r\n) SERVER loopback OPTIONS (schema_name 'S 1', table_name 'T 3');\r\n\r\nselect exists(select c1 from ft4), avg(c1) from ft4 where c1 = (select\nmax(c1) from ft4);",
"msg_date": "Thu, 25 Apr 2019 13:32:31 +0000",
"msg_from": "PG Bug reporting form <noreply@postgresql.org>",
"msg_from_op": true,
"msg_subject": "BUG #15781: subselect on foreign table (postgres_fdw) can crash\n (segfault)"
},
{
"msg_contents": "Hi\n\nI can reproduce this on REL_11_STABLE and HEAD.\n\nHere is backtrace from REL_11_STABLE:\n\n#0 CheckVarSlotCompatibility (slot=slot@entry=0x0, attnum=1, vartype=16) at execExprInterp.c:1867\n#1 0x00005611db3cb342 in CheckExprStillValid (state=state@entry=0x5611dd0fa368, econtext=econtext@entry=0x5611dd0f9730) at execExprInterp.c:1831\n#2 0x00005611db3cb36e in ExecInterpExprStillValid (state=0x5611dd0fa368, econtext=0x5611dd0f9730, isNull=0x7ffc524ca89f) at execExprInterp.c:1780\n#3 0x00007fc3648bac8d in ExecEvalExpr (isNull=0x7ffc524ca89f, econtext=0x5611dd0f9730, state=<optimized out>)\n at ../../src/include/executor/executor.h:294\n#4 process_query_params (econtext=0x5611dd0f9730, param_flinfo=0x5611dd0fa2d0, param_exprs=<optimized out>, \n param_values=param_values@entry=0x5611dd0fad50) at postgres_fdw.c:4124\n#5 0x00007fc3648baf82 in create_cursor (node=<optimized out>) at postgres_fdw.c:3148\n#6 0x00007fc3648bb041 in postgresIterateForeignScan (node=0x5611dd0f9618) at postgres_fdw.c:1451\n#7 0x00005611db4026c4 in ForeignNext (node=node@entry=0x5611dd0f9618) at nodeForeignscan.c:54\n#8 0x00005611db3db4ff in ExecScanFetch (recheckMtd=0x5611db40256e <ForeignRecheck>, accessMtd=0x5611db402650 <ForeignNext>, node=0x5611dd0f9618)\n at execScan.c:95\n#9 ExecScan (node=0x5611dd0f9618, accessMtd=accessMtd@entry=0x5611db402650 <ForeignNext>, \n recheckMtd=recheckMtd@entry=0x5611db40256e <ForeignRecheck>) at execScan.c:145\n#10 0x00005611db40254d in ExecForeignScan (pstate=<optimized out>) at nodeForeignscan.c:121\n#11 0x00005611db3d9aa2 in ExecProcNodeFirst (node=0x5611dd0f9618) at execProcnode.c:445\n#12 0x00005611db3d2039 in ExecProcNode (node=0x5611dd0f9618) at ../../../src/include/executor/executor.h:247\n#13 ExecutePlan (estate=estate@entry=0x5611dd0b2718, planstate=0x5611dd0f9618, use_parallel_mode=<optimized out>, \n operation=operation@entry=CMD_SELECT, sendTuples=sendTuples@entry=true, numberTuples=numberTuples@entry=0, direction=ForwardScanDirection, \n dest=0x5611dd0df520, execute_once=true) at execMain.c:1723\n#14 0x00005611db3d2c94 in standard_ExecutorRun (queryDesc=0x5611dd0c7be8, direction=ForwardScanDirection, count=0, execute_once=<optimized out>)\n at execMain.c:364\n#15 0x00005611db3d2d4f in ExecutorRun (queryDesc=queryDesc@entry=0x5611dd0c7be8, direction=direction@entry=ForwardScanDirection, \n count=count@entry=0, execute_once=<optimized out>) at execMain.c:307\n#16 0x00005611db53f0ed in PortalRunSelect (portal=portal@entry=0x5611dd054278, forward=forward@entry=true, count=0, \n count@entry=9223372036854775807, dest=dest@entry=0x5611dd0df520) at pquery.c:932\n#17 0x00005611db5407de in PortalRun (portal=portal@entry=0x5611dd054278, count=count@entry=9223372036854775807, isTopLevel=isTopLevel@entry=true, \n run_once=run_once@entry=true, dest=dest@entry=0x5611dd0df520, altdest=altdest@entry=0x5611dd0df520, completionTag=0x7ffc524cad10 \"\")\n at pquery.c:773\n#18 0x00005611db53caa9 in exec_simple_query (\n query_string=query_string@entry=0x5611dcfedac8 \"select exists(select c1 from ft4), avg(c1) from ft4 where c1 = (select\\nmax(c1) from ft4);\")\n at postgres.c:1145\n#19 0x00005611db53e9ce in PostgresMain (argc=<optimized out>, argv=argv@entry=0x5611dd018910, dbname=<optimized out>, username=<optimized out>)\n at postgres.c:4182\n#20 0x00005611db4b8d8b in BackendRun (port=port@entry=0x5611dd0115a0) at postmaster.c:4358\n#21 0x00005611db4bbd2f in BackendStartup (port=port@entry=0x5611dd0115a0) at postmaster.c:4030\n#22 0x00005611db4bbf52 in ServerLoop () at postmaster.c:1707\n#23 0x00005611db4bd459 in PostmasterMain (argc=3, argv=<optimized out>) at postmaster.c:1380\n#24 0x00005611db4210c9 in main (argc=3, argv=0x5611dcfe81f0) at main.c:228\n\nSimilar from HEAD:\n\n#0 CheckVarSlotCompatibility (slot=slot@entry=0x0, attnum=1, vartype=16) at execExprInterp.c:1850\n#1 0x00005581fa6011b7 in CheckExprStillValid (state=state@entry=0x5581fba700c0, econtext=econtext@entry=0x5581fba6f4f0) at execExprInterp.c:1814\n#2 0x00005581fa6011e3 in ExecInterpExprStillValid (state=0x5581fba700c0, econtext=0x5581fba6f4f0, isNull=0x7ffcad499ebf) at execExprInterp.c:1763\n#3 0x00007f276130d67c in ExecEvalExpr (isNull=0x7ffcad499ebf, econtext=0x5581fba6f4f0, state=<optimized out>)\n at ../../src/include/executor/executor.h:288\n#4 process_query_params (econtext=0x5581fba6f4f0, param_flinfo=0x5581fba70028, param_exprs=<optimized out>, \n param_values=param_values@entry=0x5581fba70aa8) at postgres_fdw.c:4307\n#5 0x00007f276130d982 in create_cursor (node=<optimized out>) at postgres_fdw.c:3247\n#6 0x00007f276130da3c in postgresIterateForeignScan (node=0x5581fba6f3d8) at postgres_fdw.c:1517\n#7 0x00005581fa63adad in ForeignNext (node=node@entry=0x5581fba6f3d8) at nodeForeignscan.c:54\n#8 0x00005581fa61104b in ExecScanFetch (recheckMtd=0x5581fa63adf1 <ForeignRecheck>, accessMtd=0x5581fa63ad2c <ForeignNext>, node=0x5581fba6f3d8)\n at execScan.c:93\n#9 ExecScan (node=0x5581fba6f3d8, accessMtd=accessMtd@entry=0x5581fa63ad2c <ForeignNext>, \n recheckMtd=recheckMtd@entry=0x5581fa63adf1 <ForeignRecheck>) at execScan.c:143\n#10 0x00005581fa63add0 in ExecForeignScan (pstate=<optimized out>) at nodeForeignscan.c:115\n#11 0x00005581fa60f3e8 in ExecProcNodeFirst (node=0x5581fba6f3d8) at execProcnode.c:445\n#12 0x00005581fa607fdd in ExecProcNode (node=0x5581fba6f3d8) at ../../../src/include/executor/executor.h:239\n#13 ExecutePlan (estate=estate@entry=0x5581fba2abb8, planstate=0x5581fba6f3d8, use_parallel_mode=<optimized out>, \n operation=operation@entry=CMD_SELECT, sendTuples=sendTuples@entry=true, numberTuples=numberTuples@entry=0, direction=ForwardScanDirection, \n dest=0x5581fba5cac0, execute_once=true) at execMain.c:1648\n#14 0x00005581fa608c2a in standard_ExecutorRun (queryDesc=0x5581fba207f8, direction=ForwardScanDirection, count=0, execute_once=<optimized out>)\n at execMain.c:365\n#15 0x00005581fa608ce5 in ExecutorRun (queryDesc=queryDesc@entry=0x5581fba207f8, direction=direction@entry=ForwardScanDirection, \n count=count@entry=0, execute_once=<optimized out>) at execMain.c:309\n#16 0x00005581fa782d65 in PortalRunSelect (portal=portal@entry=0x5581fb9bb168, forward=forward@entry=true, count=0, \n count@entry=9223372036854775807, dest=dest@entry=0x5581fba5cac0) at pquery.c:929\n#17 0x00005581fa78442c in PortalRun (portal=portal@entry=0x5581fb9bb168, count=count@entry=9223372036854775807, isTopLevel=isTopLevel@entry=true, \n run_once=run_once@entry=true, dest=dest@entry=0x5581fba5cac0, altdest=altdest@entry=0x5581fba5cac0, completionTag=0x7ffcad49a330 \"\")\n at pquery.c:770\n#18 0x00005581fa780755 in exec_simple_query (\n query_string=query_string@entry=0x5581fb955ac8 \"select exists(select c1 from ft4), avg(c1) from ft4 where c1 = (select\\nmax(c1) from ft4);\")\n at postgres.c:1215\n#19 0x00005581fa78263d in PostgresMain (argc=<optimized out>, argv=argv@entry=0x5581fb981310, dbname=<optimized out>, username=<optimized out>)\n at postgres.c:4249\n#20 0x00005581fa6f7979 in BackendRun (port=port@entry=0x5581fb978d20) at postmaster.c:4426\n#21 0x00005581fa6faa98 in BackendStartup (port=port@entry=0x5581fb978d20) at postmaster.c:4117\n#22 0x00005581fa6facbb in ServerLoop () at postmaster.c:1704\n#23 0x00005581fa6fc1fc in PostmasterMain (argc=3, argv=<optimized out>) at postmaster.c:1377\n#24 0x00005581fa65acf1 in main (argc=3, argv=0x5581fb9501f0) at main.c:228\n\nregards, Sergei\n\n\n",
"msg_date": "Thu, 25 Apr 2019 17:20:17 +0300",
"msg_from": "Sergei Kornilov <sk@zsrv.org>",
"msg_from_op": false,
"msg_subject": "Re: BUG #15781: subselect on foreign table (postgres_fdw) can crash\n (segfault)"
},
{
"msg_contents": "PG Bug reporting form <noreply@postgresql.org> writes:\n> [ this crashes if ft4 is a postgres_fdw foreign table: ]\n> select exists(select c1 from ft4), avg(c1) from ft4 where c1 = (select\n> max(c1) from ft4);\n\nHm, the max() subquery isn't necessary, this is sufficient:\n\nselect exists(select c1 from ft4), avg(c1) from ft4 where c1 = 42;\n\nThat produces a plan like\n\n QUERY PLAN \n-----------------------------------------------------------------------------------\n Foreign Scan (cost=200.07..246.67 rows=1 width=33)\n Output: ($0), (avg(ft4.c1))\n Relations: Aggregate on (public.ft4)\n Remote SQL: SELECT $1::boolean, avg(c1) FROM \"S 1\".\"T 3\" WHERE ((c1 = 432))\n InitPlan 1 (returns $0)\n -> Foreign Scan on public.ft4 ft4_1 (cost=100.00..212.39 rows=3413 width=0)\n Remote SQL: SELECT NULL FROM \"S 1\".\"T 3\"\n(7 rows)\n\nNow one's first observation about that is that it's kind of dumb to send\nthe result of the locally-executed InitPlan over to the far end only to\nread it back. 
So maybe we should be thinking about how to avoid that.\nWe do avoid it for plain foreign scans:\n\nregression=# explain verbose \n select exists(select c1 from ft4), * from ft4 where c1 = 42;\n QUERY PLAN \n-----------------------------------------------------------------------------------\n Foreign Scan on public.ft4 (cost=200.03..226.15 rows=6 width=41)\n Output: $0, ft4.c1, ft4.c2, ft4.c3\n Remote SQL: SELECT c1, c2, c3 FROM \"S 1\".\"T 3\" WHERE ((c1 = 42))\n InitPlan 1 (returns $0)\n -> Foreign Scan on public.ft4 ft4_1 (cost=100.00..212.39 rows=3413 width=0)\n Remote SQL: SELECT NULL FROM \"S 1\".\"T 3\"\n(6 rows)\n\nand also for foreign joins:\n\nregression=# explain verbose \n select exists(select c1 from ft4), * from ft4, ft4 ft4b where ft4.c1 = 42 and ft4b.c1 = 43;\n QUERY PLAN \n------------------------------------------------------------------------------------------------------------------------------------------------------\n Foreign Scan (cost=200.03..252.93 rows=36 width=81)\n Output: $0, ft4.c1, ft4.c2, ft4.c3, ft4b.c1, ft4b.c2, ft4b.c3\n Relations: (public.ft4) INNER JOIN (public.ft4 ft4b)\n Remote SQL: SELECT r1.c1, r1.c2, r1.c3, r2.c1, r2.c2, r2.c3 FROM (\"S 1\".\"T 3\" r1 INNER JOIN \"S 1\".\"T 3\" r2 ON (((r2.c1 = 43)) AND ((r1.c1 = 42))))\n InitPlan 1 (returns $0)\n -> Foreign Scan on public.ft4 ft4_1 (cost=100.00..212.39 rows=3413 width=0)\n Remote SQL: SELECT NULL FROM \"S 1\".\"T 3\"\n(7 rows)\n\n\nbut the code for upper-relation scans is apparently stupider than either\nof those cases.\n\nThe proximate cause of the crash is that we have {PARAM 1}\n(representing the output of the InitPlan) in the path's fdw_exprs, and\nalso the identical expression in fdw_scan_tlist, and that means that when\nsetrefs.c processes the ForeignScan node it thinks it should replace the\n{PARAM 1} in fdw_exprs with a Var representing a reference to the\nfdw_scan_tlist entry. 
That would be fine if the fdw_exprs represented\nexpressions to be evaluated over the output of the foreign scan, but of\ncourse they don't --- postgres_fdw uses fdw_exprs to compute values to be\nsent to the remote end, instead. So we crash at runtime because there's\nno slot to supply such output to the fdw_exprs.\n\nI was able to make the crash go away by removing this statement from\nset_foreignscan_references:\n\n fscan->fdw_exprs = (List *)\n fix_upper_expr(root,\n (Node *) fscan->fdw_exprs,\n itlist,\n INDEX_VAR,\n rtoffset);\n\nand we still pass check-world without that (which means we lack test\ncoverage, because the minimum that should happen to fdw_exprs is\nfix_scan_list :-(). But I do not think that's an acceptable route to\na patch, because it amounts to having the core code know what the FDW\nis using fdw_exprs for, and we explicitly disclaim any assumptions about\nthat. fdw_exprs is specified to be processed the same as other\nexpressions in the same plan node, so I think this fix_upper_expr call\nprobably ought to stay like it is, even though it's not really the right\nthing for postgres_fdw. It might be the right thing for other FDWs.\n\n(One could imagine, perhaps, having some flag in the ForeignPlan\nnode that tells setrefs.c what to do. But that would be an API break\nfor FDWs, so it wouldn't be a back-patchable solution.)\n\n(Actually, it seems to me that set_foreignscan_references is *already*\nassuming too much about the semantics of these expressions in upper\nplan nodes, so maybe we need to have a chat about that anyway.)\n\nIf we do leave it like this, then the only way for postgres_fdw to\navoid trouble is to not have any entries in fdw_exprs that exactly\nmatch entries in fdw_scan_tlist. So that pretty much devolves back\nto what I said before: don't ship values to the far end that are\njust going to be fed back as-is. 
But now it's a correctness\nrequirement not just an optimization.\n\nI haven't had anything to do with postgres_fdw's upper-relation-pushdown\ncode, so I am not sure why it's stupider than the other cases.\nThoughts anybody?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 25 Apr 2019 14:24:41 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: BUG #15781: subselect on foreign table (postgres_fdw) can crash\n (segfault)"
},
{
"msg_contents": "> On Thu, Apr 25, 2019 at 8:24 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> The proximate cause of the crash is that we have {PARAM 1}\n> (representing the output of the InitPlan) in the path's fdw_exprs, and\n> also the identical expression in fdw_scan_tlist, and that means that when\n> setrefs.c processes the ForeignScan node it thinks it should replace the\n> {PARAM 1} in fdw_exprs with a Var representing a reference to the\n> fdw_scan_tlist entry.\n\nI've noticed, that it behaves like that since f9f63ed1f2e5 (originally I found\nit pretty strange, but after this explanation it does make sense). As an\nexperiment, I've changed the position of condition of\n\n if (context->subplan_itlist->has_non_vars)\n\nback - it also made problem to disappear, and what was interesting is that the\ntest case for update (exactly what this commit was fixing) is not crashing\neither. I've checked on the commit right before f9f63ed1f2e5, without mentioned\nreordering there is a crash, but I couldn't reproduce it on the master.\n\n\n",
"msg_date": "Thu, 25 Apr 2019 22:20:59 +0200",
"msg_from": "Dmitry Dolgov <9erthalion6@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: BUG #15781: subselect on foreign table (postgres_fdw) can crash\n (segfault)"
},
{
"msg_contents": "Dmitry Dolgov <9erthalion6@gmail.com> writes:\n>> On Thu, Apr 25, 2019 at 8:24 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> The proximate cause of the crash is that we have {PARAM 1}\n>> (representing the output of the InitPlan) in the path's fdw_exprs, and\n>> also the identical expression in fdw_scan_tlist, and that means that when\n>> setrefs.c processes the ForeignScan node it thinks it should replace the\n>> {PARAM 1} in fdw_exprs with a Var representing a reference to the\n>> fdw_scan_tlist entry.\n\n> I've noticed, that it behaves like that since f9f63ed1f2e5 (originally I found\n> it pretty strange, but after this explanation it does make sense). As an\n> experiment, I've changed the position of condition of\n> if (context->subplan_itlist->has_non_vars)\n> back - it also made problem to disappear,\n\nWell, that's just coincidental for the case where the problem fdw_expr is\na Param. I haven't tried to figure out exactly what upper-path generation\nthinks it should put into fdw_exprs, but is it really only Params?\n\nAnyway, ideally we'd not have any entries in fdw_scan_tlist that don't\ninclude at least one foreign Var, and then there can't be a false match.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 25 Apr 2019 16:27:35 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: BUG #15781: subselect on foreign table (postgres_fdw) can crash\n (segfault)"
},
{
"msg_contents": "I wrote:\n> Well, that's just coincidental for the case where the problem fdw_expr is\n> a Param. I haven't tried to figure out exactly what upper-path generation\n> thinks it should put into fdw_exprs, but is it really only Params?\n\nOh, this is interesting:\n\nregression=# explain verbose \n select exists(select c1 from ft4) as c, avg(c1) from ft4 where ft4.c1 = 42;\n QUERY PLAN \n-----------------------------------------------------------------------------------\n Foreign Scan (cost=200.07..246.67 rows=1 width=33)\n Output: ($0), (avg(ft4.c1))\n Relations: Aggregate on (public.ft4)\n Remote SQL: SELECT $1::boolean, avg(c1) FROM \"S 1\".\"T 3\" WHERE ((c1 = 42))\n InitPlan 1 (returns $0)\n -> Foreign Scan on public.ft4 ft4_1 (cost=100.00..212.39 rows=3413 width=0)\n Remote SQL: SELECT NULL FROM \"S 1\".\"T 3\"\n(7 rows)\n\nThat would crash if I tried to execute it, but:\n\nregression=# explain verbose \n select case when exists(select c1 from ft4) then 1 else 2 end as c, avg(c1) from ft4 where ft4.c1 = 42;\n QUERY PLAN \n-----------------------------------------------------------------------------------\n Foreign Scan (cost=200.07..246.67 rows=1 width=36)\n Output: CASE WHEN $0 THEN 1 ELSE 2 END, (avg(ft4.c1))\n Relations: Aggregate on (public.ft4)\n Remote SQL: SELECT avg(c1) FROM \"S 1\".\"T 3\" WHERE ((c1 = 42))\n InitPlan 1 (returns $0)\n -> Foreign Scan on public.ft4 ft4_1 (cost=100.00..212.39 rows=3413 width=0)\n Remote SQL: SELECT NULL FROM \"S 1\".\"T 3\"\n(7 rows)\n\nThat's just fine. So there is something stupid happening in creation\nof the fdw_scan_tlist when a relation tlist item is a bare Param,\nwhich doesn't happen if the same Param is buried in a larger expression.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 25 Apr 2019 16:36:14 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: BUG #15781: subselect on foreign table (postgres_fdw) can crash\n (segfault)"
},
{
"msg_contents": "(2019/04/26 3:24), Tom Lane wrote:\n> PG Bug reporting form<noreply@postgresql.org> writes:\n>> [ this crashes if ft4 is a postgres_fdw foreign table: ]\n>> select exists(select c1 from ft4), avg(c1) from ft4 where c1 = (select\n>> max(c1) from ft4);\n>\n> Hm, the max() subquery isn't necessary, this is sufficient:\n>\n> select exists(select c1 from ft4), avg(c1) from ft4 where c1 = 42;\n>\n> That produces a plan like\n>\n> QUERY PLAN\n> -----------------------------------------------------------------------------------\n> Foreign Scan (cost=200.07..246.67 rows=1 width=33)\n> Output: ($0), (avg(ft4.c1))\n> Relations: Aggregate on (public.ft4)\n> Remote SQL: SELECT $1::boolean, avg(c1) FROM \"S 1\".\"T 3\" WHERE ((c1 = 432))\n> InitPlan 1 (returns $0)\n> -> Foreign Scan on public.ft4 ft4_1 (cost=100.00..212.39 rows=3413 width=0)\n> Remote SQL: SELECT NULL FROM \"S 1\".\"T 3\"\n> (7 rows)\n>\n> Now one's first observation about that is that it's kind of dumb to send\n> the result of the locally-executed InitPlan over to the far end only to\n> read it back. 
So maybe we should be thinking about how to avoid that.\n> We do avoid it for plain foreign scans:\n>\n> regression=# explain verbose\n> select exists(select c1 from ft4), * from ft4 where c1 = 42;\n> QUERY PLAN\n> -----------------------------------------------------------------------------------\n> Foreign Scan on public.ft4 (cost=200.03..226.15 rows=6 width=41)\n> Output: $0, ft4.c1, ft4.c2, ft4.c3\n> Remote SQL: SELECT c1, c2, c3 FROM \"S 1\".\"T 3\" WHERE ((c1 = 42))\n> InitPlan 1 (returns $0)\n> -> Foreign Scan on public.ft4 ft4_1 (cost=100.00..212.39 rows=3413 width=0)\n> Remote SQL: SELECT NULL FROM \"S 1\".\"T 3\"\n> (6 rows)\n>\n> and also for foreign joins:\n>\n> regression=# explain verbose\n> select exists(select c1 from ft4), * from ft4, ft4 ft4b where ft4.c1 = 42 and ft4b.c1 = 43;\n> QUERY PLAN\n> ------------------------------------------------------------------------------------------------------------------------------------------------------\n> Foreign Scan (cost=200.03..252.93 rows=36 width=81)\n> Output: $0, ft4.c1, ft4.c2, ft4.c3, ft4b.c1, ft4b.c2, ft4b.c3\n> Relations: (public.ft4) INNER JOIN (public.ft4 ft4b)\n> Remote SQL: SELECT r1.c1, r1.c2, r1.c3, r2.c1, r2.c2, r2.c3 FROM (\"S 1\".\"T 3\" r1 INNER JOIN \"S 1\".\"T 3\" r2 ON (((r2.c1 = 43)) AND ((r1.c1 = 42))))\n> InitPlan 1 (returns $0)\n> -> Foreign Scan on public.ft4 ft4_1 (cost=100.00..212.39 rows=3413 width=0)\n> Remote SQL: SELECT NULL FROM \"S 1\".\"T 3\"\n> (7 rows)\n>\n>\n> but the code for upper-relation scans is apparently stupider than either\n> of those cases.\n>\n> The proximate cause of the crash is that we have {PARAM 1}\n> (representing the output of the InitPlan) in the path's fdw_exprs, and\n> also the identical expression in fdw_scan_tlist, and that means that when\n> setrefs.c processes the ForeignScan node it thinks it should replace the\n> {PARAM 1} in fdw_exprs with a Var representing a reference to the\n> fdw_scan_tlist entry. 
That would be fine if the fdw_exprs represented\n> expressions to be evaluated over the output of the foreign scan, but of\n> course they don't --- postgres_fdw uses fdw_exprs to compute values to be\n> sent to the remote end, instead. So we crash at runtime because there's\n> no slot to supply such output to the fdw_exprs.\n>\n> I was able to make the crash go away by removing this statement from\n> set_foreignscan_references:\n>\n> fscan->fdw_exprs = (List *)\n> fix_upper_expr(root,\n> (Node *) fscan->fdw_exprs,\n> itlist,\n> INDEX_VAR,\n> rtoffset);\n>\n> and we still pass check-world without that (which means we lack test\n> coverage, because the minimum that should happen to fdw_exprs is\n> fix_scan_list :-(). But I do not think that's an acceptable route to\n> a patch, because it amounts to having the core code know what the FDW\n> is using fdw_exprs for, and we explicitly disclaim any assumptions about\n> that. fdw_exprs is specified to be processed the same as other\n> expressions in the same plan node, so I think this fix_upper_expr call\n> probably ought to stay like it is, even though it's not really the right\n> thing for postgres_fdw. It might be the right thing for other FDWs.\n>\n> (One could imagine, perhaps, having some flag in the ForeignPlan\n> node that tells setrefs.c what to do. But that would be an API break\n> for FDWs, so it wouldn't be a back-patchable solution.)\n>\n> (Actually, it seems to me that set_foreignscan_references is *already*\n> assuming too much about the semantics of these expressions in upper\n> plan nodes, so maybe we need to have a chat about that anyway.)\n>\n> If we do leave it like this, then the only way for postgres_fdw to\n> avoid trouble is to not have any entries in fdw_exprs that exactly\n> match entries in fdw_scan_tlist. So that pretty much devolves back\n> to what I said before: don't ship values to the far end that are\n> just going to be fed back as-is. 
But now it's a correctness\n> requirement not just an optimization.\n\nThanks for taking care of this, as usual!\n\n> I haven't had anything to do with postgres_fdw's upper-relation-pushdown\n> code, so I am not sure why it's stupider than the other cases.\n> Thoughts anybody?\n\nI worked on the ORDERED/FINAL-upperrel pushdown for PG12, but I don't \nthink that that's directly related to this issue, because this arises in \nPG11 already. Maybe I'm missing something, but the \nUPPERREL_GROUP_AGG-upperrel pushdown added in PG10 is likely to be \nrelated to this. I'll work on this issue unless somebody wants to. But \nI'll take a 10-day vocation from tomorrow, so I don't think I'll be able \nto fix this in the next minor release...\n\nBest regards,\nEtsuro Fujita\n\n\n\n",
"msg_date": "Fri, 26 Apr 2019 21:39:46 +0900",
"msg_from": "Etsuro Fujita <fujita.etsuro@lab.ntt.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: BUG #15781: subselect on foreign table (postgres_fdw) can crash\n (segfault)"
},
{
"msg_contents": "Etsuro Fujita <fujita.etsuro@lab.ntt.co.jp> writes:\n> (2019/04/26 3:24), Tom Lane wrote:\n>> If we do leave it like this, then the only way for postgres_fdw to\n>> avoid trouble is to not have any entries in fdw_exprs that exactly\n>> match entries in fdw_scan_tlist. So that pretty much devolves back\n>> to what I said before: don't ship values to the far end that are\n>> just going to be fed back as-is. But now it's a correctness\n>> requirement not just an optimization.\n\n> I worked on the ORDERED/FINAL-upperrel pushdown for PG12, but I don't \n> think that that's directly related to this issue, because this arises in \n> PG11 already. Maybe I'm missing something, but the \n> UPPERREL_GROUP_AGG-upperrel pushdown added in PG10 is likely to be \n> related to this. I'll work on this issue unless somebody wants to. But \n> I'll take a 10-day vocation from tomorrow, so I don't think I'll be able \n> to fix this in the next minor release...\n\nWell, the releases are coming up fast, so I spent some time on this.\nIf we don't want to change what the core code does with fdw_exprs,\nI think the only way to fix it is to hack postgres_fdw so that it\nwon't generate plans involving the problematic case. See attached.\n\nWe end up with slightly weird-looking plans if the troublesome Param\nis actually a GROUP BY expression, but if it's not, I think things\nare fine. Maybe we could do something smarter about the GROUP BY case,\nbut it seems weird enough to maybe not be worth additional trouble.\n\n\t\t\tregards, tom lane",
"msg_date": "Fri, 26 Apr 2019 13:10:00 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: BUG #15781: subselect on foreign table (postgres_fdw) can crash\n (segfault)"
},
{
"msg_contents": "On Sat, Apr 27, 2019 at 2:10 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> > (2019/04/26 3:24), Tom Lane wrote:\n> >> If we do leave it like this, then the only way for postgres_fdw to\n> >> avoid trouble is to not have any entries in fdw_exprs that exactly\n> >> match entries in fdw_scan_tlist. So that pretty much devolves back\n> >> to what I said before: don't ship values to the far end that are\n> >> just going to be fed back as-is. But now it's a correctness\n> >> requirement not just an optimization.\n\n> Well, the releases are coming up fast, so I spent some time on this.\n> If we don't want to change what the core code does with fdw_exprs,\n> I think the only way to fix it is to hack postgres_fdw so that it\n> won't generate plans involving the problematic case.\n\nSeems reasonable.\n\n> See attached.\n\nI read the patch. It looks good to me. I didn't test it, though.\n\n> We end up with slightly weird-looking plans if the troublesome Param\n> is actually a GROUP BY expression, but if it's not, I think things\n> are fine. Maybe we could do something smarter about the GROUP BY case,\n> but it seems weird enough to maybe not be worth additional trouble.\n\nAgreed.\n\nThanks for working on this!\n\nBest regards,\nEtsuro Fujita\n\n\n",
"msg_date": "Sat, 27 Apr 2019 22:30:37 +0900",
"msg_from": "Etsuro Fujita <etsuro.fujita@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: BUG #15781: subselect on foreign table (postgres_fdw) can crash\n (segfault)"
},
{
"msg_contents": "Etsuro Fujita <etsuro.fujita@gmail.com> writes:\n> On Sat, Apr 27, 2019 at 2:10 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> If we don't want to change what the core code does with fdw_exprs,\n>> I think the only way to fix it is to hack postgres_fdw so that it\n>> won't generate plans involving the problematic case.\n\n> Seems reasonable.\n\n>> See attached.\n\n> I read the patch. It looks good to me. I didn't test it, though.\n\nThanks for looking! Have a good vacation ...\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 27 Apr 2019 10:47:13 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: BUG #15781: subselect on foreign table (postgres_fdw) can crash\n (segfault)"
},
{
"msg_contents": "Not sure if this is the right avenue to follow up on this. The patch works\nfine. However, we're working on a modified version of the postgres_fdw in\nwhich we're trying to push as much as possible to the remote nodes,\nincluding ordering and limits. The patch causes the upper paths for the\nordering and limit to be rejected as they have no relids. I've had a quick\nlook at maybe how to pull in relids in the fdw private data but its not\nobvious. Obviously this isn't mainstream postgres but just wondering if\nanyone has looked into issues with regards to pushing order/limit to remote\nnodes for fdw.\n\nOn Sat, Apr 27, 2019 at 3:47 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Etsuro Fujita <etsuro.fujita@gmail.com> writes:\n> > On Sat, Apr 27, 2019 at 2:10 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >> If we don't want to change what the core code does with fdw_exprs,\n> >> I think the only way to fix it is to hack postgres_fdw so that it\n> >> won't generate plans involving the problematic case.\n>\n> > Seems reasonable.\n>\n> >> See attached.\n>\n> > I read the patch. It looks good to me. I didn't test it, though.\n>\n> Thanks for looking! Have a good vacation ...\n>\n> regards, tom lane\n>\n\nNot sure if this is the right avenue to follow up on this. The patch works fine. However, we're working on a modified version of the postgres_fdw in which we're trying to push as much as possible to the remote nodes, including ordering and limits. The patch causes the upper paths for the ordering and limit to be rejected as they have no relids. I've had a quick look at maybe how to pull in relids in the fdw private data but its not obvious. 
Obviously this isn't mainstream postgres but just wondering if anyone has looked into issues with regards to pushing order/limit to remote nodes for fdw.On Sat, Apr 27, 2019 at 3:47 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:Etsuro Fujita <etsuro.fujita@gmail.com> writes:\n> On Sat, Apr 27, 2019 at 2:10 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> If we don't want to change what the core code does with fdw_exprs,\n>> I think the only way to fix it is to hack postgres_fdw so that it\n>> won't generate plans involving the problematic case.\n\n> Seems reasonable.\n\n>> See attached.\n\n> I read the patch. It looks good to me. I didn't test it, though.\n\nThanks for looking! Have a good vacation ...\n\n regards, tom lane",
"msg_date": "Fri, 24 May 2019 18:19:30 +0100",
"msg_from": "Sean Johnston <sean.johnston@edgeintelligence.com>",
"msg_from_op": false,
"msg_subject": "Re: BUG #15781: subselect on foreign table (postgres_fdw) can crash\n (segfault)"
},
{
"msg_contents": "Sean Johnston <sean.johnston@edgeintelligence.com> writes:\n> Not sure if this is the right avenue to follow up on this. The patch works\n> fine. However, we're working on a modified version of the postgres_fdw in\n> which we're trying to push as much as possible to the remote nodes,\n> including ordering and limits. The patch causes the upper paths for the\n> ordering and limit to be rejected as they have no relids.\n\nUh, what? If you're speaking of 8cad5adb9, the only case I'm aware of\nwhere it might be a performance issue is if you have \"GROUP BY\nlocal-expr\", which seems like a pretty weird thing to need to push\nto the remote side, since the local expression would be effectively\na constant on the far end.\n\nYou could imagine working around it by discarding such GROUP BY\ncolumns in what's to be sent to the far end, and if you end up with\nan empty GROUP BY clause, sending \"HAVING TRUE\" instead to keep the\nsemantics the same. But I'm uninterested in stacking yet more\nklugery atop 8cad5adb9 so far as the community code is concerned.\nThe right way forward, as noted in the commit message, is to revert\nthat patch in favor of adding some API that will let the FDW control\nhow setrefs.c processes a ForeignScan's expressions. We just can't\ndo that in released branches :-(.\n\nIt's possible that we should treat this issue as an open item for v12\ninstead of letting it slide to v13 or later. But I think people would\nonly be amenable to that if you can point to a non-silly example where\nfailure to push the GROUP BY creates a performance issue.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 24 May 2019 13:59:01 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: BUG #15781: subselect on foreign table (postgres_fdw) can crash\n (segfault)"
},
{
"msg_contents": "On Sat, May 25, 2019 at 2:19 AM Sean Johnston\n<sean.johnston@edgeintelligence.com> wrote:\n> Obviously this isn't mainstream postgres but just wondering if anyone has looked into issues with regards to pushing order/limit to remote nodes for fdw.\n\nIn PostgreSQL 12 Beta 1 released this week [1], we can push down ORDER\nBY/LIMIT to the remote PostgreSQL server. Give it a try!\n\nBest regards,\nEtsuro Fujita\n\n[1] https://www.postgresql.org/about/news/1943/\n\n\n",
"msg_date": "Sat, 25 May 2019 04:30:41 +0900",
"msg_from": "Etsuro Fujita <etsuro.fujita@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: BUG #15781: subselect on foreign table (postgres_fdw) can crash\n (segfault)"
}
] |
[
{
"msg_contents": "Hi,\n\npg_waldump report no detail information about PREPARE TRANSACTION record,\nas follows.\n\n rmgr: Transaction len (rec/tot): 250/ 250, tx: 485,\nlsn: 0/020000A8, prev 0/02000060, desc: PREPARE\n\nI'd like to modify pg_waldump, i.e., xact_desc(), so that it reports\ndetail information like GID, as follows. Attached patch does that.\nThis would be helpful, for example, when diagnosing 2PC-related\ntrouble by checking the status of 2PC transaction with the specified GID.\nThought?\n\n rmgr: Transaction len (rec/tot): 250/ 250, tx: 485,\nlsn: 0/020000A8, prev 0/02000060, desc: PREPARE gid abc: 2004-06-17\n05:26:27.500240 JST\n\nI will add this to next CommitFest page.\n\nRegards,\n\n-- \nFujii Masao",
"msg_date": "Fri, 26 Apr 2019 00:36:16 +0900",
"msg_from": "Fujii Masao <masao.fujii@gmail.com>",
"msg_from_op": true,
"msg_subject": "pg_waldump and PREPARE"
},
{
"msg_contents": "On 2019-Apr-26, Fujii Masao wrote:\n\n> Hi,\n> \n> pg_waldump report no detail information about PREPARE TRANSACTION record,\n> as follows.\n> \n> rmgr: Transaction len (rec/tot): 250/ 250, tx: 485,\n> lsn: 0/020000A8, prev 0/02000060, desc: PREPARE\n> \n> I'd like to modify pg_waldump, i.e., xact_desc(), so that it reports\n> detail information like GID, as follows. Attached patch does that.\n> This would be helpful, for example, when diagnosing 2PC-related\n> trouble by checking the status of 2PC transaction with the specified GID.\n> Thought?\n> \n> rmgr: Transaction len (rec/tot): 250/ 250, tx: 485,\n> lsn: 0/020000A8, prev 0/02000060, desc: PREPARE gid abc: 2004-06-17\n> 05:26:27.500240 JST\n\nI think this is a great change to make.\n\nStrangely, before your patch, ParsePrepareRecord seems completely\nunused.\n\nI'm not sure I like the moving of that routine to xactdesc.c ...\non one hand, it would be side-by-side with ParseCommitRecord, but OTOH\nit seems weird to have twophase.c call xactdesc.c. I also wonder if\ndefining the structs in the way you do is the most sensible arrangement.\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Thu, 25 Apr 2019 12:04:42 -0400",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_waldump and PREPARE"
},
{
"msg_contents": "On Fri, Apr 26, 2019 at 1:04 AM Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\n>\n> On 2019-Apr-26, Fujii Masao wrote:\n>\n> > Hi,\n> >\n> > pg_waldump report no detail information about PREPARE TRANSACTION record,\n> > as follows.\n> >\n> > rmgr: Transaction len (rec/tot): 250/ 250, tx: 485,\n> > lsn: 0/020000A8, prev 0/02000060, desc: PREPARE\n> >\n> > I'd like to modify pg_waldump, i.e., xact_desc(), so that it reports\n> > detail information like GID, as follows. Attached patch does that.\n> > This would be helpful, for example, when diagnosing 2PC-related\n> > trouble by checking the status of 2PC transaction with the specified GID.\n> > Thought?\n> >\n> > rmgr: Transaction len (rec/tot): 250/ 250, tx: 485,\n> > lsn: 0/020000A8, prev 0/02000060, desc: PREPARE gid abc: 2004-06-17\n> > 05:26:27.500240 JST\n>\n> I think this is a great change to make.\n\nThanks!\n\n> Strangely, before your patch, ParsePrepareRecord seems completely\n> unused.\n\nYes, this seems to be the leftover of commit 1eb6d6527a.\n\n> I'm not sure I like the moving of that routine to xactdesc.c ...\n> on one hand, it would be side-by-side with ParseCommitRecord, but OTOH\n> it seems weird to have twophase.c call xactdesc.c.\n\nI moved ParsePrepareRecord() to xactdesc.c because it should be\naccessed in backend (when replaying WAL) and frontend (pg_waldump) code\nand xactdesc.c looked like proper place for that purpose\nParseCommitRecord() is also in xactdesc.c because of the same reason..\n\n> I also wonder if\n> defining the structs in the way you do is the most sensible arrangement.\n\nI did that arrangement because the format of PREPARE TRANSACTION record,\ni.e., that struct, also needs to be accessed in backend and frontend.\nBut, of course, if there is smarter way, I'm happy to adopt that!\n\nRegards,\n\n-- \nFujii Masao\n\n\n",
"msg_date": "Fri, 26 Apr 2019 03:21:28 +0900",
"msg_from": "Fujii Masao <masao.fujii@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: pg_waldump and PREPARE"
},
{
"msg_contents": "On 2019-Apr-26, Fujii Masao wrote:\n\n> On Fri, Apr 26, 2019 at 1:04 AM Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\n\n> > I also wonder if\n> > defining the structs in the way you do is the most sensible arrangement.\n> \n> I did that arrangement because the format of PREPARE TRANSACTION record,\n> i.e., that struct, also needs to be accessed in backend and frontend.\n> But, of course, if there is smarter way, I'm happy to adopt that!\n\nI don't know. I spent some time staring at the code involved, and it\nseems it'd be possible to improve just a little bit on cleanliness\ngrounds, with a lot of effort, but not enough practical value.\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Thu, 25 Apr 2019 15:08:36 -0400",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_waldump and PREPARE"
},
{
"msg_contents": "On Thu, Apr 25, 2019 at 03:08:36PM -0400, Alvaro Herrera wrote:\n> On 2019-Apr-26, Fujii Masao wrote:\n>> I did that arrangement because the format of PREPARE TRANSACTION record,\n>> i.e., that struct, also needs to be accessed in backend and frontend.\n>> But, of course, if there is smarter way, I'm happy to adopt that!\n> \n> I don't know. I spent some time staring at the code involved, and it\n> seems it'd be possible to improve just a little bit on cleanliness\n> grounds, with a lot of effort, but not enough practical value.\n\nDescribing those records is something we should do. There are other\nparsing routines in xactdesc.c for commit and abort records, so having\nthat extra routine for prepare at the same place does not sound\nstrange to me.\n\n+typedef xl_xact_prepare TwoPhaseFileHeader;\nI find this mapping implementation a bit lazy, and your\nnewly-introduced xl_xact_prepare does not count for all the contents\nof the actual WAL record for PREPARE TRANSACTION. Wouldn't it be\nbetter to put all the contents of the record in the same structure,\nand not only the 2PC header information?\n\nThis is not v12 material of course.\n--\nMichael",
"msg_date": "Fri, 26 Apr 2019 12:37:50 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: pg_waldump and PREPARE"
},
{
"msg_contents": "On Fri, Apr 26, 2019 at 5:38 AM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Thu, Apr 25, 2019 at 03:08:36PM -0400, Alvaro Herrera wrote:\n> > On 2019-Apr-26, Fujii Masao wrote:\n> >> I did that arrangement because the format of PREPARE TRANSACTION record,\n> >> i.e., that struct, also needs to be accessed in backend and frontend.\n> >> But, of course, if there is smarter way, I'm happy to adopt that!\n> >\n> > I don't know. I spent some time staring at the code involved, and it\n> > seems it'd be possible to improve just a little bit on cleanliness\n> > grounds, with a lot of effort, but not enough practical value.\n>\n> Describing those records is something we should do. There are other\n> parsing routines in xactdesc.c for commit and abort records, so having\n> that extra routine for prepare at the same place does not sound\n> strange to me.\n>\n> +typedef xl_xact_prepare TwoPhaseFileHeader;\n> I find this mapping implementation a bit lazy, and your\n> newly-introduced xl_xact_prepare does not count for all the contents\n> of the actual WAL record for PREPARE TRANSACTION. Wouldn't it be\n> better to put all the contents of the record in the same structure,\n> and not only the 2PC header information?\n\nThis patch doesn't apply anymore, could you send a rebase?\n\n\n",
"msg_date": "Tue, 2 Jul 2019 12:15:50 +0200",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_waldump and PREPARE"
},
{
"msg_contents": "On Tue, Jul 2, 2019 at 7:16 PM Julien Rouhaud <rjuju123@gmail.com> wrote:\n>\n> On Fri, Apr 26, 2019 at 5:38 AM Michael Paquier <michael@paquier.xyz> wrote:\n> >\n> > On Thu, Apr 25, 2019 at 03:08:36PM -0400, Alvaro Herrera wrote:\n> > > On 2019-Apr-26, Fujii Masao wrote:\n> > >> I did that arrangement because the format of PREPARE TRANSACTION record,\n> > >> i.e., that struct, also needs to be accessed in backend and frontend.\n> > >> But, of course, if there is smarter way, I'm happy to adopt that!\n> > >\n> > > I don't know. I spent some time staring at the code involved, and it\n> > > seems it'd be possible to improve just a little bit on cleanliness\n> > > grounds, with a lot of effort, but not enough practical value.\n> >\n> > Describing those records is something we should do. There are other\n> > parsing routines in xactdesc.c for commit and abort records, so having\n> > that extra routine for prepare at the same place does not sound\n> > strange to me.\n> >\n> > +typedef xl_xact_prepare TwoPhaseFileHeader;\n> > I find this mapping implementation a bit lazy, and your\n> > newly-introduced xl_xact_prepare does not count for all the contents\n> > of the actual WAL record for PREPARE TRANSACTION. Wouldn't it be\n> > better to put all the contents of the record in the same structure,\n> > and not only the 2PC header information?\n>\n> This patch doesn't apply anymore, could you send a rebase?\n\nYes, attached is the updated version of the patch.\n\nRegards,\n\n-- \nFujii Masao",
"msg_date": "Thu, 4 Jul 2019 00:20:51 +0900",
"msg_from": "Fujii Masao <masao.fujii@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: pg_waldump and PREPARE"
},
{
"msg_contents": "On Wed, Jul 3, 2019 at 5:21 PM Fujii Masao <masao.fujii@gmail.com> wrote:\n>\n> On Tue, Jul 2, 2019 at 7:16 PM Julien Rouhaud <rjuju123@gmail.com> wrote:\n> >\n> > On Fri, Apr 26, 2019 at 5:38 AM Michael Paquier <michael@paquier.xyz> wrote:\n> > >\n> > > On Thu, Apr 25, 2019 at 03:08:36PM -0400, Alvaro Herrera wrote:\n> > > > On 2019-Apr-26, Fujii Masao wrote:\n> > > >> I did that arrangement because the format of PREPARE TRANSACTION record,\n> > > >> i.e., that struct, also needs to be accessed in backend and frontend.\n> > > >> But, of course, if there is smarter way, I'm happy to adopt that!\n> > > >\n> > > > I don't know. I spent some time staring at the code involved, and it\n> > > > seems it'd be possible to improve just a little bit on cleanliness\n> > > > grounds, with a lot of effort, but not enough practical value.\n> > >\n> > > Describing those records is something we should do. There are other\n> > > parsing routines in xactdesc.c for commit and abort records, so having\n> > > that extra routine for prepare at the same place does not sound\n> > > strange to me.\n> > >\n> > > +typedef xl_xact_prepare TwoPhaseFileHeader;\n> > > I find this mapping implementation a bit lazy, and your\n> > > newly-introduced xl_xact_prepare does not count for all the contents\n> > > of the actual WAL record for PREPARE TRANSACTION. Wouldn't it be\n> > > better to put all the contents of the record in the same structure,\n> > > and not only the 2PC header information?\n> >\n> > This patch doesn't apply anymore, could you send a rebase?\n>\n> Yes, attached is the updated version of the patch.\n\nThanks!\n\nSo the patch compiles and works as intended. I don't have much to say\nabout it, it all looks good to me, since the concerns about xactdesc.c\naren't worth the trouble.\n\nI'm not sure that I understand Michael's objection though, as\nxl_xact_prepare is not a new definition and AFAICS it couldn't contain\nthe records anyway. 
So I'll let him say if he has further objections\nor if it's ready for committer!\n\n\n",
"msg_date": "Wed, 3 Jul 2019 20:23:44 +0200",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_waldump and PREPARE"
},
{
"msg_contents": "On Wed, Jul 03, 2019 at 08:23:44PM +0200, Julien Rouhaud wrote:\n> So the patch compiles and works as intended. I don't have much to say\n> about it, it all looks good to me, since the concerns about xactdesc.c\n> aren't worth the trouble.\n> \n> I'm not sure that I understand Michael's objection though, as\n> xl_xact_prepare is not a new definition and AFAICS it couldn't contain\n> the records anyway. So I'll let him say if he has further objections\n> or if it's ready for committer!\n\nThis patch provides parsing information only for the header of the 2PC\nrecord. Wouldn't it be interesting to get more information from the\nvarious TwoPhaseRecordOnDisk's callbacks? We could also print much\nmore information in xact_desc_prepare(). Like the subxacts, the XID,\nthe invalidation messages and the delete-on-abort/commit rels.\n--\nMichael",
"msg_date": "Thu, 4 Jul 2019 16:45:12 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: pg_waldump and PREPARE"
},
{
"msg_contents": "On Thu, Jul 4, 2019 at 9:45 AM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Wed, Jul 03, 2019 at 08:23:44PM +0200, Julien Rouhaud wrote:\n> > So the patch compiles and works as intended. I don't have much to say\n> > about it, it all looks good to me, since the concerns about xactdesc.c\n> > aren't worth the trouble.\n> >\n> > I'm not sure that I understand Michael's objection though, as\n> > xl_xact_prepare is not a new definition and AFAICS it couldn't contain\n> > the records anyway. So I'll let him say if he has further objections\n> > or if it's ready for committer!\n>\n> This patch provides parsing information only for the header of the 2PC\n> record. Wouldn't it be interesting to get more information from the\n> various TwoPhaseRecordOnDisk's callbacks? We could also print much\n> more information in xact_desc_prepare(). Like the subxacts, the XID,\n> the invalidation messages and the delete-on-abort/commit rels.\n\nMost of those are already described in the COMMIT PREPARE message,\nwouldn't that be redundant? abortrels aren't displayed anywhere\nthough, so +1 for adding them.\n\nI also see that the dbid isn't displayed in any of the 2PC message,\nthat'd be useful to have it directly instead of looking for it in\nother messages for the same transaction.\n\n\n",
"msg_date": "Thu, 4 Jul 2019 10:24:50 +0200",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_waldump and PREPARE"
},
{
"msg_contents": "On Thu, Jul 4, 2019 at 8:25 PM Julien Rouhaud <rjuju123@gmail.com> wrote:\n> On Thu, Jul 4, 2019 at 9:45 AM Michael Paquier <michael@paquier.xyz> wrote:\n> > On Wed, Jul 03, 2019 at 08:23:44PM +0200, Julien Rouhaud wrote:\n> > > So the patch compiles and works as intended. I don't have much to say\n> > > about it, it all looks good to me, since the concerns about xactdesc.c\n> > > aren't worth the trouble.\n> > >\n> > > I'm not sure that I understand Michael's objection though, as\n> > > xl_xact_prepare is not a new definition and AFAICS it couldn't contain\n> > > the records anyway. So I'll let him say if he has further objections\n> > > or if it's ready for committer!\n> >\n> > This patch provides parsing information only for the header of the 2PC\n> > record. Wouldn't it be interesting to get more information from the\n> > various TwoPhaseRecordOnDisk's callbacks? We could also print much\n> > more information in xact_desc_prepare(). Like the subxacts, the XID,\n> > the invalidation messages and the delete-on-abort/commit rels.\n>\n> Most of those are already described in the COMMIT PREPARE message,\n> wouldn't that be redundant? abortrels aren't displayed anywhere\n> though, so +1 for adding them.\n>\n> I also see that the dbid isn't displayed in any of the 2PC message,\n> that'd be useful to have it directly instead of looking for it in\n> other messages for the same transaction.\n\nHello all,\n\nI've moved this to the next CF, and set it to \"Needs review\" since a\nrebase was provided.\n\n-- \nThomas Munro\nhttps://enterprisedb.com\n\n\n",
"msg_date": "Thu, 1 Aug 2019 23:05:34 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_waldump and PREPARE"
},
{
"msg_contents": "On Thu, Aug 01, 2019 at 11:05:34PM +1200, Thomas Munro wrote:\n> I've moved this to the next CF, and set it to \"Needs review\" since a\n> rebase was provided.\n\nI may be missing something of course, but in this case we argued about\nadding a couple of more fields. In consequence, the patch should be\nwaiting on its author, no?\n--\nMichael",
"msg_date": "Thu, 1 Aug 2019 20:51:05 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: pg_waldump and PREPARE"
},
{
"msg_contents": "Hi,\n\nLe jeu. 1 août 2019 à 13:51, Michael Paquier <michael@paquier.xyz> a écrit :\n\n> On Thu, Aug 01, 2019 at 11:05:34PM +1200, Thomas Munro wrote:\n> > I've moved this to the next CF, and set it to \"Needs review\" since a\n> > rebase was provided.\n>\n> I may be missing something of course, but in this case we argued about\n> adding a couple of more fields. In consequence, the patch should be\n> waiting on its author, no?\n>\n\nThat's also my understanding.\n\n>\n\nHi, Le jeu. 1 août 2019 à 13:51, Michael Paquier <michael@paquier.xyz> a écrit :On Thu, Aug 01, 2019 at 11:05:34PM +1200, Thomas Munro wrote:\n> I've moved this to the next CF, and set it to \"Needs review\" since a\n> rebase was provided.\n\nI may be missing something of course, but in this case we argued about\nadding a couple of more fields. In consequence, the patch should be\nwaiting on its author, no?That's also my understanding.",
"msg_date": "Thu, 1 Aug 2019 13:57:55 +0200",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_waldump and PREPARE"
},
{
"msg_contents": "On Thu, Aug 1, 2019 at 11:51 PM Michael Paquier <michael@paquier.xyz> wrote:\n> On Thu, Aug 01, 2019 at 11:05:34PM +1200, Thomas Munro wrote:\n> > I've moved this to the next CF, and set it to \"Needs review\" since a\n> > rebase was provided.\n>\n> I may be missing something of course, but in this case we argued about\n> adding a couple of more fields. In consequence, the patch should be\n> waiting on its author, no?\n\nOh, OK. Changed. So we're waiting for Fujii-san to respond to the\nsuggestions about new fields.\n\n-- \nThomas Munro\nhttps://enterprisedb.com\n\n\n",
"msg_date": "Thu, 1 Aug 2019 23:57:57 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_waldump and PREPARE"
},
{
"msg_contents": "On 2019-Aug-01, Michael Paquier wrote:\n\n> I may be missing something of course, but in this case we argued about\n> adding a couple of more fields. In consequence, the patch should be\n> waiting on its author, no?\n\nFujii,\n\nAre you in a position to submit an updated version of this patch?\n\nMaybe Vignesh is in a position to help to complete this, since he has\nbeen eyeing this code lately. Vignesh?\n\nThanks,\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Mon, 2 Sep 2019 14:04:54 -0400",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_waldump and PREPARE"
},
{
"msg_contents": "Sorry for the long delay...\n\nOn Thu, Jul 4, 2019 at 5:25 PM Julien Rouhaud <rjuju123@gmail.com> wrote:\n>\n> On Thu, Jul 4, 2019 at 9:45 AM Michael Paquier <michael@paquier.xyz> wrote:\n> >\n> > On Wed, Jul 03, 2019 at 08:23:44PM +0200, Julien Rouhaud wrote:\n> > > So the patch compiles and works as intended. I don't have much to say\n> > > about it, it all looks good to me, since the concerns about xactdesc.c\n> > > aren't worth the trouble.\n> > >\n> > > I'm not sure that I understand Michael's objection though, as\n> > > xl_xact_prepare is not a new definition and AFAICS it couldn't contain\n> > > the records anyway. So I'll let him say if he has further objections\n> > > or if it's ready for committer!\n> >\n> > This patch provides parsing information only for the header of the 2PC\n> > record. Wouldn't it be interesting to get more information from the\n> > various TwoPhaseRecordOnDisk's callbacks? We could also print much\n> > more information in xact_desc_prepare(). Like the subxacts, the XID,\n> > the invalidation messages and the delete-on-abort/commit rels.\n>\n> Most of those are already described in the COMMIT PREPARE message,\n> wouldn't that be redundant? abortrels aren't displayed anywhere\n> though, so +1 for adding them.\n\nxact_desc_abort() for ROLLBACK PREPARED describes abortrels. No?\n\n> I also see that the dbid isn't displayed in any of the 2PC message,\n> that'd be useful to have it directly instead of looking for it in\n> other messages for the same transaction.\n\ndbid is not reported even in COMMIT message. So I don't like adding\ndbid into only the PREPARE message.\n\nRegards,\n\n-- \nFujii Masao\n\n\n",
"msg_date": "Tue, 3 Sep 2019 09:58:59 +0900",
"msg_from": "Fujii Masao <masao.fujii@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: pg_waldump and PREPARE"
},
{
"msg_contents": "On Tue, Sep 3, 2019 at 3:04 AM Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\n>\n> On 2019-Aug-01, Michael Paquier wrote:\n>\n> > I may be missing something of course, but in this case we argued about\n> > adding a couple of more fields. In consequence, the patch should be\n> > waiting on its author, no?\n>\n> Fujii,\n>\n> Are you in a position to submit an updated version of this patch?\n\nSorry for the long delay... Yes, I will update the patch if necessary.\n\nRegards,\n\n-- \nFujii Masao\n\n\n",
"msg_date": "Tue, 3 Sep 2019 10:00:08 +0900",
"msg_from": "Fujii Masao <masao.fujii@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: pg_waldump and PREPARE"
},
{
"msg_contents": "On Tue, Sep 03, 2019 at 10:00:08AM +0900, Fujii Masao wrote:\n> Sorry for the long delay... Yes, I will update the patch if necessary.\n\nFujii-san, are you planning to update this patch then? I have\nswitched it as waiting on author.\n--\nMichael",
"msg_date": "Fri, 8 Nov 2019 09:41:15 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: pg_waldump and PREPARE"
},
{
"msg_contents": "On Fri, Nov 8, 2019 at 9:41 AM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Tue, Sep 03, 2019 at 10:00:08AM +0900, Fujii Masao wrote:\n> > Sorry for the long delay... Yes, I will update the patch if necessary.\n>\n> Fujii-san, are you planning to update this patch then? I have\n> switched it as waiting on author.\n\nNo because there has been nothing to update in the latest patch for now\nunless I'm missing something. So I'm just waiting for some new review\ncomments against the latest patch to come :)\nCan I switch the status back to \"Needs review\"?\n\nRegards,\n\n-- \nFujii Masao\n\n\n",
"msg_date": "Fri, 8 Nov 2019 09:53:07 +0900",
"msg_from": "Fujii Masao <masao.fujii@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: pg_waldump and PREPARE"
},
{
"msg_contents": "\n\nOn 08/11/2019 05:53, Fujii Masao wrote:\n> On Fri, Nov 8, 2019 at 9:41 AM Michael Paquier <michael@paquier.xyz> wrote:\n>>\n>> On Tue, Sep 03, 2019 at 10:00:08AM +0900, Fujii Masao wrote:\n>>> Sorry for the long delay... Yes, I will update the patch if necessary.\n>>\n>> Fujii-san, are you planning to update this patch then? I have\n>> switched it as waiting on author.\n> \n> No because there has been nothing to update in the latest patch for now\n> unless I'm missing something. So I'm just waiting for some new review\n> comments against the latest patch to come :)\n> Can I switch the status back to \"Needs review\"?\n> \n> Regards,\n> \n\nOne issue is that your patch provides small information. WAL errors \nInvestigation often requires information on xid, subxacts, \ndelete-on-abort/commit rels; rarely - invalidation messages etc.\n\n-- \nAndrey Lepikhov\nPostgres Professional\nhttps://postgrespro.com\nThe Russian Postgres Company\n\n\n",
"msg_date": "Fri, 8 Nov 2019 08:23:41 +0500",
"msg_from": "Andrey Lepikhov <a.lepikhov@postgrespro.ru>",
"msg_from_op": false,
"msg_subject": "Re: pg_waldump and PREPARE"
},
{
"msg_contents": "At Fri, 8 Nov 2019 09:53:07 +0900, Fujii Masao <masao.fujii@gmail.com> wrote in \n> On Fri, Nov 8, 2019 at 9:41 AM Michael Paquier <michael@paquier.xyz> wrote:\n> >\n> > On Tue, Sep 03, 2019 at 10:00:08AM +0900, Fujii Masao wrote:\n> > > Sorry for the long delay... Yes, I will update the patch if necessary.\n> >\n> > Fujii-san, are you planning to update this patch then? I have\n> > switched it as waiting on author.\n> \n> No because there has been nothing to update in the latest patch for now\n> unless I'm missing something. So I'm just waiting for some new review\n> comments against the latest patch to come :)\n> Can I switch the status back to \"Needs review\"?\n\nOn Fri, Apr 26, 2019 at 5:38 AM Michael Paquier <michael(at)paquier(dot)xyz> wrote:\n> +typedef xl_xact_prepare TwoPhaseFileHeader\n> I find this mapping implementation a bit lazy, and your\n> newly-introduced xl_xact_prepare does not count for all the contents\n> of the actual WAL record for PREPARE TRANSACTION. Wouldn't it be\n> better to put all the contents of the record in the same structure,\n> and not only the 2PC header information?\n\nI agree to this in principle, but I'm afraid that we cannot do that\nactually. Doing it straight way would result in something like this.\n\n typedef struct xl_xact_prepare\n {\n uint32 magic;\n ...\n TimestampTz origin_timestamp;\n /* correct alignment here */\n+ char gid[FLEXIBLE_ARRAY_MEMBER]; /* the GID of the prepred xact */\n+ /* subxacts, xnodes, msgs and sentinel follow the gid[] array */\n} xl_xact_prepare;\n\nI don't think this is meaningful..\n\nAfter all, the new xlog record struct is used only at one place.\nxlog_redo is the correspondent of xact_desc, but it is not aware of\nthe stuct and PrepareRedoAdd decodes it using TwoPhaseFileHeader. In\nthat sense, the details of the record is a secret of twophase.c. What\nis worse, apparently TwoPhaseFileHeader is a *subset* of\nxl_xact_prepare but what we want to expose is the super set. 
Thus I\nporpose to add a comment instead of exposing the full structure in\nxl_xact_prepare definition.\n\n typedef struct xl_xact_prepare\n {\n uint32 magic;\n ...\n TimestampTz origin_timestamp;\n+ /*\n+ * This record has multiple trailing data sections with variable\n+ * length. See twophase.c for the details.\n+ */\n } xl_xact_prepare;\n\nThen, teach xlog_redo to resolve the record pointer to this type\nbefore passing it to PrepareRedoAdd.\n\nDoes it make sense?\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Fri, 08 Nov 2019 13:14:55 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_waldump and PREPARE"
},
{
"msg_contents": "Hello.\n\nAt Fri, 8 Nov 2019 08:23:41 +0500, Andrey Lepikhov <a.lepikhov@postgrespro.ru> wrote in \n> > Can I switch the status back to \"Needs review\"?\n> > Regards,\n> > \n> \n> One issue is that your patch provides small information. WAL errors\n> Investigation often requires information on xid, subxacts,\n> delete-on-abort/commit rels; rarely - invalidation messages etc.\n\nBasically agrred, but it can be very large in certain cases, even if\nit is rare.\n\nBy the way, in the patch xact_desc_prepare seems printing\nparseed.xact_time, which is not actually set by ParsePrepareRecord.\n\n# I missed the funtion. xl_xact_prepare is used in *two* places in\n# front end.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Fri, 08 Nov 2019 13:26:18 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_waldump and PREPARE"
},
{
"msg_contents": "\n\nOn 08/11/2019 09:26, Kyotaro Horiguchi wrote:\n> Hello.\n> \n> At Fri, 8 Nov 2019 08:23:41 +0500, Andrey Lepikhov <a.lepikhov@postgrespro.ru> wrote in\n>>> Can I switch the status back to \"Needs review\"?\n>>> Regards,\n>>>\n>>\n>> One issue is that your patch provides small information. WAL errors\n>> Investigation often requires information on xid, subxacts,\n>> delete-on-abort/commit rels; rarely - invalidation messages etc.\n> \n> Basically agrred, but it can be very large in certain cases, even if\n> it is rare.\n\nMaybe this is the reason for introducing a “verbose” option?\n\n-- \nAndrey Lepikhov\nPostgres Professional\nhttps://postgrespro.com\nThe Russian Postgres Company\n\n\n",
"msg_date": "Fri, 8 Nov 2019 09:33:35 +0500",
"msg_from": "Andrey Lepikhov <a.lepikhov@postgrespro.ru>",
"msg_from_op": false,
"msg_subject": "Re: pg_waldump and PREPARE"
},
{
"msg_contents": "On Fri, Nov 8, 2019 at 1:33 PM Andrey Lepikhov\n<a.lepikhov@postgrespro.ru> wrote:\n>\n>\n>\n> On 08/11/2019 09:26, Kyotaro Horiguchi wrote:\n> > Hello.\n> >\n> > At Fri, 8 Nov 2019 08:23:41 +0500, Andrey Lepikhov <a.lepikhov@postgrespro.ru> wrote in\n> >>> Can I switch the status back to \"Needs review\"?\n> >>> Regards,\n> >>>\n> >>\n> >> One issue is that your patch provides small information. WAL errors\n> >> Investigation often requires information on xid, subxacts,\n> >> delete-on-abort/commit rels; rarely - invalidation messages etc.\n\nThanks for the review!\n\nxid is already included in the pg_waldump output for\nPREPARE TRANSACTION record. Regarding subxacts, rels and invals,\nI agree that they might be useful when diagnosing 2PC-related trouble.\nI attached the updated version of the patch that also changes\npg_waldump so that it outputs delete-on-commit rels, delete-on-aborts,\nsubxacts and invals.\n\nHere is the example of output for PREPARE TRASACTION record, with the pach.\n\nrmgr: Transaction len (rec/tot): 837/ 837, tx: 503, lsn:\n0/030055C8, prev 0/03005588, desc: PREPARE gid xxx: 2019-11-11\n13:00:18.616056 JST; rels(commit): base/12923/16408 base/12923/16407\nbase/12923/16406 base/12923/16405; rels(abort): base/12923/16412\nbase/12923/16411 base/12923/16408 base/12923/16407; subxacts: 505;\ninval msgs: catcache 50 catcache 49 catcache 50 catcache 49 catcache\n50 catcache 49 catcache 50 catcache 49 catcache 50 catcache 49\ncatcache 50 catcache 49 relcache 16386 relcache 16390 relcache 16390\nrelcache 16386 relcache 16386 relcache 16390 relcache 16390 relcache\n16386\n\n> By the way, in the patch xact_desc_prepare seems printing\n> parseed.xact_time, which is not actually set by ParsePrepareRecord.\n\nThanks for the review! You are right.\nI fixed this issue in the attached patch.\n\nRegards,\n\n-- \nFujii Masao",
"msg_date": "Mon, 11 Nov 2019 13:21:28 +0900",
"msg_from": "Fujii Masao <masao.fujii@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: pg_waldump and PREPARE"
},
{
"msg_contents": "On Mon, Nov 11, 2019 at 01:21:28PM +0900, Fujii Masao wrote:\n> Thanks for the review! You are right.\n> I fixed this issue in the attached patch.\n\nThe proposed format looks fine to me. I have just one comment. All\nthree callers of standby_desc_invalidations() don't actually need to\nprint any data if there are zero messages, so you can simplify a bit\nxact_desc_commit() and xact_desc_prepare() regarding the check on\nparsed.nmsgs, no?\n--\nMichael",
"msg_date": "Mon, 11 Nov 2019 16:16:02 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: pg_waldump and PREPARE"
},
{
"msg_contents": "On Mon, Nov 11, 2019 at 4:16 PM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Mon, Nov 11, 2019 at 01:21:28PM +0900, Fujii Masao wrote:\n> > Thanks for the review! You are right.\n> > I fixed this issue in the attached patch.\n>\n> The proposed format looks fine to me. I have just one comment. All\n> three callers of standby_desc_invalidations() don't actually need to\n> print any data if there are zero messages, so you can simplify a bit\n> xact_desc_commit() and xact_desc_prepare() regarding the check on\n> parsed.nmsgs, no?\n\nThanks for the review! But probably I failed to understand your point...\nCould you clarify what actual change is necessary? You are thinking that\nthe check of \"parsed.nmsgs >= 0\" should be moved to the inside of\nstandby_desc_invalidations()?\n\nRegards,\n\n-- \nFujii Masao\n\n\n",
"msg_date": "Tue, 12 Nov 2019 17:53:02 +0900",
"msg_from": "Fujii Masao <masao.fujii@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: pg_waldump and PREPARE"
},
{
"msg_contents": "On Tue, Nov 12, 2019 at 05:53:02PM +0900, Fujii Masao wrote:\n> Thanks for the review! But probably I failed to understand your point...\n> Could you clarify what actual change is necessary? You are thinking that\n> the check of \"parsed.nmsgs >= 0\" should be moved to the inside of\n> standby_desc_invalidations()?\n\nYes that's what I meant here.\n--\nMichael",
"msg_date": "Tue, 12 Nov 2019 18:03:30 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: pg_waldump and PREPARE"
},
{
"msg_contents": "On Tue, Nov 12, 2019 at 6:03 PM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Tue, Nov 12, 2019 at 05:53:02PM +0900, Fujii Masao wrote:\n> > Thanks for the review! But probably I failed to understand your point...\n> > Could you clarify what actual change is necessary? You are thinking that\n> > the check of \"parsed.nmsgs >= 0\" should be moved to the inside of\n> > standby_desc_invalidations()?\n>\n> Yes that's what I meant here.\n\nOk, I changed the patch that way.\nAttached is the latest version of the patch.\n\nRegards,\n\n-- \nFujii Masao",
"msg_date": "Tue, 12 Nov 2019 18:41:12 +0900",
"msg_from": "Fujii Masao <masao.fujii@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: pg_waldump and PREPARE"
},
{
"msg_contents": "On Tue, Nov 12, 2019 at 06:41:12PM +0900, Fujii Masao wrote:\n> Ok, I changed the patch that way.\n> Attached is the latest version of the patch.\n\nThanks for the new patch. Looks fine to me.\n--\nMichael",
"msg_date": "Wed, 13 Nov 2019 14:41:17 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: pg_waldump and PREPARE"
},
{
"msg_contents": "\n\n12.11.2019 12:41, Fujii Masao пишет:\n> Ok, I changed the patch that way.\n> Attached is the latest version of the patch.\n> \n> Regards,\n\nI did not see any problems in this version of the patch. The information \ndisplayed by pg_waldump for the PREPARE record is sufficient for use.\n\n-- \nAndrey Lepikhov\nPostgres Professional\nhttps://postgrespro.com\nThe Russian Postgres Company\n\n\n",
"msg_date": "Wed, 13 Nov 2019 09:53:27 +0300",
"msg_from": "Andrey Lepikhov <a.lepikhov@postgrespro.ru>",
"msg_from_op": false,
"msg_subject": "Re: pg_waldump and PREPARE"
},
{
"msg_contents": "On Wed, Nov 13, 2019 at 3:53 PM Andrey Lepikhov\n<a.lepikhov@postgrespro.ru> wrote:\n>\n>\n>\n> 12.11.2019 12:41, Fujii Masao пишет:\n> > Ok, I changed the patch that way.\n> > Attached is the latest version of the patch.\n> >\n> > Regards,\n>\n> I did not see any problems in this version of the patch. The information\n> displayed by pg_waldump for the PREPARE record is sufficient for use.\n\nThanks Andrey and Michael for the review! I committed the patch.\n\nRegards,\n\n-- \nFujii Masao\n\n\n",
"msg_date": "Wed, 13 Nov 2019 17:37:45 +0900",
"msg_from": "Fujii Masao <masao.fujii@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: pg_waldump and PREPARE"
},
{
"msg_contents": ">> I did not see any problems in this version of the patch. The \n>> information\n>> displayed by pg_waldump for the PREPARE record is sufficient for use.\n> \n> Thanks Andrey and Michael for the review! I committed the patch.\n> \n> Regards,\n\n\nHi,\nThere is a mistake in the comment in the definition of \nxl_xact_relfilenodes.\nThis is a small patch to correct it.\n\nRegards,\n\nYu Kimura",
"msg_date": "Wed, 20 Nov 2019 16:16:45 +0900",
"msg_from": "btkimurayuzk <btkimurayuzk@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_waldump and PREPARE"
},
{
"msg_contents": "On Wed, Nov 20, 2019 at 04:16:45PM +0900, btkimurayuzk wrote:\n> typedef struct xl_xact_relfilenodes\n> {\n> -\tint\t\t\tnrels;\t\t\t/* number of subtransaction XIDs */\n> +\tint\t\t\tnrels;\t\t\t/* number of relations */\n> \tRelFileNode xnodes[FLEXIBLE_ARRAY_MEMBER];\n> } xl_xact_relfilenodes;\n> #define MinSizeOfXactRelfilenodes offsetof(xl_xact_relfilenodes, xnodes)\n\nThese are relations, and it smells like a copy-pasto coming from\nxl_xact_subxacts. Thanks Kimura-san, committed. \n--\nMichael",
"msg_date": "Wed, 20 Nov 2019 17:50:07 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: pg_waldump and PREPARE"
}
] |
[
{
"msg_contents": "The documentation has a section called \"Routine Reindexing\", which\nexplains how to simulate REINDEX CONCURRENTLY with a sequence of\ncreation and replacement steps. This should be updated to reference\nthe REINDEX CONCURRENTLY command.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Thu, 25 Apr 2019 13:34:41 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": true,
"msg_subject": "\"Routine Reindexing\" docs should be updated to reference REINDEX\n CONCURRENTLY"
},
{
"msg_contents": "On Thu, Apr 25, 2019 at 01:34:41PM -0700, Peter Geoghegan wrote:\n> The documentation has a section called \"Routine Reindexing\", which\n> explains how to simulate REINDEX CONCURRENTLY with a sequence of\n> creation and replacement steps. This should be updated to reference\n> the REINDEX CONCURRENTLY command.\n\nAgreed, good catch. I would suggest to remove most of the section and\njust replace it with a reference to REINDEX CONCURRENTLY, as per the\nattached. What do you think?\n--\nMichael",
"msg_date": "Fri, 26 Apr 2019 12:05:37 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: \"Routine Reindexing\" docs should be updated to reference REINDEX\n CONCURRENTLY"
},
{
"msg_contents": "On 2019-04-26 05:05, Michael Paquier wrote:\n> On Thu, Apr 25, 2019 at 01:34:41PM -0700, Peter Geoghegan wrote:\n>> The documentation has a section called \"Routine Reindexing\", which\n>> explains how to simulate REINDEX CONCURRENTLY with a sequence of\n>> creation and replacement steps. This should be updated to reference\n>> the REINDEX CONCURRENTLY command.\n> \n> Agreed, good catch. I would suggest to remove most of the section and\n> just replace it with a reference to REINDEX CONCURRENTLY, as per the\n> attached. What do you think?\n\nlooks good to me\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Fri, 26 Apr 2019 14:32:15 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: \"Routine Reindexing\" docs should be updated to reference REINDEX\n CONCURRENTLY"
},
{
"msg_contents": "Michael Paquier <michael@paquier.xyz> writes:\n> On Thu, Apr 25, 2019 at 01:34:41PM -0700, Peter Geoghegan wrote:\n>> The documentation has a section called \"Routine Reindexing\", which\n>> explains how to simulate REINDEX CONCURRENTLY with a sequence of\n>> creation and replacement steps. This should be updated to reference\n>> the REINDEX CONCURRENTLY command.\n\n> Agreed, good catch. I would suggest to remove most of the section and\n> just replace it with a reference to REINDEX CONCURRENTLY, as per the\n> attached. What do you think?\n\n+1. Maybe say \"... which requires only a\n<literal>SHARE UPDATE EXCLUSIVE</literal> lock.\"\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 26 Apr 2019 10:53:35 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: \"Routine Reindexing\" docs should be updated to reference REINDEX\n CONCURRENTLY"
},
{
"msg_contents": "On Fri, Apr 26, 2019 at 10:53:35AM -0400, Tom Lane wrote:\n> +1. Maybe say \"... which requires only a\n> <literal>SHARE UPDATE EXCLUSIVE</literal> lock.\"\n\nThanks for the review. Committed with your suggested change.\n--\nMichael",
"msg_date": "Sat, 27 Apr 2019 09:07:30 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: \"Routine Reindexing\" docs should be updated to reference REINDEX\n CONCURRENTLY"
}
] |
[
{
"msg_contents": "Hi,\n\nI couldn't find old discussions or source code comments about this, but\nhas someone encountered the following error and wondered whether it's\nworking that way for a reason?\n\nselect a::text, b from foo order by 1, 2 collate \"C\";\nERROR: collations are not supported by type integer\nLINE 1: select a::text, b from foo order by 1, 2 collate \"C\";\n ^\nI expected this to resolve the output column number (2) to actual column\n(b) and apply COLLATE clause on top of it. Attached patch makes it so by\nteaching findTargetlistEntrySQL92() to recognize such ORDER BY items and\nhandle them likewise. With the patch:\n\nselect a::text, b from foo order by 1, 2 collate \"C\";\n a │ b\n────┼──────────\n ab │ ab wins\n ab │ ab1 wins\n ab │ ab2 wins\n(3 rows)\n\nselect a::text, b from foo order by 1 collate \"C\", 2;\n a │ b\n────┼──────────\n ab │ ab1 wins\n ab │ ab2 wins\n ab │ ab wins\n(3 rows)\n\nselect a::text, b from foo order by 3 collate \"C\", 2;\nERROR: ORDER BY position 3 is not in select list\nLINE 1: select a::text, b from foo order by 3 collate \"C\", 2;\n\nAm I missing something?\n\nThanks,\nAmit",
"msg_date": "Fri, 26 Apr 2019 12:56:44 +0900",
"msg_from": "Amit Langote <Langote_Amit_f8@lab.ntt.co.jp>",
"msg_from_op": true,
"msg_subject": "findTargetlistEntrySQL92() and COLLATE clause"
},
{
"msg_contents": "Amit Langote <Langote_Amit_f8@lab.ntt.co.jp> writes:\n> I couldn't find old discussions or source code comments about this, but\n> has someone encountered the following error and wondered whether it's\n> working that way for a reason?\n\n> select a::text, b from foo order by 1, 2 collate \"C\";\n> ERROR: collations are not supported by type integer\n> LINE 1: select a::text, b from foo order by 1, 2 collate \"C\";\n> ^\n\nThe reason it works that way is that *anything* except a bare integer\nconstant is treated according to SQL99 rules (that is, it's an ordinary\nexpression) not SQL92 rules. I do not think we should get into weird\nbastard combinations of SQL92 and SQL99 rules, because once you do,\nthere is no principled way to decide what anything means. Who's to say\nwhether \"ORDER BY 1 + 2\" means to take column 1 and add 2 to it and then\nsort, or maybe to add columns 1 and 2 and sort on the sum, or whatever?\n\nIOW, -1 on treating COLLATE as different from other sorts of expressions\nhere. There's no principle that can justify that.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 26 Apr 2019 11:02:46 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: findTargetlistEntrySQL92() and COLLATE clause"
},
{
"msg_contents": "On 2019/04/27 0:02, Tom Lane wrote:\n> Amit Langote <Langote_Amit_f8@lab.ntt.co.jp> writes:\n>> I couldn't find old discussions or source code comments about this, but\n>> has someone encountered the following error and wondered whether it's\n>> working that way for a reason?\n> \n>> select a::text, b from foo order by 1, 2 collate \"C\";\n>> ERROR: collations are not supported by type integer\n>> LINE 1: select a::text, b from foo order by 1, 2 collate \"C\";\n>> ^\n> \n> The reason it works that way is that *anything* except a bare integer\n> constant is treated according to SQL99 rules (that is, it's an ordinary\n> expression) not SQL92 rules. I do not think we should get into weird\n> bastard combinations of SQL92 and SQL99 rules, because once you do,\n> there is no principled way to decide what anything means. Who's to say\n> whether \"ORDER BY 1 + 2\" means to take column 1 and add 2 to it and then\n> sort, or maybe to add columns 1 and 2 and sort on the sum, or whatever?\n\nAh, OK. Thanks for the explanation.\n\n> IOW, -1 on treating COLLATE as different from other sorts of expressions\n> here. There's no principle that can justify that.\n\nIn contrast to your example above, maybe the COLLATE case is less\nambiguous in terms of what ought to be done?\n\nThanks,\nAmit\n\n\n\n",
"msg_date": "Tue, 7 May 2019 16:17:24 +0900",
"msg_from": "Amit Langote <Langote_Amit_f8@lab.ntt.co.jp>",
"msg_from_op": true,
"msg_subject": "Re: findTargetlistEntrySQL92() and COLLATE clause"
}
] |
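The thread above turns on how a COLLATE clause changes sort order. As a hedged aside (plain Python stand-ins, not PostgreSQL's collation machinery): PostgreSQL's "C" collation compares strings by raw byte/code-point order, much like Python's default `str` comparison, whereas a linguistic collation applies locale rules such as case folding first. A minimal sketch of the difference:

```python
# Illustrative sketch only; these are Python analogues, not PostgreSQL code.
# Under "C" collation, all uppercase ASCII letters sort before any lowercase
# letter, because comparison is bytewise.
rows = ["ab", "AB", "a1"]

c_collation = sorted(rows)                  # bytewise, like ORDER BY col COLLATE "C"
linguistic = sorted(rows, key=str.lower)    # crude stand-in for a locale-aware sort

print(c_collation)  # ['AB', 'a1', 'ab']
print(linguistic)   # ['a1', 'ab', 'AB']
```

This is why attaching `COLLATE "C"` to an ORDER BY item can reorder results, and why the parser must first decide which expression the clause attaches to, which is exactly the SQL92-versus-SQL99 ambiguity Tom describes.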
[
{
"msg_contents": "Hi,\n\nI have created a shared hash table in partitioned mode inside the postgres\nserver code. In order to guard the partitions, I'm trying to initialize an\narray of LWLocks. The code that I'm trying to use for that is\n\nvoid RequestNamedLWLockTranche\n<https://doxygen.postgresql.org/lwlock_8h.html#a77298cebf29062e88e529beb3851219b>\n(const char *tranche_name, int num_lwlocks);\nLWLockPadded <https://doxygen.postgresql.org/unionLWLockPadded.html> *\nGetNamedLWLockTranche\n<https://doxygen.postgresql.org/lwlock_8h.html#a15c956090fa42be94e075a1cf6bdfd8e>\n(const char *tranche_name);\n\nI'm not sure where exactly should this code be called from the server code.\nSo I had placed it in\n\nvoid CreateSharedMemoryAndSemaphores(bool makePrivate, int port);\n\nwithin ipic.c. However, I'm getting the following error message when\nstarting the database:\n\nFATAL: requested tranche is not registered\n\nSo at this point, I'm a little confused as to where the methods should be\ncalled from inside the server code. Any pointers would be appreciated.\n\nThanks,\n-SB\n\nPS: I saw that that are two variables controlling access to\nRequestNamedLWLockTranche\n<https://doxygen.postgresql.org/lwlock_8h.html#a77298cebf29062e88e529beb3851219b>\n()\n\nif (IsUnderPostmaster\n<https://doxygen.postgresql.org/globals_8c.html#a6e9dda2cdd786e5cba0d83b2deb577ec>\n|| !lock_named_request_allowed\n<https://doxygen.postgresql.org/lwlock_8c.html#aa9d4ba31a9dbab8fee08d6f0f649eb64>\n)\n return; /* too late */\n\nwhich implies that IsUnderPostmaster must be false and\nlock_named_request_allowed should be true. 
Thus, I had invoked\nRequestNamedLWLockTranche\n<https://doxygen.postgresql.org/lwlock_8h.html#a77298cebf29062e88e529beb3851219b>\nbefore the first call to LWLockShmemSize\n<https://doxygen.postgresql.org/lwlock_8c.html#af312a0706333345e2be792a7718b6c43>\nwhich sets lock_named_request_allowed = true and GetNamedLWLockTranche\n<https://doxygen.postgresql.org/lwlock_8h.html#a15c956090fa42be94e075a1cf6bdfd8e>\nlater. This works in the single user mode but fails when I start the server\nexplicitly through postgres -D ...",
"msg_date": "Fri, 26 Apr 2019 14:58:20 -0400",
"msg_from": "Souvik Bhattacherjee <kivuosb@gmail.com>",
"msg_from_op": true,
"msg_subject": "Initializing LWLock Array from Server Code"
},
{
"msg_contents": "Hi Robert,\n\nThank you for your reply and sorry that I couldn't reply earlier.\nSince I didn't get any response within a couple of days, I took the longer\nroute -- changed the lwlock.h and lwlock.c\nfor accommodating the lw locks for the shared hash table.\n\nI'll describe what I modified in the lwlock.h and lwlock.c and it seems to\nbe working fine. Although I haven't got an opportunity\nto test it extensively. If you could let me know if I missed out anything\nthat might cause problems later that would be great.\n\nIn lwlock.h, the following modifications were made:\n\n*/* the number of partitions of the shared hash table */*\n#define NUM_FILTERHASH_PARTITIONS 64\n\n*/* Offsets for various chunks of preallocated lwlocks. */*\n#define BUFFER_MAPPING_LWLOCK_OFFSET NUM_INDIVIDUAL_LWLOCKS\n#define LOCK_MANAGER_LWLOCK_OFFSET \\\n (BUFFER_MAPPING_LWLOCK_OFFSET + NUM_BUFFER_PARTITIONS)\n#define PREDICATELOCK_MANAGER_LWLOCK_OFFSET \\\n (LOCK_MANAGER_LWLOCK_OFFSET + NUM_LOCK_PARTITIONS)\n\n*/* offset for shared filterhash lwlocks in the MainLWLockArray */*\n#define FILTERHASH_OFFSET \\\n (PREDICATELOCK_MANAGER_LWLOCK_OFFSET + NUM_PREDICATELOCK_PARTITIONS)\n*/* modified NUM_FIXED_LOCKS */*\n#define NUM_FIXED_LWLOCKS \\\n (FILTERHASH_OFFSET + NUM_FILTERHASH_PARTITIONS)\n\n*/* added LWTRANCHE_FILTERHASH in the BuiltinTrancheIds */*\ntypedef enum BuiltinTrancheIds\n{\n LWTRANCHE_CLOG_BUFFERS = NUM_INDIVIDUAL_LWLOCKS,\n LWTRANCHE_COMMITTS_BUFFERS,\n LWTRANCHE_SUBTRANS_BUFFERS,\n LWTRANCHE_MXACTOFFSET_BUFFERS,\n LWTRANCHE_MXACTMEMBER_BUFFERS,\n LWTRANCHE_ASYNC_BUFFERS,\n LWTRANCHE_OLDSERXID_BUFFERS,\n LWTRANCHE_WAL_INSERT,\n LWTRANCHE_BUFFER_CONTENT,\n LWTRANCHE_BUFFER_IO_IN_PROGRESS,\n LWTRANCHE_REPLICATION_ORIGIN,\n LWTRANCHE_REPLICATION_SLOT_IO_IN_PROGRESS,\n LWTRANCHE_PROC,\n LWTRANCHE_BUFFER_MAPPING,\n LWTRANCHE_LOCK_MANAGER,\n LWTRANCHE_PREDICATE_LOCK_MANAGER,\n *LWTRANCHE_FILTERHASH*,\n LWTRANCHE_PARALLEL_HASH_JOIN,\n LWTRANCHE_PARALLEL_QUERY_DSA,\n 
LWTRANCHE_SESSION_DSA,\n LWTRANCHE_SESSION_RECORD_TABLE,\n LWTRANCHE_SESSION_TYPMOD_TABLE,\n LWTRANCHE_SHARED_TUPLESTORE,\n LWTRANCHE_TBM,\n LWTRANCHE_PARALLEL_APPEND,\n LWTRANCHE_FIRST_USER_DEFINED\n} BuiltinTrancheIds;\n\n\nIn lwlock.c, the following modifications were made:\n\nIn function, static void InitializeLWLocks(void), the following lines were\nadded:\n\n*/* Initialize filterhash LWLocks in main array */*\n lock = MainLWLockArray + NUM_INDIVIDUAL_LWLOCKS +\n NUM_BUFFER_PARTITIONS + NUM_LOCK_PARTITIONS +\nNUM_FILTERHASH_PARTITIONS;\n for (id = 0; id < NUM_FILTERHASH_PARTITIONS; id++, lock++)\n LWLockInitialize(&lock->lock, LWTRANCHE_FILTERHASH);\n\nIn function void RegisterLWLockTranches(void), the following line was added:\n\nLWLockRegisterTranche(LWTRANCHE_FILTERHASH, \"filter_hash\");\n\nAll of this was done after allocating the required of space in the shared\nmemory.\n\nThanks,\n-SB\n\n\nOn Mon, Apr 29, 2019 at 1:59 PM Robert Haas <robertmhaas@gmail.com> wrote:\n\n> On Fri, Apr 26, 2019 at 2:58 PM Souvik Bhattacherjee <kivuosb@gmail.com>\n> wrote:\n> > I have created a shared hash table in partitioned mode inside the\n> postgres server code. In order to guard the partitions, I'm trying to\n> initialize an array of LWLocks. The code that I'm trying to use for that is\n> >\n> > void RequestNamedLWLockTranche(const char *tranche_name, int\n> num_lwlocks);\n> > LWLockPadded *GetNamedLWLockTranche(const char *tranche_name);\n> >\n> > I'm not sure where exactly should this code be called from the server\n> code. So I had placed it in\n> >\n> > void CreateSharedMemoryAndSemaphores(bool makePrivate, int port);\n> >\n> > within ipic.c. However, I'm getting the following error message when\n> starting the database:\n> >\n> > FATAL: requested tranche is not registered\n> >\n> > So at this point, I'm a little confused as to where the methods should\n> be called from inside the server code. 
Any pointers would be appreciated.\n>\n> RequestNamedLWLockTranche() changes the behavior of LWLockShmemSize()\n> and CreateLWLocks(), so must be called before either of those.\n> GetNamedLWLockTranche() must be called after CreateLWLocks().\n>\n> This machinery works for pg_stat_statements, so see that as an example\n> of how to make this work from an extension. If you're hacking the\n> core server code, then look at the places where the corresponding bits\n> of pg_stat_statements code get called. IIRC, _PG_init() gets called\n> from process_shared_preload_libraries(), so you might look at the\n> placement of the calls to that function.\n>\n> --\n> Robert Haas\n> EnterpriseDB: http://www.enterprisedb.com\n> The Enterprise PostgreSQL Company\n>\n",
"msg_date": "Tue, 30 Apr 2019 17:52:15 -0400",
"msg_from": "Souvik Bhattacherjee <kivuosb@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Initializing LWLock Array from Server Code"
},
{
"msg_contents": "On Tue, Apr 30, 2019 at 5:52 PM Souvik Bhattacherjee <kivuosb@gmail.com> wrote:\n> Thank you for your reply and sorry that I couldn't reply earlier.\n> Since I didn't get any response within a couple of days, I took the longer route -- changed the lwlock.h and lwlock.c\n> for accommodating the lw locks for the shared hash table.\n>\n> I'll describe what I modified in the lwlock.h and lwlock.c and it seems to be working fine. Although I haven't got an opportunity\n> to test it extensively. If you could let me know if I missed out anything that might cause problems later that would be great.\n\nWell, I can't really vouch for your code on a quick look, but it\ndoesn't look insane.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Tue, 30 Apr 2019 20:26:03 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Initializing LWLock Array from Server Code"
},
{
"msg_contents": "Replying to myself to resend to the list, since my previous attempt\nseems to have been eaten by a grue.\n\nOn Mon, Apr 29, 2019 at 1:59 PM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> On Fri, Apr 26, 2019 at 2:58 PM Souvik Bhattacherjee <kivuosb@gmail.com> wrote:\n> > I have created a shared hash table in partitioned mode inside the postgres server code. In order to guard the partitions, I'm trying to initialize an array of LWLocks. The code that I'm trying to use for that is\n> >\n> > void RequestNamedLWLockTranche(const char *tranche_name, int num_lwlocks);\n> > LWLockPadded *GetNamedLWLockTranche(const char *tranche_name);\n> >\n> > I'm not sure where exactly should this code be called from the server code. So I had placed it in\n> >\n> > void CreateSharedMemoryAndSemaphores(bool makePrivate, int port);\n> >\n> > within ipic.c. However, I'm getting the following error message when starting the database:\n> >\n> > FATAL: requested tranche is not registered\n> >\n> > So at this point, I'm a little confused as to where the methods should be called from inside the server code. Any pointers would be appreciated.\n>\n> RequestNamedLWLockTranche() changes the behavior of LWLockShmemSize()\n> and CreateLWLocks(), so must be called before either of those.\n> GetNamedLWLockTranche() must be called after CreateLWLocks().\n>\n> This machinery works for pg_stat_statements, so see that as an example\n> of how to make this work from an extension. If you're hacking the\n> core server code, then look at the places where the corresponding bits\n> of pg_stat_statements code get called. IIRC, _PG_init() gets called\n> from process_shared_preload_libraries(), so you might look at the\n> placement of the calls to that function.\n>\n> --\n> Robert Haas\n> EnterpriseDB: http://www.enterprisedb.com\n> The Enterprise PostgreSQL Company\n\n\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Tue, 30 Apr 2019 20:35:04 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Initializing LWLock Array from Server Code"
}
] |
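The design discussed in the thread above, a shared hash table split into NUM_FILTERHASH_PARTITIONS partitions with one lock per partition, can be sketched in plain Python. This is an illustrative analogue only; PostgreSQL's LWLocks live in shared memory and behave quite differently from `threading.Lock`, and the class and names below are hypothetical:

```python
import threading

NUM_PARTITIONS = 64  # mirrors the thread's NUM_FILTERHASH_PARTITIONS

class PartitionedHash:
    """Hypothetical sketch: each partition is guarded by its own lock,
    so concurrent writers to different partitions never contend."""

    def __init__(self, num_partitions=NUM_PARTITIONS):
        self.locks = [threading.Lock() for _ in range(num_partitions)]
        self.parts = [dict() for _ in range(num_partitions)]

    def _idx(self, key):
        # Route each key to one partition; only that partition's lock is taken.
        return hash(key) % len(self.parts)

    def put(self, key, value):
        i = self._idx(key)
        with self.locks[i]:
            self.parts[i][key] = value

    def get(self, key, default=None):
        i = self._idx(key)
        with self.locks[i]:
            return self.parts[i].get(key, default)
```

The point of the partitioning, in both this toy and the real lwlock.c arrays, is that lock contention scales with the number of backends touching the *same* partition rather than the table as a whole.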
[
{
"msg_contents": "itemid.h introduces the struct ItemIdData as follows:\n\n/*\n * An item pointer (also called line pointer) on a buffer page\n\nMeanwhile, itemptr.h introduces the struct ItemPointerData as follows:\n\n/*\n * ItemPointer:\n *\n * This is a pointer to an item within a disk page of a known file\n * (for example, a cross-link from an index to its parent table).\n\nIt doesn't seem reasonable to assume that you should know the\ndifference based on context. The two concepts are closely related. An\nItemPointerData points to a block, as well as the, uh, item pointer\nwithin that block.\n\nThis ambiguity is avoidable, and should be avoided. ISTM that the\nleast confusing way of removing the ambiguity would be to no longer\nrefer to ItemIds as item pointers, without changing anything else.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Fri, 26 Apr 2019 14:18:44 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": true,
"msg_subject": "What is an item pointer, anyway?"
},
{
"msg_contents": "On Fri, Apr 26, 2019 at 2:19 PM Peter Geoghegan <pg@bowt.ie> wrote:\n\n> itemid.h introduces the struct ItemIdData as follows:\n>\n> /*\n> * An item pointer (also called line pointer) on a buffer page\n>\n> Meanwhile, itemptr.h introduces the struct ItemPointerData as follows:\n>\n> /*\n> * ItemPointer:\n> *\n> * This is a pointer to an item within a disk page of a known file\n> * (for example, a cross-link from an index to its parent table).\n>\n> It doesn't seem reasonable to assume that you should know the\n> difference based on context. The two concepts are closely related. An\n> ItemPointerData points to a block, as well as the, uh, item pointer\n> within that block.\n>\n> This ambiguity is avoidable, and should be avoided.\n\n\nAgree.\n\n\n> ISTM that the\n> least confusing way of removing the ambiguity would be to no longer\n> refer to ItemIds as item pointers, without changing anything else.\n>\n\nHow about we rename ItemPointerData to TupleIdentifier or ItemIdentifier\ninstead and leave ItemPointer or Item confined to AM term, where item can\nbe tuple, datum or anything else ?\n",
"msg_date": "Fri, 26 Apr 2019 16:23:44 -0700",
"msg_from": "Ashwin Agrawal <aagrawal@pivotal.io>",
"msg_from_op": false,
"msg_subject": "Re: What is an item pointer, anyway?"
},
{
"msg_contents": "On Fri, Apr 26, 2019 at 4:23 PM Ashwin Agrawal <aagrawal@pivotal.io> wrote:\n> How about we rename ItemPointerData to TupleIdentifier or ItemIdentifier instead and leave ItemPointer or Item confined to AM term, where item can be tuple, datum or anything else ?\n\nI'm not a fan of that idea, because the reality is that an\nItemPointerData is quite explicitly supposed to be a physiological\nidentifier (TID) used by heapam, or a similar heap-like access method\nsuch as zheap. This is baked into a number of things.\n\nThe limitation that pluggable storage engines have to work in terms of\nitem pointers is certainly a problem, especially for things like the\nZedstore column store project you're working on. However, I suspect\nthat that problem is best solved by accommodating other types of\nidentifiers that don't work like TIDs.\n\nI understand why you've adopted ItemPointerData as a fully-logical\nidentifier in your prototype, but it's not a great long term solution.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Fri, 26 Apr 2019 16:53:13 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": true,
"msg_subject": "Re: What is an item pointer, anyway?"
},
{
"msg_contents": "Ashwin Agrawal <aagrawal@pivotal.io> writes:\n> On Fri, Apr 26, 2019 at 2:19 PM Peter Geoghegan <pg@bowt.ie> wrote:\n>> ISTM that the\n>> least confusing way of removing the ambiguity would be to no longer\n>> refer to ItemIds as item pointers, without changing anything else.\n\nHow many places would we be changing to clean that up?\n\n> How about we rename ItemPointerData to TupleIdentifier or ItemIdentifier\n> instead and leave ItemPointer or Item confined to AM term, where item can\n> be tuple, datum or anything else ?\n\nThere's half a thousand references to ItemPointer[Data] in our\nsources, and probably tons more in external modules. I'm *not*\nin favor of renaming it.\n\nItemId[Data] is somewhat less widely referenced, but I'm still not\nmuch in favor of renaming that type. I think fixing comments to\nuniformly call it an item ID would be more reasonable. (We should\nleave the \"line pointer\" terminology in place, too; if memory serves,\nan awful lot of variables of the type are named \"lp\" or variants.\nRenaming all of those is to nobody's benefit.)\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 26 Apr 2019 19:57:20 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: What is an item pointer, anyway?"
},
{
"msg_contents": "On Fri, Apr 26, 2019 at 4:57 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> ItemId[Data] is somewhat less widely referenced, but I'm still not\n> much in favor of renaming that type. I think fixing comments to\n> uniformly call it an item ID would be more reasonable. (We should\n> leave the \"line pointer\" terminology in place, too; if memory serves,\n> an awful lot of variables of the type are named \"lp\" or variants.\n> Renaming all of those is to nobody's benefit.)\n\nI was proposing that we not rename any struct at all, and continue to\ncall ItemId[Data]s \"line pointers\" only. This would involve removing\nthe comment in itemid.h that confusingly refers to line pointers as\n\"item pointers\" (plus any other comments that fail to make a clear\ndistinction).\n\nI think that the total number of comments that would be affected by\nthis approach is quite low.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Fri, 26 Apr 2019 17:02:20 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": true,
"msg_subject": "Re: What is an item pointer, anyway?"
},
{
"msg_contents": "Peter Geoghegan <pg@bowt.ie> writes:\n> I was proposing that we not rename any struct at all, and continue to\n> call ItemId[Data]s \"line pointers\" only.\n\nYeah, I'd be fine with that, although the disconnect between the type\nname and the comment terminology might confuse some people.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 26 Apr 2019 20:05:14 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: What is an item pointer, anyway?"
},
{
"msg_contents": "On Fri, Apr 26, 2019 at 5:05 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Yeah, I'd be fine with that, although the disconnect between the type\n> name and the comment terminology might confuse some people.\n\nMaybe, but the fact that the ItemIdData struct consists of bit fields\nthat are all named \"lp_*\" offers a hint. Plus you have the LP_*\nconstants that get stored in ItemIdData.lp_flags.\n\nI wouldn't call the struct ItemIdData if I was in a green field\nsituation, but it doesn't seem too bad under the present\ncircumstances. I'd rather not change the struct's name, because that\nwould probably cause problems without any real benefit. OTOH, calling\ntwo closely related but distinct things by the same name is atrocious.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Fri, 26 Apr 2019 17:13:53 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": true,
"msg_subject": "Re: What is an item pointer, anyway?"
},
{
"msg_contents": "On Fri, Apr 26, 2019 at 5:13 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> On Fri, Apr 26, 2019 at 5:05 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > Yeah, I'd be fine with that, although the disconnect between the type\n> > name and the comment terminology might confuse some people.\n>\n> Maybe, but the fact that the ItemIdData struct consists of bit fields\n> that are all named \"lp_*\" offers a hint. Plus you have the LP_*\n> constants that get stored in ItemIdData.lp_flags.\n\nAttached draft patch adjusts code comments and error messages where a\nline pointer is referred to as an item pointer. It turns out that this\npractice isn't all that prevalent. Here are some specific concerns\nthat I had to think about when writing the patch, though:\n\n* I ended up removing a big indexam.c \"old comments\" comment paragraph\nfrom the Berkeley days, because it used the term item pointer in what\nseemed like the wrong way, but also because AFAICT it's totally\nobsolete.\n\n* Someone should confirm that I have preserved the original intent of\nthe changes within README.HOT, and the heapam changes that relate to\npruning. It's possible that I changed \"item pointer\" to \"line pointer\"\nin one or two places where I should have changed \"item pointer\" to\n\"tuple\" instead.\n\n* I changed a few long standing \"can't happen\" error messages that\nconcern corruption, most of which also relate to pruning. Maybe that's\na cost that needs to be considered.\n\n* I changed a lazy_scan_heap() log message of long-standing. Another\ndownside that needs to be considered.\n\n* I expanded a little on the advantages of using line pointers within\nbufpage.h. That seemed in scope to me, because it drives home the\ndistinction between item pointers and line pointers.\n\n-- \nPeter Geoghegan",
"msg_date": "Sun, 5 May 2019 13:14:40 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": true,
"msg_subject": "Re: What is an item pointer, anyway?"
},
{
"msg_contents": "On Sun, May 5, 2019 at 1:14 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> Attached draft patch adjusts code comments and error messages where a\n> line pointer is referred to as an item pointer. It turns out that this\n> practice isn't all that prevalent. Here are some specific concerns\n> that I had to think about when writing the patch, though:\n\nPing? Any objections to pushing ahead with this?\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Mon, 13 May 2019 11:50:26 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": true,
"msg_subject": "Re: What is an item pointer, anyway?"
},
{
"msg_contents": "Peter Geoghegan <pg@bowt.ie> writes:\n> On Sun, May 5, 2019 at 1:14 PM Peter Geoghegan <pg@bowt.ie> wrote:\n>> Attached draft patch adjusts code comments and error messages where a\n>> line pointer is referred to as an item pointer. It turns out that this\n>> practice isn't all that prevalent. Here are some specific concerns\n>> that I had to think about when writing the patch, though:\n\n> Ping? Any objections to pushing ahead with this?\n\nPatch looks fine to me. One minor quibble: in pruneheap.c you have\n\n /*\n- * Prune specified item pointer or a HOT chain originating at that item.\n+ * Prune specified line pointer or a HOT chain originating at that item.\n *\n * If the item is an index-referenced tuple (i.e. not a heap-only tuple),\n\nShould \"that item\" also be re-worded, for consistency?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 13 May 2019 15:37:59 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: What is an item pointer, anyway?"
},
{
"msg_contents": "On Mon, May 13, 2019 at 12:38 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> /*\n> - * Prune specified item pointer or a HOT chain originating at that item.\n> + * Prune specified line pointer or a HOT chain originating at that item.\n> *\n> * If the item is an index-referenced tuple (i.e. not a heap-only tuple),\n>\n> Should \"that item\" also be re-worded, for consistency?\n\nYes, it should be -- I'll fix it.\n\nI'm going to backpatch the storage.sgml change on its own, while\npushing everything else in a separate HEAD-only commit.\n\nThanks\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Mon, 13 May 2019 12:54:26 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": true,
"msg_subject": "Re: What is an item pointer, anyway?"
}
] |
[
{
"msg_contents": "Update time zone data files to tzdata release 2019a.\n\nDST law changes in Palestine and Metlakatla.\nHistorical corrections for Israel.\n\nEtc/UCT is now a backward-compatibility link to Etc/UTC, instead\nof being a separate zone that generates the abbreviation \"UCT\",\nwhich nowadays is typically a typo. Postgres will still accept\n\"UCT\" as an input zone name, but it won't output it.\n\nBranch\n------\nREL_11_STABLE\n\nDetails\n-------\nhttps://git.postgresql.org/pg/commitdiff/f6307bacabf555e9343fbf4f91723ce698303b03\n\nModified Files\n--------------\nsrc/timezone/data/tzdata.zi | 14 +++++++++-----\nsrc/timezone/known_abbrevs.txt | 1 -\n2 files changed, 9 insertions(+), 6 deletions(-)\n\n",
"msg_date": "Fri, 26 Apr 2019 21:57:17 +0000",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "pgsql: Update time zone data files to tzdata release 2019a."
},
{
"msg_contents": "Re: Tom Lane 2019-04-26 <E1hK8qL-0005yH-VX@gemulon.postgresql.org>\n> Update time zone data files to tzdata release 2019a.\n> \n> DST law changes in Palestine and Metlakatla.\n> Historical corrections for Israel.\n> \n> Etc/UCT is now a backward-compatibility link to Etc/UTC, instead\n> of being a separate zone that generates the abbreviation \"UCT\",\n> which nowadays is typically a typo. Postgres will still accept\n> \"UCT\" as an input zone name, but it won't output it.\n\nThere is something wrong here. On Debian Buster/unstable, using\nsystem tzdata (2019a-1), if /etc/timezone is \"Etc/UTC\":\n\n11.3's initdb adds timezone = 'UCT' to postgresql.conf\n12beta1's initdb add timezone = 'Etc/UCT' to postgresql.conf\n\nIs that expected behavior? Docker users are complaining that \"UCT\"\nmesses up their testsuites. https://github.com/docker-library/postgres/issues/577\n\nChristoph\n\n\n",
"msg_date": "Tue, 4 Jun 2019 10:57:35 +0200",
"msg_from": "Christoph Berg <myon@debian.org>",
"msg_from_op": false,
"msg_subject": "UCT (Re: pgsql: Update time zone data files to tzdata release 2019a.)"
},
{
"msg_contents": ">>>>> \"Christoph\" == Christoph Berg <myon@debian.org> writes:\n\n >> Etc/UCT is now a backward-compatibility link to Etc/UTC, instead of\n >> being a separate zone that generates the abbreviation \"UCT\", which\n >> nowadays is typically a typo. Postgres will still accept \"UCT\" as an\n >> input zone name, but it won't output it.\n\n Christoph> There is something wrong here. On Debian Buster/unstable,\n Christoph> using system tzdata (2019a-1), if /etc/timezone is\n Christoph> \"Etc/UTC\":\n\n Christoph> 11.3's initdb adds timezone = 'UCT' to postgresql.conf\n Christoph> 12beta1's initdb add timezone = 'Etc/UCT' to postgresql.conf\n\n Christoph> Is that expected behavior?\n\nIt's clearly not what users expect and it's clearly the wrong thing to\ndo, though it's the expected behavior of the current code:\n\n * On most systems, we rely on trying to match the observable behavior of\n * the C library's localtime() function. The database zone that matches\n * furthest into the past is the one to use. Often there will be several\n * zones with identical rankings (since the IANA database assigns multiple\n * names to many zones). We break ties arbitrarily by preferring shorter,\n * then alphabetically earlier zone names.\n\nI believe I pointed out a long, long time ago that this tie-breaking\nstrategy was insane, and that the rule should be to prefer canonical\nnames and use something else only in the case of a strictly better\nmatch.\n\nIf TZ is set or if /etc/localtime is a symlink rather than a hardlink or\ncopy of the zone file, then PG can get the zone name directly rather\nthan having to do the comparisons, so the above comment doesn't apply;\nthat gives you a workaround.\n\n-- \nAndrew (irc:RhodiumToad)\n\n\n",
"msg_date": "Tue, 04 Jun 2019 11:20:14 +0100",
"msg_from": "Andrew Gierth <andrew@tao11.riddles.org.uk>",
"msg_from_op": false,
"msg_subject": "Re: UCT (Re: pgsql: Update time zone data files to tzdata release\n 2019a.)"
},
{
"msg_contents": "Christoph Berg <myon@debian.org> writes:\n> There is something wrong here. On Debian Buster/unstable, using\n> system tzdata (2019a-1), if /etc/timezone is \"Etc/UTC\":\n\n> 11.3's initdb adds timezone = 'UCT' to postgresql.conf\n> 12beta1's initdb add timezone = 'Etc/UCT' to postgresql.conf\n\nHm, I don't have a Debian machine at hand, but I'm unable to\nreproduce this using macOS or RHEL. I tried things like\n\n$ TZ=UTC initdb\n...\nselecting default timezone ... UTC\n...\n\nIs your build using --with-system-tzdata? If so, which tzdb\nrelease is the system on, and is it a completely stock copy\nof that release?\n\nGiven the tie-breaking behavior in findtimezone.c,\n\n * ... Often there will be several\n * zones with identical rankings (since the IANA database assigns multiple\n * names to many zones). We break ties arbitrarily by preferring shorter,\n * then alphabetically earlier zone names.\n\nit's not so surprising that UCT might be chosen, but I don't\nunderstand how Etc/UCT would be.\n\nBTW, does Debian set up /etc/timezone as a symlink, by any chance,\nrather than a copy or hard link? If it's a symlink, we could improve\nmatters by teaching identify_system_timezone() to inspect it.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 04 Jun 2019 11:27:31 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: UCT (Re: pgsql: Update time zone data files to tzdata release\n 2019a.)"
},
{
"msg_contents": "Andrew Gierth <andrew@tao11.riddles.org.uk> writes:\n> I believe I pointed out a long, long time ago that this tie-breaking\n> strategy was insane, and that the rule should be to prefer canonical\n> names and use something else only in the case of a strictly better\n> match.\n\nThis is assuming that the tzdb data has a concept of a canonical name\nfor a zone, which unfortunately it does not. UTC, UCT, Etc/UTC,\nand about four other strings are equivalent names for the same zone\nso far as one can tell from the installed data.\n\nWe could imagine layering some additional data on top of tzdb,\nbut I don't much want to go there from a maintenance standpoint.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 04 Jun 2019 11:30:44 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: UCT (Re: pgsql: Update time zone data files to tzdata release\n 2019a.)"
},
{
"msg_contents": "Hi,\n\nOn 2019-06-04 11:27:31 -0400, Tom Lane wrote:\n> Hm, I don't have a Debian machine at hand, but I'm unable to\n> reproduce this using macOS or RHEL. I tried things like\n> \n> $ TZ=UTC initdb\n> ...\n> selecting default timezone ... UTC\n> ...\n\nOn debian unstable that's what I get too, both with system and PG\ntzdata.\n\n\n> BTW, does Debian set up /etc/timezone as a symlink, by any chance,\n> rather than a copy or hard link? If it's a symlink, we could improve\n> matters by teaching identify_system_timezone() to inspect it.\n\nOn my system it's a copy (link count 1, not a symlink). Or did you mean\n/etc/localtime? Because that's indeed a symlink.\n\nIf I set the system-wide default, using dpkg-reconfigure -plow tzdata,\nto UTC I *do* get Etc/UTC.\n\nroot@alap4:/home/andres/src/postgresql# cat /etc/timezone\nEtc/UTC\nroot@alap4:/home/andres/src/postgresql# ls -l /etc/timezone\n-rw-r--r-- 1 root root 8 Jun 4 15:44 /etc/timezone\n\nselecting default timezone ... Etc/UTC\n\nThis is independent of being built with system or non-system tzdata.\n\nEnabling debugging shows:\n\nselecting default timezone ... symbolic link \"/etc/localtime\" contains \"/usr/share/zoneinfo/Etc/UCT\"\nTZ \"Etc/UCT\" gets max score 5200\nEtc/UCT\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 4 Jun 2019 08:53:30 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: UCT (Re: pgsql: Update time zone data files to tzdata release\n 2019a.)"
},
{
"msg_contents": ">>>>> \"Tom\" == Tom Lane <tgl@sss.pgh.pa.us> writes:\n\n > Andrew Gierth <andrew@tao11.riddles.org.uk> writes:\n >> I believe I pointed out a long, long time ago that this tie-breaking\n >> strategy was insane, and that the rule should be to prefer canonical\n >> names and use something else only in the case of a strictly better\n >> match.\n\n Tom> This is assuming that the tzdb data has a concept of a canonical\n Tom> name for a zone, which unfortunately it does not. UTC, UCT,\n Tom> Etc/UTC, and about four other strings are equivalent names for the\n Tom> same zone so far as one can tell from the installed data.\n\nThe simplest definition is that the names listed in zone.tab or\nzone1970.tab if you prefer that one are canonical, and Etc/UTC and the\nEtc/GMT[offset] names could be regarded as canonical too. Everything\nelse is either an alias or a backward-compatibility hack.\n\n-- \nAndrew (irc:RhodiumToad)\n\n\n",
"msg_date": "Tue, 04 Jun 2019 16:57:03 +0100",
"msg_from": "Andrew Gierth <andrew@tao11.riddles.org.uk>",
"msg_from_op": false,
"msg_subject": "Re: UCT (Re: pgsql: Update time zone data files to tzdata release\n 2019a.)"
},
{
"msg_contents": "Hi,\n\nOn 2019-06-04 11:27:31 -0400, Tom Lane wrote:\n> $ TZ=UTC initdb\n> ...\n> selecting default timezone ... UTC\n> ...\n\nBtw, if the input is Etc/UTZ, do you also get UTC or Etc/UTZ? Because it\nseems that debian only configures Etc/UTZ on a system-wide basis\nnow. Which seems not insane, given that's it's a backward compat thing\nnow.\n\n- Andres\n\n\n",
"msg_date": "Tue, 4 Jun 2019 16:07:15 +0000",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: UCT (Re: pgsql: Update time zone data files to tzdata release\n 2019a.)"
},
{
"msg_contents": ">>>>> \"Christoph\" == Christoph Berg <myon@debian.org> writes:\n\n Christoph> There is something wrong here. On Debian Buster/unstable,\n Christoph> using system tzdata (2019a-1), if /etc/timezone is\n Christoph> \"Etc/UTC\":\n\n Christoph> 11.3's initdb adds timezone = 'UCT' to postgresql.conf\n Christoph> 12beta1's initdb add timezone = 'Etc/UCT' to postgresql.conf\n\nfwiw on FreeBSD with no /etc/localtime and no TZ in the environment (and\nhence running on UTC), I get \"UCT\" on both 11.3 and HEAD.\n\n-- \nAndrew (irc:RhodiumToad)\n\n\n",
"msg_date": "Tue, 04 Jun 2019 17:20:42 +0100",
"msg_from": "Andrew Gierth <andrew@tao11.riddles.org.uk>",
"msg_from_op": false,
"msg_subject": "Re: UCT (Re: pgsql: Update time zone data files to tzdata release\n 2019a.)"
},
{
"msg_contents": "Hi,\n\nOn 2019-06-04 08:53:30 -0700, Andres Freund wrote:\n> If I set the system-wide default, using dpkg-reconfigure -plow tzdata,\n> to UTC I *do* get Etc/UTC.\n> \n> root@alap4:/home/andres/src/postgresql# cat /etc/timezone\n> Etc/UTC\n> root@alap4:/home/andres/src/postgresql# ls -l /etc/timezone\n> -rw-r--r-- 1 root root 8 Jun 4 15:44 /etc/timezone\n> \n> selecting default timezone ... Etc/UTC\n> \n> This is independent of being built with system or non-system tzdata.\n>\n> Enabling debugging shows:\n\nSorry, I was not awake enough while reading the thread (and UCT looks so\nsimilar to UTC).\n\nI do indeed see the behaviour of choosing UCT in 11, but not in\n12. Independent of system/non-system tzdata. With system tzdata, I get\nthe following debug output (after filtering lots of lines wiht out |grep\n-v 'scores 0'|grep -v 'uses leap seconds')\n\nTZ \"Zulu\" gets max score 5200\nTZ \"UCT\" gets max score 5200\nTZ \"Universal\" gets max score 5200\nTZ \"UTC\" gets max score 5200\nTZ \"Etc/Zulu\" gets max score 5200\nTZ \"Etc/UCT\" gets max score 5200\nTZ \"Etc/Universal\" gets max score 5200\nTZ \"Etc/UTC\" gets max score 5200\nTZ \"localtime\" gets max score 5200\nTZ \"posix/Zulu\" gets max score 5200\nTZ \"posix/UCT\" gets max score 5200\nTZ \"posix/Universal\" gets max score 5200\nTZ \"posix/UTC\" gets max score 5200\nTZ \"posix/Etc/Zulu\" gets max score 5200\nTZ \"posix/Etc/UCT\" gets max score 5200\nTZ \"posix/Etc/Universal\" gets max score 5200\nTZ \"posix/Etc/UTC\" gets max score 5200\nok\n\nwhereas master only does:\n\nselecting default timezone ... 
symbolic link \"/etc/localtime\" contains \"/usr/share/zoneinfo/Etc/UTC\"\nTZ \"Etc/UTC\" gets max score 5200\nEtc/UTC\n\nThe reason for the behaviour difference between v12 and 11 is that 12\ndoes:\n\n\t/*\n\t * Try to avoid the brute-force search by seeing if we can recognize the\n\t * system's timezone setting directly.\n\t *\n\t * Currently we just check /etc/localtime; there are other conventions for\n\t * this, but that seems to be the only one used on enough platforms to be\n\t * worth troubling over.\n\t */\n\tif (check_system_link_file(\"/etc/localtime\", &tt, resultbuf))\n\t\treturn resultbuf;\n\nwhich prevents having to iterate through all of these files, and ending\nup with a lot of equivalently scored timezones.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 4 Jun 2019 16:43:38 +0000",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: UCT (Re: pgsql: Update time zone data files to tzdata release\n 2019a.)"
},
{
"msg_contents": "Hi,\n\nOn 2019-06-04 17:20:42 +0100, Andrew Gierth wrote:\n> fwiw on FreeBSD with no /etc/localtime and no TZ in the environment (and\n> hence running on UTC), I get \"UCT\" on both 11.3 and HEAD.\n\nThat makes sense. As far as I can tell the reason that 12 sometimes ends\nup with the proper timezone is that we shortcut the search by:\n\n\t/*\n\t * Try to avoid the brute-force search by seeing if we can recognize the\n\t * system's timezone setting directly.\n\t *\n\t * Currently we just check /etc/localtime; there are other conventions for\n\t * this, but that seems to be the only one used on enough platforms to be\n\t * worth troubling over.\n\t */\n\tif (check_system_link_file(\"/etc/localtime\", &tt, resultbuf))\n\t\treturn resultbuf;\n\nwhich is actually a behaviour changing, rather than just an\noptimization, when there's a lot of equivalently scoring timezones.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 4 Jun 2019 16:44:57 +0000",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: UCT (Re: pgsql: Update time zone data files to tzdata release\n 2019a.)"
},
{
"msg_contents": "Re: Tom Lane 2019-06-04 <65800.1559662051@sss.pgh.pa.us>\n> > There is something wrong here. On Debian Buster/unstable, using\n> > system tzdata (2019a-1), if /etc/timezone is \"Etc/UTC\":\n> \n> Is your build using --with-system-tzdata? If so, which tzdb\n> release is the system on, and is it a completely stock copy\n> of that release?\n\nIt's using system tzdata (2019a-1).\n\nThere's one single patch on top of that:\n\nhttps://sources.debian.org/src/tzdata/2019a-1/debian/patches/\n\n> BTW, does Debian set up /etc/timezone as a symlink, by any chance,\n> rather than a copy or hard link? If it's a symlink, we could improve\n> matters by teaching identify_system_timezone() to inspect it.\n\nIn the meantime I realized that I was only testing /etc/timezone\n(which is a plain file with just the zone name), while not touching\n/etc/localtime at all. In this environment, it's a symlink:\n\nlrwxrwxrwx 1 root root 27 M�r 28 14:49 /etc/localtime -> /usr/share/zoneinfo/Etc/UTC\n\n... but the name still gets canonicalized to Etc/UCT or UCT.\n\nChristoph\n\n\n",
"msg_date": "Wed, 5 Jun 2019 10:47:35 +0200",
"msg_from": "Christoph Berg <myon@debian.org>",
"msg_from_op": false,
"msg_subject": "Re: UCT (Re: pgsql: Update time zone data files to tzdata release\n 2019a.)"
},
{
"msg_contents": "[ sorry for slow response, I'm on vacation ]\n\nAndres Freund <andres@anarazel.de> writes:\n> That makes sense. As far as I can tell the reason that 12 sometimes ends\n> up with the proper timezone is that we shortcut the search by:\n\n> \t/*\n> \t * Try to avoid the brute-force search by seeing if we can recognize the\n> \t * system's timezone setting directly.\n> \t *\n> \t * Currently we just check /etc/localtime; there are other conventions for\n> \t * this, but that seems to be the only one used on enough platforms to be\n> \t * worth troubling over.\n> \t */\n> \tif (check_system_link_file(\"/etc/localtime\", &tt, resultbuf))\n> \t\treturn resultbuf;\n\n> which is actually a behaviour changing, rather than just an\n> optimization, when there's a lot of equivalently scoring timezones.\n\nSure, that is intentionally a behavior change in this situation.\nThe theory is that if \"Etc/UCT\" is what the user put in /etc/localtime,\nthen that's the spelling she wants. See 23bd3cec6.\n\nBut it seems to me that this code is *not* determining the result in\nChristoph's case, because if it were, it'd be settling on Etc/UTC,\naccording to his followup report that\n\n>> lrwxrwxrwx 1 root root 27 Mär 28 14:49 /etc/localtime -> /usr/share/zoneinfo/Etc/UTC\n\nI'm not too familiar with what actually determines glibc's behavior\non Debian, but I'm suspicious that there's an inconsistency between\n/etc/localtime and /etc/timezone. We won't adopt the spelling we\nsee in /etc/localtime unless it agrees with the observed behavior of\nlocaltime(3).\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 06 Jun 2019 12:51:30 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: UCT (Re: pgsql: Update time zone data files to tzdata release\n 2019a.)"
},
{
"msg_contents": "Hi,\n\nOn 2019-06-06 12:51:30 -0400, Tom Lane wrote:\n> [ sorry for slow response, I'm on vacation ]\n\nGood.\n\n\n> Andres Freund <andres@anarazel.de> writes:\n> > That makes sense. As far as I can tell the reason that 12 sometimes ends\n> > up with the proper timezone is that we shortcut the search by:\n>\n> > \t/*\n> > \t * Try to avoid the brute-force search by seeing if we can recognize the\n> > \t * system's timezone setting directly.\n> > \t *\n> > \t * Currently we just check /etc/localtime; there are other conventions for\n> > \t * this, but that seems to be the only one used on enough platforms to be\n> > \t * worth troubling over.\n> > \t */\n> > \tif (check_system_link_file(\"/etc/localtime\", &tt, resultbuf))\n> > \t\treturn resultbuf;\n>\n> > which is actually a behaviour changing, rather than just an\n> > optimization, when there's a lot of equivalently scoring timezones.\n>\n> Sure, that is intentionally a behavior change in this situation.\n> The theory is that if \"Etc/UCT\" is what the user put in /etc/localtime,\n> then that's the spelling she wants. See 23bd3cec6.\n\nRight, I'm not complaining about that. 
I'm just noting that that\nexplains the cross-version divergence.\n\nNote that on 11 I *do* end up with some *other* timezone with the newer\ntimezone data:\n\n$cat /etc/timezone;ls -l /etc/localtime\nEtc/UTC\nlrwxrwxrwx 1 root root 27 Jun 6 17:02 /etc/localtime -> /usr/share/zoneinfo/Etc/UTC\n\n$ rm -rf /tmp/tztest;~/build/postgres/11-assert/install/bin/initdb /tmp/tztest 2>&1|grep -v 'scores 0'|grep -v 'uses leap seconds';grep timezone /tmp/tztest/postgresql.conf\n...\nTZ \"Zulu\" gets max score 5200\nTZ \"UCT\" gets max score 5200\nTZ \"Universal\" gets max score 5200\nTZ \"UTC\" gets max score 5200\nTZ \"Etc/Zulu\" gets max score 5200\nTZ \"Etc/UCT\" gets max score 5200\nTZ \"Etc/Universal\" gets max score 5200\nTZ \"Etc/UTC\" gets max score 5200\nTZ \"localtime\" gets max score 5200\nTZ \"posix/Zulu\" gets max score 5200\nTZ \"posix/UCT\" gets max score 5200\nTZ \"posix/Universal\" gets max score 5200\nTZ \"posix/UTC\" gets max score 5200\nTZ \"posix/Etc/Zulu\" gets max score 5200\nTZ \"posix/Etc/UCT\" gets max score 5200\nTZ \"posix/Etc/Universal\" gets max score 5200\nTZ \"posix/Etc/UTC\" gets max score 5200\nok\n...\n\nlog_timezone = 'UCT'\ntimezone = 'UCT'\n#timezone_abbreviations = 'Default' # Select the set of available time zone\n\t\t\t\t\t# share/timezonesets/.\n\nAs you can see the switch from Etc/UTC to UCT does happen here\n(presumably in any branch before 12). Which did not happen before the\nimport of 2019a / when using a system tzdata that's before\nthat. There you get:\n\nTZ \"Zulu\" gets max score 5200\nTZ \"Universal\" gets max score 5200\nTZ \"UTC\" gets max score 5200\nTZ \"Etc/Zulu\" gets max score 5200\nTZ \"Etc/Universal\" gets max score 5200\nTZ \"Etc/UTC\" gets max score 5200\nok\n\nand end up with UTC as the selection.\n\nI do think that < 12 clearly regressed here, although it's only exposing\nprevious behaviour further.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 6 Jun 2019 10:18:50 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: UCT (Re: pgsql: Update time zone data files to tzdata release\n 2019a.)"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2019-06-06 12:51:30 -0400, Tom Lane wrote:\n>> Sure, that is intentionally a behavior change in this situation.\n>> The theory is that if \"Etc/UCT\" is what the user put in /etc/localtime,\n>> then that's the spelling she wants. See 23bd3cec6.\n\n> Right, I'm not complaining about that. I'm just noting that that\n> explains the cross-version divergence.\n\nIt explains some cross-version divergence for sure. What I'm still not\nclear about is whether Christoph's report is entirely that, or whether\nthere's some additional factor we don't understand yet.\n\n> As you can see the switch from Etc/UTC to UCT does happen here\n> (presumably in any branch before 12). Which did not happen before the\n> import of 2019a / when using a system tzdata that's before\n> that.\n\nRight. Before 2019a, UCT would not have been a match to a system\nsetting of UTC because the zone abbreviation reported by localtime()\nwas different. Now it's the same abbreviation.\n\nMaybe we should consider back-patching 23bd3cec6.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 06 Jun 2019 13:44:54 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: UCT (Re: pgsql: Update time zone data files to tzdata release\n 2019a.)"
},
{
"msg_contents": "Christoph Berg <myon@debian.org> writes:\n> In the meantime I realized that I was only testing /etc/timezone\n> (which is a plain file with just the zone name), while not touching\n> /etc/localtime at all. In this environment, it's a symlink:\n> lrwxrwxrwx 1 root root 27 Mär 28 14:49 /etc/localtime -> /usr/share/zoneinfo/Etc/UTC\n> ... but the name still gets canonicalized to Etc/UCT or UCT.\n\nNow that I'm home again, I tried to replicate this behavior. I don't\nhave Debian Buster installed, but I do have an up-to-date Stretch\ninstall, and I can't get it to do this. What I see is that\n\n1. HEAD will follow the spelling appearing in /etc/localtime, if that's\na symlink. It will not pay any attention to /etc/timezone --- but as\nfar as I can tell, glibc doesn't either. (For instance, if I remove\n/etc/localtime, then date(1) starts reporting UTC, independently of\nwhat /etc/timezone might say.)\n\n2. Pre-v12, or if we can't get a valid zone name out of /etc/localtime,\nthe identify_system_timezone() search settles on \"UCT\" as being the\nshortest and alphabetically first of the various equivalent names for\nthe zone.\n\nThe only way I can get it to pick \"Etc/UCT\" is if that's what I put\ninto /etc/localtime. (In which case I maintain that that's not a bug,\nor at least not our bug.)\n\nSo I'm still mystified by Christoph's report, and am forced to suspect\npilot error -- specifically, /etc/localtime not containing what he said.\n\nAnyway, moving on to the question of what should we do about this,\nI don't really have anything better to offer than back-patching 23bd3cec6.\nI'm fairly hesitant to do that given the small amount of testing it's\ngotten ... but given that it's been in the tree since September, maybe\nwe can feel like we'd have noticed any really bad problems. 
I don't have\nany use for Andrew's suggestion of looking into zone1970.tab: in the\nfirst place I'm unconvinced that the tzdb guys intend that file to offer\ncanonical zone names, and in the second place I doubt we can rely on the\nfile to be present (it's not installed by zic itself), and in the third\nplace it definitely won't fix this particular issue because it has no\nentries for UTC/UCT/GMT etc, only for geographical locations.\n\nThoughts?\n\n\t\t\tregards, tom lane\n\n\nPS: As a side note, I do notice an interesting difference between the\ntimezone database files as they appear on Debian versus what I see on\nRHEL or in a PG-generated timezone tree. Debian seems to use symlinks\nfor multiple equivalent zones:\n\n$ ls -l /usr/share/zoneinfo/U??\n-rw-r--r-- 1 root root 127 Mar 27 16:34 /usr/share/zoneinfo/UCT\nlrwxrwxrwx 1 root root 3 Mar 27 16:34 /usr/share/zoneinfo/UTC -> UCT\n$ ls -l /usr/share/zoneinfo/Etc/U??\nlrwxrwxrwx 1 root root 6 Mar 27 16:34 /usr/share/zoneinfo/Etc/UCT -> ../UCT\nlrwxrwxrwx 1 root root 6 Mar 27 16:34 /usr/share/zoneinfo/Etc/UTC -> ../UCT\n\nbut elsewhere these are hard links:\n\n$ ls -l /usr/share/zoneinfo/U??\n-rw-r--r--. 8 root root 118 Mar 26 11:37 /usr/share/zoneinfo/UCT\n-rw-r--r--. 8 root root 118 Mar 26 11:37 /usr/share/zoneinfo/UTC\n$ ls -l /usr/share/zoneinfo/Etc/U??\n-rw-r--r--. 8 root root 118 Mar 26 11:37 /usr/share/zoneinfo/Etc/UCT\n-rw-r--r--. 8 root root 118 Mar 26 11:37 /usr/share/zoneinfo/Etc/UTC\n\nHowever, identify_system_timezone() doesn't treat symlinks differently\nfrom regular files, so this doesn't explain anything about the problem\nat hand, AFAICS.\n\n\n",
"msg_date": "Tue, 11 Jun 2019 16:41:39 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: UCT (Re: pgsql: Update time zone data files to tzdata release\n 2019a.)"
},
{
"msg_contents": "Re: Tom Lane 2019-06-11 <24452.1560285699@sss.pgh.pa.us>\n> The only way I can get it to pick \"Etc/UCT\" is if that's what I put\n> into /etc/localtime. (In which case I maintain that that's not a bug,\n> or at least not our bug.)\n\nDid you try a symlink or a plain file for /etc/localtime?\n\n> So I'm still mystified by Christoph's report, and am forced to suspect\n> pilot error -- specifically, /etc/localtime not containing what he said.\n\nOn Debian unstable, deleting /etc/timezone, $TZ not set, and with this symlink:\nlrwxrwxrwx 1 root root 27 M�r 28 14:49 /etc/localtime -> /usr/share/zoneinfo/Etc/UTC\n\n/usr/lib/postgresql/11/bin/initdb -D pgdata\n$ grep timezone pgdata/postgresql.conf\nlog_timezone = 'UCT'\ntimezone = 'UCT'\n\n/usr/lib/postgresql/12/bin/initdb -D pgdata\n$ grep timezone pgdata/postgresql.conf\nlog_timezone = 'Etc/UTC'\ntimezone = 'Etc/UTC'\n\nSame behavior on Debian Stretch (stable):\nlrwxrwxrwx 1 root root 27 Mai 7 11:14 /etc/localtime -> /usr/share/zoneinfo/Etc/UTC\n\n$ grep timezone pgdata/postgresql.conf\nlog_timezone = 'UCT'\ntimezone = 'UCT'\n\n$ grep timezone pgdata/postgresql.conf\nlog_timezone = 'Etc/UTC'\ntimezone = 'Etc/UTC'\n\n> Anyway, moving on to the question of what should we do about this,\n> I don't really have anything better to offer than back-patching 23bd3cec6.\n\nThe PG12 behavior seems sane, so +1.\n\nChristoph\n\n\n",
"msg_date": "Fri, 14 Jun 2019 11:55:46 +0200",
"msg_from": "Christoph Berg <myon@debian.org>",
"msg_from_op": false,
"msg_subject": "Re: UCT (Re: pgsql: Update time zone data files to tzdata release\n 2019a.)"
},
{
"msg_contents": "Christoph Berg <myon@debian.org> writes:\n> Re: Tom Lane 2019-06-11 <24452.1560285699@sss.pgh.pa.us>\n>> The only way I can get it to pick \"Etc/UCT\" is if that's what I put\n>> into /etc/localtime. (In which case I maintain that that's not a bug,\n>> or at least not our bug.)\n\n> Did you try a symlink or a plain file for /etc/localtime?\n\nSymlink --- if it's a plain file, our code can't learn anything from it.\n\n> On Debian unstable, deleting /etc/timezone, $TZ not set, and with this symlink:\n> lrwxrwxrwx 1 root root 27 Mär 28 14:49 /etc/localtime -> /usr/share/zoneinfo/Etc/UTC\n\n> /usr/lib/postgresql/11/bin/initdb -D pgdata\n> $ grep timezone pgdata/postgresql.conf\n> log_timezone = 'UCT'\n> timezone = 'UCT'\n\n> /usr/lib/postgresql/12/bin/initdb -D pgdata\n> $ grep timezone pgdata/postgresql.conf\n> log_timezone = 'Etc/UTC'\n> timezone = 'Etc/UTC'\n\nThat's what I'd expect. Do you think your upthread report of HEAD\npicking \"Etc/UCT\" was a typo? Or maybe you actually had /etc/localtime\nset that way?\n\n>> Anyway, moving on to the question of what should we do about this,\n>> I don't really have anything better to offer than back-patching 23bd3cec6.\n\n> The PG12 behavior seems sane, so +1.\n\nOK, I'll make that happen.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 14 Jun 2019 09:11:15 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: UCT (Re: pgsql: Update time zone data files to tzdata release\n 2019a.)"
},
{
"msg_contents": ">>>>> \"Tom\" == Tom Lane <tgl@sss.pgh.pa.us> writes:\n\n >>> Anyway, moving on to the question of what should we do about this,\n >>> I don't really have anything better to offer than back-patching\n >>> 23bd3cec6.\n\n >> The PG12 behavior seems sane, so +1.\n\n Tom> OK, I'll make that happen.\n\nThis isn't good enough, because it still picks \"UCT\" on a system with no\n/etc/localtime and no TZ variable. Testing on HEAD as of 3da73d683 (on\nFreeBSD, but it'll be the same anywhere else):\n\n% ls -l /etc/*time*\nls: /etc/*time*: No such file or directory\n\n% env -u TZ bin/initdb -D data -E UTF8 --no-locale\n[...]\nselecting default timezone ... UCT\n\nWe need to absolutely prefer UTC over UCT if both match.\n\n-- \nAndrew (irc:RhodiumToad)\n\n\n",
"msg_date": "Fri, 14 Jun 2019 19:24:50 +0100",
"msg_from": "Andrew Gierth <andrew@tao11.riddles.org.uk>",
"msg_from_op": false,
"msg_subject": "Re: UCT (Re: pgsql: Update time zone data files to tzdata release\n 2019a.)"
},
{
"msg_contents": "Andrew Gierth <andrew@tao11.riddles.org.uk> writes:\n> This isn't good enough, because it still picks \"UCT\" on a system with no\n> /etc/localtime and no TZ variable. Testing on HEAD as of 3da73d683 (on\n> FreeBSD, but it'll be the same anywhere else):\n\n[ shrug... ] Too bad. I doubt that that's a common situation anyway.\n\n> We need to absolutely prefer UTC over UCT if both match.\n\nI don't see a reason why that's a hard requirement. There are at least\ntwo ways for a user to override initdb's decision (/etc/localtime or TZ),\nor she could just change the GUC setting after the fact, and for that\nmatter it's not obvious that it matters to most people how TimeZone\nis spelled as long as it delivers the right external behavior. We had\nthe business with \"Navajo\" being preferred for US Mountain time for\nquite a few years, with not very many complaints.\n\nI don't see any way that we could \"fix\" this except with a hardwired\nspecial case to prefer UTC over other spellings, and I definitely do\nnot want to go there. If we start putting in magic special cases to make\nparticular zone names be preferred over other ones, where will we stop?\n(I've been lurking on the tzdb mailing list for long enough now to know\nthat that's a fine recipe for opening ourselves up to politically-\nmotivated demands that name X be preferred over name Y.)\n\nA possibly better idea is to push back on tzdb's choice to unify\nthese zones. Don't know if they'd listen, but we could try. The\nUCT symlink hasn't been out there so long that it's got much inertia.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 14 Jun 2019 15:12:16 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: UCT (Re: pgsql: Update time zone data files to tzdata release\n 2019a.)"
},
{
"msg_contents": ">>>>> \"Tom\" == Tom Lane <tgl@sss.pgh.pa.us> writes:\n\n >> This isn't good enough, because it still picks \"UCT\" on a system with no\n >> /etc/localtime and no TZ variable. Testing on HEAD as of 3da73d683 (on\n >> FreeBSD, but it'll be the same anywhere else):\n\n Tom> [ shrug... ] Too bad. I doubt that that's a common situation anyway.\n\nLiterally every server I have set up is like this...\n\n >> We need to absolutely prefer UTC over UCT if both match.\n\n Tom> I don't see a reason why that's a hard requirement.\n\nBecause the reverse is clearly insane.\n\n-- \nAndrew (irc:RhodiumToad)\n\n\n",
"msg_date": "Fri, 14 Jun 2019 21:27:18 +0100",
"msg_from": "Andrew Gierth <andrew@tao11.riddles.org.uk>",
"msg_from_op": false,
"msg_subject": "Re: UCT (Re: pgsql: Update time zone data files to tzdata release\n 2019a.)"
},
{
"msg_contents": "On Fri, Jun 14, 2019, 3:12 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> A possibly better idea is to push back on tzdb's choice to unify\n> these zones. Don't know if they'd listen, but we could try. The\n> UCT symlink hasn't been out there so long that it's got much inertia.\n\n\nOne oddity; AIX had a preference for CUT with fallbacks to CUT0 and UCT\nback when we had AIX boxes (5.2 or 5.3, if my memory still works on this).\n\nWe wound up setting PGTZ explicitly to UTC to overrule any such fighting\nbetween time zones.\n\nThere may therefore be some older history (and some sort of inertia) in AIX\nland than meets the eye elsewhere.\n\nThat doesn't prevent it from being a good idea to talk to tzdb maintainers,\nof course.\n\nOn Fri, Jun 14, 2019, 3:12 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\nA possibly better idea is to push back on tzdb's choice to unify\nthese zones. Don't know if they'd listen, but we could try. The\nUCT symlink hasn't been out there so long that it's got much inertia.One oddity; AIX had a preference for CUT with fallbacks to CUT0 and UCT back when we had AIX boxes (5.2 or 5.3, if my memory still works on this).We wound up setting PGTZ explicitly to UTC to overrule any such fighting between time zones.There may therefore be some older history (and some sort of inertia) in AIX land than meets the eye elsewhere.That doesn't prevent it from being a good idea to talk to tzdb maintainers, of course.",
"msg_date": "Fri, 14 Jun 2019 16:29:34 -0400",
"msg_from": "Christopher Browne <cbbrowne@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: UCT (Re: pgsql: Update time zone data files to tzdata release\n 2019a.)"
},
{
"msg_contents": ">>>>> \"Tom\" == Tom Lane <tgl@sss.pgh.pa.us> writes:\n\n >> This isn't good enough, because it still picks \"UCT\" on a system\n >> with no /etc/localtime and no TZ variable. Testing on HEAD as of\n >> 3da73d683 (on FreeBSD, but it'll be the same anywhere else):\n\n Tom> [ shrug... ] Too bad. I doubt that that's a common situation anyway.\n\nI'm also reminded that this applies also if the /etc/localtime file is a\n_copy_ of the UTC zonefile rather than a symlink, which is possibly even\nmore common.\n\n-- \nAndrew (irc:RhodiumToad)\n\n\n",
"msg_date": "Fri, 14 Jun 2019 22:36:57 +0100",
"msg_from": "Andrew Gierth <andrew@tao11.riddles.org.uk>",
"msg_from_op": false,
"msg_subject": "Re: UCT (Re: pgsql: Update time zone data files to tzdata release\n 2019a.)"
},
{
"msg_contents": ">>>>> \"Andrew\" == Andrew Gierth <andrew@tao11.riddles.org.uk> writes:\n>>>>> \"Tom\" == Tom Lane <tgl@sss.pgh.pa.us> writes:\n\n >>> This isn't good enough, because it still picks \"UCT\" on a system\n >>> with no /etc/localtime and no TZ variable. Testing on HEAD as of\n >>> 3da73d683 (on FreeBSD, but it'll be the same anywhere else):\n\n Tom> [ shrug... ] Too bad. I doubt that that's a common situation anyway.\n\n Andrew> I'm also reminded that this applies also if the /etc/localtime\n Andrew> file is a _copy_ of the UTC zonefile rather than a symlink,\n Andrew> which is possibly even more common.\n\nAnd testing shows that if you select \"UTC\" when installing FreeBSD, you\nindeed get /etc/localtime as a copy not a symlink, and I've confirmed\nthat initdb picks \"UCT\" in that case.\n\nSo here is my current proposed fix.\n\n-- \nAndrew (irc:RhodiumToad)",
"msg_date": "Fri, 14 Jun 2019 23:14:09 +0100",
"msg_from": "Andrew Gierth <andrew@tao11.riddles.org.uk>",
"msg_from_op": false,
"msg_subject": "Re: UCT (Re: pgsql: Update time zone data files to tzdata release\n 2019a.)"
},
{
"msg_contents": "Re: Tom Lane 2019-06-14 <26948.1560517875@sss.pgh.pa.us>\n> > /usr/lib/postgresql/12/bin/initdb -D pgdata\n> > $ grep timezone pgdata/postgresql.conf\n> > log_timezone = 'Etc/UTC'\n> > timezone = 'Etc/UTC'\n> \n> That's what I'd expect. Do you think your upthread report of HEAD\n> picking \"Etc/UCT\" was a typo? Or maybe you actually had /etc/localtime\n> set that way?\n\nThat was likely a typo, yes. Sorry for the confusion, there's many\nvariables...\n\nChristoph\n\n\n",
"msg_date": "Mon, 17 Jun 2019 13:46:07 +0200",
"msg_from": "Christoph Berg <myon@debian.org>",
"msg_from_op": false,
"msg_subject": "Re: UCT (Re: pgsql: Update time zone data files to tzdata release\n 2019a.)"
},
{
"msg_contents": "On 2019-06-14 23:14:09 +0100, Andrew Gierth wrote:\n> So here is my current proposed fix.\n\nBefore pushing a commit that's controversial - and this clearly seems to\nsomewhat be - it'd be good to give others a heads up that you intend to\ndo so, so they can object. Rather than just pushing less than 24h later,\nwithout a warning.\n\n- Andres\n\n\n",
"msg_date": "Mon, 17 Jun 2019 10:39:43 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: UCT (Re: pgsql: Update time zone data files to tzdata release\n 2019a.)"
},
{
"msg_contents": "Greetings,\n\n* Andres Freund (andres@anarazel.de) wrote:\n> On 2019-06-14 23:14:09 +0100, Andrew Gierth wrote:\n> > So here is my current proposed fix.\n> \n> Before pushing a commit that's controversial - and this clearly seems to\n> somewhat be - it'd be good to give others a heads up that you intend to\n> do so, so they can object. Rather than just pushing less than 24h later,\n> without a warning.\n\nSeems like that would have meant a potentially very late commit to avoid\nhaving a broken (for some value of broken anyway) point release (either\nwith new code, or with reverting the timezone changes previously\ncommitted), which isn't great either.\n\nIn general, I agree with you, and we should try to give everyone time to\ndiscuss when something is controversial, but this seems like it was at\nleast a bit of a tough call.\n\nThanks,\n\nStephen",
"msg_date": "Mon, 17 Jun 2019 14:34:58 -0400",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: UCT (Re: pgsql: Update time zone data files to tzdata release\n 2019a.)"
},
{
"msg_contents": "Hi,\n\nOn 2019-06-17 14:34:58 -0400, Stephen Frost wrote:\n> * Andres Freund (andres@anarazel.de) wrote:\n> > On 2019-06-14 23:14:09 +0100, Andrew Gierth wrote:\n> > > So here is my current proposed fix.\n> > \n> > Before pushing a commit that's controversial - and this clearly seems to\n> > somewhat be - it'd be good to give others a heads up that you intend to\n> > do so, so they can object. Rather than just pushing less than 24h later,\n> > without a warning.\n> \n> Seems like that would have meant a potentially very late commit to avoid\n> having a broken (for some value of broken anyway) point release (either\n> with new code, or with reverting the timezone changes previously\n> committed), which isn't great either.\n\n> In general, I agree with you, and we should try to give everyone time to\n> discuss when something is controversial, but this seems like it was at\n> least a bit of a tough call.\n\nHm? All I'm saying is that Andrew's email should have included something\nto the effect of \"Due to the upcoming release, I'm intending to push and\nbackpatch the attached fix in ~20h\".\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 17 Jun 2019 11:38:28 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: UCT (Re: pgsql: Update time zone data files to tzdata release\n 2019a.)"
},
{
"msg_contents": "Greetings,\n\n* Andres Freund (andres@anarazel.de) wrote:\n> On 2019-06-17 14:34:58 -0400, Stephen Frost wrote:\n> > * Andres Freund (andres@anarazel.de) wrote:\n> > > On 2019-06-14 23:14:09 +0100, Andrew Gierth wrote:\n> > > > So here is my current proposed fix.\n> > > \n> > > Before pushing a commit that's controversial - and this clearly seems to\n> > > somewhat be - it'd be good to give others a heads up that you intend to\n> > > do so, so they can object. Rather than just pushing less than 24h later,\n> > > without a warning.\n> > \n> > Seems like that would have meant a potentially very late commit to avoid\n> > having a broken (for some value of broken anyway) point release (either\n> > with new code, or with reverting the timezone changes previously\n> > committed), which isn't great either.\n> \n> > In general, I agree with you, and we should try to give everyone time to\n> > discuss when something is controversial, but this seems like it was at\n> > least a bit of a tough call.\n> \n> Hm? All I'm saying is that Andrew's email should have included something\n> to the effect of \"Due to the upcoming release, I'm intending to push and\n> backpatch the attached fix in ~20h\".\n\nAh, ok, I agree that would have been good to do. Of course, hindsight\nbeing 20/20 and all that. Something to keep in mind for the future\nthough.\n\nThanks,\n\nStephen",
"msg_date": "Mon, 17 Jun 2019 14:40:53 -0400",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: UCT (Re: pgsql: Update time zone data files to tzdata release\n 2019a.)"
},
{
"msg_contents": "On Mon, Jun 17, 2019 at 2:41 PM Stephen Frost <sfrost@snowman.net> wrote:\n> Ah, ok, I agree that would have been good to do. Of course, hindsight\n> being 20/20 and all that. Something to keep in mind for the future\n> though.\n\nI think it was inappropriate to commit this at all. You can't just\nsay \"some other committer objects, but I think I'm right so I'll just\nignore them and commit anyway.\" If we all do that it'll be chaos.\n\nI don't know exactly how many concurring vote it takes to override\nsomebody else's -1, but it's got to be more than zero.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Wed, 19 Jun 2019 17:26:18 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: UCT (Re: pgsql: Update time zone data files to tzdata release\n 2019a.)"
},
{
"msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> On Mon, Jun 17, 2019 at 2:41 PM Stephen Frost <sfrost@snowman.net> wrote:\n>> Ah, ok, I agree that would have been good to do. Of course, hindsight\n>> being 20/20 and all that. Something to keep in mind for the future\n>> though.\n\n> I think it was inappropriate to commit this at all. You can't just\n> say \"some other committer objects, but I think I'm right so I'll just\n> ignore them and commit anyway.\" If we all do that it'll be chaos.\n\nFWIW, that was my concern about this.\n\n> I don't know exactly how many concurring vote it takes to override\n> somebody else's -1, but it's got to be more than zero.\n\nIf even one other person had +1'd Andrew's proposal, I'd have yielded\nto the consensus --- this was certainly an issue on which it's not\ntotally clear what to do. But unless I missed some traffic, the vote\nwas exactly 1 to 1. There is no way that that represents consensus to\ncommit.\n\nAlso on the topic of process: 48 hours before a wrap deadline is\n*particularly* not the time to play fast and loose with this sort of\nthing. It'd have been better to wait till after this week's releases,\nso there'd at least be time to reconsider if the patch turned out to\nhave unexpected side-effects.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 19 Jun 2019 17:35:48 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: UCT (Re: pgsql: Update time zone data files to tzdata release\n 2019a.)"
},
{
"msg_contents": "BTW ... now that that patch has been in long enough to collect some\nactual data on what it's doing, I set out to scrape the buildfarm logs\nto see what is happening in the farm. Here are the popularities of\nvarious timezone settings, as of the end of May:\n\n 3 America/Los_Angeles\n 9 America/New_York\n 3 America/Sao_Paulo\n 2 Asia/Tokyo\n 2 CET\n 24 Etc/UTC\n 3 Europe/Amsterdam\n 11 Europe/Berlin\n 1 Europe/Brussels\n 1 Europe/Helsinki\n 1 Europe/Isle_of_Man\n 2 Europe/London\n 7 Europe/Paris\n 6 Europe/Prague\n 5 Europe/Stockholm\n 1 ROK\n 7 UCT\n 1 US/Central\n 7 US/Eastern\n 2 US/Pacific\n 15 UTC\n 1 localtime\n\n(These are the zone choices reported in the initdb-C step for the\nanimal's last successful run before 06-01. I excluded animals for which\nthe configuration summary shows that their choice is being forced by a\nTZ environment variable.)\n\nAs of now, six of the seven UCT-reporting members have switched to UTC;\nthe lone holdout is elver which hasn't run in ten days. (Perhaps it\nzneeds unwedged.) There are no other changes, so it seems like Andrew's\npatch is doing what it says on the tin.\n\nHowever, that one entry for 'localtime' disturbs me. (It's from snapper.)\nThat seems like a particularly useless choice of representation: it's not\ninformative, it's not portable, and it would lead to postmaster startup\nfailure if someone were to remove the machine's localtime file, which\nI assume is a nonstandard insertion into /usr/share/zoneinfo. 
Very\nlikely the only reason we don't see this behavior more is that sticking\na \"localtime\" file into /usr/share/zoneinfo is an obsolescent practice.\nOn machines that have such a file, it has a good chance of winning on\nthe grounds of being a short name.\n\nSo I'm toying with the idea of extending Andrew's patch to put a negative\npreference on \"localtime\", ensuring we'll use some other name for the zone\nif one is available.\n\nAlso, now that we have this mechanism, maybe we should charge it with\nde-preferencing the old \"Factory\" zone, removing the hard-wired kluge\nthat we currently have for rejecting that. (Modern tzdb doesn't install\n\"Factory\" at all, but some installations might still do so in the service\nof blind backwards compatibility.)\n\nThoughts?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 19 Jun 2019 18:47:38 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: UCT (Re: pgsql: Update time zone data files to tzdata release\n 2019a.)"
},
{
"msg_contents": "On Thu, Jun 20, 2019 at 10:48 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> As of now, six of the seven UCT-reporting members have switched to UTC;\n> the lone holdout is elver which hasn't run in ten days. (Perhaps it\n> zneeds unwedged.) There are no other changes, so it seems like Andrew's\n> patch is doing what it says on the tin.\n\nOops. Apparentlly REL_10 of the build farm scripts lost the ability\nto find \"buildroot\" in the current working directory automatically. I\nhave updated eelpout and elver's .conf file to have an explicit path,\nand they are now busily building stuff.\n\n-- \nThomas Munro\nhttps://enterprisedb.com\n\n\n",
"msg_date": "Thu, 20 Jun 2019 11:14:18 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: UCT (Re: pgsql: Update time zone data files to tzdata release\n 2019a.)"
},
{
"msg_contents": "I wrote:\n> So I'm toying with the idea of extending Andrew's patch to put a negative\n> preference on \"localtime\", ensuring we'll use some other name for the zone\n> if one is available.\n\nOh ... after further review it seems like \"posixrules\" should be\nde-preferred on the same basis: it's uninformative and unportable,\nand it's short enough to have a good chance of capturing initdb's\nattention. I recall having seen at least one machine picking it\nrecently.\n\nMoreover, while I think most tzdb installations have that file (ours\ncertainly do), the handwriting is on the wall for it to go away,\nleaving only postmaster startup failures behind:\n\nhttp://mm.icann.org/pipermail/tz/2019-June/028172.html\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 19 Jun 2019 19:30:44 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: UCT (Re: pgsql: Update time zone data files to tzdata release\n 2019a.)"
},
{
"msg_contents": ">>>>> \"Tom\" == Tom Lane <tgl@sss.pgh.pa.us> writes:\n\n Tom> So I'm toying with the idea of extending Andrew's patch to put a\n Tom> negative preference on \"localtime\", ensuring we'll use some other\n Tom> name for the zone if one is available.\n\n Tom> Also, now that we have this mechanism, maybe we should charge it\n Tom> with de-preferencing the old \"Factory\" zone, removing the\n Tom> hard-wired kluge that we currently have for rejecting that.\n Tom> (Modern tzdb doesn't install \"Factory\" at all, but some\n Tom> installations might still do so in the service of blind backwards\n Tom> compatibility.)\n\nI was planning on submitting a follow-up myself (for pg13+) for\ndiscussion of further improvements. My suggestion would be that we\nshould have the following order of preference, from highest to lowest:\n\n - UTC (justified by being an international standard)\n \n - Etc/UTC\n\n - zones in zone.tab/zone1970.tab:\n\n These are the zone names that are intended to be presented to the\n user to select from. 
Dispute the exact meaning as you will, but I\n think it makes sense that these names should be chosen over\n equivalently good matches just on that basis.\n\n - zones in Africa/ America/ Antarctica/ Asia/ Atlantic/ Australia/\n Europe/ Indian/ Pacific/ Arctic/\n\n These subdirs are the ones generated by the \"primary\" zone data\n files, including both Zone and Link statements but not counting\n the \"backward\" and \"etcetera\" files.\n\n - GMT (justified on the basis of its presence as a default in the code)\n\n - Etc/*\n\n - any other zone name with a /\n\n - any zone name without a /, excluding 'localtime' and 'Factory'\n\n - 'localtime'\n\n - 'Factory'\n\nChoosing names with / over ones without is a change from our existing\npreference for shorter names, but it's more robust in the face of the\nvarious crap that gets dumped in the top level of the zoneinfo dir.\nIt could be argued that we should reverse the relative order of UTC vs.\nEtc/UTC and likewise for GMT for the same reason, but I think that's\nless important.\n\n-- \nAndrew (irc:RhodiumToad)\n\n\n",
"msg_date": "Thu, 20 Jun 2019 00:59:29 +0100",
"msg_from": "Andrew Gierth <andrew@tao11.riddles.org.uk>",
"msg_from_op": false,
"msg_subject": "Re: UCT (Re: pgsql: Update time zone data files to tzdata release\n 2019a.)"
},
{
"msg_contents": ">>>>> \"Tom\" == Tom Lane <tgl@sss.pgh.pa.us> writes:\n\n Tom> 1 Europe/Isle_of_Man\n\nIs this from HEAD and therefore possibly getting the value from an\n/etc/localtime symlink? I can't see any other way that\nEurope/Isle_of_Man could ever be chosen over Europe/London...\n\n-- \nAndrew (irc:RhodiumToad)\n\n\n",
"msg_date": "Thu, 20 Jun 2019 01:50:37 +0100",
"msg_from": "Andrew Gierth <andrew@tao11.riddles.org.uk>",
"msg_from_op": false,
"msg_subject": "Re: UCT (Re: pgsql: Update time zone data files to tzdata release\n 2019a.)"
},
{
"msg_contents": "Greetings,\n\n* Tom Lane (tgl@sss.pgh.pa.us) wrote:\n> Also on the topic of process: 48 hours before a wrap deadline is\n> *particularly* not the time to play fast and loose with this sort of\n> thing. It'd have been better to wait till after this week's releases,\n> so there'd at least be time to reconsider if the patch turned out to\n> have unexpected side-effects.\n\nOur typical process for changes that actually end up breaking other\nthings is to put things back the way they were and come up with a\nbetter answer.\n\nShould we have reverted the code change that caused the issue in the\nfirst place, namely, as I understand it at least, the tz code update, to\ngive us time to come up with a better solution and to fix it properly?\n\nI'll admit that I wasn't following the thread very closely initially,\nbut I don't recall seeing that even discussed as an option, even though\nwe do it routinely and even had another such case for this set of\nreleases. Possibly a bad assumption on my part, but I did assume that\nthe lack of such a discussion meant that reverting wasn't really an\noption due to the nature of the changes, leading us into an atypical\ncase already where our usual processes weren't able to be followed.\n\nThat doesn't mean we should throw the whole thing out the window either,\ncertainly, but I'm not sure that between the 3 options of 'revert',\n'live with things being arguably broken', and 'push a contentious\ncommit' that I'd have seen a better option either.\n\nI do agree that it would have been better if intentions had been made\nclearer, such as announcing the plan to push the changes so that we\ndidn't end up with an issue during this patch set (either from out of\ndate zone information, or from having the wrong timezone alias be used),\nbut also with feelings on both sides- if there had been a more explicit\n\"hey, we really need input from someone else on which way they think\nthis should go\" ideally with the options spelled 
out, it would have\nhelped.\n\nI don't want to come across as implying that I'm saying what was done\nwas 'fine', or that we shouldn't be having this conversation, I'm just\ntrying to figure out how we can frame it in a way that we learn from it\nand work to improve on it for the future, should something like this\nhappen again.\n\nThanks,\n\nStephen",
"msg_date": "Thu, 20 Jun 2019 08:52:11 -0400",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: UCT (Re: pgsql: Update time zone data files to tzdata release\n 2019a.)"
},
{
"msg_contents": "On Thu, Jun 20, 2019 at 8:52 AM Stephen Frost <sfrost@snowman.net> wrote:\n> I don't want to come across as implying that I'm saying what was done\n> was 'fine', or that we shouldn't be having this conversation, I'm just\n> trying to figure out how we can frame it in a way that we learn from it\n> and work to improve on it for the future, should something like this\n> happen again.\n\nI agree that it's a difficult situation. I do kind of wonder whether\nwe were altogether overreacting. If we had shipped it as it was,\nwhat's the worst thing that would have happened?\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Thu, 20 Jun 2019 12:02:30 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: UCT (Re: pgsql: Update time zone data files to tzdata release\n 2019a.)"
},
{
"msg_contents": "Hi,\n\nOn 2019-06-20 12:02:30 -0400, Robert Haas wrote:\n> On Thu, Jun 20, 2019 at 8:52 AM Stephen Frost <sfrost@snowman.net> wrote:\n> > I don't want to come across as implying that I'm saying what was done\n> > was 'fine', or that we shouldn't be having this conversation, I'm just\n> > trying to figure out how we can frame it in a way that we learn from it\n> > and work to improve on it for the future, should something like this\n> > happen again.\n> \n> I agree that it's a difficult situation. I do kind of wonder whether\n> we were altogether overreacting. If we had shipped it as it was,\n> what's the worst thing that would have happened?\n\nI think it's not good, but also nothing particularly bad came out of\nit. I don't think we should try to set up procedures for future\noccurances, and rather work/plan on that not happening very often.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 20 Jun 2019 09:14:00 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: UCT (Re: pgsql: Update time zone data files to tzdata release\n 2019a.)"
},
{
"msg_contents": "Greetings,\n\n* Andres Freund (andres@anarazel.de) wrote:\n> On 2019-06-20 12:02:30 -0400, Robert Haas wrote:\n> > On Thu, Jun 20, 2019 at 8:52 AM Stephen Frost <sfrost@snowman.net> wrote:\n> > > I don't want to come across as implying that I'm saying what was done\n> > > was 'fine', or that we shouldn't be having this conversation, I'm just\n> > > trying to figure out how we can frame it in a way that we learn from it\n> > > and work to improve on it for the future, should something like this\n> > > happen again.\n> > \n> > I agree that it's a difficult situation. I do kind of wonder whether\n> > we were altogether overreacting. If we had shipped it as it was,\n> > what's the worst thing that would have happened?\n> \n> I think it's not good, but also nothing particularly bad came out of\n> it. I don't think we should try to set up procedures for future\n> occurances, and rather work/plan on that not happening very often.\n\nAgreed.\n\nThanks,\n\nStephen",
"msg_date": "Thu, 20 Jun 2019 13:26:58 -0400",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: UCT (Re: pgsql: Update time zone data files to tzdata release\n 2019a.)"
},
{
"msg_contents": "On 2019-Jun-20, Andres Freund wrote:\n\n> On 2019-06-20 12:02:30 -0400, Robert Haas wrote:\n\n> > I agree that it's a difficult situation. I do kind of wonder whether\n> > we were altogether overreacting. If we had shipped it as it was,\n> > what's the worst thing that would have happened?\n> \n> I think it's not good, but also nothing particularly bad came out of\n> it. I don't think we should try to set up procedures for future\n> occurances, and rather work/plan on that not happening very often.\n\nI suppose we could have a moratorium on commits starting from (say) EOB\nWednesday of the week prior to the release; patches can only be\ncommitted after that if they have ample support (where \"ample support\"\nmight be defined as having +1 from, say, two other committers). That\nway there's time to discuss/revert/fix anything that is deemed\ncontroversial.\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Thu, 20 Jun 2019 13:28:38 -0400",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: UCT (Re: pgsql: Update time zone data files to tzdata release\n 2019a.)"
},
{
"msg_contents": "On Thu, Jun 20, 2019 at 1:28 PM Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\n> I suppose we could have a moratorium on commits starting from (say) EOB\n> Wednesday of the week prior to the release; patches can only be\n> committed after that if they have ample support (where \"ample support\"\n> might be defined as having +1 from, say, two other committers). That\n> way there's time to discuss/revert/fix anything that is deemed\n> controversial.\n\nOr we could have a moratorium on any change at any time that has a -1\nfrom a committer and a +1 from nobody.\n\nI mean, your idea is not bad either. I'm just saying.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Thu, 20 Jun 2019 13:53:08 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: UCT (Re: pgsql: Update time zone data files to tzdata release\n 2019a.)"
},
{
"msg_contents": "Greetings,\n\n* Robert Haas (robertmhaas@gmail.com) wrote:\n> On Thu, Jun 20, 2019 at 1:28 PM Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\n> > I suppose we could have a moratorium on commits starting from (say) EOB\n> > Wednesday of the week prior to the release; patches can only be\n> > committed after that if they have ample support (where \"ample support\"\n> > might be defined as having +1 from, say, two other committers). That\n> > way there's time to discuss/revert/fix anything that is deemed\n> > controversial.\n> \n> Or we could have a moratorium on any change at any time that has a -1\n> from a committer and a +1 from nobody.\n\nWhat about a change that's already been committed but another committer\nfeels caused a regression? If that gets a -1, does it get reverted\nuntil things are sorted out, or...?\n\nIn the situation that started this discussion, a change had already been\nmade and it was only later realized that it caused a regression. Piling\non to that, the regression was entwined with other important changes\nthat we wanted to include in the release.\n\nHaving a system where when the commit was made is a driving factor seems\nlike it would potentially reward people who pushed a change early by\ngiving them the upper hand in such a discussion as this.\n\nUltimately though, I still agree with Andres that this is something we\nshould act to avoid these situation and we shouldn't try to make a\npolicy to fit what's been a very rare occurance. If nothing else, I\nfeel like we'd probably re-litigate the policy every time since it would\nlikely have been a long time since the last discussion of it and the\nspecific circumstances will always be at least somewhat different.\n\nThanks,\n\nStephen",
"msg_date": "Thu, 20 Jun 2019 14:24:07 -0400",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: UCT (Re: pgsql: Update time zone data files to tzdata release\n 2019a.)"
},
{
"msg_contents": ">>>>> \"Stephen\" == Stephen Frost <sfrost@snowman.net> writes:\n\n Stephen> In the situation that started this discussion, a change had\n Stephen> already been made and it was only later realized that it\n Stephen> caused a regression.\n\nJust to keep the facts straight:\n\nThe regression was introduced by importing tzdb 2019a (in late April)\ninto the previous round of point releases; the change in UTC behaviour\nwas not mentioned in the commit and presumably didn't show up on\nanyone's radar until there were field complaints (which didn't reach our\nmailing lists until Jun 4 as far as I know).\n\nTom's \"fix\" of backpatching 23bd3cec6 (which happened on Friday 14th)\naddressed only a subset of cases, as far as I know working only on Linux\n(the historical convention has always been for /etc/localtime to be a\ncopy of a zonefile, not a symlink to one). I only decided to write (and\nif need be commit) my own followup fix after confirming that the bug was\nunfixed in a default FreeBSD install when set to UTC, and there was a\ngood chance that a number of other less-popular platforms were affected\ntoo.\n\n Stephen> Piling on to that, the regression was entwined with other\n Stephen> important changes that we wanted to include in the release.\n\nI'm not sure what you're referring to here?\n\n-- \nAndrew (irc:RhodiumToad)\n\n\n",
"msg_date": "Thu, 20 Jun 2019 20:19:24 +0100",
"msg_from": "Andrew Gierth <andrew@tao11.riddles.org.uk>",
"msg_from_op": false,
"msg_subject": "Re: UCT (Re: pgsql: Update time zone data files to tzdata release\n 2019a.)"
},
{
"msg_contents": "Greetings,\n\n* Andrew Gierth (andrew@tao11.riddles.org.uk) wrote:\n> >>>>> \"Stephen\" == Stephen Frost <sfrost@snowman.net> writes:\n> \n> Stephen> In the situation that started this discussion, a change had\n> Stephen> already been made and it was only later realized that it\n> Stephen> caused a regression.\n> \n> Just to keep the facts straight:\n> \n> The regression was introduced by importing tzdb 2019a (in late April)\n\nAh, thanks, I had misunderstood when that was committed then.\n\n> into the previous round of point releases; the change in UTC behaviour\n> was not mentioned in the commit and presumably didn't show up on\n> anyone's radar until there were field complaints (which didn't reach our\n> mailing lists until Jun 4 as far as I know).\n\nOk.\n\n> Tom's \"fix\" of backpatching 23bd3cec6 (which happened on Friday 14th)\n> addressed only a subset of cases, as far as I know working only on Linux\n> (the historical convention has always been for /etc/localtime to be a\n> copy of a zonefile, not a symlink to one). I only decided to write (and\n> if need be commit) my own followup fix after confirming that the bug was\n> unfixed in a default FreeBSD install when set to UTC, and there was a\n> good chance that a number of other less-popular platforms were affected\n> too.\n> \n> Stephen> Piling on to that, the regression was entwined with other\n> Stephen> important changes that we wanted to include in the release.\n> \n> I'm not sure what you're referring to here?\n\nI was referring to the fact that the regression was introduced by a,\npresumably important, tzdb update (2019a, as mentioned above). At\nleast, I made the assumption that the commit of the import of 2019a had\nmore than just the change that introduced the regression, but I'm happy\nto admit I'm no where near as close to the code here as you/Tom here.\n\nThanks,\n\nStephen",
"msg_date": "Thu, 20 Jun 2019 15:24:12 -0400",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: UCT (Re: pgsql: Update time zone data files to tzdata release\n 2019a.)"
},
{
"msg_contents": "Stephen Frost <sfrost@snowman.net> writes:\n> * Andrew Gierth (andrew@tao11.riddles.org.uk) wrote:\n> \"Stephen\" == Stephen Frost <sfrost@snowman.net> writes:\n>> Stephen> Piling on to that, the regression was entwined with other\n>> Stephen> important changes that we wanted to include in the release.\n>> \n>> I'm not sure what you're referring to here?\n\nI was confused by that too.\n\n> I was referring to the fact that the regression was introduced by a,\n> presumably important, tzdb update (2019a, as mentioned above). At\n> least, I made the assumption that the commit of the import of 2019a had\n> more than just the change that introduced the regression, but I'm happy\n> to admit I'm no where near as close to the code here as you/Tom here.\n\nKeep in mind that dealing with whatever tzdb chooses to ship is not\noptional from our standpoint. Even if we'd refused to import 2019a,\nevery installation using --with-system-tzdata (which, I sincerely hope,\nincludes most production installs) is going to have to deal with it\nas soon as the respective platform vendor gets around to shipping the\ntzdata update. So reverting that commit was never on the table.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 20 Jun 2019 15:58:51 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: UCT (Re: pgsql: Update time zone data files to tzdata release\n 2019a.)"
},
{
"msg_contents": ">>>>> \"Tom\" == Tom Lane <tgl@sss.pgh.pa.us> writes:\n\n >> I was referring to the fact that the regression was introduced by a,\n >> presumably important, tzdb update (2019a, as mentioned above). At\n >> least, I made the assumption that the commit of the import of 2019a\n >> had more than just the change that introduced the regression, but\n >> I'm happy to admit I'm no where near as close to the code here as\n >> you/Tom here.\n\n Tom> Keep in mind that dealing with whatever tzdb chooses to ship is\n Tom> not optional from our standpoint. Even if we'd refused to import\n Tom> 2019a, every installation using --with-system-tzdata (which, I\n Tom> sincerely hope, includes most production installs) is going to\n Tom> have to deal with it as soon as the respective platform vendor\n Tom> gets around to shipping the tzdata update. So reverting that\n Tom> commit was never on the table.\n\nExactly. But that means that if the combination of our arbitrary rules\nand the data in the tzdb results in an undesirable result, then we have\nno real option but to fix our rules (we can't reasonably expect the tzdb\nupstream to choose zone names to make our alphabetical-order preference\ncome out right).\n\nMy commit was intended to be the minimum fix that would restore the\npre-2019a behavior on all systems.\n\n-- \nAndrew (irc:RhodiumToad)\n\n\n",
"msg_date": "Thu, 20 Jun 2019 21:23:25 +0100",
"msg_from": "Andrew Gierth <andrew@tao11.riddles.org.uk>",
"msg_from_op": false,
"msg_subject": "Re: UCT (Re: pgsql: Update time zone data files to tzdata release\n 2019a.)"
},
{
"msg_contents": "Andrew Gierth <andrew@tao11.riddles.org.uk> writes:\n> \"Tom\" == Tom Lane <tgl@sss.pgh.pa.us> writes:\n> Tom> Keep in mind that dealing with whatever tzdb chooses to ship is\n> Tom> not optional from our standpoint. Even if we'd refused to import\n> Tom> 2019a, every installation using --with-system-tzdata (which, I\n> Tom> sincerely hope, includes most production installs) is going to\n> Tom> have to deal with it as soon as the respective platform vendor\n> Tom> gets around to shipping the tzdata update. So reverting that\n> Tom> commit was never on the table.\n\n> Exactly. But that means that if the combination of our arbitrary rules\n> and the data in the tzdb results in an undesirable result, then we have\n> no real option but to fix our rules (we can't reasonably expect the tzdb\n> upstream to choose zone names to make our alphabetical-order preference\n> come out right).\n\nMy position is basically that having TimeZone come out as 'UCT' rather\nthan 'UTC' (affecting no visible behavior of the timestamp types, AFAIK)\nwas not such a grave problem as to require violating community norms\nto get it fixed in this week's releases rather than the next batch.\n\nI hadn't had time to consider your patch last week because I was (a)\nbusy with release prep and (b) sick as a dog. I figured we could let\nit slide and discuss it after the release work died down. I imagine\nthe reason you got zero other responses was that nobody else thought\nit was of life-and-death urgency either.\n\nAnyway, as I said already, my beef is not with the substance of the\npatch but with failing to follow community process. One \"yes\" vote\nand one \"no\" vote do not constitute consensus. You had no business\nassuming that I would reverse the \"no\" vote.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 20 Jun 2019 17:07:26 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: UCT (Re: pgsql: Update time zone data files to tzdata release\n 2019a.)"
},
{
"msg_contents": "Andrew Gierth <andrew@tao11.riddles.org.uk> writes:\n> Tom's \"fix\" of backpatching 23bd3cec6 (which happened on Friday 14th)\n> addressed only a subset of cases, as far as I know working only on Linux\n> (the historical convention has always been for /etc/localtime to be a\n> copy of a zonefile, not a symlink to one). I only decided to write (and\n> if need be commit) my own followup fix after confirming that the bug was\n> unfixed in a default FreeBSD install when set to UTC, and there was a\n> good chance that a number of other less-popular platforms were affected\n> too.\n\nI think your info is out of date on that.\n\nNetBSD uses a symlink, and has done for at least 5 years: see\nset_timezone in\nhttp://cvsweb.netbsd.org/bsdweb.cgi/src/usr.sbin/sysinst/util.c?only_with_tag=MAIN\n\nmacOS seems to have done it like that for at least 10 years, too.\nI didn't bother digging into their source repo, as it's likely that\nSystem Preferences isn't open-source; but *all* of my macOS machines\nhave symlinks there, and some of those link files are > 10 years old.\n\nI could not easily find OpenBSD's logic to set the zone during install,\nif they have any; but at least their admin-facing documentation says to\ncreate the file as a symlink:\nhttps://www.openbsd.org/faq/faq8.html#TimeZone\nand there are plenty of similar recommendations found by Mr. Google.\n\nIn short, I think FreeBSD are holdouts not the norm. I note that\neven their code will preserve /etc/localtime's symlink status if\nit was a symlink to start with: see install_zoneinfo_file in\nhttps://github.com/freebsd/freebsd/blob/master/usr.sbin/tzsetup/tzsetup.c\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 20 Jun 2019 17:59:56 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: UCT (Re: pgsql: Update time zone data files to tzdata release\n 2019a.)"
},
{
"msg_contents": "[ starting to come up for air again after a truly nasty sinus infection...\n fortunately, once I stopped thinking it was \"a cold\" and went to the\n doctor, antibiotics seem to be working ]\n\nAndrew Gierth <andrew@tao11.riddles.org.uk> writes:\n> \"Tom\" == Tom Lane <tgl@sss.pgh.pa.us> writes:\n> Tom> 1 Europe/Isle_of_Man\n\n> Is this from HEAD and therefore possibly getting the value from an\n> /etc/localtime symlink? I can't see any other way that\n> Europe/Isle_of_Man could ever be chosen over Europe/London...\n\nAll of the results I quoted there are HEAD-only, since we did not put\nthe code to make initdb print its timezone selection into the back\nbranches until 14-June.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 25 Jun 2019 19:57:56 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: UCT (Re: pgsql: Update time zone data files to tzdata release\n 2019a.)"
},
{
"msg_contents": "Andrew Gierth <andrew@tao11.riddles.org.uk> writes:\n> I was planning on submitting a follow-up myself (for pg13+) for\n> discussion of further improvements. My suggestion would be that we\n> should have the following order of preference, from highest to lowest:\n\n> - UTC (justified by being an international standard)\n> - Etc/UTC\n> - zones in zone.tab/zone1970.tab:\n> These are the zone names that are intended to be presented to the\n> user to select from. Dispute the exact meaning as you will, but I\n> think it makes sense that these names should be chosen over\n> equivalently good matches just on that basis.\n> - zones in Africa/ America/ Antarctica/ Asia/ Atlantic/ Australia/\n> Europe/ Indian/ Pacific/ Arctic/\n> These subdirs are the ones generated by the \"primary\" zone data\n> files, including both Zone and Link statements but not counting\n> the \"backward\" and \"etcetera\" files.\n> - GMT (justified on the basis of its presence as a default in the code)\n> - Etc/*\n> - any other zone name with a /\n> - any zone name without a /, excluding 'localtime' and 'Factory'\n> - 'localtime'\n> - 'Factory'\n\nTBH, I find this borderline insane: it's taking a problem we did not\nhave and moving the goalposts to the next county. Not just any\nold county, either, but one where there's a shooting war going on.\n\nAs soon as you do something like putting detailed preferences into the\nzone name selection rules, you are going to be up against problems like\n\"should Europe/ have priority over Asia/, or vice versa?\" This is not\nacademic; see for example\n\nLink\tAsia/Nicosia\tEurope/Nicosia\nLink\tEurope/Istanbul\tAsia/Istanbul\t# Istanbul is in both continents.\n\nThese choices affect exactly the people who are going to get bent out of\nshape because you picked the \"wrong\" name for their zone. Doesn't matter\nthat both names are \"wrong\" to different subsets.\n\nAs long as we have a trivial and obviously apolitical rule like\nalphabetical order, I think we can skate over such things; but the minute\nwe have any sort of human choices involved there, we're going to be\ngetting politically driven requests to do-it-like-this-because-I-think-\nthe-default-should-be-that. Again, trawl the tzdb list archives for\nawhile if you think this might not be a problem:\nhttp://mm.icann.org/pipermail/tz/\n\nI think we can get away with fixing simple cases that are directly\ncaused by tzdb's own idiosyncrasies, ie \"localtime\" and \"posixrules\"\nand \"Factory\". If we go further than that, we *will* regret it.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 25 Jun 2019 20:18:47 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: UCT (Re: pgsql: Update time zone data files to tzdata release\n 2019a.)"
},
{
"msg_contents": ">>>>> \"Tom\" == Tom Lane <tgl@sss.pgh.pa.us> writes:\n\n Tom> TBH, I find this borderline insane: it's taking a problem we did\n Tom> not have and moving the goalposts to the next county. Not just any\n Tom> old county, either, but one where there's a shooting war going on.\n\n Tom> As soon as you do something like putting detailed preferences into\n Tom> the zone name selection rules, you are going to be up against\n Tom> problems like \"should Europe/ have priority over Asia/, or vice\n Tom> versa?\"\n\nI would say that this problem exists with arbitrary preferences too.\n\n Tom> As long as we have a trivial and obviously apolitical rule like\n Tom> alphabetical order, I think we can skate over such things; but the\n Tom> minute we have any sort of human choices involved there, we're\n Tom> going to be getting politically driven requests to\n Tom> do-it-like-this-because-I-think- the-default-should-be-that.\n\nThe actual content of the rules I suggested all come from the tzdb\ndistribution; anyone complaining can be told to take it up with them.\n\nFor the record, this is the list of zones (91 out of 348, or about 26%)\nthat we currently deduce wrongly, as obtained by trying each zone name\nlisted in zone1970.tab and seeing which zone we deduce when that zone's\nfile is copied to /etc/localtime. Note in particular that our arbitrary\nrules heavily prefer the deprecated backward-compatibility aliases which\nare the most likely to disappear in future versions.\n\n(not all of these are fixable, of course)\n\nAfrica/Abidjan -> GMT\nAfrica/Cairo -> Egypt\nAfrica/Johannesburg -> Africa/Maseru\nAfrica/Maputo -> Africa/Harare\nAfrica/Nairobi -> Africa/Asmara\nAfrica/Tripoli -> Libya\nAmerica/Adak -> US/Aleutian\nAmerica/Anchorage -> US/Alaska\nAmerica/Argentina/Buenos_Aires -> America/Buenos_Aires\nAmerica/Argentina/Catamarca -> America/Catamarca\nAmerica/Argentina/Cordoba -> America/Cordoba\nAmerica/Argentina/Jujuy -> America/Jujuy\nAmerica/Argentina/Mendoza -> America/Mendoza\nAmerica/Argentina/Rio_Gallegos -> America/Argentina/Ushuaia\nAmerica/Chicago -> US/Central\nAmerica/Creston -> MST\nAmerica/Curacao -> America/Aruba\nAmerica/Denver -> Navajo\nAmerica/Detroit -> US/Michigan\nAmerica/Edmonton -> Canada/Mountain\nAmerica/Havana -> Cuba\nAmerica/Indiana/Indianapolis -> US/East-Indiana\nAmerica/Indiana/Knox -> America/Knox_IN\nAmerica/Jamaica -> Jamaica\nAmerica/Kentucky/Louisville -> America/Louisville\nAmerica/Los_Angeles -> US/Pacific\nAmerica/Manaus -> Brazil/West\nAmerica/Mazatlan -> Mexico/BajaSur\nAmerica/Mexico_City -> Mexico/General\nAmerica/New_York -> US/Eastern\nAmerica/Panama -> EST\nAmerica/Phoenix -> US/Arizona\nAmerica/Port_of_Spain -> America/Virgin\nAmerica/Rio_Branco -> Brazil/Acre\nAmerica/Sao_Paulo -> Brazil/East\nAmerica/Toronto -> Canada/Eastern\nAmerica/Vancouver -> Canada/Pacific\nAmerica/Whitehorse -> Canada/Yukon\nAmerica/Winnipeg -> Canada/Central\nAsia/Dhaka -> Asia/Dacca\nAsia/Ho_Chi_Minh -> Asia/Saigon\nAsia/Hong_Kong -> Hongkong\nAsia/Jerusalem -> Israel\nAsia/Kathmandu -> Asia/Katmandu\nAsia/Kuala_Lumpur -> Singapore\nAsia/Macau -> Asia/Macao\nAsia/Riyadh -> Asia/Aden\nAsia/Seoul -> ROK\nAsia/Shanghai -> PRC\nAsia/Singapore -> Singapore\nAsia/Taipei -> ROC\nAsia/Tehran -> Iran\nAsia/Thimphu -> Asia/Thimbu\nAsia/Tokyo -> Japan\nAsia/Ulaanbaatar -> Asia/Ulan_Bator\nAtlantic/Reykjavik -> Iceland\nAtlantic/South_Georgia -> Etc/GMT+2\nAustralia/Adelaide -> Australia/South\nAustralia/Broken_Hill -> Australia/Yancowinna\nAustralia/Darwin -> Australia/North\nAustralia/Lord_Howe -> Australia/LHI\nAustralia/Melbourne -> Australia/Victoria\nAustralia/Perth -> Australia/West\nAustralia/Sydney -> Australia/ACT\nEurope/Belgrade -> Europe/Skopje\nEurope/Dublin -> Eire\nEurope/Istanbul -> Turkey\nEurope/Lisbon -> Portugal\nEurope/London -> GB\nEurope/Moscow -> W-SU\nEurope/Warsaw -> Poland\nEurope/Zurich -> Europe/Vaduz\nIndian/Christmas -> Etc/GMT-7\nIndian/Mahe -> Etc/GMT-4\nIndian/Reunion -> Etc/GMT-4\nPacific/Auckland -> NZ\nPacific/Chatham -> NZ-CHAT\nPacific/Chuuk -> Pacific/Yap\nPacific/Funafuti -> Etc/GMT-12\nPacific/Gambier -> Etc/GMT+9\nPacific/Guadalcanal -> Etc/GMT-11\nPacific/Honolulu -> US/Hawaii\nPacific/Kwajalein -> Kwajalein\nPacific/Pago_Pago -> US/Samoa\nPacific/Palau -> Etc/GMT-9\nPacific/Pohnpei -> Pacific/Ponape\nPacific/Port_Moresby -> Etc/GMT-10\nPacific/Tahiti -> Etc/GMT+10\nPacific/Tarawa -> Etc/GMT-12\nPacific/Wake -> Etc/GMT-12\nPacific/Wallis -> Etc/GMT-12\n\n-- \nAndrew (irc:RhodiumToad)\n\n\n",
"msg_date": "Wed, 26 Jun 2019 07:32:29 +0100",
"msg_from": "Andrew Gierth <andrew@tao11.riddles.org.uk>",
"msg_from_op": false,
"msg_subject": "Re: UCT (Re: pgsql: Update time zone data files to tzdata release\n 2019a.)"
},
{
"msg_contents": "On Wed, Jun 26, 2019 at 6:32 PM Andrew Gierth\n<andrew@tao11.riddles.org.uk> wrote:\n> Pacific/Auckland -> NZ\n\nRight. On a FreeBSD system here in New Zealand you get \"NZ\" with\ndefault configure options (ie using PostgreSQL's tzdata). But if you\nbuild with --with-system-tzdata=/usr/share/zoneinfo you get\n\"Pacific/Auckland\", and that's because the FreeBSD zoneinfo directory\ndoesn't include the old non-city names like \"NZ\", \"GB\", \"Japan\",\n\"US/Eastern\" etc. (Unfortunately the FreeBSD packages for PostgreSQL\nare not being built with that option so initdb chooses the old names.\nSomething to take up with the maintainers.)\n\n-- \nThomas Munro\nhttps://enterprisedb.com\n\n\n",
"msg_date": "Wed, 26 Jun 2019 21:11:53 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: UCT (Re: pgsql: Update time zone data files to tzdata release\n 2019a.)"
},
{
"msg_contents": ">>>>> \"Thomas\" == Thomas Munro <thomas.munro@gmail.com> writes:\n\n >> Pacific/Auckland -> NZ\n\n Thomas> Right. On a FreeBSD system here in New Zealand you get \"NZ\"\n Thomas> with default configure options (ie using PostgreSQL's tzdata).\n Thomas> But if you build with --with-system-tzdata=/usr/share/zoneinfo\n Thomas> you get \"Pacific/Auckland\", and that's because the FreeBSD\n Thomas> zoneinfo directory doesn't include the old non-city names like\n Thomas> \"NZ\", \"GB\", \"Japan\", \"US/Eastern\" etc. (Unfortunately the\n Thomas> FreeBSD packages for PostgreSQL are not being built with that\n Thomas> option so initdb chooses the old names. Something to take up\n Thomas> with the maintainers.)\n\nSame issue here with Europe/London getting \"GB\".\n\n-- \nAndrew (irc:RhodiumToad)\n\n\n",
"msg_date": "Wed, 26 Jun 2019 14:01:43 +0100",
"msg_from": "Andrew Gierth <andrew@tao11.riddles.org.uk>",
"msg_from_op": false,
"msg_subject": "Re: UCT (Re: pgsql: Update time zone data files to tzdata release\n 2019a.)"
},
{
"msg_contents": "Andrew Gierth <andrew@tao11.riddles.org.uk> writes:\n> \"Thomas\" == Thomas Munro <thomas.munro@gmail.com> writes:\n> Thomas> Right. On a FreeBSD system here in New Zealand you get \"NZ\"\n> Thomas> with default configure options (ie using PostgreSQL's tzdata).\n> Thomas> But if you build with --with-system-tzdata=/usr/share/zoneinfo\n> Thomas> you get \"Pacific/Auckland\", and that's because the FreeBSD\n> Thomas> zoneinfo directory doesn't include the old non-city names like\n> Thomas> \"NZ\", \"GB\", \"Japan\", \"US/Eastern\" etc.\n\n> Same issue here with Europe/London getting \"GB\".\n\nFreeBSD offers yet another obstacle to Andrew's proposal:\n\n$ uname -a\nFreeBSD rpi3.sss.pgh.pa.us 12.0-RELEASE FreeBSD 12.0-RELEASE r341666 GENERIC arm64\n$ ls /usr/share/zoneinfo/\nAfrica/ Australia/ Etc/ MST WET\nAmerica/ CET Europe/ MST7MDT posixrules\nAntarctica/ CST6CDT Factory PST8PDT zone.tab\nArctic/ EET HST Pacific/\nAsia/ EST Indian/ SystemV/\nAtlantic/ EST5EDT MET UTC\n\nNo zone1970.tab. I do not think we can rely on that file being there,\nsince zic itself doesn't install it; it's up to packagers whether or\nwhere to install the \"*.tab\" files.\n\nIn general, the point I'm trying to make is that our policy should be\n\"Ties are broken arbitrarily, and if you don't like the choice that initdb\nmakes, here's how to fix it\". As soon as we try to break some ties in\nfavor of somebody's idea of what is \"right\", we are in for neverending\nproblems with different people disagreeing about what is \"right\", and\ninsisting that their preference should be the one the code enforces.\nLet's *please* not go there, or even within hailing distance of it.\n\n(By this light, even preferring UTC over UCT is a dangerous precedent.\nI won't argue for reverting that, but I don't want to go further.)\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 26 Jun 2019 13:06:55 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: UCT (Re: pgsql: Update time zone data files to tzdata release\n 2019a.)"
},
{
"msg_contents": "Further on this --- I now remember that the reason we used to want to\nreject the \"Factory\" timezone is that it used to report this as the\nzone abbreviation:\n\n\tLocal time zone must be set--see zic manual page\n\nwhich (a) resulted in syntactically invalid timestamp output from the\ntimeofday() function and (b) completely screwed up the column width\nin the pg_timezone_names view.\n\nBut since 2016g, it's reported the much-less-insane string \"-00\".\nI propose therefore that it's time to just drop the discrimination\nagainst \"Factory\", as per attached. There doesn't seem to be any\nreason anymore to forbid people from seeing it in pg_timezone_names\nor selecting it as the timezone if they're so inclined. We would\nonly have a problem if somebody is using --with-system-tzdata in\na machine where they've not updated the system tzdata since 2016,\nand I'm no longer willing to consider that a valid use-case.\n\n\t\t\tregards, tom lane",
"msg_date": "Wed, 26 Jun 2019 13:59:58 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: UCT (Re: pgsql: Update time zone data files to tzdata release\n 2019a.)"
},
{
"msg_contents": ">>>>> \"Tom\" == Tom Lane <tgl@sss.pgh.pa.us> writes:\n\n Tom> No zone1970.tab.\n\nzone.tab is an adequate substitute - a fact which I thought was\nsufficiently obvious as to not be worth mentioning.\n\n(also see https://reviews.freebsd.org/D20646 )\n\n Tom> I do not think we can rely on that file being there, since zic\n Tom> itself doesn't install it; it's up to packagers whether or where\n Tom> to install the \"*.tab\" files.\n\nThe proposed rules I suggested do work almost as well if zone[1970].tab\nis absent, though obviously that's not the optimal situation. But are\nthere any systems which lack it? It's next to impossible to implement a\nsane \"ask the user what timezone to use\" procedure without it.\n\n Tom> In general, the point I'm trying to make is that our policy should\n Tom> be \"Ties are broken arbitrarily, and if you don't like the choice\n Tom> that initdb makes, here's how to fix it\".\n\nYes, you've repeated that point at some length, and I am not convinced.\nIs anyone else?\n\n-- \nAndrew (irc:RhodiumToad)\n\n\n",
"msg_date": "Wed, 26 Jun 2019 23:48:16 +0100",
"msg_from": "Andrew Gierth <andrew@tao11.riddles.org.uk>",
"msg_from_op": false,
"msg_subject": "Re: UCT (Re: pgsql: Update time zone data files to tzdata release\n 2019a.)"
},
{
"msg_contents": "> On 27 Jun 2019, at 00:48, Andrew Gierth <andrew@tao11.riddles.org.uk> wrote:\n\n> Tom> In general, the point I'm trying to make is that our policy should\n> Tom> be \"Ties are broken arbitrarily, and if you don't like the choice\n> Tom> that initdb makes, here's how to fix it\".\n> \n> Yes, you've repeated that point at some length, and I am not convinced.\n> Is anyone else?\n\nI don’t have any insights into the patches comitted or proposed. However,\nhaving been lurking on the tz mailinglist for a long time, I totally see where\nTom is coming from with this.\n\ncheers ./daniel\n\n",
"msg_date": "Thu, 27 Jun 2019 00:58:49 +0200",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: UCT (Re: pgsql: Update time zone data files to tzdata release\n 2019a.)"
},
{
"msg_contents": "Greetings,\n\n* Daniel Gustafsson (daniel@yesql.se) wrote:\n> > On 27 Jun 2019, at 00:48, Andrew Gierth <andrew@tao11.riddles.org.uk> wrote:\n> \n> > Tom> In general, the point I'm trying to make is that our policy should\n> > Tom> be \"Ties are broken arbitrarily, and if you don't like the choice\n> > Tom> that initdb makes, here's how to fix it\".\n> > \n> > Yes, you've repeated that point at some length, and I am not convinced.\n> > Is anyone else?\n> \n> I don’t have any insights into the patches comitted or proposed. However,\n> having been lurking on the tz mailinglist for a long time, I totally see where\n> Tom is coming from with this.\n\nI understand this concern, but I have to admit that I'm not entirely\nthrilled with having the way we pick defaults be based on the concern\nthat people will complain. If anything, this community, at least in my\nexperience, has thankfully been relatively reasonable and I have some\npretty serious doubts that a change like this will suddenly invite the\nmasses to argue with us or that, should someone try, they'd end up\ngetting much traction.\n\nOn the other hand, picking deprecated spellings is clearly a poor\nchoice, and we don't prevent people from picking whatever they want to.\nI also don't see what Andrew's suggesting as being terribly\ncontroversial, though that's likely because I'm looking through\nrose-colored glasses, as the saying goes. Even with that understanding\nthough, I tend to side with Andrew on this.\n\nThanks,\n\nStephen",
"msg_date": "Wed, 26 Jun 2019 19:43:10 -0400",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: UCT (Re: pgsql: Update time zone data files to tzdata release\n 2019a.)"
},
{
"msg_contents": "Andrew Gierth <andrew@tao11.riddles.org.uk> writes:\n> \"Tom\" == Tom Lane <tgl@sss.pgh.pa.us> writes:\n> Tom> In general, the point I'm trying to make is that our policy should\n> Tom> be \"Ties are broken arbitrarily, and if you don't like the choice\n> Tom> that initdb makes, here's how to fix it\".\n\n> Yes, you've repeated that point at some length, and I am not convinced.\n\n[ shrug... ] You haven't convinced me, either. By my count we each have\nabout 0.5 other votes in favor of our positions, so barring more opinions\nthere's no consensus here for the sort of behavioral change you suggest.\n\nHowever, not to let the perfect be the enemy of the good, it seems like\nnobody has spoken against the ideas of (a) installing negative preferences\nfor the \"localtime\" and \"posixrules\" pseudo-zones, and (b) getting rid of\nour now-unnecessary special treatment for \"Factory\". How about we do that\nmuch and leave any more-extensive change for another day?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 27 Jun 2019 12:58:21 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: UCT (Re: pgsql: Update time zone data files to tzdata release\n 2019a.)"
},
{
"msg_contents": "On Tue, Jun 25, 2019 at 8:18 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> As long as we have a trivial and obviously apolitical rule like\n> alphabetical order, I think we can skate over such things; but the minute\n> we have any sort of human choices involved there, we're going to be\n> getting politically driven requests to do-it-like-this-because-I-think-\n> the-default-should-be-that. Again, trawl the tzdb list archives for\n> awhile if you think this might not be a problem:\n> http://mm.icann.org/pipermail/tz/\n\nI'm kind of unsure what to think about this whole debate\nsubstantively. If Andrew is correct that zone.tab or zone1970.tab is a\nlist of time zone names to be preferred over alternatives, then it\nseems like we ought to prefer them. He remarks that we are preferring\n\"deprecated backward-compatibility aliases\" and to the extent that\nthis is true, it seems like a bad thing. We can't claim to be\naltogether here apolitical, because when those deprecated\nbackward-compatibility names are altogether removed, we are going to\nremove them and they're going to stop working. If we know which ones\nare likely to suffer that fate eventually, we ought to stop spitting\nthem out. It's no more political to de-prefer them when upstream does\nthan it is to remove them with the upstream does.\n\nHowever, I don't know whether Andrew is right about those things.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Thu, 27 Jun 2019 13:27:20 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: UCT (Re: pgsql: Update time zone data files to tzdata release\n 2019a.)"
},
{
"msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> I'm kind of unsure what to think about this whole debate\n> substantively. If Andrew is correct that zone.tab or zone1970.tab is a\n> list of time zone names to be preferred over alternatives, then it\n> seems like we ought to prefer them.\n\nIt's not really clear to me that the IANA folk intend those files to\nbe read as a list of preferred zone names. If they do, what are we\nto make of the fact that no variant of \"UTC\" appears in them?\n\n> He remarks that we are preferring\n> \"deprecated backward-compatibility aliases\" and to the extent that\n> this is true, it seems like a bad thing. We can't claim to be\n> altogether here apolitical, because when those deprecated\n> backward-compatibility names are altogether removed, we are going to\n> remove them and they're going to stop working. If we know which ones\n> are likely to suffer that fate eventually, we ought to stop spitting\n> them out. It's no more political to de-prefer them when upstream does\n> than it is to remove them with the upstream does.\n\nI think that predicting what IANA will do in the future is a fool's\nerrand. Our contract is to select some one of the aliases that the\ntzdb database presents, not to guess about whether it might present\na different set in the future. (Also note that a lot of the observed\nvariation here has to do with whether individual platforms choose to\ninstall backward-compatibility zone names. I think the odds that\nIANA proper will remove those links are near zero; TTBOMK they\nnever have removed one yet.)\n\nMore generally, my unhappiness about Andrew's proposal is:\n\n1. It's solving a problem that just about nobody cares about, as\nevidenced by the very tiny number of complaints we've had to date.\nAs long as the \"timezone\" setting has the correct external behavior\n(UTC offset, DST rules, and abbreviations), very few people notice\nit at all. With the addition of the code to resolve /etc/localtime\nwhen it's a symlink, the population of people who might care has\ntaken a further huge drop.\n\n2. Changing this behavior might create more problems than it solves.\nIn particular, it seemed to me that a lot of the complaints in the\nUCT/UTC kerfuffle were less about \"UCT is a silly name for my zone\"\nthan about \"this change broke my regression test that expected\ntimezone to be set to X in this environment\". Rearranging the tiebreak\nrules is just going to make different sets of such people unhappy.\n(Admittedly, the symlink-lookup addition has already created some\nrisk of this ilk. Maybe we should wait for that to be in the field\nfor more than a week before we judge whether further hacking is\nadvisable.)\n\n3. The proposal has technical issues, in particular I'm not nearly\nas sanguine as Andrew is about whether we can rely on zone[1970].tab\nto be available.\n\nSo I'm very unexcited about writing a bunch of new code or opening\nourselves to politically-driven complaints in order to change this.\nIt seems like a net loss almost independently of the details.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 27 Jun 2019 13:58:04 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: UCT (Re: pgsql: Update time zone data files to tzdata release\n 2019a.)"
},
{
"msg_contents": ">>>>> \"Tom\" == Tom Lane <tgl@sss.pgh.pa.us> writes:\n\n > Robert Haas <robertmhaas@gmail.com> writes:\n >> I'm kind of unsure what to think about this whole debate\n >> substantively. If Andrew is correct that zone.tab or zone1970.tab is\n >> a list of time zone names to be preferred over alternatives, then it\n >> seems like we ought to prefer them.\n\n Tom> It's not really clear to me that the IANA folk intend those files\n Tom> to be read as a list of preferred zone names.\n\nThe files exist to support user selection of zone names. That is, it is\nintended that you can use them to allow the user to choose their country\nand then timezone within that country, rather than offering them a flat\nregional list (which can be large and the choices non-obvious).\n\nThe zone*.tab files therefore include only geographic names, and not\neither Posix-style abbreviations or special cases like Etc/UTC. Programs\nthat use zone*.tab to allow user selection handle cases like that\nseparately (for example, FreeBSD's tzsetup offers \"UTC\" at the\n\"regional\" menu).\n\nIt's quite possible that people have implemented time zone selection\ninterfaces that use some other presentation of the list, but that\ndoesn't particularly diminish the value of zone*.tab. In particular, the\ncurrent zone1970.tab has:\n\n - at least one entry for every iso3166 country code that's not an\n uninhabited remote island;\n\n - an entry for every distinct \"Zone\" in the primary data files, with\n the exception of entries that are specifically commented as being\n for backward compatibility (e.g. CET, CST6CDT, etc. 
- see the\n comments in the europe and northamerica data files for why these\n exist)\n\nThe zonefiles that get installed in addition to the ones in zone1970.tab\nfall into these categories:\n\n - they are \"Link\" entries in the primary data files\n\n - they are from the \"backward\" data file, which is omitted in some\n system tzdb installations because it exists only for backward\n compatibility (but we install it because it's still listed in\n tzdata.zi by default)\n\n - they are from the \"etcetera\" file, which lists special cases such as\n UTC and fixed UTC offsets\n\n Tom> If they do, what are we to make of the fact that no variant of\n Tom> \"UTC\" appears in them?\n\nThat \"UTC\" is not a geographic timezone name?\n\n >> He remarks that we are preferring \"deprecated backward-compatibility\n >> aliases\" and to the extent that this is true, it seems like a bad\n >> thing. We can't claim to be altogether here apolitical, because when\n >> those deprecated backward-compatibility names are altogether\n >> removed, we are going to remove them and they're going to stop\n >> working. If we know which ones are likely to suffer that fate\n >> eventually, we ought to stop spitting them out. 
It's no more\n >> political to de-prefer them when upstream does than it is to remove\n >> them with the upstream does.\n\n Tom> I think that predicting what IANA will do in the future is a\n Tom> fool's errand.\n\nMaybe so, but when something is explicitly in a file called \"backward\",\nand the upstream-provided Makefile has specific options for omitting it\n(even though it is included by default), and all the comments about it\nare explicit about it being for backward compatibility, I think it's\nreasonable to avoid _preferring_ the names in it.\n\nThe list of backward-compatibility zones is in any case extremely\narbitrary and nonsensical: for example \"GB\", \"Eire\", \"Iceland\",\n\"Poland\", \"Portugal\" are aliases for their respective countries, but\nthere are no comparable aliases for any other European country. The\n\"Navajo\" entry (an alias for America/Denver) has already been mentioned\nin this thread; our arbitrary rule prefers it (due to shortness) for all\nUS zones that use Mountain time with DST. And so on.\n\n Tom> Our contract is to select some one of the aliases that the tzdb\n Tom> database presents, not to guess about whether it might present a\n Tom> different set in the future. (Also note that a lot of the observed\n Tom> variation here has to do with whether individual platforms choose\n Tom> to install backward-compatibility zone names. I think the odds\n Tom> that IANA proper will remove those links are near zero; TTBOMK\n Tom> they never have removed one yet.)\n\nWell, we should also consider the possibility that we might be using the\nsystem tzdata and that the upstream OS or distro packager may choose to\nremove the \"backward\" data or split it to a separate package.\n\n Tom> More generally, my unhappiness about Andrew's proposal is:\n\n [...]\n Tom> 3. 
The proposal has technical issues, in particular I'm not nearly\n Tom> as sanguine as Andrew is about whether we can rely on\n Tom> zone[1970].tab to be available.\n\nMy proposal works even if it's not, though I don't expect that to be an\nissue in practice.\n\n-- \nAndrew (irc:RhodiumToad)\n\n\n",
"msg_date": "Fri, 28 Jun 2019 01:33:58 +0100",
"msg_from": "Andrew Gierth <andrew@tao11.riddles.org.uk>",
"msg_from_op": false,
"msg_subject": "Re: UCT (Re: pgsql: Update time zone data files to tzdata release\n 2019a.)"
},
{
"msg_contents": "On Thu, Jun 27, 2019 at 1:58 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> It's not really clear to me that the IANA folk intend those files to\n> be read as a list of preferred zone names. If they do, what are we\n> to make of the fact that no variant of \"UTC\" appears in them?\n\nI think their intent is key. We can't make reasonable decisions about\nwhat to do with some data if we don't know what the data is intended\nto mean.\n\n> I think that predicting what IANA will do in the future is a fool's\n> errand. Our contract is to select some one of the aliases that the\n> tzdb database presents, not to guess about whether it might present\n> a different set in the future. (Also note that a lot of the observed\n> variation here has to do with whether individual platforms choose to\n> install backward-compatibility zone names. I think the odds that\n> IANA proper will remove those links are near zero; TTBOMK they\n> never have removed one yet.)\n\nThat doesn't make it a good idea to call Mountain time \"Navajo,\" as\nAndrew alleges we are doing. Then again, the MacBook upon which I am\nwriting this email thinks that my time zone is \"America/New_York,\"\nwhereas I think it is \"US/Eastern,\" which I suppose reinforces your\npoint about all of this being political. 
But on the third hand, if\nsomebody tells me that my time zone is America/New_York, I can say to\nmyself \"oh, they mean Eastern time,\" whereas if they say that I'm on\n\"Navajo\" time, I'm going to have to sit down with 'diff' and the\nzoneinfo files to figure out what that actually means.\n\nI note that https://github.com/eggert/tz/blob/master/backward seems\npretty clear about which things are backward compatibility aliases,\nwhich seems to imply that we would not be taking a political position\nseparate from the upstream position if we tried to de-prioritize\nthose.\n\nAlso, https://github.com/eggert/tz/blob/master/theory.html says...\n\nNames normally have the form\n<var>AREA</var><code>/</code><var>LOCATION</var>, where\n<var>AREA</var> is a continent or ocean, and\n<var>LOCATION</var> is a specific location within the area.\n\n...which seems to imply that AREA/LOCATION is the \"normal\" and thus\npreferred form, and also that...\n\nThe file '<code>zone1970.tab</code>' lists geographical locations used\nto name timezones.\nIt is intended to be an exhaustive list of names for geographic\nregions as described above; this is a subset of the timezones in the data.\n\n...which seems to support Andrew's idea that you can identify\nAREA/LOCATION time zones by looking in that file.\n\nLong story short, I agree with you that most people probably don't\ncare about this very much, but I also agree with Andrew that some of\nthe current choices we're making are pretty strange, and I'm not\nconvinced as you are that it's impossible to make a principled choice\nbetween alternatives in all cases. The upstream data appears to\ncontain some information about intent; it's not just a jumble of\nexactly-equally-preferred alternatives.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Sat, 29 Jun 2019 16:48:12 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: UCT (Re: pgsql: Update time zone data files to tzdata release\n 2019a.)"
},
{
"msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> Long story short, I agree with you that most people probably don't\n> care about this very much, but I also agree with Andrew that some of\n> the current choices we're making are pretty strange, and I'm not\n> convinced as you are that it's impossible to make a principled choice\n> between alternatives in all cases. The upstream data appears to\n> contain some information about intent; it's not just a jumble of\n> exactly-equally-preferred alternatives.\n\nI agree that if there were an easy way to discount the IANA \"backward\ncompatibility\" zone names, that'd likely be a reasonable thing to do.\nThe problem is that those names aren't distinguished from others in\nthe representation we have available to us (ie, the actual\n/usr/share/zoneinfo file tree). I'm dubious that relying on\nzone[1970].tab would improve matters substantially; it would fix\nsome cases, but I don't think it would fix all of them. Resolving\nall ambiguous zone-name choices is not the charter of those files.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 02 Jul 2019 11:47:32 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: UCT (Re: pgsql: Update time zone data files to tzdata release\n 2019a.)"
},
{
"msg_contents": ">>>>> \"Tom\" == Tom Lane <tgl@sss.pgh.pa.us> writes:\n\n Tom> I'm dubious that relying on zone[1970].tab would improve matters\n Tom> substantially; it would fix some cases, but I don't think it would\n Tom> fix all of them. Resolving all ambiguous zone-name choices is not\n Tom> the charter of those files.\n\nAllowing zone matching by _content_ (as we do) rather than by name does\nnot seem to be supported in any respect whatever by the upstream data;\nwe've always been basically on our own with that.\n\n[tl/dr for what follows: my proposal reduces the number of discrepancies\nfrom 91 (see previously posted list) to 16 or 7, none of which are new]\n\nSo here are the ambiguities that are not resolvable at all:\n\nAfrica/Abidjan -> GMT\n\nThis happens because the Africa/Abidjan zone is literally just GMT even\ndown to the abbreviation, and we don't want to guess Africa/Abidjan for\nall GMT installs.\n\nAmerica/Argentina/Rio_Gallegos -> America/Argentina/Ushuaia\nAsia/Kuala_Lumpur -> Asia/Singapore\n\nThese are cases where zone1970.tab, despite its name, includes\ndistinctly-named zones which are distinct only for times in the far past\n(before 1920 or 1905 respectively). They are otherwise identical by\ncontent. We therefore end up choosing arbitrarily.\n\nIn addition, the following collection of random islands have timezones\nwhich lack local abbreviation names, recent offset changes, or DST, and\nare therefore indistinguishable by content from fixed-offset zones like\nEtc/GMT+2:\n\nEtc/GMT-4 ==\n Indian/Mahe\n Indian/Reunion\n\nEtc/GMT-7 == Indian/Christmas\nEtc/GMT-9 == Pacific/Palau\nEtc/GMT-10 == Pacific/Port_Moresby\nEtc/GMT-11 == Pacific/Guadalcanal\n\nEtc/GMT-12 ==\n Pacific/Funafuti\n Pacific/Tarawa\n Pacific/Wake\n Pacific/Wallis\n\nEtc/GMT+10 == Pacific/Tahiti\nEtc/GMT+9 == Pacific/Gambier\n\nEtc/GMT+2 == Atlantic/South_Georgia\n\nWe currently map all of these to the Etc/GMT+x names on the grounds of\nlength. 
If we chose to prefer zone.tab names over Etc/* names for all of\nthese, we'd be ambiguous only for a handful of relatively small islands.\n\n-- \nAndrew (irc:RhodiumToad)\n\n\n",
"msg_date": "Thu, 04 Jul 2019 06:57:19 +0100",
"msg_from": "Andrew Gierth <andrew@tao11.riddles.org.uk>",
"msg_from_op": false,
"msg_subject": "Re: UCT (Re: pgsql: Update time zone data files to tzdata release\n 2019a.)"
},
{
"msg_contents": "On Thu, Jun 27, 2019 at 10:48 AM Andrew Gierth\n<andrew@tao11.riddles.org.uk> wrote:\n> >>>>> \"Tom\" == Tom Lane <tgl@sss.pgh.pa.us> writes:\n> Tom> No zone1970.tab.\n>\n> zone.tab is an adequate substitute - a fact which I thought was\n> sufficiently obvious as to not be worth mentioning.\n>\n> (also see https://reviews.freebsd.org/D20646 )\n\nFWIW this is now fixed for FreeBSD 13-CURRENT, with a good chance of\nback-patch. I don't know if there are any other operating systems\nthat are shipping zoneinfo but failing to install zone1970.tab, but if\nthere are it's a mistake IMHO and they'll probably fix that if someone\ncomplains, considering that zone.tab literally tells you to go and use\nthe newer version, and Paul Eggert has implied that zone1970.tab is\nthe \"full\" and \"canonical\" list[1].\n\n[1] http://mm.icann.org/pipermail/tz/2014-October/021760.html\n\n-- \nThomas Munro\nhttps://enterprisedb.com\n\n\n",
"msg_date": "Wed, 17 Jul 2019 19:16:00 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: UCT (Re: pgsql: Update time zone data files to tzdata release\n 2019a.)"
},
{
"msg_contents": "Thomas Munro <thomas.munro@gmail.com> writes:\n> FWIW this is now fixed for FreeBSD 13-CURRENT, with a good chance of\n> back-patch. I don't know if there are any other operating systems\n> that are shipping zoneinfo but failing to install zone1970.tab, but if\n> there are it's a mistake IMHO and they'll probably fix that if someone\n> complains, considering that zone.tab literally tells you to go and use\n> the newer version, and Paul Eggert has implied that zone1970.tab is\n> the \"full\" and \"canonical\" list[1].\n\nI'm not sure we're any closer to a meeting of the minds on whether\nconsulting zone[1970].tab is a good thing to do, but we got an actual\nuser complaint[1] about how \"localtime\" should not be a preferred\nspelling. So I want to go ahead and insert the discussed anti-preference\nagainst \"localtime\" and \"posixrules\", as per 0001 below. If we do do\nsomething with zone[1970].tab, we'd still need these special rules,\nso I don't think this is blocking anything.\n\nAlso, I poked into the question of the \"Factory\" zone a bit more,\nand was disappointed to find that not only does FreeBSD still install\nthe \"Factory\" zone, but they are apparently hacking the data so that\nit emits the two-changes-back abbreviation \"Local time zone must be\nset--use tzsetup\". This bypasses the filter in pg_timezone_names that\nis expressly trying to prevent showing such silly \"abbreviations\".\nSo I now feel that not only can we not remove initdb's discrimination\nagainst \"Factory\", but we indeed need to make the pg_timezone_names\nfilter more aggressive. Hence, I now propose 0002 below to tweak\nwhat we're doing with \"Factory\". I did remove our special cases for\nit in zic.c, as we don't need them anymore with modern tzdb data, and\nthere's no reason to support running \"zic -P\" with hacked-up data.\n\n\t\t\tregards, tom lane\n\n[1] https://www.postgresql.org/message-id/CADT4RqCCnj6FKLisvT8tTPfTP4azPhhDFJqDF1JfBbOH5w4oyQ@mail.gmail.com",
"msg_date": "Thu, 25 Jul 2019 16:35:15 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: UCT (Re: pgsql: Update time zone data files to tzdata release\n 2019a.)"
},
{
"msg_contents": "> I'm not sure we're any closer to a meeting of the minds on whether\n> consulting zone[1970].tab is a good thing to do, but we got an actual\n> user complaint[1] about how \"localtime\" should not be a preferred\n> spelling. So I want to go ahead and insert the discussed anti-preference\n> against \"localtime\" and \"posixrules\", as per 0001 below. If we do do\n> something with zone[1970].tab, we'd still need these special rules,\n> so I don't think this is blocking anything.\n\nJust want to stress this point from a PostgreSQL driver maintainer\nperspective (see here[1] for the full details). Having \"localtime\" as the\nPostgreSQL timezone basically means that the timezone is completely opaque\nfrom a client point of view - there is no way for clients to know what\nactual timezone the server is in, and react to that. This is a limiting\nfactor in client development, I hope a consensus on this specific point can\nbe reached.\n\n[1]\nhttps://www.postgresql.org/message-id/CADT4RqCCnj6FKLisvT8tTPfTP4azPhhDFJqDF1JfBbOH5w4oyQ@mail.gmail.com",
"msg_date": "Thu, 1 Aug 2019 11:09:54 +0200",
"msg_from": "Shay Rojansky <roji@roji.org>",
"msg_from_op": false,
"msg_subject": "Re: UCT (Re: pgsql: Update time zone data files to tzdata release\n 2019a.)"
},
{
"msg_contents": "Shay Rojansky <roji@roji.org> writes:\n>> I'm not sure we're any closer to a meeting of the minds on whether\n>> consulting zone[1970].tab is a good thing to do, but we got an actual\n>> user complaint[1] about how \"localtime\" should not be a preferred\n>> spelling. So I want to go ahead and insert the discussed anti-preference\n>> against \"localtime\" and \"posixrules\", as per 0001 below. If we do do\n>> something with zone[1970].tab, we'd still need these special rules,\n>> so I don't think this is blocking anything.\n\n> Just want to stress this point from a PostgreSQL driver maintainer\n> perspective (see here[1] for the full details). Having \"localtime\" as the\n> PostgreSQL timezone basically means that the timezone is completely opaque\n> from a client point of view - there is no way for clients to know what\n> actual timezone the server is in, and react to that. This is a limiting\n> factor in client development, I hope a consensus on this specific point can\n> be reached.\n\nI have in fact committed that patch. It won't do anything for your\nproblem with respect to existing installations that may have picked\n\"localtime\", but it'll at least prevent new initdb runs from picking\nthat.\n\n\t\t\tregards, tom lane\n\n\nAuthor: Tom Lane <tgl@sss.pgh.pa.us>\nBranch: master [3754113f3] 2019-07-26 12:45:32 -0400\nBranch: REL_12_STABLE [e31dfe99c] 2019-07-26 12:45:52 -0400\nBranch: REL_11_STABLE [4459266bf] 2019-07-26 12:45:57 -0400\nBranch: REL_10_STABLE [ae9b91be7] 2019-07-26 12:46:03 -0400\nBranch: REL9_6_STABLE [51b47471f] 2019-07-26 12:46:10 -0400\nBranch: REL9_5_STABLE [9ef811742] 2019-07-26 12:46:15 -0400\nBranch: REL9_4_STABLE [6c4ffab76] 2019-07-26 12:46:20 -0400\n\n Avoid choosing \"localtime\" or \"posixrules\" as TimeZone during initdb.\n \n Some platforms create a file named \"localtime\" in the system\n timezone directory, making it a copy or link to the active time\n zone file. 
If Postgres is built with --with-system-tzdata, initdb\n will see that file as an exact match to localtime(3)'s behavior,\n and it may decide that \"localtime\" is the most preferred spelling of\n the active zone. That's a very bad choice though, because it's\n neither informative, nor portable, nor stable if someone changes\n the system timezone setting. Extend the preference logic added by\n commit e3846a00c so that we will prefer any other zone file that\n matches localtime's behavior over \"localtime\".\n \n On the same logic, also discriminate against \"posixrules\", which\n is another not-really-a-zone file that is often present in the\n timezone directory. (Since we install \"posixrules\" but not\n \"localtime\", this change can affect the behavior of Postgres\n with or without --with-system-tzdata.)\n \n Note that this change doesn't prevent anyone from choosing these\n pseudo-zones if they really want to (i.e., by setting TZ for initdb,\n or modifying the timezone GUC later on). It just prevents initdb\n from preferring these zone names when there are multiple matches to\n localtime's behavior.\n \n Since we generally prefer to keep timezone-related behavior the\n same in all branches, and since this is arguably a bug fix,\n back-patch to all supported branches.\n \n Discussion: https://postgr.es/m/CADT4RqCCnj6FKLisvT8tTPfTP4azPhhDFJqDF1JfBbOH5w4oyQ@mail.gmail.com\n Discussion: https://postgr.es/m/27991.1560984458@sss.pgh.pa.us\n\n\n",
"msg_date": "Thu, 01 Aug 2019 10:08:01 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: UCT (Re: pgsql: Update time zone data files to tzdata release\n 2019a.)"
},
{
"msg_contents": "Tom,\n\n> I have in fact committed that patch. It won't do anything for your\n> problem with respect to existing installations that may have picked\n>\"localtime\", but it'll at least prevent new initdb runs from picking\n> that.\n\nThanks! At least over time the problem will hopefully diminish.",
"msg_date": "Thu, 1 Aug 2019 18:33:52 +0200",
"msg_from": "Shay Rojansky <roji@roji.org>",
"msg_from_op": false,
"msg_subject": "Re: UCT (Re: pgsql: Update time zone data files to tzdata release\n 2019a.)"
},
{
"msg_contents": "Hi,\n\nOn 2019-08-01 10:08:01 -0400, Tom Lane wrote:\n> I have in fact committed that patch. It won't do anything for your\n> problem with respect to existing installations that may have picked\n> \"localtime\", but it'll at least prevent new initdb runs from picking\n> that.\n\n> Avoid choosing \"localtime\" or \"posixrules\" as TimeZone during initdb.\n> \n> Some platforms create a file named \"localtime\" in the system\n> timezone directory, making it a copy or link to the active time\n> zone file. If Postgres is built with --with-system-tzdata, initdb\n> will see that file as an exact match to localtime(3)'s behavior,\n> and it may decide that \"localtime\" is the most preferred spelling of\n> the active zone. That's a very bad choice though, because it's\n> neither informative, nor portable, nor stable if someone changes\n> the system timezone setting. Extend the preference logic added by\n> commit e3846a00c so that we will prefer any other zone file that\n> matches localtime's behavior over \"localtime\".\n\nWhen used and a symlink, could we resolve the symlink when determining\nthe timezone? When loading a timezone in the backend, not during\ninitdb. While that'd leave us with the instability, it'd at least would\nhelp clients etc understand what the setting actually means?\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 1 Aug 2019 10:19:51 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: UCT (Re: pgsql: Update time zone data files to tzdata release\n 2019a.)"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> When used and a symlink, could we resolve the symlink when determining\n> the timezone? When loading a timezone in the backend, not during\n> initdb. While that'd leave us with the instability, it'd at least would\n> help clients etc understand what the setting actually means?\n\nThe question here is what the string \"localtime\" means when it's in\nthe timezone variable.\n\nI guess yes, we could install some show_hook for timezone\nthat goes and looks to see if it can resolve what that means.\nBut that sure seems to me to be in you've-got-to-be-kidding\nterritory. Especially since the platforms I've seen that\ndo this tend to use hard links, so that it's questionable\nwhether the pushups would accomplish anything at all.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 01 Aug 2019 13:59:11 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: UCT (Re: pgsql: Update time zone data files to tzdata release\n 2019a.)"
},
{
"msg_contents": "Hi,\n\nOn 2019-08-01 13:59:11 -0400, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > When used and a symlink, could we resolve the symlink when determining\n> > the timezone? When loading a timezone in the backend, not during\n> > initdb. While that'd leave us with the instability, it'd at least would\n> > help clients etc understand what the setting actually means?\n> \n> The question here is what the string \"localtime\" means when it's in\n> the timezone variable.\n\nRight.\n\n\n> I guess yes, we could install some show_hook for timezone that goes\n> and looks to see if it can resolve what that means. But that sure\n> seems to me to be in you've-got-to-be-kidding territory.\n\nFair enough. I'm mildly worried that people will just carry their\ntimezone setting from one version's postgresql.conf to the next as they\nupgrade.\n\n\n> Especially since the platforms I've seen that do this tend to use hard\n> links, so that it's questionable whether the pushups would accomplish\n> anything at all.\n\nHm, debian's is a symlink (or rather a chain of):\n\n$ ls -l /usr/share/zoneinfo/localtime\nlrwxrwxrwx 1 root root 14 Jul 4 14:04 /usr/share/zoneinfo/localtime -> /etc/localtime\n\n$ ls -l /etc/localtime\nlrwxrwxrwx 1 root root 39 Jul 15 15:40 /etc/localtime -> /usr/share/zoneinfo/America/Los_Angeles\n\nThe system installed versions of postgres I have available all ended up\nwith timezone=localtime.\n\nNot sure how long they've been symlinks. I randomly accessed a backup of\nan older debian installation, from 2014, and there it's a file (with\nlink count 1).\n\nBut presumably upgrading would yield a postgresql.conf that still had\nlocaltime, but localtime becoming a symlink.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 1 Aug 2019 11:25:23 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: UCT (Re: pgsql: Update time zone data files to tzdata release\n 2019a.)"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> Fair enough. I'm mildly worried that people will just carry their\n> timezone setting from one version's postgresql.conf to the next as they\n> upgrade.\n\nMaybe. I don't believe pg_upgrade copies over the old postgresql.conf,\nand I doubt we should consider it good practice in any case.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 01 Aug 2019 15:13:23 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: UCT (Re: pgsql: Update time zone data files to tzdata release\n 2019a.)"
}
] |
[
{
"msg_contents": "The SQL keywords table in the documentation had until now been generated\nby me every year by some ad hoc scripting outside the source tree once\nfor each major release. This patch changes it to an automated process.\n\nWe have the PostgreSQL keywords available in a parseable format in\nparser/kwlist.h[*]. For the relevant SQL standard versions, keep the\nkeyword lists in new text files. A new script\ngenerate-keywords-table.pl pulls it all together and produces a DocBook\ntable.\n\nThe final output in the documentation should be identical after this\nchange.\n\n(Updates for SQL:2016 to come.)\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services",
"msg_date": "Sat, 27 Apr 2019 09:33:47 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "generate documentation keywords table automatically"
},
{
"msg_contents": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com> writes:\n> The SQL keywords table in the documentation had until now been generated\n> by me every year by some ad hoc scripting outside the source tree once\n> for each major release. This patch changes it to an automated process.\n\nDidn't test this, but +1 for the concept.\n\nWould it make more sense to have just one source file per SQL standard\nversion, and distinguish the keyword types by labels within the file?\nThe extreme version of that would be to format the standards-info files\njust like parser/kwlist.h, which perhaps would even save a bit of\nparsing code in the Perl script. I don't insist you have to go that\nfar, but lists of keywords-and-categories seem to make sense.\n\nThe thing in the back of my mind here is that at some point the SQL\nstandard might have more than two keyword categories. What you've got\nhere would take some effort to handle that, whereas it'd be an entirely\ntrivial data change in the scheme I'm thinking of.\n\nA policy issue, independent of this mechanism, is how many different\nSQL spec versions we want to show in the table. HEAD currently shows just\nthree (2011, 2008, SQL92), and it doesn't look to me like the table can\naccommodate more than one or at most two more columns without getting\ntoo wide for most output formats. We could buy back some space by making\nthe \"cannot be X\" annotations for PG keywords more compact, but I fear\nthat'd still not be enough for the seven spec versions you propose to\nshow in this patch. (And, presumably, the committee's not done.)\nCan we pick a different table layout?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 27 Apr 2019 11:25:22 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: generate documentation keywords table automatically"
},
{
"msg_contents": "On 2019-04-27 17:25, Tom Lane wrote:\n> Would it make more sense to have just one source file per SQL standard\n> version, and distinguish the keyword types by labels within the file?\n\nThe way I have written it, the lists can be compared directly with the\nrelevant standards by a human. Otherwise we'd need another level of\ntooling to compose and verify those lists.\n\n> A policy issue, independent of this mechanism, is how many different\n> SQL spec versions we want to show in the table.\n\nWe had previously established that we want to show 92 and the latest\ntwo. I don't propose to change that.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Mon, 29 Apr 2019 20:45:05 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: generate documentation keywords table automatically"
},
{
"msg_contents": "On 4/29/19 2:45 PM, Peter Eisentraut wrote:\n>> A policy issue, independent of this mechanism, is how many different\n>> SQL spec versions we want to show in the table.\n> \n> We had previously established that we want to show 92 and the latest\n> two. I don't propose to change that.\n\nAn annoying API requirement imposed by the JDBC spec applies to its\nmethod DatabaseMetaData.getSQLKeywords():\n\nIt is required to return a list of the keywords supported by the DBMS\nthat are NOT also SQL:2003 keywords. [1]\n\nWhy? I have no idea. Were the JDBC spec authors afraid of infringing\nISO copyright if they specified a method that just returns all the\nkeywords?\n\nSo instead they implicitly require every JDBC developer to know just\nwhat all the SQL:2003 keywords are, to make any practical use of the\nJDBC method that returns only the keywords that aren't those.\n\nTo make it even goofier, the requirement in the JDBC spec has changed\n(once, that I know of). It has been /all the keywords not in SQL:2003/\nsince JDBC 4 / Java SE 6 [2], but before that, it (the same method!)\nwas spec'd to return /all the keywords not in SQL92/. [3]\n\nSo the ideal JDBC developer will know (a) exactly what keywords are\nSQL92, (b) exactly what keywords are SQL:2003, and (c) which JDBC\nversion the driver in use is implementing (though, mercifully, drivers\nfrom pre-4.0 should be rare by now).\n\nIf the reorganization happening in this thread were to make possible\nrun-time-enumerable keyword lists that could be filtered for SQL92ness\nor SQL:2003ness, that might relieve an implementation headache that,\nat present, both PgJDBC and PL/Java have to deal with.\n\nRegards,\n-Chap\n\n\n[1]\nhttps://docs.oracle.com/en/java/javase/12/docs/api/java.sql/java/sql/DatabaseMetaData.html#getSQLKeywords()\n\n[2]\nhttps://docs.oracle.com/javase/6/docs/api/index.html?overview-summary.html\n\n[3]\nhttps://docs.oracle.com/javase/1.5.0/docs/api/index.html?overview-summary.html\n\n\n",
"msg_date": "Mon, 29 Apr 2019 15:19:02 -0400",
"msg_from": "Chapman Flack <chap@anastigmatix.net>",
"msg_from_op": false,
"msg_subject": "Re: generate documentation keywords table automatically"
},
{
"msg_contents": "On 2019-04-29 21:19, Chapman Flack wrote:\n> If the reorganization happening in this thread were to make possible\n> run-time-enumerable keyword lists that could be filtered for SQL92ness\n> or SQL:2003ness, that might relieve an implementation headache that,\n> at present, both PgJDBC and PL/Java have to deal with.\n\nGood information, but probably too big of a change at this point.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Tue, 30 Apr 2019 08:02:35 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: generate documentation keywords table automatically"
}
] |
[
{
"msg_contents": "As you might know, generating SSL certificates for postgres (to be used \nby pgadmin, for example...) can be quite a bear; especially if you need \nmore than one, since they are based on the username of the postgres user.\n\nI have made two command-line utilities written in python 3.6 to do just \nthat (I, as a number of other developers do, appreciate python for its \nease of code inspection...); one is called *pg_ssl_server*, and the \nother is called *pg_ssl_client*. Packaged together, they are referred to \nby the name \"*pg_ssl*\". They are issued under the postgres license.\n\nThey have been tested out on Ubuntu 18 and python 3.6.7 with postgres \n11. They were designed to be cross-platform, but they have not been \ntested yet on Windows, OSx, BSD, or distros other than Ubuntu. [My \nimmediate concern is with their ability to run cross-platform; as for \ndownlevel versions of postgres or python, that is not a priority right \nnow. The \"subprocess\" module in python used by the utilities has \ninconsistencies working cross-platform in older versions of python; _for \nnow_, people should just upgrade if they really need to use them...]\n\nIf anyone would be interested in testing these and sending back a notice \nas to what problems were encountered on their platform, it would be much \nappreciated. The availability of these utilities will remove a rather \nrough spot from the administration of postgres. 
To keep noise on this \nmail thread to a minimum, please report any problems encountered \ndirectly to my address.\n\nAlso, if anyone is a security fanatic and facile with python, a code \nreview would not be a bad idea (the two utilities check in at ~1,500 \nlines; but since it's python, it's an easy read...)\n\nThe latest version of the utility can be retrieved here: \nhttps://osfda.org/downloads/pg_ssl.zip\n\nYou can also use the Contact Form at osfda.org to report issues.\n\n\n\n\n\n\n\n\nAs you might know, generating SSL certificates for postgres (to\n be used by pgadmin, for example...) can be quite a bear;\n especially if you need more than one, since they are based on the\n username of the postgres user.\nI have made two command-line utilities written in python 3.6 to\n do just that (I, as a number of other developers do, appreciate\n python for its ease of code inspection...); one is called pg_ssl_server,\n and the other is called pg_ssl_client. Packaged together,\n they are referred to by the name \"pg_ssl\". They are issued\n under the postgres license.\n\nThey have been tested out on Ubuntu 18 and python 3.6.7 with\n postgres 11. They were designed to be cross-platform, but they\n have not been tested yet on Windows, OSx, BSD, or distros other\n than Ubuntu. [My immediate concern is with their ability to run\n cross-platform; as for downlevel versions of postgres or python,\n that is not a priority right now. The \"subprocess\" module in\n python used by the utilities has inconsistencies working\n cross-platform in older versions of python; _for now_, people\n should just upgrade if they really need to use them...]\nIf anyone would be interested in testing these and sending back a\n notice as to what problems were encountered on their platform, it\n would be much appreciated. The availability of these utilities\n will remove a rather rough spot from the administration of\n postgres. 
To keep noise on this mail thread to a minimum, please\n report any problems encountered directly to my address.\n\nAlso, if anyone is a security fanatic and facile with python, a\n code review would not be a bad idea (the two utilities check in at\n ~1,500 lines; but since it's python, it's an easy read...)\nThe latest version of the utility can be retrieved here: https://osfda.org/downloads/pg_ssl.zip\nYou can also use the Contact Form at osfda.org to report issues.",
"msg_date": "Sat, 27 Apr 2019 12:54:07 -0400",
"msg_from": "Steve <steve.b@osfda.org>",
"msg_from_op": true,
"msg_subject": "pg_ssl"
},
{
"msg_contents": "On Sat, Apr 27, 2019 at 12:54:07PM -0400, Steve wrote:\n> As you might know, generating SSL certificates for postgres (to be\n> used by pgadmin, for example...) can be quite a bear; especially if\n> you need more than one, since they are based on the username of the\n> postgres user.\n\nThanks for sending this along!\n\nIs there a public repo for this, in case people have patches they'd\nlike to contribute? If not, would you be so kind as to make one?\n\nBest,\nDavid.\n-- \nDavid Fetter <david(at)fetter(dot)org> http://fetter.org/\nPhone: +1 415 235 3778\n\nRemember to vote!\nConsider donating to Postgres: http://www.postgresql.org/about/donate\n\n\n",
"msg_date": "Sun, 28 Apr 2019 17:25:33 +0200",
"msg_from": "David Fetter <david@fetter.org>",
"msg_from_op": false,
"msg_subject": "Re: pg_ssl"
},
{
"msg_contents": "Will be doing in just a few days. I am taking _initial_ suggestions, \nincorporating them, then I will be setting that up.\n\nOn 4/28/2019 11:25 AM, David Fetter wrote:\n> On Sat, Apr 27, 2019 at 12:54:07PM -0400, Steve wrote:\n>> As you might know, generating SSL certificates for postgres (to be\n>> used by pgadmin, for example...) can be quite a bear; especially if\n>> you need more than one, since they are based on the username of the\n>> postgres user.\n> Thanks for sending this along!\n>\n> Is there a public repo for this, in case people have patches they'd\n> like to contribute? If not, would you be so kind as to make one?\n>\n> Best,\n> David.\n\n\n",
"msg_date": "Sun, 28 Apr 2019 11:28:40 -0400",
"msg_from": "Steve <steve.b@osfda.org>",
"msg_from_op": true,
"msg_subject": "Re: pg_ssl"
},
{
"msg_contents": "Greetings,\n\n* Steve (steve.b@osfda.org) wrote:\n> As you might know, generating SSL certificates for postgres (to be used by\n> pgadmin, for example...) can be quite a bear; especially if you need more\n> than one, since they are based on the username of the postgres user.\n\nWell, you can map the common name in the client certificate to another\nuser if you want using pg_ident.conf.\n\n> I have made two command-line utilities written in python 3.6 to do just that\n> (I, as a number of other developers do, appreciate python for its ease of\n> code inspection...); one is called *pg_ssl_server*, and the other is called\n> *pg_ssl_client*. Packaged together, they are referred to by the name\n> \"*pg_ssl*\". They are issued under the postgres license.\n> \n> They have been tested out on Ubuntu 18 and python 3.6.7 with postgres 11.\n\nIf you're targeting PG11, I'd strongly recommend using 'scram' as the\npassword auth type and not md5.\n\n> If anyone would be interested in testing these and sending back a notice as\n> to what problems were encountered on their platform, it would be much\n> appreciated. The availability of these utilities will remove a rather rough\n> spot from the administration of postgres. To keep noise on this mail thread\n> to a minimum, please report any problems encountered directly to my address.\n> \n> Also, if anyone is a security fanatic and facile with python, a code review\n> would not be a bad idea (the two utilities check in at ~1,500 lines; but\n> since it's python, it's an easy read...)\n\nI've only glanced through the code and haven't tested it myself, but it\nseems like a pretty serious issue that you're just using clientcert=1\ninstead of using clientcert=verify-full, though unfortunately we didn't\nget that until 0516c61b756e39ed6eb7a6bb54311a841002211a. 
Have you\ntested that what you're doing here worked with latest HEAD and\nclientcert=verify-full on the server side, and setting\nsslmode=verify-full on the client side?\n\n> The latest version of the utility can be retrieved here:\n> https://osfda.org/downloads/pg_ssl.zip\n\nNot sure what can be done about it, if anything, but calling this\n'pg_ssl' seems awfully likely to lead to confusion when what you're\nreally doing here is creating SSL certificates and doing a bit of PG\nconfiguration.. Maybe 'pg_setup_ssl' or similar would be better?\n\nThanks,\n\nStephen",
"msg_date": "Mon, 29 Apr 2019 11:04:47 -0400",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: pg_ssl"
},
{
"msg_contents": "Update: I have moved the previously contributed \"pg_ssl\" package to a \nformal git, and have renamed it to \"pg_ssl_init\"\n(at the request of initial reviewers, who were concerned about future \nname collisions...)\n\n\"pg_ssl_init\" is a set of command line scripts that conveniently \nconfigures self-signed server and client keys and certificates to \naccommodate secure SSL connections to a postgres server (typically, via \npgadmin...)\n\nIts git is at: https://gitlab.com/osfda/pg_ssl_init\n\n\n\n\n",
"msg_date": "Sun, 5 May 2019 20:54:36 -0400",
"msg_from": "Steve <steve.b@osfda.org>",
"msg_from_op": true,
"msg_subject": "pg_ssl_init"
}
] |
[
{
"msg_contents": "Is it possible to differentialy synchronise two databases on the basis of\nequality and differences between both? Can I review this piece of code?\n\nIs it possible to differentialy synchronise two databases on the basis of equality and differences between both? Can I review this piece of code?",
"msg_date": "Sun, 28 Apr 2019 15:59:15 +0200",
"msg_from": "Sascha Kuhl <yogidabanli@gmail.com>",
"msg_from_op": true,
"msg_subject": "Data streaming between different databases"
},
{
"msg_contents": "On Sun, Apr 28, 2019 at 03:59:15PM +0200, Sascha Kuhl wrote:\n> Is it possible to differentialy synchronise two databases on the basis of\n> equality and differences between both? Can I review this piece of code?\n\nIt's rather unclear what exactly are you looking for, what do you mean\nby 'on the basis of equality and differences' and how it all should\nwork. Maybe try explaining it in more detail, with some examples or\nsomething. Or if there are other products doing something like that,\nshare a link.\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services \n\n\n",
"msg_date": "Sun, 28 Apr 2019 21:52:54 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Data streaming between different databases"
}
] |
[
{
"msg_contents": "Folks,\n\nOur test coverage needs all the help it can get.\n\nThis patch, extracted from another by Fabian Coelho, helps move things\nin that direction.\n\nI'd like to argue that it's not a new feature, and that it should be\nback-patched as far as possible.\n\nBest,\nDavid.\n-- \nDavid Fetter <david(at)fetter(dot)org> http://fetter.org/\nPhone: +1 415 235 3778\n\nRemember to vote!\nConsider donating to Postgres: http://www.postgresql.org/about/donate",
"msg_date": "Sun, 28 Apr 2019 17:07:16 +0200",
"msg_from": "David Fetter <david@fetter.org>",
"msg_from_op": true,
"msg_subject": "[PATCH v1] Add a way to supply stdin to TAP tests"
},
{
"msg_contents": "Hi.\n\nAt Sun, 28 Apr 2019 17:07:16 +0200, David Fetter <david@fetter.org> wrote in <20190428150716.GP28936@fetter.org>\n> Our test coverage needs all the help it can get.\n> \n> This patch, extracted from another by Fabian Coelho, helps move things\n> in that direction.\n> \n> I'd like to argue that it's not a new feature, and that it should be\n> back-patched as far as possible.\n\nThe comment for the parameter \"in\".\n\n+# - in: standard input\n\nPerhaps this is \"string to be fed to standard input\". This also\ncan be a I/O reference but we don't care that?\n\n+\t$in = '' if not defined $in;\n\nrun($cmd, '<', \\undef) seems to work, maybe assuming \"<\n/dev/null\", which might be better?\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n\n",
"msg_date": "Tue, 07 May 2019 11:05:32 +0900 (Tokyo Standard Time)",
"msg_from": "Kyotaro HORIGUCHI <horiguchi.kyotaro@lab.ntt.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH v1] Add a way to supply stdin to TAP tests"
},
{
"msg_contents": "On Tue, May 07, 2019 at 11:05:32AM +0900, Kyotaro HORIGUCHI wrote:\n> Hi.\n> \n> At Sun, 28 Apr 2019 17:07:16 +0200, David Fetter <david@fetter.org> wrote in <20190428150716.GP28936@fetter.org>\n> > Our test coverage needs all the help it can get.\n> > \n> > This patch, extracted from another by Fabian Coelho, helps move things\n> > in that direction.\n> > \n> > I'd like to argue that it's not a new feature, and that it should be\n> > back-patched as far as possible.\n> \n> The comment for the parameter \"in\".\n> \n> +# - in: standard input\n> \n> Perhaps this is \"string to be fed to standard input\". This also\n> can be a I/O reference but we don't care that?\n\nOK\n\n> +\t$in = '' if not defined $in;\n> \n> run($cmd, '<', \\undef) seems to work, maybe assuming \"<\n> /dev/null\", which might be better?\n\nIs /dev/null a thing on Windows?\n\nBest,\nDavid.\n-- \nDavid Fetter <david(at)fetter(dot)org> http://fetter.org/\nPhone: +1 415 235 3778\n\nRemember to vote!\nConsider donating to Postgres: http://www.postgresql.org/about/donate\n\n\n",
"msg_date": "Tue, 7 May 2019 04:42:42 +0200",
"msg_from": "David Fetter <david@fetter.org>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH v1] Add a way to supply stdin to TAP tests"
},
{
"msg_contents": "\nOn 5/6/19 10:42 PM, David Fetter wrote:\n> On Tue, May 07, 2019 at 11:05:32AM +0900, Kyotaro HORIGUCHI wrote:\n>> Hi.\n>>\n>> At Sun, 28 Apr 2019 17:07:16 +0200, David Fetter <david@fetter.org> wrote in <20190428150716.GP28936@fetter.org>\n>>> Our test coverage needs all the help it can get.\n>>>\n>>> This patch, extracted from another by Fabian Coelho, helps move things\n>>> in that direction.\n>>>\n>>> I'd like to argue that it's not a new feature, and that it should be\n>>> back-patched as far as possible.\n>> The comment for the parameter \"in\".\n>>\n>> +# - in: standard input\n>>\n>> Perhaps this is \"string to be fed to standard input\". This also\n>> can be a I/O reference but we don't care that?\n> OK\n>\n>> +\t$in = '' if not defined $in;\n>>\n>> run($cmd, '<', \\undef) seems to work, maybe assuming \"<\n>> /dev/null\", which might be better?\n> Is /dev/null a thing on Windows?\n\n\n\nNot as such, although there is NUL (see src/include/port.h).\n\n\nHowever, I don't think we should be faking anything here. I think it\nwould be better to avoid setting $in if not supplied and then have this:\n\n\n if (defined($in))\n\n {\n\n IPC::Run::run($cmd, '<', \\$in, '>', \\$stdout, '2>', \\$stderr);\n\n }\n\n else\n\n {\n\n IPC::Run::run($cmd, >', \\$stdout, '2>', \\$stderr); \n\n }\n\n\n\ncheers\n\n\nandrew\n\n\n-- \nAndrew Dunstan https://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n\n",
"msg_date": "Tue, 7 May 2019 09:39:57 -0400",
"msg_from": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH v1] Add a way to supply stdin to TAP tests"
},
{
"msg_contents": "On Tue, May 07, 2019 at 09:39:57AM -0400, Andrew Dunstan wrote:\n> \n> On 5/6/19 10:42 PM, David Fetter wrote:\n> > On Tue, May 07, 2019 at 11:05:32AM +0900, Kyotaro HORIGUCHI wrote:\n> >> Hi.\n> >>\n> >> At Sun, 28 Apr 2019 17:07:16 +0200, David Fetter <david@fetter.org> wrote in <20190428150716.GP28936@fetter.org>\n> >>> Our test coverage needs all the help it can get.\n> >>>\n> >>> This patch, extracted from another by Fabian Coelho, helps move things\n> >>> in that direction.\n> >>>\n> >>> I'd like to argue that it's not a new feature, and that it should be\n> >>> back-patched as far as possible.\n> >> The comment for the parameter \"in\".\n> >>\n> >> +# - in: standard input\n> >>\n> >> Perhaps this is \"string to be fed to standard input\". This also\n> >> can be a I/O reference but we don't care that?\n> > OK\n> >\n> >> +\t$in = '' if not defined $in;\n> >>\n> >> run($cmd, '<', \\undef) seems to work, maybe assuming \"<\n> >> /dev/null\", which might be better?\n> > Is /dev/null a thing on Windows?\n> \n> However, I don't think we should be faking anything here. I think it\n> would be better to� avoid setting $in if not supplied and then have this:\n> \n> if (defined($in))\n> \n> {\n> \n> ��� IPC::Run::run($cmd, '<', \\$in, '>', \\$stdout, '2>', \\$stderr);\n> \n> }\n> \n> else\n> \n> {\n> \n> ��� IPC::Run::run($cmd, >', \\$stdout, '2>', \\$stderr);���\n> \n> }\n\nDone that way.\n\nBest,\nDavid.\n-- \nDavid Fetter <david(at)fetter(dot)org> http://fetter.org/\nPhone: +1 415 235 3778\n\nRemember to vote!\nConsider donating to Postgres: http://www.postgresql.org/about/donate",
"msg_date": "Tue, 7 May 2019 18:47:59 +0200",
"msg_from": "David Fetter <david@fetter.org>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH v1] Add a way to supply stdin to TAP tests"
},
{
"msg_contents": "On Tue, May 07, 2019 at 06:47:59PM +0200, David Fetter wrote:\n> On Tue, May 07, 2019 at 09:39:57AM -0400, Andrew Dunstan wrote:\n> > \n> > On 5/6/19 10:42 PM, David Fetter wrote:\n> > > On Tue, May 07, 2019 at 11:05:32AM +0900, Kyotaro HORIGUCHI wrote:\n> > >> Hi.\n> > >>\n> > >> At Sun, 28 Apr 2019 17:07:16 +0200, David Fetter <david@fetter.org> wrote in <20190428150716.GP28936@fetter.org>\n> > >>> Our test coverage needs all the help it can get.\n> > >>>\n> > >>> This patch, extracted from another by Fabian Coelho, helps move things\n> > >>> in that direction.\n> > >>>\n> > >>> I'd like to argue that it's not a new feature, and that it should be\n> > >>> back-patched as far as possible.\n> > >> The comment for the parameter \"in\".\n> > >>\n> > >> +# - in: standard input\n> > >>\n> > >> Perhaps this is \"string to be fed to standard input\". This also\n> > >> can be a I/O reference but we don't care that?\n> > > OK\n> > >\n> > >> +\t$in = '' if not defined $in;\n> > >>\n> > >> run($cmd, '<', \\undef) seems to work, maybe assuming \"<\n> > >> /dev/null\", which might be better?\n> > > Is /dev/null a thing on Windows?\n> > \n> > However, I don't think we should be faking anything here. I think it\n> > would be better to� avoid setting $in if not supplied and then have this:\n> > \n> > if (defined($in))\n> > \n> > {\n> > \n> > ��� IPC::Run::run($cmd, '<', \\$in, '>', \\$stdout, '2>', \\$stderr);\n> > \n> > }\n> > \n> > else\n> > \n> > {\n> > \n> > ��� IPC::Run::run($cmd, >', \\$stdout, '2>', \\$stderr);���\n> > \n> > }\n> \n> Done that way.\n\nIt helps to commit the work before putting together the patch.\n\nBest,\nDavid.\n-- \nDavid Fetter <david(at)fetter(dot)org> http://fetter.org/\nPhone: +1 415 235 3778\n\nRemember to vote!\nConsider donating to Postgres: http://www.postgresql.org/about/donate",
"msg_date": "Tue, 7 May 2019 19:37:15 +0200",
"msg_from": "David Fetter <david@fetter.org>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH v1] Add a way to supply stdin to TAP tests"
}
] |
[
{
"msg_contents": "Commit ab0dfc961b6 used a \"long\" variable within _bt_load() to count\nthe number of tuples entered into a B-Tree index as it is built. This\nwill not work as expected on Windows, even on 64-bit Windows, because\n\"long\" is only 32-bits wide. It's far from impossible that you'd have\n~2 billion index tuples when building a new index.\n\nProgrammers use \"long\" because it is assumed to be wider than \"int\"\n(even though that isn't required by the standard, and isn't true\nacross all of the platforms we support). The use of \"long\" seems\ninherently suspect given our constraints, except perhaps in the\ncontext of sizing work_mem-based allocations, where it is used as part\nof a semi-standard idiom...albeit one that only works because of the\nrestrictions on work_mem size on Windows.\n\nISTM that we should try to come up with a way of making code like this\nwork, rather than placing the burden on new code to get it right. This\nexact issue has bitten users on a number of occasions that I can\nrecall. There is also a hidden landmine that we know about but haven't\nfixed: logtape.c, which will break on Windows with very very large\nindex builds.\n\nAlso, \"off_t\" is only 32-bits on Windows, which broke parallel CREATE\nINDEX (issued fixed by commit aa551830). I suppose that \"off_t\" is\nreally a variant of the same problem.\n\n--\nPeter Geoghegan\n\n\n",
"msg_date": "Sun, 28 Apr 2019 15:30:39 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": true,
"msg_subject": "\"long\" type is not appropriate for counting tuples"
},
{
"msg_contents": "Peter Geoghegan <pg@bowt.ie> writes:\n> Commit ab0dfc961b6 used a \"long\" variable within _bt_load() to count\n> the number of tuples entered into a B-Tree index as it is built. This\n> will not work as expected on Windows, even on 64-bit Windows, because\n> \"long\" is only 32-bits wide.\n\nRight. \"long\" used to be our convention years ago, but these days\ntuple counts should be int64 or perhaps uint64. See e.g. 23a27b039.\n\n> ISTM that we should try to come up with a way of making code like this\n> work, rather than placing the burden on new code to get it right.\n\nOther than \"use the right datatype\", I'm not sure what we can do?\nIn the meantime, somebody should fix ab0dfc961b6 ...\n\n> Also, \"off_t\" is only 32-bits on Windows, which broke parallel CREATE\n> INDEX (issued fixed by commit aa551830). I suppose that \"off_t\" is\n> really a variant of the same problem.\n\nHmm, why is this a problem? We should only use off_t for actual file\naccess operations, and we don't use files greater than 1GB. (There's a\nreason for that.)\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 28 Apr 2019 19:24:59 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: \"long\" type is not appropriate for counting tuples"
},
{
"msg_contents": "On Sun, Apr 28, 2019 at 4:25 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > ISTM that we should try to come up with a way of making code like this\n> > work, rather than placing the burden on new code to get it right.\n>\n> Other than \"use the right datatype\", I'm not sure what we can do?\n\nAmbiguity seems like the real problem here. If we could impose a\ngeneral rule that you cannot use \"long\" (perhaps with some limited\nwiggle-room), then a lint-like tool could catch bugs like this. This\nmay not be that difficult. Nobody is particularly concerned about\nperformance on 32-bit platforms these days.\n\n> In the meantime, somebody should fix ab0dfc961b6 ...\n\nI'll leave this to Alvaro.\n\n> Hmm, why is this a problem? We should only use off_t for actual file\n> access operations, and we don't use files greater than 1GB. (There's a\n> reason for that.)\n\nThe issue that was fixed by commit aa551830 showed this assumption to\nbe kind of brittle. Admittedly this is not as clear-cut as the \"long\"\nissue, and might not be worth worrying about. I don't want to go as\nfar as requiring explicit width integer types in all situations, since\nthat seems totally impractical, and without any real upside. But it\nwould be nice to identify integer types where there is a real risk of\nmaking an incorrect assumption, and then eliminate that risk once and\nfor all.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Sun, 28 Apr 2019 16:49:43 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": true,
"msg_subject": "Re: \"long\" type is not appropriate for counting tuples"
},
{
"msg_contents": "Hi,\n\nOn 2019-04-28 19:24:59 -0400, Tom Lane wrote:\n> Peter Geoghegan <pg@bowt.ie> writes:\n> > ISTM that we should try to come up with a way of making code like this\n> > work, rather than placing the burden on new code to get it right.\n> \n> Other than \"use the right datatype\", I'm not sure what we can do?\n> In the meantime, somebody should fix ab0dfc961b6 ...\n\nI think we should start by just removing all uses of long. There's\nreally no excuse for them today, and a lot of them are bugs waiting to\nhappen. And then either just add a comment to the coding style, or even\nbetter a small script, to prevent them from being re-used.\n\n\n> > Also, \"off_t\" is only 32-bits on Windows, which broke parallel CREATE\n> > INDEX (issued fixed by commit aa551830). I suppose that \"off_t\" is\n> > really a variant of the same problem.\n> \n> Hmm, why is this a problem? We should only use off_t for actual file\n> access operations, and we don't use files greater than 1GB. (There's a\n> reason for that.)\n\nWe read from larger files in a few places though. E.g. pg_dump. Most\nplaces really just should use pgoff_t...\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 29 Apr 2019 08:11:41 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: \"long\" type is not appropriate for counting tuples"
},
{
"msg_contents": "On Mon, Apr 29, 2019 at 8:11 AM Andres Freund <andres@anarazel.de> wrote:\n> I think we should start by just removing all uses of long. There's\n> really no excuse for them today, and a lot of them are bugs waiting to\n> happen.\n\nI like the idea of banning \"long\" altogether. It will probably be hard\nto keep it out of third party code that we vendor-in, or even code\nthat interfaces with libraries in some way, but it should be removed\nfrom everything else. It actually doesn't seem particularly hard to do\nso, based on a quick grep of src/backend/. Most uses of \"long\" is code\nthat sizes something in local memory, where \"long\" works for the same\nreason as it works when calculating the size of a work_mem allocation\n-- ugly, but correct. A few uses of \"long\" seem to be real live bugs,\nalbeit bugs that are very unlikely to ever hit.\n\n_h_indexbuild() has the same bug as _bt_load(), also due to commit\nab0dfc961b6 -- I spotted that in passing when I used grep.\n\n> We read from larger files in a few places though. E.g. pg_dump. Most\n> places really just should use pgoff_t...\n\nI wasn't even aware of pgoff_t. It is only used in frontend utilities\nthat I don't know that much about, whereas off_t is used all over the\nbackend code.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Mon, 29 Apr 2019 10:16:39 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": true,
"msg_subject": "Re: \"long\" type is not appropriate for counting tuples"
},
{
"msg_contents": "Hi,\n\nOn 2019-04-29 10:16:39 -0700, Peter Geoghegan wrote:\n> On Mon, Apr 29, 2019 at 8:11 AM Andres Freund <andres@anarazel.de> wrote:\n> > I think we should start by just removing all uses of long. There's\n> > really no excuse for them today, and a lot of them are bugs waiting to\n> > happen.\n> \n> I like the idea of banning \"long\" altogether. It will probably be hard\n> to keep it out of third party code that we vendor-in, or even code\n> that interfaces with libraries in some way, but it should be removed\n> from everything else.\n\nI don't think any of the code we've vendored in where we also track\nupstream, actually uses long in a meaningful amount. And putside of\nbackward compatibility, I don't think there's many libraries that still\nuse it.\n\n\n> > We read from larger files in a few places though. E.g. pg_dump. Most\n> > places really just should use pgoff_t...\n> \n> I wasn't even aware of pgoff_t. It is only used in frontend utilities\n> that I don't know that much about, whereas off_t is used all over the\n> backend code.\n\nYea, we've some delightful hackery to make that work:\n\n * WIN32 does not provide 64-bit off_t, but does provide the functions operating\n * with 64-bit offsets.\n */\n#define pgoff_t __int64\n#ifdef _MSC_VER\n#define fseeko(stream, offset, origin) _fseeki64(stream, offset, origin)\n#define ftello(stream) _ftelli64(stream)\n#else\n#ifndef fseeko\n#define fseeko(stream, offset, origin) fseeko64(stream, offset, origin)\n#endif\n#ifndef ftello\n#define ftello(stream) ftello64(stream)\n#endif\n#endif\n\nwhich also shows that the compatibility hackery is fairly limited.\n\n\nThomas, ISTM, that pg_pread() etc should rather take the offset as a\nuint64 or such. And then actually initialize OFFSET.offsetHigh.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 29 Apr 2019 10:28:16 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: \"long\" type is not appropriate for counting tuples"
},
{
"msg_contents": "Peter Geoghegan <pg@bowt.ie> writes:\n> On Mon, Apr 29, 2019 at 8:11 AM Andres Freund <andres@anarazel.de> wrote:\n>> I think we should start by just removing all uses of long. There's\n>> really no excuse for them today, and a lot of them are bugs waiting to\n>> happen.\n\n> I like the idea of banning \"long\" altogether. It will probably be hard\n> to keep it out of third party code that we vendor-in, or even code\n> that interfaces with libraries in some way, but it should be removed\n> from everything else. It actually doesn't seem particularly hard to do\n> so, based on a quick grep of src/backend/. Most uses of \"long\" is code\n> that sizes something in local memory, where \"long\" works for the same\n> reason as it works when calculating the size of a work_mem allocation\n> -- ugly, but correct.\n\nThere's more to that than you might realize. For example, guc.c\nenforces a limit on work_mem that's designed to ensure that\nexpressions like \"work_mem * 1024L\" won't overflow, and there are\nsimilar choices elsewhere. I'm not sure if we want to go to the\neffort of rethinking that; it's not really a bug, though it does\nresult in 64-bit Windows being more restricted than it has to be.\n\nTrying to get rid of type-L constants along with more explicit\nuses of \"long\" would be a PITA I'm afraid.\n\nAnother problem is that while \"%lu\" format specifiers are portable,\nINT64_FORMAT is a *big* pain, not least because you can't put it into\ntranslatable strings without causing problems. To the extent that\nwe could go over to \"%zu\" instead, maybe this could be finessed,\nbut blind \"s/long/int64/g\" isn't going to be any fun.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 29 Apr 2019 13:32:13 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: \"long\" type is not appropriate for counting tuples"
},
{
"msg_contents": "Hi,\n\nOn 2019-04-29 13:32:13 -0400, Tom Lane wrote:\n> There's more to that than you might realize. For example, guc.c\n> enforces a limit on work_mem that's designed to ensure that\n> expressions like \"work_mem * 1024L\" won't overflow, and there are\n> similar choices elsewhere. I'm not sure if we want to go to the\n> effort of rethinking that; it's not really a bug, though it does\n> result in 64-bit Windows being more restricted than it has to be.\n\nHm, but why does that require the use of long? We could fairly trivially\ndefine a type that's guaranteed to be 32 bit on 32 bit platforms, and 64\nbit on 64 bit platforms. Even a dirty hack like using intptr_t instead\nof long would be better than using long.\n\n\n> Another problem is that while \"%lu\" format specifiers are portable,\n> INT64_FORMAT is a *big* pain, not least because you can't put it into\n> translatable strings without causing problems. To the extent that\n> we could go over to \"%zu\" instead, maybe this could be finessed,\n> but blind \"s/long/int64/g\" isn't going to be any fun.\n\nHm. It appears that gettext supports expanding PRId64 PRIu64 etc in\ntranslated strings. Perhaps we should implement them in our printf, and\nthen replace all use of INT64_FORMAT with that?\n\nI've not tested the gettext code, but it's there:\n\n/* Expand a system dependent string segment. Return NULL if unsupported. */\nstatic const char *\nget_sysdep_segment_value (const char *name)\n{\n /* Test for an ISO C 99 section 7.8.1 format string directive.\n Syntax:\n P R I { d | i | o | u | x | X }\n { { | LEAST | FAST } { 8 | 16 | 32 | 64 } | MAX | PTR } */\n /* We don't use a table of 14 times 6 'const char *' strings here, because\n data relocations cost startup time. */\n if (name[0] == 'P' && name[1] == 'R' && name[2] == 'I')\n...\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 29 Apr 2019 10:52:19 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: \"long\" type is not appropriate for counting tuples"
},
{
"msg_contents": "On Mon, Apr 29, 2019 at 10:32 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> There's more to that than you might realize. For example, guc.c\n> enforces a limit on work_mem that's designed to ensure that\n> expressions like \"work_mem * 1024L\" won't overflow, and there are\n> similar choices elsewhere.\n\nI was aware of that, but I wasn't aware of how many places that bleeds\ninto until I checked just now.\n\nIt would be nice if we could figure out how to make it obvious that\nthe idioms around the use of long for work_mem stuff are idioms that\nhave a specific rationale. It's pretty confusing as things stand.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Mon, 29 Apr 2019 10:56:44 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": true,
"msg_subject": "Re: \"long\" type is not appropriate for counting tuples"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2019-04-29 13:32:13 -0400, Tom Lane wrote:\n>> There's more to that than you might realize. For example, guc.c\n>> enforces a limit on work_mem that's designed to ensure that\n>> expressions like \"work_mem * 1024L\" won't overflow, and there are\n>> similar choices elsewhere. I'm not sure if we want to go to the\n>> effort of rethinking that; it's not really a bug, though it does\n>> result in 64-bit Windows being more restricted than it has to be.\n\n> Hm, but why does that require the use of long? We could fairly trivially\n> define a type that's guaranteed to be 32 bit on 32 bit platforms, and 64\n> bit on 64 bit platforms. Even a dirty hack like using intptr_t instead\n> of long would be better than using long.\n\nThe point is that\n\n(a) work_mem is an int; adding support to GUC for some other integer\nwidth would be an unreasonable amount of overhead.\n\n(b) \"1024L\" is a nice simple non-carpal-tunnel-inducing way to get\nthe right width of the product, for some value of \"right\".\n\nIf we don't want to rely on \"L\" constants then we'll have to write these\ncases like \"work_mem * (size_t) 1024\" which is ugly, lots more keystrokes,\nand prone to weird precedence problems unless you throw even more\nkeystrokes (parentheses) at it. I'm not excited about doing that just\nto allow larger work_mem settings on Win64.\n\n(But if we do go in this direction, maybe some notation like\n#define KILOBYTE ((size_t) 1024)\nwould help.)\n\nI'm not suggesting that we don't need to fix uses of \"long\" for tuple\ncounts, and perhaps other things. But I think getting rid of it in memory\nsize calculations might be a lot of work for not a lot of reward.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 29 Apr 2019 14:10:24 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: \"long\" type is not appropriate for counting tuples"
},
{
"msg_contents": "On Mon, Apr 29, 2019 at 11:10 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> If we don't want to rely on \"L\" constants then we'll have to write these\n> cases like \"work_mem * (size_t) 1024\" which is ugly, lots more keystrokes,\n> and prone to weird precedence problems unless you throw even more\n> keystrokes (parentheses) at it. I'm not excited about doing that just\n> to allow larger work_mem settings on Win64.\n\nI don't think that anybody cares about Win64 very much. Simplifying\nthe code might lead to larger work_mem settings on that platform, but\nthat's not the end goal I have in mind. For me, the end goal is\nsimpler code.\n\n> I'm not suggesting that we don't need to fix uses of \"long\" for tuple\n> counts, and perhaps other things. But I think getting rid of it in memory\n> size calculations might be a lot of work for not a lot of reward.\n\nWhether or not *fully* banning the use of \"long\" is something that\nwill simplify the code is debatable. However, we could substantially\nreduce the use of \"long\" across the backend without any real downside.\nThe work_mem question can be considered later. Does that seem\nreasonable?\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Mon, 29 Apr 2019 11:18:49 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": true,
"msg_subject": "Re: \"long\" type is not appropriate for counting tuples"
},
{
"msg_contents": "On 2019-Apr-28, Peter Geoghegan wrote:\n\n> Commit ab0dfc961b6 used a \"long\" variable within _bt_load() to count\n> the number of tuples entered into a B-Tree index as it is built. This\n> will not work as expected on Windows, even on 64-bit Windows, because\n> \"long\" is only 32-bits wide. It's far from impossible that you'd have\n> ~2 billion index tuples when building a new index.\n\nAgreed. Here's a patch. I see downthread that you also discovered the\nsame mistake in _h_indexbuild by grepping for \"long\"; I got to it by\nexamining callers of pgstat_progress_update_param and\npgstat_progress_update_multi_param. I didn't find any other mistakes of\nthe same ilk. Some codesites use \"double\" instead of \"int64\", but those\nare not broken.\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services",
"msg_date": "Mon, 29 Apr 2019 14:19:19 -0400",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: \"long\" type is not appropriate for counting tuples"
},
{
"msg_contents": "Hi,\n\nOn 2019-04-29 11:18:49 -0700, Peter Geoghegan wrote:\n> On Mon, Apr 29, 2019 at 11:10 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > If we don't want to rely on \"L\" constants then we'll have to write these\n> > cases like \"work_mem * (size_t) 1024\" which is ugly, lots more keystrokes,\n> > and prone to weird precedence problems unless you throw even more\n> > keystrokes (parentheses) at it. I'm not excited about doing that just\n> > to allow larger work_mem settings on Win64.\n> \n> I don't think that anybody cares about Win64 very much.\n\nI seriously doubt this assertion. Note that the postgres packages on\nhttps://www.postgresql.org/download/windows/ do not support 32bit\nwindows anymore (edb from 11 onwards, bigsql apparently always). And I\nthink there's a pretty substantial number of windows users out there.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 29 Apr 2019 11:24:23 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: \"long\" type is not appropriate for counting tuples"
},
{
"msg_contents": "On Mon, Apr 29, 2019 at 11:20 AM Alvaro Herrera\n<alvherre@2ndquadrant.com> wrote:\n> Agreed. Here's a patch. I see downthread that you also discovered the\n> same mistake in _h_indexbuild by grepping for \"long\"; I got to it by\n> examining callers of pgstat_progress_update_param and\n> pgstat_progress_update_multi_param. I didn't find any other mistakes of\n> the same ilk. Some codesites use \"double\" instead of \"int64\", but those\n> are not broken.\n\nThis seems fine, though FWIW I probably would have gone with int64\ninstead of uint64. There is generally no downside to using int64, and\nbeing to support negative integers can be useful in some contexts\n(though not this context).\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Mon, 29 Apr 2019 11:28:34 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": true,
"msg_subject": "Re: \"long\" type is not appropriate for counting tuples"
},
{
"msg_contents": "On Mon, Apr 29, 2019 at 11:24 AM Andres Freund <andres@anarazel.de> wrote:\n> > I don't think that anybody cares about Win64 very much.\n>\n> I seriously doubt this assertion. Note that the postgres packages on\n> https://www.postgresql.org/download/windows/ do not support 32bit\n> windows anymore (edb from 11 onwards, bigsql apparently always). And I\n> think there's a pretty substantial number of windows users out there.\n\nI was talking about the motivation behind this thread, and I suppose\nthat I included you in that based on things you've said about Windows\nin the past (apparently I shouldn't have done so).\n\nI am interested in making the code less complicated. If we can remove\nthe work_mem kludge for Windows as a consequence of that, then so much\nthe better.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Mon, 29 Apr 2019 11:31:44 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": true,
"msg_subject": "Re: \"long\" type is not appropriate for counting tuples"
},
{
"msg_contents": "On Tue, 30 Apr 2019 at 06:28, Peter Geoghegan <pg@bowt.ie> wrote:\n>\n> On Mon, Apr 29, 2019 at 11:20 AM Alvaro Herrera\n> <alvherre@2ndquadrant.com> wrote:\n> > Agreed. Here's a patch. I see downthread that you also discovered the\n> > same mistake in _h_indexbuild by grepping for \"long\"; I got to it by\n> > examining callers of pgstat_progress_update_param and\n> > pgstat_progress_update_multi_param. I didn't find any other mistakes of\n> > the same ilk. Some codesites use \"double\" instead of \"int64\", but those\n> > are not broken.\n>\n> This seems fine, though FWIW I probably would have gone with int64\n> instead of uint64. There is generally no downside to using int64, and\n> being to support negative integers can be useful in some contexts\n> (though not this context).\n\nCopyFrom() returns uint64. I think it's better to be consistent in the\ntypes we use to count tuples in commands.\n\n--\n David Rowley http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n\n",
"msg_date": "Tue, 30 Apr 2019 09:29:44 +1200",
"msg_from": "David Rowley <david.rowley@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: \"long\" type is not appropriate for counting tuples"
},
{
"msg_contents": "On 2019-Apr-30, David Rowley wrote:\n\n> On Tue, 30 Apr 2019 at 06:28, Peter Geoghegan <pg@bowt.ie> wrote:\n> >\n> > On Mon, Apr 29, 2019 at 11:20 AM Alvaro Herrera\n> > <alvherre@2ndquadrant.com> wrote:\n> > > Agreed. Here's a patch. I see downthread that you also discovered the\n> > > same mistake in _h_indexbuild by grepping for \"long\"; I got to it by\n> > > examining callers of pgstat_progress_update_param and\n> > > pgstat_progress_update_multi_param. I didn't find any other mistakes of\n> > > the same ilk. Some codesites use \"double\" instead of \"int64\", but those\n> > > are not broken.\n> >\n> > This seems fine, though FWIW I probably would have gone with int64\n> > instead of uint64. There is generally no downside to using int64, and\n> > being to support negative integers can be useful in some contexts\n> > (though not this context).\n> \n> CopyFrom() returns uint64. I think it's better to be consistent in the\n> types we use to count tuples in commands.\n\nThat's not a bad argument ... but I still committed it as int64, mostly\nbecause that's what pgstat_progress_update_param takes. Anyway, these\nare just local variables, not return values, so it's easily changeable\nif we determine (??) that unsigned is better.\n\nI don't know if anybody plans to do progress report for COPY, but I hope\nwe don't find ourselves in a problem when some user claims that they are\ninserting more than 2^63 but less than 2^64 tuples.\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Tue, 30 Apr 2019 10:41:07 -0400",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: \"long\" type is not appropriate for counting tuples"
},
{
"msg_contents": "Alvaro Herrera <alvherre@2ndquadrant.com> writes:\n> I don't know if anybody plans to do progress report for COPY, but I hope\n> we don't find ourselves in a problem when some user claims that they are\n> inserting more than 2^63 but less than 2^64 tuples.\n\nAt one tuple per nanosecond, it'd take a shade under 300 years to\nreach 2^63. Seems like a problem for our descendants to worry about.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 30 Apr 2019 10:51:23 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: \"long\" type is not appropriate for counting tuples"
},
{
"msg_contents": "On 2019-04-29 19:32, Tom Lane wrote:\n> Another problem is that while \"%lu\" format specifiers are portable,\n> INT64_FORMAT is a *big* pain, not least because you can't put it into\n> translatable strings without causing problems. To the extent that\n> we could go over to \"%zu\" instead, maybe this could be finessed,\n> but blind \"s/long/int64/g\" isn't going to be any fun.\n\nSince we control our own snprintf now, this could probably be addressed\nsomehow, right?\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Wed, 22 May 2019 16:39:41 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: \"long\" type is not appropriate for counting tuples"
},
{
"msg_contents": "On 2019-04-29 19:52, Andres Freund wrote:\n> Hm. It appears that gettext supports expanding PRId64 PRIu64 etc in\n> translated strings.\n\nThat won't work in non-GNU gettext.\n\n> Perhaps we should implement them in our printf, and\n> then replace all use of INT64_FORMAT with that?\n\nBut this might.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Wed, 22 May 2019 16:40:28 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: \"long\" type is not appropriate for counting tuples"
},
{
"msg_contents": "Hi,\n\nOn May 22, 2019 7:39:41 AM PDT, Peter Eisentraut <peter.eisentraut@2ndquadrant.com> wrote:\n>On 2019-04-29 19:32, Tom Lane wrote:\n>> Another problem is that while \"%lu\" format specifiers are portable,\n>> INT64_FORMAT is a *big* pain, not least because you can't put it into\n>> translatable strings without causing problems. To the extent that\n>> we could go over to \"%zu\" instead, maybe this could be finessed,\n>> but blind \"s/long/int64/g\" isn't going to be any fun.\n>\n>Since we control our own snprintf now, this could probably be addressed\n>somehow, right?\n\nz is for size_t though? Not immediately first how It'd help us?\n\nAndres\n-- \nSent from my Android device with K-9 Mail. Please excuse my brevity.\n\n\n",
"msg_date": "Wed, 22 May 2019 08:26:14 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: \"long\" type is not appropriate for counting tuples"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On May 22, 2019 7:39:41 AM PDT, Peter Eisentraut <peter.eisentraut@2ndquadrant.com> wrote:\n>> On 2019-04-29 19:32, Tom Lane wrote:\n>>> Another problem is that while \"%lu\" format specifiers are portable,\n>>> INT64_FORMAT is a *big* pain, not least because you can't put it into\n>>> translatable strings without causing problems. To the extent that\n>>> we could go over to \"%zu\" instead, maybe this could be finessed,\n>>> but blind \"s/long/int64/g\" isn't going to be any fun.\n\n>> Since we control our own snprintf now, this could probably be addressed\n>> somehow, right?\n\n> z is for size_t though? Not immediately first how It'd help us?\n\nYeah, z doesn't reliably translate to int64 either, so it's only useful\nwhen the number you're trying to print is a memory object size.\n\nI don't really see how controlling snprintf is enough to get somewhere\non this. Sure we could invent some new always-64-bit length modifier\nand teach snprintf.c about it, but none of the other tools we use\nwould know about it. I don't want to give up compiler cross-checking\nof printf formats, do you?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 22 May 2019 11:52:46 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: \"long\" type is not appropriate for counting tuples"
},
{
"msg_contents": "Hi,\n\nOn May 22, 2019 7:40:28 AM PDT, Peter Eisentraut <peter.eisentraut@2ndquadrant.com> wrote:\n>On 2019-04-29 19:52, Andres Freund wrote:\n>> Hm. It appears that gettext supports expanding PRId64 PRIu64 etc in\n>> translated strings.\n>\n>That won't work in non-GNU gettext.\n\nWhich of those do currently work with postgres?\n\nAndres\n-- \nSent from my Android device with K-9 Mail. Please excuse my brevity.\n\n\n",
"msg_date": "Wed, 22 May 2019 09:59:11 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: \"long\" type is not appropriate for counting tuples"
},
{
"msg_contents": "On 2019-05-22 18:59, Andres Freund wrote:\n> On May 22, 2019 7:40:28 AM PDT, Peter Eisentraut <peter.eisentraut@2ndquadrant.com> wrote:\n>> On 2019-04-29 19:52, Andres Freund wrote:\n>>> Hm. It appears that gettext supports expanding PRId64 PRIu64 etc in\n>>> translated strings.\n>>\n>> That won't work in non-GNU gettext.\n> \n> Which of those do currently work with postgres?\n\nI don't know what the current situation is, but in the past we've been\ngetting complaints when using GNU-specific features, mostly from Solaris\nI think.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Wed, 22 May 2019 21:06:19 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: \"long\" type is not appropriate for counting tuples"
},
{
"msg_contents": "On 2019-05-22 17:52, Tom Lane wrote:\n> I don't really see how controlling snprintf is enough to get somewhere\n> on this. Sure we could invent some new always-64-bit length modifier\n> and teach snprintf.c about it, but none of the other tools we use\n> would know about it. I don't want to give up compiler cross-checking\n> of printf formats, do you?\n\nCould we define int64 to be long long int on all platforms and just\nalways use %lld?\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Wed, 22 May 2019 21:07:40 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: \"long\" type is not appropriate for counting tuples"
},
{
"msg_contents": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com> writes:\n> Could we define int64 to be long long int on all platforms and just\n> always use %lld?\n\nHmmm ... maybe. Once upon a time we had to cope with compilers\nthat didn't have \"long long\", but perhaps that time is past.\n\nAnother conceivable hazard is somebody deciding that the world\nneeds a platform where \"long long\" is 128 bits. I don't know\nhow likely that is to happen.\n\nAs a first step, we could try asking configure to compute\nsizeof(long long) and seeing what the buildfarm makes of that.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 22 May 2019 15:21:53 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: \"long\" type is not appropriate for counting tuples"
},
{
"msg_contents": "On 2019-05-22 21:21, Tom Lane wrote:\n> Peter Eisentraut <peter.eisentraut@2ndquadrant.com> writes:\n>> Could we define int64 to be long long int on all platforms and just\n>> always use %lld?\n> \n> Hmmm ... maybe. Once upon a time we had to cope with compilers\n> that didn't have \"long long\", but perhaps that time is past.\n\nIt's required by C99, and the configure test for C99 checks it.\n\n> Another conceivable hazard is somebody deciding that the world\n> needs a platform where \"long long\" is 128 bits. I don't know\n> how likely that is to happen.\n\nAnother option is that in cases where it doesn't affect storage layouts,\nlike the counting tuples case that started this thread, code could just\nuse long long int directly instead of int64. Then if someone wants to\nmake it 128 bits or 96 bits or whatever it would not be a problem.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Thu, 23 May 2019 11:31:38 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: \"long\" type is not appropriate for counting tuples"
},
{
"msg_contents": "On Thu, May 23, 2019 at 5:31 AM Peter Eisentraut\n<peter.eisentraut@2ndquadrant.com> wrote:\n> Another option is that in cases where it doesn't affect storage layouts,\n> like the counting tuples case that started this thread, code could just\n> use long long int directly instead of int64. Then if someone wants to\n> make it 128 bits or 96 bits or whatever it would not be a problem.\n\nI think that sort of thing tends not to work out well, because at some\npoint it's likely to be sent out via the wire protocol; at that point\nwe'll need a value of a certain width. Better to use that width right\nfrom the beginning.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Thu, 23 May 2019 09:52:41 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: \"long\" type is not appropriate for counting tuples"
},
{
"msg_contents": "On 2019-05-23 15:52, Robert Haas wrote:\n> On Thu, May 23, 2019 at 5:31 AM Peter Eisentraut\n> <peter.eisentraut@2ndquadrant.com> wrote:\n>> Another option is that in cases where it doesn't affect storage layouts,\n>> like the counting tuples case that started this thread, code could just\n>> use long long int directly instead of int64. Then if someone wants to\n>> make it 128 bits or 96 bits or whatever it would not be a problem.\n> \n> I think that sort of thing tends not to work out well, because at some\n> point it's likely to be sent out via the wire protocol; at that point\n> we'll need a value of a certain width. Better to use that width right\n> from the beginning.\n\nHmm, by that argument, we shouldn't ever use any integer type other than\nint16, int32, and int64.\n\nI'm thinking for example that pgbench makes a lot of use of int64 and\nprinting that out makes quite messy code. Replacing that by long long\nint would make this much nicer and should be pretty harmless relative to\nyour concern.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Thu, 23 May 2019 16:20:57 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: \"long\" type is not appropriate for counting tuples"
},
{
"msg_contents": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com> writes:\n> On 2019-05-23 15:52, Robert Haas wrote:\n>> On Thu, May 23, 2019 at 5:31 AM Peter Eisentraut\n>> <peter.eisentraut@2ndquadrant.com> wrote:\n>>> Another option is that in cases where it doesn't affect storage layouts,\n>>> like the counting tuples case that started this thread, code could just\n>>> use long long int directly instead of int64. Then if someone wants to\n>>> make it 128 bits or 96 bits or whatever it would not be a problem.\n\n>> I think that sort of thing tends not to work out well, because at some\n>> point it's likely to be sent out via the wire protocol; at that point\n>> we'll need a value of a certain width. Better to use that width right\n>> from the beginning.\n\n> Hmm, by that argument, we shouldn't ever use any integer type other than\n> int16, int32, and int64.\n> I'm thinking for example that pgbench makes a lot of use of int64 and\n> printing that out makes quite messy code. Replacing that by long long\n> int would make this much nicer and should be pretty harmless relative to\n> your concern.\n\nIt does seem attractive to use long long in cases where we're not too\nfussed about the exact width. OTOH, that reasoning was exactly why we\nused \"long\" in a lot of places back in the day, and sure enough it came\nback to bite us.\n\nOn the whole I think I could live with a policy that says \"tuple counts\nshall be 'long long' when being passed around in code, but for persistent\nstorage or wire-protocol transmission, use 'int64'\".\n\nAn alternative and much narrower policy is to say it's okay to do this\nwith an int64 value:\n\n\tprintf(\"processed %lld tuples\", (long long) count);\n\nIn such code, all we're assuming is long long >= 64 bits, which\nis completely safe per C99, and we dodge the need for a\nplatform-varying format string.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 23 May 2019 10:34:32 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: \"long\" type is not appropriate for counting tuples"
},
{
"msg_contents": "On 2019-05-23 16:34, Tom Lane wrote:\n> On the whole I think I could live with a policy that says \"tuple counts\n> shall be 'long long' when being passed around in code, but for persistent\n> storage or wire-protocol transmission, use 'int64'\".\n> \n> An alternative and much narrower policy is to say it's okay to do this\n> with an int64 value:\n> \n> \tprintf(\"processed %lld tuples\", (long long) count);\n> \n> In such code, all we're assuming is long long >= 64 bits, which\n> is completely safe per C99, and we dodge the need for a\n> platform-varying format string.\n\nSome combination of this seems quite reasonable.\n\nAttached is a patch to implement this in a handful of cases that are\nparticularly verbose right now. I think those are easy wins.\n\n(Also a second patch that makes use of %zu for size_t where this was not\nyet done.)\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services",
"msg_date": "Thu, 6 Jun 2019 14:50:49 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: \"long\" type is not appropriate for counting tuples"
},
{
"msg_contents": "On Thu, May 23, 2019 at 10:21 AM Peter Eisentraut\n<peter.eisentraut@2ndquadrant.com> wrote:\n> Hmm, by that argument, we shouldn't ever use any integer type other than\n> int16, int32, and int64.\n\nI think we basically shouldn't. I mean it's fine to use 'int' as a\nflags argument as part of an internal API, or as a loop counter\nprivate to a function or something. But if you are passing around\nvalues that involve on-disk compatibility or wire protocol\ncompatibility, it's just a recipe for bugs. If the code has to\nsometimes cast a value to some other type, somebody may do it wrong.\nIf there's a uniform rule that tuple counts are always int64, that's\npretty easy to understand.\n\nIn short, when a certain kind of value is widely-used, it should have\na clearly-declared width.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Thu, 6 Jun 2019 08:58:09 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: \"long\" type is not appropriate for counting tuples"
},
{
"msg_contents": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com> writes:\n> Attached is a patch to implement this in a handful of cases that are\n> particularly verbose right now. I think those are easy wins.\n> (Also a second patch that makes use of %zu for size_t where this was not\n> yet done.)\n\nI took a look through these and see nothing objectionable. There are\nprobably more places that can be improved, but we need not insist on\ngetting every such place in one go.\n\nPer Robert's position that variables ought to have well-defined widths,\nthere might be something to be said for not touching the variable\ndeclarations that you changed from int64 to long long, and instead\ncasting them to long long in the sprintf calls. But I'm not really\nconvinced that that's better than what you've done.\n\nMarked CF entry as ready-for-committer.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 02 Jul 2019 16:56:01 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: \"long\" type is not appropriate for counting tuples"
},
{
"msg_contents": "On 2019-07-02 22:56, Tom Lane wrote:\n> I took a look through these and see nothing objectionable. There are\n> probably more places that can be improved, but we need not insist on\n> getting every such place in one go.\n> \n> Per Robert's position that variables ought to have well-defined widths,\n> there might be something to be said for not touching the variable\n> declarations that you changed from int64 to long long, and instead\n> casting them to long long in the sprintf calls. But I'm not really\n> convinced that that's better than what you've done.\n> \n> Marked CF entry as ready-for-committer.\n\ncommitted\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Thu, 4 Jul 2019 17:02:59 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: \"long\" type is not appropriate for counting tuples"
},
{
"msg_contents": "Hi,\n\nOn 2019-05-22 16:40:28 +0200, Peter Eisentraut wrote:\n> On 2019-04-29 19:52, Andres Freund wrote:\n> > Hm. It appears that gettext supports expanding PRId64 PRIu64 etc in\n> > translated strings.\n> \n> That won't work in non-GNU gettext.\n\nWhich of those do we actually support? We already depend on\nbind_textdomain_codeset, which afaict wasn't present in older gettext\nimplementations.\n\n- Andres\n\n\n",
"msg_date": "Wed, 14 Aug 2019 11:17:09 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: \"long\" type is not appropriate for counting tuples"
},
{
"msg_contents": "On 2019-08-14 20:17, Andres Freund wrote:\n> On 2019-05-22 16:40:28 +0200, Peter Eisentraut wrote:\n>> On 2019-04-29 19:52, Andres Freund wrote:\n>>> Hm. It appears that gettext supports expanding PRId64 PRIu64 etc in\n>>> translated strings.\n>>\n>> That won't work in non-GNU gettext.\n> \n> Which of those do we actually support? We already depend on\n> bind_textdomain_codeset, which afaict wasn't present in older gettext\n> implementations.\n\nAt least in theory we support Solaris gettext. In the past we\noccasionally got complaints when we broke something there.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Fri, 6 Sep 2019 14:00:51 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: \"long\" type is not appropriate for counting tuples"
}
] |
[
{
"msg_contents": "Hi Team,\n\nLet us say we have a Master (M1) and a Slave (S1) in replication using\nStreaming Replication.\n\nI stopped all my writes from Application and i switched a WAL and made sure\nit is replicated to Slave.\nI have then shutdown M1. And ran a promote on S1.\nNow S1 is my new Master with a new timeline.\n\nNow, in order to let M1 replicate changes from S1 (Master) as a Slave, i am\nable to succeed with the following approach.\n\nAdd recovery_target_timeline = 'latest' and then have the appropriate\nentries such as primary_conninfo, standby_mode in the recovery.conf and\nstart the M1 using pg_ctl.\n\nI see that it M1 (Old Master) is able to catch up with S1 (New Master). And\nreplication is going fine.\nHave you ever faced or think of a problem with this approach ?\n\nPoints to note are :\n1. Master was neatly SHUTDOWN after shutting down writes. So, it has not\ndiverged. (If it is diverged, i would of course need a pg_rewind like\napproach).\n2. It was a planned switchover. During this entire process, there are no\nwrites to M1 (before Switchover) or S1 (after promote).\n3. timeline history file is also accessible to the Old Master (M1) after S1\nwas promoted. No transactions, so no WALs generated, may be 1 or 2\nconsidering timeout, etc.\n\nIt looks like a clean approach, but do you think there could be a problem\nwith this approach of rebuilding Old Master as a Slave ? Is this approach\nstill okay ?\n\nThanks,\nAvinash Vallarapu.\n\nHi Team,Let us say we have a Master (M1) and a Slave (S1) in replication using Streaming Replication.I stopped all my writes from Application and i switched a WAL and made sure it is replicated to Slave. I have then shutdown M1. And ran a promote on S1. Now S1 is my new Master with a new timeline. Now, in order to let M1 replicate changes from S1 (Master) as a Slave, i am able to succeed with the following approach. 
Add recovery_target_timeline = 'latest' and then have the appropriate entries such as primary_conninfo, standby_mode in the recovery.conf and start the M1 using pg_ctl.I see that it M1 (Old Master) is able to catch up with S1 (New Master). And replication is going fine. Have you ever faced or think of a problem with this approach ?Points to note are : 1. Master was neatly SHUTDOWN after shutting down writes. So, it has not diverged. (If it is diverged, i would of course need a pg_rewind like approach).2. It was a planned switchover. During this entire process, there are no writes to M1 (before Switchover) or S1 (after promote). 3. timeline history file is also accessible to the Old Master (M1) after S1 was promoted. No transactions, so no WALs generated, may be 1 or 2 considering timeout, etc. It looks like a clean approach, but do you think there could be a problem with this approach of rebuilding Old Master as a Slave ? Is this approach still okay ? Thanks,Avinash Vallarapu.",
"msg_date": "Mon, 29 Apr 2019 00:28:31 -0300",
"msg_from": "Avinash Kumar <avinash.vallarapu@gmail.com>",
"msg_from_op": true,
"msg_subject": "Do you see any problems with this procedure for Old Master rebuild as\n a Slave upon switchover ?"
},
{
"msg_contents": "Hello.\n\nAt Mon, 29 Apr 2019 00:28:31 -0300, Avinash Kumar <avinash.vallarapu@gmail.com> wrote in <CAN0Tujf0JJnC8RqbqBLX1xez9SGDSZhi6oc3JhB06WcDk1TTAQ@mail.gmail.com>\n> Hi Team,\n> \n> Let us say we have a Master (M1) and a Slave (S1) in replication using\n> Streaming Replication.\n> \n> I stopped all my writes from Application and i switched a WAL and made sure\n> it is replicated to Slave.\n> I have then shutdown M1. And ran a promote on S1.\n> Now S1 is my new Master with a new timeline.\n> \n> Now, in order to let M1 replicate changes from S1 (Master) as a Slave, i am\n> able to succeed with the following approach.\n> \n> Add recovery_target_timeline = 'latest' and then have the appropriate\n> entries such as primary_conninfo, standby_mode in the recovery.conf and\n> start the M1 using pg_ctl.\n> \n> I see that it M1 (Old Master) is able to catch up with S1 (New Master). And\n> replication is going fine.\n> Have you ever faced or think of a problem with this approach ?\n> \n> Points to note are :\n> 1. Master was neatly SHUTDOWN after shutting down writes. So, it has not\n> diverged. (If it is diverged, i would of course need a pg_rewind like\n> approach).\n> 2. It was a planned switchover. During this entire process, there are no\n> writes to M1 (before Switchover) or S1 (after promote).\n\nNo normal backends remain at the time of the final\ncheckpoint. And walsender terminates after the final checkpoint\nand archiving are done. So that is assured by design if no\ntrouble happens elsewhere.\n\n> 3. timeline history file is also accessible to the Old Master (M1) after S1\n> was promoted. No transactions, so no WALs generated, may be 1 or 2\n> considering timeout, etc.\n\nNote that no transactons doesn't mean no WALs. There're WAL\nrecords that have roots in other than transatcion activities like\nRUNNING_XACTS. 
(This doesn't deny the discussion above.)\n\n> It looks like a clean approach, but do you think there could be a problem\n> with this approach of rebuilding Old Master as a Slave ? Is this approach\n> still okay ?\n\nIt's just my personal view, but I don't fully trust on the\nassumption. pg_rewind does nothing if the two servers didn't\ndiverge. So I think there's no reason to hesitate to run\npg_rewind to make sure the new standby can be used safely as-is\nin the case. (Note that pg_rewind misdiagnoses that the new\nmaster is on the same timeline with the old master before the\nfirst checkpoint finishes after standby's promotion.)\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n\n",
"msg_date": "Tue, 07 May 2019 12:43:25 +0900 (Tokyo Standard Time)",
"msg_from": "Kyotaro HORIGUCHI <horiguchi.kyotaro@lab.ntt.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: Do you see any problems with this procedure for Old Master\n rebuild as a Slave upon switchover ?"
}
] |
[
{
"msg_contents": "Reading code I noticed that we in a few rare instances use strdup() in frontend\nutilities instead of pg_strdup(). Is there a reason for not using pg_strdup()\nconsistently as per the attached patch?\n\ncheers ./daniel",
"msg_date": "Mon, 29 Apr 2019 11:47:27 +0000",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": true,
"msg_subject": "Plain strdup() in frontend code"
},
{
"msg_contents": "On Mon, Apr 29, 2019 at 11:47:27AM +0000, Daniel Gustafsson wrote:\n> Reading code I noticed that we in a few rare instances use strdup() in frontend\n> utilities instead of pg_strdup(). Is there a reason for not using pg_strdup()\n> consistently as per the attached patch?\n\nI think that it is good practice to encourage its use, so making\nthings more consistent is a good idea. While on it, we could also\nswitch psql's do_lo_import() which uses a malloc() to\npg_malloc_extended() with MCXT_ALLOC_NO_OOM. GetPrivilegesToDelete()\nin pg_ctl also has an instance of malloc() with a similar failure\nmode.\n--\nMichael",
"msg_date": "Mon, 29 Apr 2019 22:01:26 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Plain strdup() in frontend code"
},
{
"msg_contents": "On Monday, April 29, 2019 3:01 PM, Michael Paquier <michael@paquier.xyz> wrote:\n\n> On Mon, Apr 29, 2019 at 11:47:27AM +0000, Daniel Gustafsson wrote:\n>\n> > Reading code I noticed that we in a few rare instances use strdup() in frontend\n> > utilities instead of pg_strdup(). Is there a reason for not using pg_strdup()\n> > consistently as per the attached patch?\n>\n> I think that it is good practice to encourage its use, so making\n> things more consistent is a good idea. While on it, we could also\n> switch psql's do_lo_import() which uses a malloc() to\n> pg_malloc_extended() with MCXT_ALLOC_NO_OOM. GetPrivilegesToDelete()\n> in pg_ctl also has an instance of malloc() with a similar failure\n> mode.\n\nGood point, I've updated the patch to include those as well.\n\n\ncheers ./daniel",
"msg_date": "Mon, 29 Apr 2019 13:35:12 +0000",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": true,
"msg_subject": "Re: Plain strdup() in frontend code"
},
{
"msg_contents": "On Mon, Apr 29, 2019 at 01:35:12PM +0000, Daniel Gustafsson wrote:\n> Good point, I've updated the patch to include those as well.\n\nI have been reviewing this patch, and the change in pg_waldump is\nactually a good thing, as we could finish with a crash if strdup()\nreturns NULL as the pointer gets directly used, and there would be an\nassertion failure in open_file_in_directory().\n\nparseAclItem() in dumputils.c gets changed so as we would not return\nfalse on OOM anymore. I think that the current code is a bug as\nparseAclItem() should return false to the caller only on a parsing\nerror so the caller can get confused between the OOM on strdup() and a\nparsing problem.\n\nIn short, as presented, the patch looks acceptable to me. Are there\nany objections to apply it on HEAD?\n--\nMichael",
"msg_date": "Tue, 30 Apr 2019 10:23:51 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Plain strdup() in frontend code"
},
{
"msg_contents": "On Tue, Apr 30, 2019 at 10:23:51AM +0900, Michael Paquier wrote:\n> In short, as presented, the patch looks acceptable to me. Are there\n> any objections to apply it on HEAD?\n\nAnd committed.\n--\nMichael",
"msg_date": "Sat, 4 May 2019 16:33:47 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Plain strdup() in frontend code"
}
] |
[
{
"msg_contents": "Two random typos spotted while perusing code in src/bin.\n\ncheers ./daniel",
"msg_date": "Mon, 29 Apr 2019 13:40:26 +0000",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": true,
"msg_subject": "Typofixes in src/bin"
},
{
"msg_contents": "On Mon, Apr 29, 2019 at 01:40:26PM +0000, Daniel Gustafsson wrote:\n> Two random typos spotted while perusing code in src/bin.\n\nThanks, fixed.\n--\nMichael",
"msg_date": "Mon, 29 Apr 2019 23:54:35 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Typofixes in src/bin"
}
] |
[
{
"msg_contents": "Hi Guys,\n\nI wanted to get some thoughts about a type-specific performance problem \nwe hit\nthrough our application tier.\n\nThe full conversation is here: \nhttps://github.com/npgsql/npgsql/issues/2283\n\nBasically, if a table exists with a PK which is CHAR(n) and a query is \nsent with\nVARCHAR or CHAR then it uses an Index Scan. If the query is sent with \nTEXT as the\ntype then postgresql casts the column to TEXT (rather than the value to \nCHAR) and\nit does a Seq Scan.\n\nSo far this has only showed itself on npgsql (I've been unable to \nreproduce on\nother drivers), I think it's because npgsql only sends TEXT whereas \nother drivers\ntend to send VARCHAR (other drivers includes the official JDBC driver).\n\nI guess the root question is: is TEXT supposed to be identical to \nVARCHAR in all scenarios?\n\nThanks,\nRob\n\n\n",
"msg_date": "Mon, 29 Apr 2019 17:40:01 +0100",
"msg_from": "Rob <postgresql@mintsoft.net>",
"msg_from_op": true,
"msg_subject": "CHAR vs NVARCHAR vs TEXT performance"
},
{
"msg_contents": "Rob <postgresql@mintsoft.net> writes:\n> Basically, if a table exists with a PK which is CHAR(n) and a query is\n> sent with VARCHAR or CHAR then it uses an Index Scan. If the query is\n> sent with TEXT as the type then postgresql casts the column to TEXT\n> (rather than the value to CHAR) and it does a Seq Scan.\n\nYeah, this is an artifact of the fact that text is considered a\n\"preferred type\" so it wins out in the parser's choice of which\ntype to promote to. See\n\nhttps://www.postgresql.org/docs/current/typeconv-oper.html\n\n> I guess the root question is: is TEXT supposed to be identical to \n> VARCHAR in all scenarios?\n\nIt's not for this purpose, because varchar isn't a preferred type.\n\nFWIW, my recommendation for this sort of thing is almost always\nto not use CHAR(n). The use-case for that datatype pretty much\ndisappeared with the last IBM Model 029 card punch.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 29 Apr 2019 13:43:57 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: CHAR vs NVARCHAR vs TEXT performance"
},
{
"msg_contents": "I agree in principle, however in this particular scenario it's not\nour schema so we're a little reluctant to migrate the types etc.\n\nWe're in a bit of a bad place because the combination of NHibernate\n+ npgsql3/4 + this table = seqScans everywhere. Basically when npgsql\nchanged their default type for strings from VARCHAR to TEXT it caused\nthis behaviour.\n\nI suppose the follow up question is: should drivers\ndefault to sending types that are preferred by postgres (i.e. TEXT)\nrather than compatible types (VARCHAR). If so, is there a reason why\nthe JDBC driver doesn't send TEXT (possibly a question for the JDBC\nguys rather than here)?\n\nThanks,\nRob\n\nOn 2019-04-30 00:16, Thomas Munro wrote:\n> On Tue, Apr 30, 2019 at 5:44 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> FWIW, my recommendation for this sort of thing is almost always\n>> to not use CHAR(n). The use-case for that datatype pretty much\n>> disappeared with the last IBM Model 029 card punch.\n> \n> +1 on the recommendation for PostgreSQL.\n> \n> I do think it's useful on slightly more recent IBM technology than the\n> 029 though. It's been a few years since I touched it, but DB2 manuals\n> and experts in this decade recommended fixed size types in some\n> circumstances, and they might in theory be useful on any\n> in-place-update system (and maybe us in some future table AM?). For\n> example, you can completely exclude the possibility of having to spill\n> to another page when updating (DB2 DBAs measure and complain about\n> rate of 'overflow' page usage which they consider failure and we\n> consider SOP), you can avoid wasting space on the length (at the cost\n> of wasting space on trailing spaces, if the contents vary in length),\n> you can get O(1) access to fixed sized attributes (perhaps even\n> updating single attributes). These aren't nothing, and I've seen DB2\n> DBAs get TPS improvements from that kind of stuff. 
(From memory this\n> type of thing was also a reason to think carefully about which tables\n> should use compression, because the fixed size space guarantees went\n> out the window.).\n\n\n\n",
"msg_date": "Tue, 30 Apr 2019 10:43:20 +0100",
"msg_from": "Rob <postgresql@mintsoft.net>",
"msg_from_op": true,
"msg_subject": "Re: CHAR vs NVARCHAR vs TEXT performance"
},
{
"msg_contents": ">\n>\n> > On Tue, Apr 30, 2019 at 5:44 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >> FWIW, my recommendation for this sort of thing is almost always\n> >> to not use CHAR(n). The use-case for that datatype pretty much\n> >> disappeared with the last IBM Model 029 card punch.\n> ...\n>\n>\n>\nPerhaps the \"tip\" on the character datatype page (\nhttps://www.postgresql.org/docs/11/datatype-character.html) should be\nupdated as the statement \"There is no performance difference among these\nthree types...\" could easily lead a reader down the wrong path. The\nstatement may be true if one assumes the planner is able to make an optimal\nchoice but clearly there are cases that prevent that. If the situation is\nbetter explained elsewhere in the documentation then just a link to that\nexplanation may be all that is needed.\n\nCheers,\nSteve\n\n\n> On Tue, Apr 30, 2019 at 5:44 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> FWIW, my recommendation for this sort of thing is almost always\n>> to not use CHAR(n). The use-case for that datatype pretty much\n>> disappeared with the last IBM Model 029 card punch....\n\nPerhaps the \"tip\" on the character datatype page (https://www.postgresql.org/docs/11/datatype-character.html) should be updated as the statement \"There is no performance difference among these three types...\" could easily lead a reader down the wrong path. The statement may be true if one assumes the planner is able to make an optimal choice but clearly there are cases that prevent that. If the situation is better explained elsewhere in the documentation then just a link to that explanation may be all that is needed.Cheers,Steve",
"msg_date": "Tue, 30 Apr 2019 10:59:26 -0700",
"msg_from": "Steve Crawford <scrawford@pinpointresearch.com>",
"msg_from_op": false,
"msg_subject": "Re: CHAR vs NVARCHAR vs TEXT performance"
}
] |
[
{
"msg_contents": "\nHello devs,\n\nOn my SSD Ubuntu laptop, with postgres-distributed binaries and unmodified \ndefault settings using local connections:\n\n ## pg 11.2\n > time pgbench -i -s 100\n ...\n done in 31.51 s\n # (drop tables 0.00 s, create tables 0.01 s, generate 21.30 s, vacuum 3.32 s, primary keys 6.88 s).\n # real 0m31.524s\n\n ## pg 12devel (cd3e2746)\n > time pgbench -i -s 100\n # done in 38.68 s\n # (drop tables 0.00 s, create tables 0.02 s, generate 29.70 s, vacuum 2.92 s, primary keys 6.04 s).\n real 0m38.695s\n\nThat is an overall +20% regression, and about 40% on the generate phase \nalone. This is not a fluke, repeating the procedure shows similar results.\n\nIs it the same for other people out there, or is it only something related \nto my setup?\n\nWhat change could explain such a significant performance regression?\n\n-- \nFabien.\n\n\n",
"msg_date": "Tue, 30 Apr 2019 07:12:03 +0200 (CEST)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": true,
"msg_subject": "performance regression when filling in a table"
},
{
"msg_contents": "Hi,\n\nOn 2019-04-30 07:12:03 +0200, Fabien COELHO wrote:\n> On my SSD Ubuntu laptop, with postgres-distributed binaries and unmodified\n> default settings using local connections:\n\n> ## pg 11.2\n> > time pgbench -i -s 100\n> ...\n> done in 31.51 s\n> # (drop tables 0.00 s, create tables 0.01 s, generate 21.30 s, vacuum 3.32 s, primary keys 6.88 s).\n> # real 0m31.524s\n> \n> ## pg 12devel (cd3e2746)\n> > time pgbench -i -s 100\n> # done in 38.68 s\n> # (drop tables 0.00 s, create tables 0.02 s, generate 29.70 s, vacuum 2.92 s, primary keys 6.04 s).\n> real 0m38.695s\n> \n> That is an overall +20% regression, and about 40% on the generate phase\n> alone. This is not a fluke, repeating the procedure shows similar results.\n> \n> Is it the same for other people out there, or is it only something related\n> to my setup?\n> \n> What change could explain such a significant performance regression?\n\nI think the pre-release packages have had assertions enabled at some\npoint. I suggest checking that. If it's not that, profiles would be\nhelpful.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 30 Apr 2019 00:13:37 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: performance regression when filling in a table"
},
{
"msg_contents": "\nHello Andres,\n\n>> ## pg 11.2 done in 31.51 s\n>> ## pg 12devel (cd3e2746) real 0m38.695s\n>>\n>> What change could explain such a significant performance regression?\n>\n> I think the pre-release packages have had assertions enabled at some\n> point. I suggest checking that. If it's not that, profiles would be\n> helpful.\n\nThanks for the pointer.\n\nAfter some more tests based on versions compiled from sources, the \nsituation is different, and I was (maybe) mostly identifying another \neffect not related to postgres version.\n\nThe effect is that the first generation seems to take more time, but \ndropping the table and regenerating again much less, with a typical 40% \nperformance improvement between first and second run, independently of the \nversion. The reported figures above where comparisons between first for \npg12 and second or later for pg11.\n\nSo I was wrong, there is no significant performance regression per se, \nthe two versions behave mostly the same.\n\nI'm interested if someone has an explanation about why the first run is so \nbad or others are so good. My wide guess is that there is some space reuse \nunder the hood, although I do not know enough about the details to \nconfirm.\n\nA few relatively bad news nevertheless:\n\nPerformances are quite unstable, with index generation on the same scale \n100 data taking anything from 6 to 15 seconds over runs.\n\nDoing a VACUUM and checksums interact badly: vacuum time jumps from 3 \nseconds to 30 seconds:-(\n\n-- \nFabien.\n\n\n",
"msg_date": "Tue, 30 Apr 2019 12:32:13 +0200 (CEST)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": true,
"msg_subject": "Re: performance regression when filling in a table"
},
{
"msg_contents": "Hi,\n\nOn 2019-04-30 12:32:13 +0200, Fabien COELHO wrote:\n> The effect is that the first generation seems to take more time, but\n> dropping the table and regenerating again much less, with a typical 40%\n> performance improvement between first and second run, independently of the\n> version. The reported figures above where comparisons between first for pg12\n> and second or later for pg11.\n\nYea, that's pretty normal. The likely difference is that in the repeated\ncase you'll have WAL files ready to be recycled. I'd assume that the\ndifference between the runs would be much smaller if used unlogged\ntables (or WAL on a ramdisk or somesuch).\n\n\n> Performances are quite unstable, with index generation on the same scale 100\n> data taking anything from 6 to 15 seconds over runs.\n\nHow comparable are the runs? Are you restarting postgres inbetween?\nPerform checkpoints?\n\n\n> Doing a VACUUM and checksums interact badly: vacuum time jumps from 3\n> seconds to 30 seconds:-(\n\nHm, that's more than I normally see. What exactly did you do there?\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 30 Apr 2019 10:49:50 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: performance regression when filling in a table"
},
{
"msg_contents": "\nHello Andres,\n\n>> The effect is that the first generation seems to take more time, but\n>> dropping the table and regenerating again much less, with a typical 40%\n>> performance improvement between first and second run, independently of the\n>> version. The reported figures above where comparisons between first for pg12\n>> and second or later for pg11.\n>\n> Yea, that's pretty normal. The likely difference is that in the repeated\n> case you'll have WAL files ready to be recycled. I'd assume that the\n> difference between the runs would be much smaller if used unlogged\n> tables (or WAL on a ramdisk or somesuch).\n\nI tried unlogged, and indeed the first run is no different from subsequent \nones.\n\n>> Performances are quite unstable, with index generation on the same scale 100\n>> data taking anything from 6 to 15 seconds over runs.\n>\n> How comparable are the runs?\n\nSee below for a taste.\n\n> Are you restarting postgres inbetween?\n\nNope. Trying once did not change the measures.\n\n> Perform checkpoints?\n\nNope, but with the default settings there is one avery five minutes. I'm \nnot sure a checkpoint should have a significant impact on a COPY \ninitialization.\n\n>> Doing a VACUUM and checksums interact badly: vacuum time jumps from 3\n>> seconds to 30 seconds:-(\n>\n> Hm, that's more than I normally see. 
What exactly did you do there?\n\nI simply ran \"pgbench -i -s 100\" on master, with \nhttps://commitfest.postgresql.org/23/2085/ thrown in for detailed stats.\n\nWithout checksums:\n\n # init\n 37.68 s (drop tables 0.00 s, create tables 0.02 s, generate 27.12 s, vacuum 2.97 s, primary keys 7.56 s)\n 30.53 s (drop tables 0.25 s, create tables 0.01 s, generate 16.64 s, vacuum 3.47 s, primary keys 10.16 s)\n 36.31 s (drop tables 0.25 s, create tables 0.01 s, generate 18.94 s, vacuum 3.40 s, primary keys 13.71 s)\n 31.34 s (drop tables 0.23 s, create tables 0.01 s, generate 19.07 s, vacuum 3.00 s, primary keys 9.03 s)\n # reinit\n 38.25 s (drop tables 0.00 s, create tables 0.03 s, generate 29.33 s, vacuum 3.10 s, primary keys 5.80 s)\n 35.16 s (drop tables 0.25 s, create tables 0.01 s, generate 17.62 s, vacuum 2.62 s, primary keys 14.67 s)\n 29.15 s (drop tables 0.25 s, create tables 0.01 s, generate 17.35 s, vacuum 2.98 s, primary keys 8.55 s)\n 32.70 s (drop tables 0.25 s, create tables 0.01 s, generate 21.49 s, vacuum 2.65 s, primary keys 8.29 s)\n # reinit\n 42.39 s (drop tables 0.00 s, create tables 0.03 s, generate 33.98 s, vacuum 2.16 s, primary keys 6.23 s)\n 31.24 s (drop tables 0.24 s, create tables 0.01 s, generate 17.34 s, vacuum 4.74 s, primary keys 8.91 s)\n 26.91 s (drop tables 0.24 s, create tables 0.01 s, generate 16.83 s, vacuum 2.89 s, primary keys 6.94 s)\n 29.00 s (drop tables 0.25 s, create tables 0.01 s, generate 17.78 s, vacuum 2.97 s, primary keys 7.98 s)\n\nWith checksum enabled:\n\n # init\n 73.84 s (drop tables 0.00 s, create tables 0.03 s, generate 32.81 s, vacuum 34.95 s, primary keys 6.06 s)\n 61.49 s (drop tables 0.24 s, create tables 0.01 s, generate 18.55 s, vacuum 33.26 s, primary keys 9.42 s)\n 62.79 s (drop tables 0.24 s, create tables 0.01 s, generate 21.08 s, vacuum 33.50 s, primary keys 7.96 s)\n 58.77 s (drop tables 0.23 s, create tables 0.06 s, generate 21.98 s, vacuum 31.21 s, primary keys 5.30 s)\n # restart\n 63.77 s (drop tables 0.04 s, create tables 0.02 s, generate 17.37 s, vacuum 40.84 s, primary keys 5.51 s)\n 64.48 s (drop tables 0.22 s, create tables 0.01 s, generate 19.84 s, vacuum 33.43 s, primary keys 10.98 s)\n 64.10 s (drop tables 0.23 s, create tables 0.01 s, generate 22.11 s, vacuum 33.17 s, primary keys 8.57 s)\n # reinit\n 71.65 s (drop tables 0.00 s, create tables 0.03 s, generate 34.23 s, vacuum 31.67 s, primary keys 5.72 s)\n 64.33 s (drop tables 0.23 s, create tables 0.01 s, generate 21.31 s, vacuum 36.58 s, primary keys 6.20 s)\n 62.06 s (drop tables 0.23 s, create tables 0.02 s, generate 19.15 s, vacuum 37.34 s, primary keys 5.32 s)\n\nDetailed figure are not visibly
different (other reported figures about \nchecksum vs no checksum suggested a few percent impact), but for VACUUM \nwhere it is closer to a thousand percent. Cassert is off, this is not the \nissue. Hmmm.\n\n-- \nFabien.\n\n\n",
"msg_date": "Wed, 1 May 2019 03:53:41 +0200 (CEST)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": true,
"msg_subject": "Re: performance regression when filling in a table"
}
] |
[
{
"msg_contents": "With master (3dbb317d32f4f084ef7badaed8ef36f5c9b85fe6) I'm getting this: \nvisena=# CREATE INDEX origo_email_part_hdrvl_value_idx ON \npublic.origo_email_part_headervalue USING btree \n(lower(substr((header_value)::text, 0, 1000)) varchar_pattern_ops);\n psql: ERROR: failed to add item to the index page\nThe schema looks like this: create table origo_email_part_headervalue ( \nentity_idBIGSERIAL PRIMARY KEY, version int8 not null, header_value varchar NOT \nNULL, header_id int8 references origo_email_part_header (entity_id), value_index\nint NOT NULL DEFAULT0, UNIQUE (header_id, value_index) ); CREATE INDEX \norigo_email_part_hdrvl_hdr_id_idxON origo_email_part_headervalue (header_id); \nCREATE INDEXorigo_email_part_hdrvl_value_idx ON origo_email_part_headervalue (\nlower(substr(header_value, 0, 1000)) varchar_pattern_ops); (haven't tried any \nother version so I'm not sure when this started to happen) -- Andreas Joseph \nKrogh",
"msg_date": "Tue, 30 Apr 2019 11:03:15 +0200 (CEST)",
"msg_from": "Andreas Joseph Krogh <andreas@visena.com>",
"msg_from_op": true,
"msg_subject": "ERROR: failed to add item to the index page"
},
{
"msg_contents": "Andreas Joseph Krogh <andreas@visena.com> writes:\n> With master (3dbb317d32f4f084ef7badaed8ef36f5c9b85fe6) I'm getting this: \n> visena=# CREATE INDEX origo_email_part_hdrvl_value_idx ON \n> public.origo_email_part_headervalue USING btree \n> (lower(substr((header_value)::text, 0, 1000)) varchar_pattern_ops);\n> psql: ERROR: failed to add item to the index page\n\nHm, your example works for me on HEAD.\n\nUsually, the first thing to suspect when you're tracking HEAD and get\nbizarre failures is that you have a messed-up build. Before spending\nany time diagnosing more carefully, do \"make distclean\", reconfigure,\nrebuild, reinstall, then see if problem is still there.\n\n(In theory, you can avoid this sort of failure with appropriate use\nof --enable-depend, but personally I don't trust that too much.\nI find that with ccache + autoconf cache + parallel build, rebuilding\ncompletely is fast enough that it's something I just do routinely\nafter any git pull. I'd rather use up my remaining brain cells on\nother kinds of problems...)\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 30 Apr 2019 09:43:16 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: ERROR: failed to add item to the index page"
},
{
"msg_contents": "På tirsdag 30. april 2019 kl. 15:43:16, skrev Tom Lane <tgl@sss.pgh.pa.us \n<mailto:tgl@sss.pgh.pa.us>>: Andreas Joseph Krogh <andreas@visena.com> writes:\n > With master (3dbb317d32f4f084ef7badaed8ef36f5c9b85fe6) I'm getting this:\n > visena=# CREATE INDEX origo_email_part_hdrvl_value_idx ON\n > public.origo_email_part_headervalue USING btree\n > (lower(substr((header_value)::text, 0, 1000)) varchar_pattern_ops);\n > psql: ERROR: failed to add item to the index page\n\n Hm, your example works for me on HEAD.\n\n Usually, the first thing to suspect when you're tracking HEAD and get\n bizarre failures is that you have a messed-up build. Before spending\n any time diagnosing more carefully, do \"make distclean\", reconfigure,\n rebuild, reinstall, then see if problem is still there.\n\n (In theory, you can avoid this sort of failure with appropriate use\n of --enable-depend, but personally I don't trust that too much.\n I find that with ccache + autoconf cache + parallel build, rebuilding\n completely is fast enough that it's something I just do routinely\n after any git pull. I'd rather use up my remaining brain cells on\n other kinds of problems...)\n\n regards, tom lane I built with this: make distclean && ./configure \n--prefix=$HOME/programs/postgresql-master --with-openssl --with-llvm && make -j \n8 install-world-contrib-recurse install-world-doc-recurse\nIt's probably caused by the data: visena=# select count(*) from \norigo_email_part_headervalue;\n count\n ----------\n 14609516\n (1 row)\nI'll see if I can create a self contained example. \n --\n Andreas Joseph Krogh",
"msg_date": "Tue, 30 Apr 2019 15:49:48 +0200 (CEST)",
"msg_from": "Andreas Joseph Krogh <andreas@visena.com>",
"msg_from_op": true,
"msg_subject": "Sv: Re: ERROR: failed to add item to the index page"
},
{
"msg_contents": "Andreas Joseph Krogh <andreas@visena.com> writes:\n> I built with this: make distclean && ./configure \n> --prefix=$HOME/programs/postgresql-master --with-openssl --with-llvm && make -j \n> 8 install-world-contrib-recurse install-world-doc-recurse\n\n--with-llvm, eh? Does it reproduce without that? What platform is\nthis on, what LLVM version?\n\n> I'll see if I can create a self contained example. \n\nPlease.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 30 Apr 2019 09:53:50 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Sv: Re: ERROR: failed to add item to the index page"
},
{
"msg_contents": "På tirsdag 30. april 2019 kl. 15:53:50, skrev Tom Lane <tgl@sss.pgh.pa.us \n<mailto:tgl@sss.pgh.pa.us>>: Andreas Joseph Krogh <andreas@visena.com> writes:\n > I built with this: make distclean && ./configure\n > --prefix=$HOME/programs/postgresql-master --with-openssl --with-llvm && \nmake -j\n > 8 install-world-contrib-recurse install-world-doc-recurse\n\n --with-llvm, eh? Does it reproduce without that? What platform is\n this on, what LLVM version?\n\n > I'll see if I can create a self contained example.\n\n Please.\n\n regards, tom lane Ubuntu 19.04 $ llvm-config --version \n 8.0.0\n\"--with-llvm\" was something I had from when pg-11 was master. It might not be \nneeded anymore? I'm trying a fresh build without --with-llvm and reload of data \nnow. \n --\n Andreas Joseph Krogh",
"msg_date": "Tue, 30 Apr 2019 16:03:04 +0200 (CEST)",
"msg_from": "Andreas Joseph Krogh <andreas@visena.com>",
"msg_from_op": true,
"msg_subject": "Sv: Re: Sv: Re: ERROR: failed to add item to the index page"
},
{
"msg_contents": "På tirsdag 30. april 2019 kl. 16:03:04, skrev Andreas Joseph Krogh <\nandreas@visena.com <mailto:andreas@visena.com>>: På tirsdag 30. april 2019 kl. \n15:53:50, skrev Tom Lane <tgl@sss.pgh.pa.us <mailto:tgl@sss.pgh.pa.us>>: \nAndreas Joseph Krogh <andreas@visena.com> writes:\n > I built with this: make distclean && ./configure\n > --prefix=$HOME/programs/postgresql-master --with-openssl --with-llvm && \nmake -j\n > 8 install-world-contrib-recurse install-world-doc-recurse\n\n --with-llvm, eh? Does it reproduce without that? What platform is\n this on, what LLVM version?\n\n > I'll see if I can create a self contained example.\n\n Please.\n\n regards, tom lane Ubuntu 19.04 $ llvm-config --version \n 8.0.0\n\"--with-llvm\" was something I had from when pg-11 was master. It might not be \nneeded anymore? I'm trying a fresh build without --with-llvm and reload of data \nnow. Yep, happens without --with-llvm also. I'll try to load only the necessary \ntable(s) to reproduce. --\n Andreas Joseph Krogh",
"msg_date": "Tue, 30 Apr 2019 16:27:05 +0200 (CEST)",
"msg_from": "Andreas Joseph Krogh <andreas@visena.com>",
"msg_from_op": true,
"msg_subject": "Sv: Sv: Re: Sv: Re: ERROR: failed to add item to the index page"
},
{
"msg_contents": "\nPlease fix or abstain from using the MUA that produces this monstrosity\nof a Subject: \"Sv: Sv: Re: Sv: Re: ERROR: failed to add item to the\nindex page\"\n\n\ncheers\n\n\nandrew\n\n\n-- \nAndrew Dunstan https://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n\n",
"msg_date": "Tue, 30 Apr 2019 12:34:31 -0400",
"msg_from": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Sv: Sv: Re: Sv: Re: ERROR: failed to add item to the index page"
},
{
"msg_contents": "På tirsdag 30. april 2019 kl. 16:27:05, skrev Andreas Joseph Krogh <\nandreas@visena.com <mailto:andreas@visena.com>>: [snip] Yep, happens without \n--with-llvm also. I'll try to load only the necessary table(s) to reproduce. I \nhave a 1.4GB dump (only one table) which reliably reproduces this error.\n Shall I share it off-list? --\n Andreas Joseph Krogh",
"msg_date": "Tue, 30 Apr 2019 18:44:19 +0200 (CEST)",
"msg_from": "Andreas Joseph Krogh <andreas@visena.com>",
"msg_from_op": true,
"msg_subject": "Re: ERROR: failed to add item to the index page"
},
{
"msg_contents": "On Tue, Apr 30, 2019 at 9:44 AM Andreas Joseph Krogh\n<andreas@visena.com> wrote:\n> I have a 1.4GB dump (only one table) which reliably reproduces this error.\n> Shall I share it off-list?\n\nI would be quite interested in this, too, since there is a chance that\nit's my bug.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Tue, 30 Apr 2019 09:45:26 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: ERROR: failed to add item to the index page"
},
{
"msg_contents": "Andreas Joseph Krogh <andreas@visena.com> writes:\n> I have a 1.4GB dump (only one table) which reliably reproduces this error.\n> Shall I share it off-list? --\n\nThat's awfully large :-(. How do you have in mind to transmit it?\n\nMaybe you could write a short script that generates dummy data\nto reproduce the problem?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 30 Apr 2019 12:47:03 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: ERROR: failed to add item to the index page"
},
{
"msg_contents": "On Tue, Apr 30, 2019 at 9:47 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Andreas Joseph Krogh <andreas@visena.com> writes:\n> > I have a 1.4GB dump (only one table) which reliably reproduces this error.\n> > Shall I share it off-list? --\n>\n> That's awfully large :-(. How do you have in mind to transmit it?\n\nI've send dumps that were larger than that by providing a Google drive\nlink. Something like that should work reasonably well.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Tue, 30 Apr 2019 09:48:45 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: ERROR: failed to add item to the index page"
},
{
"msg_contents": "På tirsdag 30. april 2019 kl. 18:48:45, skrev Peter Geoghegan <pg@bowt.ie \n<mailto:pg@bowt.ie>>: On Tue, Apr 30, 2019 at 9:47 AM Tom Lane \n<tgl@sss.pgh.pa.us> wrote:\n > Andreas Joseph Krogh <andreas@visena.com> writes:\n > > I have a 1.4GB dump (only one table) which reliably reproduces this error.\n > > Shall I share it off-list? --\n >\n > That's awfully large :-(. How do you have in mind to transmit it?\n\n I've send dumps that were larger than that by providing a Google drive\n link. Something like that should work reasonably well. I've sent you guys a \nlink (Google Drive) off-list. \n --\n Andreas Joseph Krogh",
"msg_date": "Tue, 30 Apr 2019 18:56:02 +0200 (CEST)",
"msg_from": "Andreas Joseph Krogh <andreas@visena.com>",
"msg_from_op": true,
"msg_subject": "Re: ERROR: failed to add item to the index page"
},
{
"msg_contents": "On Tue, Apr 30, 2019 at 9:56 AM Andreas Joseph Krogh <andreas@visena.com> wrote:\n> I've sent you guys a link (Google Drive) off-list.\n\nI'll start investigating the problem right away.\n\nThanks\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Tue, 30 Apr 2019 09:59:19 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: ERROR: failed to add item to the index page"
},
{
"msg_contents": "On Tue, Apr 30, 2019 at 9:59 AM Peter Geoghegan <pg@bowt.ie> wrote:\n> I'll start investigating the problem right away.\n\nI have found what the problem is. I simply neglected to make a\nconservative assumption about suffix truncation needing to add a heap\nTID to a leaf page's new high key in nbtsort.c (following commit\ndd299df8189), even though I didn't make the same mistake in\nnbtsplitloc.c. Not sure how I managed to make such a basic error.\n\nAndreas' test case works fine with the attached patch. I won't push a\nfix for this today.\n\n-- \nPeter Geoghegan",
"msg_date": "Tue, 30 Apr 2019 10:58:31 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: ERROR: failed to add item to the index page"
},
{
"msg_contents": "På tirsdag 30. april 2019 kl. 19:58:31, skrev Peter Geoghegan <pg@bowt.ie \n<mailto:pg@bowt.ie>>: On Tue, Apr 30, 2019 at 9:59 AM Peter Geoghegan \n<pg@bowt.ie> wrote:\n > I'll start investigating the problem right away.\n\n I have found what the problem is. I simply neglected to make a\n conservative assumption about suffix truncation needing to add a heap\n TID to a leaf page's new high key in nbtsort.c (following commit\n dd299df8189), even though I didn't make the same mistake in\n nbtsplitloc.c. Not sure how I managed to make such a basic error.\n\n Andreas' test case works fine with the attached patch. I won't push a\n fix for this today.\n\n --\n Peter Geoghegan Nice, thanks! --\n Andreas Joseph Krogh",
"msg_date": "Tue, 30 Apr 2019 20:54:45 +0200 (CEST)",
"msg_from": "Andreas Joseph Krogh <andreas@visena.com>",
"msg_from_op": true,
"msg_subject": "Sv: Re: ERROR: failed to add item to the index page"
},
{
"msg_contents": "On Tue, Apr 30, 2019 at 11:54 AM Andreas Joseph Krogh\n<andreas@visena.com> wrote:\n> Nice, thanks!\n\nThanks for the report!\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Tue, 30 Apr 2019 11:55:28 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: Re: ERROR: failed to add item to the index page"
},
{
"msg_contents": "Hi,\n\nOn 2019-04-30 20:54:45 +0200, Andreas Joseph Krogh wrote:\n> P� tirsdag 30. april 2019 kl. 19:58:31, skrev Peter Geoghegan <pg@bowt.ie \n> <mailto:pg@bowt.ie>>: On Tue, Apr 30, 2019 at 9:59 AM Peter Geoghegan \n> <pg@bowt.ie> wrote:\n> > I'll start investigating the problem right away.\n> \n> I have found what the problem is. I simply neglected to make a\n> conservative assumption about suffix truncation needing to add a heap\n> TID to a leaf page's new high key in nbtsort.c (following commit\n> dd299df8189), even though I didn't make the same mistake in\n> nbtsplitloc.c. Not sure how I managed to make such a basic error.\n> \n> Andreas' test case works fine with the attached patch. I won't push a\n> fix for this today.\n> \n> --\n> Peter Geoghegan Nice, thanks! --\n> Andreas Joseph Krogh\n\nAndreas, unfortunately your emails are pretty unreadable. Check the\nquoted email, and the web archive:\n\nhttps://www.postgresql.org/message-id/VisenaEmail.41.51d7719d814a1f54.16a6f98a5e9%40tc7-visena\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 30 Apr 2019 11:59:43 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: ERROR: failed to add item to the index page"
},
{
"msg_contents": "På tirsdag 30. april 2019 kl. 20:59:43, skrev Andres Freund <andres@anarazel.de \n<mailto:andres@anarazel.de>>: [...]\n Andreas, unfortunately your emails are pretty unreadable. Check the\n quoted email, and the web archive:\n\n \nhttps://www.postgresql.org/message-id/VisenaEmail.41.51d7719d814a1f54.16a6f98a5e9%40tc7-visena\n\n Greetings,\n\n Andres Freund\nI know that the text-version is quite unreadable, especially when quoting. My \nMUA is web-based and uses CKEditor for composing, and it doesn't care much to \ntry to format the text/plain version (I know because I've written it, yes and \nhave yet to fix the Re: Sv: Re: Sv: subject issue...). But it has tons of \nbenefits CRM- and usage-wise so I prefer to use it. But - how use text/plain \nthese days:-) --\n Andreas Joseph Krogh",
"msg_date": "Tue, 30 Apr 2019 21:23:21 +0200 (CEST)",
"msg_from": "Andreas Joseph Krogh <andreas@visena.com>",
"msg_from_op": true,
"msg_subject": "Re: ERROR: failed to add item to the index page"
},
{
"msg_contents": "Hi,\n\nOn 2019-04-30 21:23:21 +0200, Andreas Joseph Krogh wrote:\n> P� tirsdag 30. april 2019 kl. 20:59:43, skrev Andres Freund <andres@anarazel.de \n> <mailto:andres@anarazel.de>>: [...]\n> Andreas, unfortunately your emails are pretty unreadable. Check the\n> quoted email, and the web archive:\n> \n> \n> https://www.postgresql.org/message-id/VisenaEmail.41.51d7719d814a1f54.16a6f98a5e9%40tc7-visena\n> \n> Greetings,\n> \n> Andres Freund\n\n> I know that the text-version is quite unreadable, especially when quoting. My \n> MUA is web-based and uses CKEditor for composing, and it doesn't care much to \n> try to format the text/plain version (I know because I've written it, yes and \n> have yet to fix the Re: Sv: Re: Sv: subject issue...). But it has tons of \n> benefits CRM- and usage-wise so I prefer to use it. But - how use text/plain \n> these days:-) --\n\nThe standard on pg lists is to write in a manner that's usable for both\ntext mail readers and the archive. Doesn't terribly matter to the\noccasional one-off poster on -general, but you're not that... So please\ntry to write readable mails for the PG lists.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 30 Apr 2019 12:26:52 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: ERROR: failed to add item to the index page"
},
{
"msg_contents": "På tirsdag 30. april 2019 kl. 21:26:52, skrev Andres Freund <andres@anarazel.de \n<mailto:andres@anarazel.de>>: > [...]\n> The standard on pg lists is to write in a manner that's usable for both > \ntext mail readers and the archive. Doesn't terribly matter to the > occasional \none-off poster on -general, but you're not that... So please > try to write \nreadable mails for the PG lists.\n> \n> Greetings,\n > \n> Andres Freund ACK. --\n Andreas Joseph Krogh",
"msg_date": "Tue, 30 Apr 2019 21:33:26 +0200 (CEST)",
"msg_from": "Andreas Joseph Krogh <andreas@visena.com>",
"msg_from_op": true,
"msg_subject": "Sv: Re: ERROR: failed to add item to the index page"
},
{
"msg_contents": "On Tue, Apr 30, 2019 at 10:58 AM Peter Geoghegan <pg@bowt.ie> wrote:j\n> I have found what the problem is. I simply neglected to make a\n> conservative assumption about suffix truncation needing to add a heap\n> TID to a leaf page's new high key in nbtsort.c (following commit\n> dd299df8189), even though I didn't make the same mistake in\n> nbtsplitloc.c. Not sure how I managed to make such a basic error.\n\nAttached is a much more polished version of the same patch. I tried to\nmake clear how the \"page full\" test (the test that has been fixed to\ntake heap TID space for high key into account) is related to other\nclose-by code, such as the tuple space limit budget within\n_bt_check_third_page(), and the code that sets up an actual call to\n_bt_truncate().\n\nI'll wait a few days before pushing this. This version doesn't feel\ntoo far off being committable. I tested it with some of the CREATE\nINDEX tests that I developed during development of the nbtree unique\nkeys project, including a test with tuples that are precisely at the\n1/3 of a page threshold. The new definition of 1/3 of a page takes\nhigh key heap TID overhead into account -- see _bt_check_third_page().\n\n-- \nPeter Geoghegan",
"msg_date": "Tue, 30 Apr 2019 18:28:11 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: ERROR: failed to add item to the index page"
},
{
"msg_contents": "On Tue, Apr 30, 2019 at 6:28 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> Attached is a much more polished version of the same patch. I tried to\n> make clear how the \"page full\" test (the test that has been fixed to\n> take heap TID space for high key into account) is related to other\n> close-by code, such as the tuple space limit budget within\n> _bt_check_third_page(), and the code that sets up an actual call to\n> _bt_truncate().\n\nPushed, though final version does the test a little differently. It\nadds the required heap TID space to itupsz, rather than subtracting it\nfrom pgspc. This is actually representative of the underlying logic,\nand avoids unsigned underflow.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Thu, 2 May 2019 12:38:02 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: ERROR: failed to add item to the index page"
},
{
"msg_contents": "På torsdag 02. mai 2019 kl. 21:38:02, skrev Peter Geoghegan <pg@bowt.ie>:\n > Pushed, though final version does the test a little differently. It\n > adds the required heap TID space to itupsz, rather than subtracting it\n > from pgspc. This is actually representative of the underlying logic,\n > and avoids unsigned underflow. Thanks! \n --\n Andreas Joseph Krogh",
"msg_date": "Thu, 2 May 2019 21:41:46 +0200 (CEST)",
"msg_from": "Andreas Joseph Krogh <andreas@visena.com>",
"msg_from_op": true,
"msg_subject": "Re: ERROR: failed to add item to the index page"
}
] |
[
{
"msg_contents": "Hello, hackers\n\nwe witnessed this slightly misleading error in production and it took us a while to figure out what was taking place.\nBelow are reproduction steps:\n\n\n-- setup\ncreate table trun(cate int4);\n\n-- session 1\nbegin;\ntruncate table trun;\n\n-- session 2\ngrant insert on table trun to postgres;\n\n-- session 1\nend;\n\n-- session 2:\nERROR: XX000: tuple concurrently updated\nLOCATION: simple_heap_update, heapam.c:4474\n\nApparently the tuple in question is the pg_class entry of the table being truncated. I didn't look too deep into the cause, but I'm certain the error message could be improved at least.\n\nRegards,\nNick.\n\n\n",
"msg_date": "Tue, 30 Apr 2019 05:26:32 -0400",
"msg_from": "nickb <nickb@imap.cc>",
"msg_from_op": true,
"msg_subject": "ERROR: tuple concurrently updated when modifying privileges"
},
{
"msg_contents": "On Tue, Apr 30, 2019 at 11:26 AM nickb <nickb@imap.cc> wrote:\n\n> Hello, hackers\n>\n> we witnessed this slightly misleading error in production and it took us a\n> while to figure out what was taking place.\n> Below are reproduction steps:\n>\n>\n> -- setup\n> create table trun(cate int4);\n>\n> -- session 1\n> begin;\n> truncate table trun;\n>\n> -- session 2\n> grant insert on table trun to postgres;\n>\n> -- session 1\n> end;\n>\n> -- session 2:\n> ERROR: XX000: tuple concurrently updated\n> LOCATION: simple_heap_update, heapam.c:4474\n>\n> Apparently the tuple in question is the pg_class entry of the table being\n> truncated. I didn't look too deep into the cause, but I'm certain the error\n> message could be improved at least.\n>\n\nHaving thought about this a bit, I think the best solution would be to have\ngrant take out an access share lock to the tables granted. This would\nprevent concurrent alter table operations from altering the schema\nunderneath the grant as well, and thus possibly cause other race conditions.\n\nAny thoughts?\n\n>\n> Regards,\n> Nick.\n>\n>\n>\n\n-- \nBest Regards,\nChris Travers\nHead of Database\n\nTel: +49 162 9037 210 | Skype: einhverfr | www.adjust.com\nSaarbrücker Straße 37a, 10405 Berlin\n\nOn Tue, Apr 30, 2019 at 11:26 AM nickb <nickb@imap.cc> wrote:Hello, hackers\n\nwe witnessed this slightly misleading error in production and it took us a while to figure out what was taking place.\nBelow are reproduction steps:\n\n\n-- setup\ncreate table trun(cate int4);\n\n-- session 1\nbegin;\ntruncate table trun;\n\n-- session 2\ngrant insert on table trun to postgres;\n\n-- session 1\nend;\n\n-- session 2:\nERROR: XX000: tuple concurrently updated\nLOCATION: simple_heap_update, heapam.c:4474\n\nApparently the tuple in question is the pg_class entry of the table being truncated. 
I didn't look too deep into the cause, but I'm certain the error message could be improved at least.Having thought about this a bit, I think the best solution would be to have grant take out an access share lock to the tables granted. This would prevent concurrent alter table operations from altering the schema underneath the grant as well, and thus possibly cause other race conditions.Any thoughts? \n\nRegards,\nNick.\n\n\n-- Best Regards,Chris TraversHead of DatabaseTel: +49 162 9037 210 | Skype: einhverfr | www.adjust.com Saarbrücker Straße 37a, 10405 Berlin",
"msg_date": "Tue, 14 May 2019 08:08:05 +0200",
"msg_from": "Chris Travers <chris.travers@adjust.com>",
"msg_from_op": false,
"msg_subject": "Re: ERROR: tuple concurrently updated when modifying privileges"
},
{
"msg_contents": "On Tue, May 14, 2019 at 08:08:05AM +0200, Chris Travers wrote:\n> Having thought about this a bit, I think the best solution would be to have\n> grant take out an access share lock to the tables granted. This would\n> prevent concurrent alter table operations from altering the schema\n> underneath the grant as well, and thus possibly cause other race conditions.\n> \n> Any thoughts?\n\n\"tuple concurrently updated\" is an error message which should never be\nuser-facing, and unfortunately there are many scenarios where it can\nbe triggered by playing with concurrent DDLs:\nhttps://postgr.es/m/20171228063004.GB6181@paquier.xyz\n\nIf you have an idea of patch, could you write it? Having an isolation\ntest case would be nice as well.\n--\nMichael",
"msg_date": "Tue, 14 May 2019 16:11:53 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: ERROR: tuple concurrently updated when modifying privileges"
},
{
"msg_contents": "On Tue, May 14, 2019 at 9:11 AM Michael Paquier <michael@paquier.xyz> wrote:\n\n> On Tue, May 14, 2019 at 08:08:05AM +0200, Chris Travers wrote:\n> > Having thought about this a bit, I think the best solution would be to\n> have\n> > grant take out an access share lock to the tables granted. This would\n> > prevent concurrent alter table operations from altering the schema\n> > underneath the grant as well, and thus possibly cause other race\n> conditions.\n> >\n> > Any thoughts?\n>\n> \"tuple concurrently updated\" is an error message which should never be\n> user-facing, and unfortunately there are many scenarios where it can\n> be triggered by playing with concurrent DDLs:\n> https://postgr.es/m/20171228063004.GB6181@paquier.xyz\n>\n> If you have an idea of patch, could you write it? Having an isolation\n> test case would be nice as well.\n>\n\nI will give Nick a chance to do the patch if he wants it (I have reached\nout). Otherwise sure.\n\nI did notice one more particularly exotic corner case that is not resolved\nby this proposed fix.\n\nIf you have two transactions with try to grant onto the same pg entity\n(table etc) *both* will typically fail on the same error.\n\nI am not sure that is a bad thing because I am not sure how concurrent\ngrants are supposed to work with MVCC but I think that would require a\nfundamentally different approach.\n\n\n> --\n> Michael\n>\n\n\n-- \nBest Regards,\nChris Travers\nHead of Database\n\nTel: +49 162 9037 210 | Skype: einhverfr | www.adjust.com\nSaarbrücker Straße 37a, 10405 Berlin\n\nOn Tue, May 14, 2019 at 9:11 AM Michael Paquier <michael@paquier.xyz> wrote:On Tue, May 14, 2019 at 08:08:05AM +0200, Chris Travers wrote:\n> Having thought about this a bit, I think the best solution would be to have\n> grant take out an access share lock to the tables granted. 
This would\n> prevent concurrent alter table operations from altering the schema\n> underneath the grant as well, and thus possibly cause other race conditions.\n> \n> Any thoughts?\n\n\"tuple concurrently updated\" is an error message which should never be\nuser-facing, and unfortunately there are many scenarios where it can\nbe triggered by playing with concurrent DDLs:\nhttps://postgr.es/m/20171228063004.GB6181@paquier.xyz\n\nIf you have an idea of patch, could you write it? Having an isolation\ntest case would be nice as well.I will give Nick a chance to do the patch if he wants it (I have reached out). Otherwise sure.I did notice one more particularly exotic corner case that is not resolved by this proposed fix.If you have two transactions with try to grant onto the same pg entity (table etc) *both* will typically fail on the same error.I am not sure that is a bad thing because I am not sure how concurrent grants are supposed to work with MVCC but I think that would require a fundamentally different approach. \n--\nMichael\n-- Best Regards,Chris TraversHead of DatabaseTel: +49 162 9037 210 | Skype: einhverfr | www.adjust.com Saarbrücker Straße 37a, 10405 Berlin",
"msg_date": "Tue, 14 May 2019 11:39:12 +0200",
"msg_from": "Chris Travers <chris.travers@adjust.com>",
"msg_from_op": false,
"msg_subject": "Re: ERROR: tuple concurrently updated when modifying privileges"
}
] |
[
{
"msg_contents": "I have this two message patches that I've been debating with myself\nabout:\n\n--- a/src/backend/access/heap/heapam.c\n+++ b/src/backend/access/heap/heapam.c\n@@ -1282,7 +1282,7 @@ heap_getnext(TableScanDesc sscan, ScanDirection direction)\n \tif (unlikely(sscan->rs_rd->rd_tableam != GetHeapamTableAmRoutine()))\n \t\tereport(ERROR,\n \t\t\t\t(errcode(ERRCODE_FEATURE_NOT_SUPPORTED),\n-\t\t\t\t errmsg(\"only heap AM is supported\")));\n+\t\t\t\t errmsg(\"only heap table access method is supported\")));\n \n\nI think the original is not great, but I'm not sure that the new is much\nbetter either. I think this message says \"only AMs that behave using\nthe heapam routines are supported\"; we cannot say use the literal\n\"heapam\" AM name because, as the comment two lines above says, it's\npossible to copy the AM with a different name and it would be\nacceptable. OTOH maybe this code will not survive for long, so it\ndoesn't matter much that the message is 100% correct; perhaps we should\njust change errmsg() to errmsg_internal() and be done with it.\n\n\ndiff --git a/src/backend/access/table/tableamapi.c b/src/backend/access/table/tableamapi.c\nindex 0053dc95cab..c8b7598f785 100644\n--- a/src/backend/access/table/tableamapi.c\n+++ b/src/backend/access/table/tableamapi.c\n@@ -103,7 +103,8 @@ check_default_table_access_method(char **newval, void **extra, GucSource source)\n {\n \tif (**newval == '\\0')\n \t{\n-\t\tGUC_check_errdetail(\"default_table_access_method may not be empty.\");\n+\t\tGUC_check_errdetail(\"%s may not be empty.\",\n+\t\t\t\t\t\t\t\"default_table_access_method\");\n \t\treturn false;\n \t}\n\nMy problem here is not really the replacement of the name to %s, but the\n\"may not be\" part of it. We don't use \"may not be\" anywhere else; most\nplaces seem to use \"foo cannot be X\" and a small number of other places\nuse \"foo must not be Y\". 
I'm not able to judge which of the two is\nbetter (so change all messages to use that form), or if there's a\nsemantic difference and if so which one to use in this case.\n\nI do notice that there's no place where we would benefit from changing\nthe \"foo\" part to %s --that is, there are no duplicate strings-- but I'd\nchange it anyway to reduce the chance of typos in translations.\n\ngit grep -E 'err[a-z]*\\(.* (can|must |may )not be ' -- *.c\n\ncontrib/amcheck/verify_nbtree.c:\t\t\t\t\t\t errmsg(\"index \\\"%s\\\" cannot be verified using transaction snapshot\",\ncontrib/fuzzystrmatch/fuzzystrmatch.c:\t\t\t\t errmsg(\"output cannot be empty string\")));\ncontrib/jsonb_plpython/jsonb_plpython.c:\t\t\t\t (errmsg(\"Python type \\\"%s\\\" cannot be transformed to jsonb\",\ncontrib/pg_prewarm/pg_prewarm.c:\t\t\t\t errmsg(\"relation cannot be null\")));\ncontrib/pg_prewarm/pg_prewarm.c:\t\t\t\t (errmsg(\"prewarm type cannot be null\"))));\ncontrib/pg_prewarm/pg_prewarm.c:\t\t\t\t (errmsg(\"relation fork cannot be null\"))));\ncontrib/tcn/tcn.c:\t\t\t\t errmsg(\"triggered_change_notification: must not be called with more than one parameter\")));\ncontrib/tsm_system_rows/tsm_system_rows.c:\t\t\t\t errmsg(\"sample size must not be negative\")));\ncontrib/tsm_system_time/tsm_system_time.c:\t\t\t\t errmsg(\"sample collection time must not be negative\")));\nsrc/backend/access/brin/brin.c:\t\t\t\t errhint(\"BRIN control functions cannot be executed during recovery.\")));\nsrc/backend/access/brin/brin.c:\t\t\t\t errhint(\"BRIN control functions cannot be executed during recovery.\")));\nsrc/backend/access/common/tupdesc.c:\t\t\t\t\t errmsg(\"column \\\"%s\\\" cannot be declared SETOF\",\nsrc/backend/access/gin/ginfast.c:\t\t\t\t errhint(\"GIN pending list cannot be cleaned up during recovery.\")));\nsrc/backend/access/hash/hashinsert.c:\t\t\t\t errhint(\"Values larger than a buffer page cannot be indexed.\")));\nsrc/backend/access/nbtree/nbtutils.c:\t\t\t errhint(\"Values 
larger than 1/3 of a buffer page cannot be indexed.\\n\"\nsrc/backend/access/spgist/spgdoinsert.c:\t\t\t\t errhint(\"Values larger than a buffer page cannot be indexed.\")));\nsrc/backend/access/spgist/spgutils.c:\t\t\t\t errhint(\"Values larger than a buffer page cannot be indexed.\")));\nsrc/backend/access/table/tableamapi.c:\t\tGUC_check_errdetail(\"%s may not be empty.\",\nsrc/backend/access/transam/xact.c:\t\t\t\t errmsg(\"%s cannot be executed from a function\", stmtType)));\nsrc/backend/access/transam/xlog.c:\t\t\t\t errhint(\"WAL control functions cannot be executed during recovery.\")));\nsrc/backend/access/transam/xlog.c:\t\t\t\t errhint(\"WAL control functions cannot be executed during recovery.\")));\nsrc/backend/access/transam/xlogfuncs.c:\t\t\t\t errhint(\"WAL control functions cannot be executed during recovery.\")));\nsrc/backend/access/transam/xlogfuncs.c:\t\t\t\t errhint(\"WAL control functions cannot be executed during recovery.\"))));\nsrc/backend/access/transam/xlogfuncs.c:\t\t\t\t errhint(\"WAL control functions cannot be executed during recovery.\")));\nsrc/backend/access/transam/xlogfuncs.c:\t\t\t\t errhint(\"WAL control functions cannot be executed during recovery.\")));\nsrc/backend/access/transam/xlogfuncs.c:\t\t\t\t errhint(\"WAL control functions cannot be executed during recovery.\")));\nsrc/backend/access/transam/xlogfuncs.c:\t\t\t\t errhint(\"%s cannot be executed during recovery.\",\nsrc/backend/access/transam/xlogfuncs.c:\t\t\t\t errhint(\"%s cannot be executed during recovery.\",\nsrc/backend/access/transam/xlogfuncs.c:\t\t\t\t errmsg(\"\\\"wait_seconds\\\" cannot be negative or equal zero\")));\nsrc/backend/catalog/aclchk.c:\t\t\t\t\t\t errmsg(\"default privileges cannot be set for columns\")));\nsrc/backend/catalog/dependency.c:\t\t\t\t\t\t\t errmsg(\"constant of the type %s cannot be used here\",\nsrc/backend/catalog/heap.c:\t\t\t\t\t errmsg(\"composite type %s cannot be made a member of 
itself\",\nsrc/backend/catalog/index.c:\t\t\t\t\t errmsg(\"primary keys cannot be expressions\")));\nsrc/backend/catalog/index.c:\t\t\t\t errmsg(\"shared indexes cannot be created after initdb\")));\nsrc/backend/catalog/objectaddress.c:\t\t\t\t\t errmsg(\"large object OID may not be null\")));\nsrc/backend/catalog/pg_aggregate.c:\t\t\t\t\t errmsg(\"final function with extra arguments must not be declared STRICT\")));\nsrc/backend/catalog/pg_aggregate.c:\t\t\t\t\t errmsg(\"combine function with transition type %s must not be declared STRICT\",\nsrc/backend/catalog/pg_aggregate.c:\t\t\t\t\t\t errmsg(\"final function with extra arguments must not be declared STRICT\")));\nsrc/backend/catalog/pg_operator.c:\t\t\t\t\t errmsg(\"operator cannot be its own negator or sort operator\")));\nsrc/backend/catalog/pg_publication.c:\t\t\t\t errdetail(\"System tables cannot be added to publications.\")));\nsrc/backend/catalog/pg_publication.c:\t\t\t\t errmsg(\"table \\\"%s\\\" cannot be replicated\",\nsrc/backend/catalog/pg_publication.c:\t\t\t\t errdetail(\"Temporary and unlogged relations cannot be replicated.\")));\nsrc/backend/commands/aggregatecmds.c:\t\t\t\t\t errmsg(\"aggregate msfunc must not be specified without mstype\")));\nsrc/backend/commands/aggregatecmds.c:\t\t\t\t\t errmsg(\"aggregate minvfunc must not be specified without mstype\")));\nsrc/backend/commands/aggregatecmds.c:\t\t\t\t\t errmsg(\"aggregate mfinalfunc must not be specified without mstype\")));\nsrc/backend/commands/aggregatecmds.c:\t\t\t\t\t errmsg(\"aggregate msspace must not be specified without mstype\")));\nsrc/backend/commands/aggregatecmds.c:\t\t\t\t\t errmsg(\"aggregate minitcond must not be specified without mstype\")));\nsrc/backend/commands/aggregatecmds.c:\t\t\t\t\t errmsg(\"aggregate transition data type cannot be %s\",\nsrc/backend/commands/aggregatecmds.c:\t\t\t\t\t\t errmsg(\"aggregate transition data type cannot be %s\",\nsrc/backend/commands/async.c:\t\t\t\t errmsg(\"channel name cannot 
be empty\")));\nsrc/backend/commands/async.c:\t\t\t\t errhint(\"The NOTIFY queue cannot be emptied until that process ends its current transaction.\")\nsrc/backend/commands/cluster.c:\t\t\t errdetail(\"%.0f dead row versions cannot be removed yet.\\n\"\nsrc/backend/commands/collationcmds.c:\t\t\t\t\t errmsg(\"collation \\\"default\\\" cannot be copied\")));\nsrc/backend/commands/copy.c:\t\t\t\t errmsg(\"COPY delimiter cannot be newline or carriage return\")));\nsrc/backend/commands/copy.c:\t\t\t\t errmsg(\"COPY delimiter cannot be \\\"%s\\\"\", cstate->delim)));\nsrc/backend/commands/copy.c:\t\t\t\t\t\t\t\t errdetail(\"Generated columns cannot be used in COPY.\")));\nsrc/backend/commands/dbcommands.c:\t\t\t\t\t errmsg(\"pg_global cannot be used as default tablespace\")));\nsrc/backend/commands/dbcommands.c:\t\t\t\t errmsg(\"current database cannot be renamed\")));\nsrc/backend/commands/dbcommands.c:\t\t\t\t errmsg(\"pg_global cannot be used as default tablespace\")));\nsrc/backend/commands/dbcommands.c:\t\t\t\t\t errmsg(\"option \\\"%s\\\" cannot be specified with other options\",\nsrc/backend/commands/extension.c:\t\t\t\t errdetail(\"Extension names must not be empty.\")));\nsrc/backend/commands/extension.c:\t\t\t\t errdetail(\"Version names must not be empty.\")));\nsrc/backend/commands/extension.c:\t\t\t\t\t\t errmsg(\"parameter \\\"%s\\\" cannot be set in a secondary extension control file\",\nsrc/backend/commands/extension.c:\t\t\t\t\t\t errmsg(\"parameter \\\"%s\\\" cannot be set in a secondary extension control file\",\nsrc/backend/commands/extension.c:\t\t\t\t errmsg(\"parameter \\\"schema\\\" cannot be specified when \\\"relocatable\\\" is true\")));\nsrc/backend/commands/functioncmds.c:\t\t\t\t\t errmsg(\"type modifier cannot be specified for shell type \\\"%s\\\"\",\nsrc/backend/commands/functioncmds.c:\t\t\t\t\t errmsg(\"cast function must not be volatile\")));\nsrc/backend/commands/functioncmds.c:\t\t\t\t\t errmsg(\"domain data types must not be marked 
binary-compatible\")));\nsrc/backend/commands/functioncmds.c:\t\t\t\t errmsg(\"transform function must not be volatile\")));\nsrc/backend/commands/indexcmds.c:\t\t\t\t\t\t errdetail(\"%s constraints cannot be used when partition keys include expressions.\",\nsrc/backend/commands/matview.c:\t\t\t\t errmsg(\"CONCURRENTLY cannot be used when the materialized view is not populated\")));\nsrc/backend/commands/matview.c:\t\t\t\t errmsg(\"CONCURRENTLY and WITH NO DATA options cannot be used together\")));\nsrc/backend/commands/opclasscmds.c:\t\t\t\t\t errmsg(\"storage type cannot be different from data type for access method \\\"%s\\\"\",\nsrc/backend/commands/opclasscmds.c:\t\t\t\t\t\t errmsg(\"STORAGE cannot be specified in ALTER OPERATOR FAMILY\")));\nsrc/backend/commands/operatorcmds.c:\t\t\t\t\t errmsg(\"operator attribute \\\"%s\\\" cannot be changed\",\nsrc/backend/commands/policy.c:\t\t\t\t errmsg(\"WITH CHECK cannot be applied to SELECT or DELETE\")));\nsrc/backend/commands/portalcmds.c:\t\t\t\t errmsg(\"invalid cursor name: must not be empty\")));\nsrc/backend/commands/portalcmds.c:\t\t\t\t errmsg(\"invalid cursor name: must not be empty\")));\nsrc/backend/commands/portalcmds.c:\t\t\t\t errmsg(\"invalid cursor name: must not be empty\")));\nsrc/backend/commands/prepare.c:\t\t\t\t errmsg(\"invalid statement name: must not be empty\")));\nsrc/backend/commands/prepare.c:\t\t\t\t\t errmsg(\"utility statements cannot be prepared\")));\nsrc/backend/commands/prepare.c:\t\t\t\t\t errmsg(\"parameter $%d of type %s cannot be coerced to the expected type %s\",\nsrc/backend/commands/publicationcmds.c:\t\t\t\t errdetail(\"Tables cannot be added to or dropped from FOR ALL TABLES publications.\")));\nsrc/backend/commands/sequence.c:\t\t\t\t\t errmsg(\"INCREMENT must not be zero\")));\nsrc/backend/commands/sequence.c:\t\t\t\t errmsg(\"START value (%s) cannot be less than MINVALUE (%s)\",\nsrc/backend/commands/sequence.c:\t\t\t\t errmsg(\"START value (%s) cannot be greater than 
MAXVALUE (%s)\",\nsrc/backend/commands/sequence.c:\t\t\t\t errmsg(\"RESTART value (%s) cannot be less than MINVALUE (%s)\",\nsrc/backend/commands/sequence.c:\t\t\t\t errmsg(\"RESTART value (%s) cannot be greater than MAXVALUE (%s)\",\nsrc/backend/commands/statscmds.c:\t\t\t\t\t errmsg(\"column \\\"%s\\\" cannot be used in statistics because its type %s has no default btree operator class\",\nsrc/backend/commands/tablecmds.c:\t\t\t\t\t errmsg(\"foreign key constraint \\\"%s\\\" cannot be implemented\",\nsrc/backend/commands/tablecmds.c:\t\t\t\t\t\t errmsg(\"column \\\"%s\\\" cannot be cast automatically to type %s\",\nsrc/backend/commands/tablecmds.c:\t\t\t\t\t\t errmsg(\"generation expression for column \\\"%s\\\" cannot be cast automatically to type %s\",\nsrc/backend/commands/tablecmds.c:\t\t\t\t\t\t errmsg(\"default for column \\\"%s\\\" cannot be cast automatically to type %s\",\nsrc/backend/commands/tablecmds.c:\t\t\t\t\t errmsg(\"index \\\"%s\\\" cannot be used as replica identity because column %d is a system column\",\nsrc/backend/commands/tablecmds.c:\t\t\t\t\t errmsg(\"index \\\"%s\\\" cannot be used as replica identity because column \\\"%s\\\" is nullable\",\nsrc/backend/commands/tablecmds.c:\t\t\t\t errdetail(\"Unlogged relations cannot be replicated.\")));\nsrc/backend/commands/trigger.c:\t\t\t\t\t\t errmsg(\"transition tables cannot be specified for triggers with more than one event\")));\nsrc/backend/commands/trigger.c:\t\t\t\t\t\t errmsg(\"transition tables cannot be specified for triggers with column lists\")));\nsrc/backend/commands/trigger.c:\t\t\t\t\t\t\t errmsg(\"NEW TABLE cannot be specified multiple times\")));\nsrc/backend/commands/trigger.c:\t\t\t\t\t\t\t errmsg(\"OLD TABLE cannot be specified multiple times\")));\nsrc/backend/commands/trigger.c:\t\t\t\t\t errmsg(\"OLD TABLE name and NEW TABLE name cannot be the same\")));\nsrc/backend/commands/typecmds.c:\t\t\t\t\t errmsg(\"array element type cannot be 
%s\",\nsrc/backend/commands/typecmds.c:\t\t\t\t\t\t\t errmsg(\"check constraints for domains cannot be marked NO INHERIT\")));\nsrc/backend/commands/typecmds.c:\t\t\t\t errmsg(\"range subtype cannot be %s\",\nsrc/backend/commands/user.c:\t\t\t\t\t errmsg(\"current user cannot be dropped\")));\nsrc/backend/commands/user.c:\t\t\t\t\t errmsg(\"current user cannot be dropped\")));\nsrc/backend/commands/user.c:\t\t\t\t\t errmsg(\"session user cannot be dropped\")));\nsrc/backend/commands/user.c:\t\t\t\t\t errmsg(\"role \\\"%s\\\" cannot be dropped because some objects depend on it\",\nsrc/backend/commands/user.c:\t\t\t\t errmsg(\"session user cannot be renamed\")));\nsrc/backend/commands/user.c:\t\t\t\t errmsg(\"current user cannot be renamed\")));\nsrc/backend/commands/user.c:\t\t\t\t\t errmsg(\"column names cannot be included in GRANT/REVOKE ROLE\")));\nsrc/backend/commands/vacuum.c:\t\t\t\t errmsg(\"%s cannot be executed from VACUUM or ANALYZE\",\nsrc/backend/commands/vacuum.c:\t\t\t\t errmsg(\"VACUUM option DISABLE_PAGE_SKIPPING cannot be used with FULL\")));\nsrc/backend/commands/variable.c:\t\t\tGUC_check_errmsg(\"SET TRANSACTION ISOLATION LEVEL must not be called in a subtransaction\");\nsrc/backend/commands/variable.c:\t\tGUC_check_errmsg(\"SET TRANSACTION [NOT] DEFERRABLE cannot be called within a subtransaction\");\nsrc/backend/commands/view.c:\t\t\t\t errmsg(\"views cannot be unlogged because they do not have storage\")));\nsrc/backend/executor/execExpr.c:\t\t\t\t\t\t\t\t errmsg(\"window function calls cannot be nested\")));\nsrc/backend/executor/execExprInterp.c:\t\t\t\t\t\t errdetail(\"Array with element type %s cannot be \"\nsrc/backend/executor/execExprInterp.c:\t\t\t\t\t errmsg(\"array subscript in assignment must not be null\")));\nsrc/backend/executor/nodeAgg.c:\t\t\t\t errmsg(\"aggregate function calls cannot be nested\")));\nsrc/backend/executor/nodeAgg.c:\t\t\t\t\t errmsg(\"combine function with transition type %s must not be declared 
STRICT\",\nsrc/backend/executor/nodeLimit.c:\t\t\t\t\t\t errmsg(\"OFFSET must not be negative\")));\nsrc/backend/executor/nodeLimit.c:\t\t\t\t\t\t errmsg(\"LIMIT must not be negative\")));\nsrc/backend/executor/nodeSamplescan.c:\t\t\t\t\t errmsg(\"TABLESAMPLE parameter cannot be null\")));\nsrc/backend/executor/nodeSamplescan.c:\t\t\t\t\t errmsg(\"TABLESAMPLE REPEATABLE parameter cannot be null\")));\nsrc/backend/executor/nodeTableFuncscan.c:\t\t\t\t\t errmsg(\"namespace URI must not be null\")));\nsrc/backend/executor/nodeTableFuncscan.c:\t\t\t\t errmsg(\"row filter expression must not be null\")));\nsrc/backend/executor/nodeTableFuncscan.c:\t\t\t\t\t\t\t errmsg(\"column filter expression must not be null\"),\nsrc/backend/executor/nodeWindowAgg.c:\t\t\t\t\t\t errmsg(\"frame starting offset must not be null\")));\nsrc/backend/executor/nodeWindowAgg.c:\t\t\t\t\t\t\t errmsg(\"frame starting offset must not be negative\")));\nsrc/backend/executor/nodeWindowAgg.c:\t\t\t\t\t\t errmsg(\"frame ending offset must not be null\")));\nsrc/backend/executor/nodeWindowAgg.c:\t\t\t\t\t\t\t errmsg(\"frame ending offset must not be negative\")));\nsrc/backend/libpq/be-fsstubs.c:\t\t\t\t errmsg(\"requested length cannot be negative\")));\nsrc/backend/libpq/be-secure-openssl.c:\t\t\t\t\t errmsg(\"private key file \\\"%s\\\" cannot be reloaded because it requires a passphrase\",\nsrc/backend/libpq/hba.c:\t\t\t\t\t errmsg(\"list of RADIUS servers cannot be empty\"),\nsrc/backend/libpq/hba.c:\t\t\t\t\t errmsg(\"list of RADIUS secrets cannot be empty\"),\nsrc/backend/optimizer/plan/initsplan.c:\t\t\t\t\t errmsg(\"%s cannot be applied to the nullable side of an outer join\",\nsrc/backend/parser/analyze.c:\t\t\t\t errmsg(\"%s cannot be applied to VALUES\",\nsrc/backend/parser/analyze.c:\t\t\t\t\t errmsg(\"materialized views may not be defined using bound parameters\")));\nsrc/backend/parser/analyze.c:\t\t\t\t\t errmsg(\"materialized views cannot be 
UNLOGGED\")));\nsrc/backend/parser/analyze.c:\t\t\t\t\t\t\t\t\t errmsg(\"%s cannot be applied to a join\",\nsrc/backend/parser/analyze.c:\t\t\t\t\t\t\t\t\t errmsg(\"%s cannot be applied to a function\",\nsrc/backend/parser/analyze.c:\t\t\t\t\t\t\t\t\t errmsg(\"%s cannot be applied to a table function\",\nsrc/backend/parser/analyze.c:\t\t\t\t\t\t\t\t\t errmsg(\"%s cannot be applied to VALUES\",\nsrc/backend/parser/analyze.c:\t\t\t\t\t\t\t\t\t errmsg(\"%s cannot be applied to a WITH query\",\nsrc/backend/parser/analyze.c:\t\t\t\t\t\t\t\t\t errmsg(\"%s cannot be applied to a named tuplestore\",\nsrc/backend/parser/gram.y:\t\t\t\t\t\t\t errmsg(\"current database cannot be changed\"),\nsrc/backend/parser/gram.y:\t\t\t\t\t\t\t\t errmsg(\"frame start cannot be UNBOUNDED FOLLOWING\"),\nsrc/backend/parser/gram.y:\t\t\t\t\t\t\t\t errmsg(\"frame start cannot be UNBOUNDED FOLLOWING\"),\nsrc/backend/parser/gram.y:\t\t\t\t\t\t\t\t errmsg(\"frame end cannot be UNBOUNDED PRECEDING\"),\nsrc/backend/parser/gram.y:\t\t\t\t\t\t\t\t\t errmsg(\"%s cannot be used as a role name here\",\nsrc/backend/parser/gram.y:\t\t\t\t\t\t\t\t\t errmsg(\"%s cannot be used as a role name here\",\nsrc/backend/parser/gram.y:\t\t\t\t\t errmsg(\"%s constraints cannot be marked DEFERRABLE\",\nsrc/backend/parser/gram.y:\t\t\t\t\t errmsg(\"%s constraints cannot be marked DEFERRABLE\",\nsrc/backend/parser/gram.y:\t\t\t\t\t errmsg(\"%s constraints cannot be marked NOT VALID\",\nsrc/backend/parser/gram.y:\t\t\t\t\t errmsg(\"%s constraints cannot be marked NO INHERIT\",\nsrc/backend/parser/parse_agg.c:\t\t\t\t errmsg(\"aggregate function calls cannot be nested\"),\nsrc/backend/parser/parse_agg.c:\t\t\t\t\t errmsg(\"aggregate function calls cannot be nested\"),\nsrc/backend/parser/parse_agg.c:\t\t\t\t errmsg(\"window function calls cannot be nested\"),\nsrc/backend/parser/parse_clause.c:\t\t\t\t errmsg(\"relation \\\"%s\\\" cannot be the target of a modifying 
statement\",\nsrc/backend/parser/parse_clause.c:\t\t\t\t\t errmsg(\"WITH ORDINALITY cannot be used with a column definition list\"),\nsrc/backend/parser/parse_clause.c:\t\t\t\t\t\t errmsg(\"column \\\"%s\\\" cannot be declared SETOF\",\nsrc/backend/parser/parse_coerce.c:\t\t\t\t\t\t errmsg(\"%s types %s and %s cannot be matched\",\nsrc/backend/parser/parse_relation.c:\t\t\t\t errhint(\"There is an entry for table \\\"%s\\\", but it cannot be referenced from this part of the query.\",\nsrc/backend/parser/parse_relation.c:\t\t\t\t\t\t errdetail(\"There is a WITH item named \\\"%s\\\", but it cannot be referenced from this part of the query.\",\nsrc/backend/parser/parse_relation.c:\t\t\t\t\t\t\t errmsg(\"column \\\"%s\\\" cannot be declared SETOF\",\nsrc/backend/parser/parse_relation.c:\t\t\t\t errhint(\"There is an entry for table \\\"%s\\\", but it cannot be referenced from this part of the query.\",\nsrc/backend/parser/parse_relation.c:\t\t\t\t errhint(\"There is a column named \\\"%s\\\" in table \\\"%s\\\", but it cannot be referenced from this part of the query.\",\nsrc/backend/parser/parse_type.c:\t\t\t\t errmsg(\"type modifier cannot be specified for shell type \\\"%s\\\"\",\nsrc/backend/parser/parse_utilcmd.c:\t\t\t\t errmsg(\"specified value cannot be cast to type %s for column \\\"%s\\\"\",\nsrc/backend/parser/scan.l:\t\t\t\t\t\t\t\t errdetail(\"String constants with Unicode escapes cannot be used when standard_conforming_strings is off.\"),\nsrc/backend/parser/scan.l:\t\tyyerror(\"Unicode escape values cannot be used for code point values above 007F when the server encoding is not UTF8\");\nsrc/backend/parser/scan.l:\t\t\tyyerror(\"Unicode escape values cannot be used for code point values above 007F when the server encoding is not UTF8\");\nsrc/backend/postmaster/bgworker.c:\t\t\t\t errmsg(\"background worker \\\"%s\\\": parallel workers may not be configured for restart\",\nsrc/backend/postmaster/postmaster.c:\t\t\t\t(errmsg(\"WAL archival cannot be 
enabled when wal_level is \\\"minimal\\\"\")));\nsrc/backend/replication/logical/logical.c:\t\t\t\t errmsg(\"logical decoding cannot be used while in recovery\")));\nsrc/backend/replication/logical/logicalfuncs.c:\t\t\t\t errmsg(\"slot name must not be null\")));\nsrc/backend/replication/logical/logicalfuncs.c:\t\t\t\t errmsg(\"options array must not be null\")));\nsrc/backend/rewrite/rewriteDefine.c:\t\t\t\t\t\t errhint(\"In particular, the table cannot be involved in any foreign key relationships.\")));\nsrc/backend/rewrite/rewriteHandler.c:\t\t\t\t\t errmsg(\"INSERT with ON CONFLICT clause cannot be used with table that has INSERT or UPDATE rules\")));\nsrc/backend/rewrite/rewriteHandler.c:\t\t\t\t\t errmsg(\"WITH cannot be used in a query that is rewritten by rules into multiple queries\")));\nsrc/backend/storage/lmgr/predicate.c:\t\t\t\t errmsg(\"a snapshot-importing transaction must not be READ ONLY DEFERRABLE\")));\nsrc/backend/utils/adt/acl.c:\t\t\t\t errmsg(\"grant options cannot be granted back to your own grantor\")));\nsrc/backend/utils/adt/array_userfuncs.c:\t\t\t\t\t errmsg(\"initial position must not be null\")));\nsrc/backend/utils/adt/arrayfuncs.c:\t\t\t\t\t errmsg(\"upper bound cannot be less than lower bound\")));\nsrc/backend/utils/adt/arrayfuncs.c:\t\t\t\t\t errmsg(\"upper bound cannot be less than lower bound\")));\nsrc/backend/utils/adt/arrayfuncs.c:\t\t\t\t\t\t errmsg(\"upper bound cannot be less than lower bound\")));\nsrc/backend/utils/adt/arrayfuncs.c:\t\t\t\t\t\t errmsg(\"upper bound cannot be less than lower bound\")));\nsrc/backend/utils/adt/arrayfuncs.c:\t\t\t\t errmsg(\"dimension array or low bound array cannot be null\")));\nsrc/backend/utils/adt/arrayfuncs.c:\t\t\t\t errmsg(\"dimension array or low bound array cannot be null\")));\nsrc/backend/utils/adt/arrayfuncs.c:\t\t\t\t errmsg(\"dimension values cannot be null\")));\nsrc/backend/utils/adt/arrayfuncs.c:\t\t\t\t\t errmsg(\"dimension values cannot be 
null\")));\nsrc/backend/utils/adt/date.c:\t\t\t\t errmsg(\"TIME(%d)%s precision must not be negative\",\nsrc/backend/utils/adt/float.c:\t\t\t\t errmsg(\"operand, lower bound, and upper bound cannot be NaN\")));\nsrc/backend/utils/adt/genfile.c:\t\t\t\t\t errmsg(\"requested length cannot be negative\")));\nsrc/backend/utils/adt/genfile.c:\t\t\t\t\t errmsg(\"requested length cannot be negative\")));\nsrc/backend/utils/adt/genfile.c:\t\t\t\t\t errmsg(\"requested length cannot be negative\")));\nsrc/backend/utils/adt/geo_ops.c:\t\t\t\t errmsg(\"open path cannot be converted to polygon\")));\nsrc/backend/utils/adt/json.c:\t\t\t\t\t\t\t\t errdetail(\"\\\\u0000 cannot be converted to text.\"),\nsrc/backend/utils/adt/json.c:\t\t\t\t\t\t\t\t errdetail(\"Unicode escape values cannot be used for code point values above 007F when the server encoding is not UTF8.\"),\nsrc/backend/utils/adt/json.c:\t\t\t\t errmsg(\"field name must not be null\")));\nsrc/backend/utils/adt/json.c:\t\t\t\t\t errmsg(\"argument %d cannot be null\", i + 1),\nsrc/backend/utils/adt/jsonb.c:\t\t\t\t\t errmsg(\"argument %d: key must not be null\", i + 1)));\nsrc/backend/utils/adt/jsonb.c:\t\t\t\t errmsg(\"field name must not be null\")));\nsrc/backend/utils/adt/jsonpath_scan.l:\t\t\t\t errdetail(\"\\\\u0000 cannot be converted to text.\")));\nsrc/backend/utils/adt/jsonpath_scan.l:\t\t\t\t errdetail(\"Unicode escape values cannot be used for code \"\nsrc/backend/utils/adt/misc.c:\t\t\t\t\t\t errdetail(\"Quoted identifier must not be empty.\")));\nsrc/backend/utils/adt/numeric.c:\t\t\t\t\t errmsg(\"start value cannot be NaN\")));\nsrc/backend/utils/adt/numeric.c:\t\t\t\t\t errmsg(\"stop value cannot be NaN\")));\nsrc/backend/utils/adt/numeric.c:\t\t\t\t\t\t errmsg(\"step size cannot be NaN\")));\nsrc/backend/utils/adt/numeric.c:\t\t\t\t errmsg(\"operand, lower bound, and upper bound cannot be NaN\")));\nsrc/backend/utils/adt/rangetypes.c:\t\t\t\t errmsg(\"range constructor flags argument must not be 
null\")));\nsrc/backend/utils/adt/timestamp.c:\t\t\t\t errmsg(\"TIMESTAMP(%d)%s precision must not be negative\",\nsrc/backend/utils/adt/timestamp.c:\t\t\t\t errmsg(\"timestamp cannot be NaN\")));\nsrc/backend/utils/adt/timestamp.c:\t\t\t\t\t errmsg(\"INTERVAL(%d) precision must not be negative\",\nsrc/backend/utils/adt/tsvector_op.c:\t\t\t\t\t errmsg(\"configuration column \\\"%s\\\" must not be null\",\nsrc/backend/utils/adt/varlena.c:\t\t\t\t\t errmsg(\"null values cannot be formatted as an SQL identifier\")));\nsrc/backend/utils/adt/xml.c:\t\t\t\t errdetail(\"XML processing instruction target name cannot be \\\"%s\\\".\", target)));\nsrc/backend/utils/adt/xml.c:\t\t\t\t errmsg(\"row path filter must not be empty string\")));\nsrc/backend/utils/adt/xml.c:\t\t\t\t errmsg(\"column path filter must not be empty string\")));\nsrc/backend/utils/misc/guc-file.l:\t\t\t\t\t errmsg(\"parameter \\\"%s\\\" cannot be changed without restarting the server\",\nsrc/backend/utils/misc/guc-file.l:\t\t\trecord_config_file_error(psprintf(\"parameter \\\"%s\\\" cannot be changed without restarting the server\",\nsrc/backend/utils/misc/guc.c:\t\t\t\t\t\t errmsg(\"parameter \\\"%s\\\" cannot be changed\",\nsrc/backend/utils/misc/guc.c:\t\t\t\t\t\t errmsg(\"parameter \\\"%s\\\" cannot be changed without restarting the server\",\nsrc/backend/utils/misc/guc.c:\t\t\t\t\t\t errmsg(\"parameter \\\"%s\\\" cannot be changed now\",\nsrc/backend/utils/misc/guc.c:\t\t\t\t\t\t errmsg(\"parameter \\\"%s\\\" cannot be set after connection start\",\nsrc/backend/utils/misc/guc.c:\t\t\t\t\t\t\t\t errmsg(\"parameter \\\"%s\\\" cannot be changed without restarting the server\",\nsrc/backend/utils/misc/guc.c:\t\t\t\t\t\t\t\t errmsg(\"parameter \\\"%s\\\" cannot be changed without restarting the server\",\nsrc/backend/utils/misc/guc.c:\t\t\t\t\t\t\t\t errmsg(\"parameter \\\"%s\\\" cannot be changed without restarting the server\",\nsrc/backend/utils/misc/guc.c:\t\t\t\t\t\t\t\t errmsg(\"parameter 
\\\"%s\\\" cannot be changed without restarting the server\",\nsrc/backend/utils/misc/guc.c:\t\t\t\t\t\t\t\t errmsg(\"parameter \\\"%s\\\" cannot be changed without restarting the server\",\nsrc/backend/utils/misc/guc.c:\t\t\t\t\t errmsg(\"parameter \\\"%s\\\" cannot be changed\",\nsrc/backend/utils/misc/guc.c:\t\tGUC_check_errdetail(\"\\\"temp_buffers\\\" cannot be changed after any temporary tables have been accessed in the session.\");\nsrc/backend/utils/mmgr/portalmem.c:\t\t\t\t errmsg(\"portal \\\"%s\\\" cannot be run\", portal->name)));\nsrc/bin/initdb/initdb.c:\t\tpg_log_error(\"password prompt and password file cannot be specified together\");\nsrc/bin/pg_basebackup/pg_basebackup.c:\t\t\tpg_log_error(\"--no-slot cannot be used with slot name\");\nsrc/bin/pg_ctl/pg_ctl.c:\t\twrite_stderr(_(\"%s: cannot be run as root\\n\"\nsrc/bin/pg_dump/pg_dump.c:\t\tpg_log_error(\"options -s/--schema-only and -a/--data-only cannot be used together\");\nsrc/bin/pg_dump/pg_dump.c:\t\tpg_log_error(\"options -c/--clean and -a/--data-only cannot be used together\");\nsrc/bin/pg_dump/pg_dumpall.c:\t\tpg_log_error(\"option --exclude-database cannot be used together with -g/--globals-only, -r/--roles-only or -t/--tablespaces-only\");\nsrc/bin/pg_dump/pg_dumpall.c:\t\tpg_log_error(\"options -g/--globals-only and -r/--roles-only cannot be used together\");\nsrc/bin/pg_dump/pg_dumpall.c:\t\tpg_log_error(\"options -g/--globals-only and -t/--tablespaces-only cannot be used together\");\nsrc/bin/pg_dump/pg_dumpall.c:\t\tpg_log_error(\"options -r/--roles-only and -t/--tablespaces-only cannot be used together\");\nsrc/bin/pg_dump/pg_restore.c:\t\t\tpg_log_error(\"options -d/--dbname and -f/--file cannot be used together\");\nsrc/bin/pg_dump/pg_restore.c:\t\tpg_log_error(\"options -s/--schema-only and -a/--data-only cannot be used together\");\nsrc/bin/pg_dump/pg_restore.c:\t\tpg_log_error(\"options -c/--clean and -a/--data-only cannot be used 
together\");\nsrc/bin/pg_dump/pg_restore.c:\t\tpg_log_error(\"options -C/--create and -1/--single-transaction cannot be used together\");\nsrc/bin/pg_resetwal/pg_resetwal.c:\t\t\t\t\tpg_log_error(\"transaction ID epoch (-e) must not be -1\");\nsrc/bin/pg_resetwal/pg_resetwal.c:\t\t\t\t\tpg_log_error(\"transaction ID (-x) must not be 0\");\nsrc/bin/pg_resetwal/pg_resetwal.c:\t\t\t\t\tpg_log_error(\"OID (-o) must not be 0\");\nsrc/bin/pg_resetwal/pg_resetwal.c:\t\t\t\t\tpg_log_error(\"multitransaction ID (-m) must not be 0\");\nsrc/bin/pg_resetwal/pg_resetwal.c:\t\t\t\t\tpg_log_error(\"oldest multitransaction ID (-m) must not be 0\");\nsrc/bin/pg_resetwal/pg_resetwal.c:\t\t\t\t\tpg_log_error(\"multitransaction offset (-O) must not be -1\");\nsrc/bin/psql/command.c:\t\t\t\tpg_log_error(\"\\\\pset: csv_fieldsep cannot be a double quote, a newline, or a carriage return\");\nsrc/bin/psql/command.c:\t\tpg_log_error(\"\\\\watch cannot be used with an empty query\");\nsrc/bin/psql/common.c:\t\t\tpg_log_error(\"\\\\watch cannot be used with an empty query\");\nsrc/bin/psql/common.c:\t\t\tpg_log_error(\"\\\\watch cannot be used with COPY\");\nsrc/pl/plpgsql/src/pl_exec.c:\t\t\t\t errmsg(\"GET STACKED DIAGNOSTICS cannot be used outside an exception handler\")));\nsrc/pl/plpgsql/src/pl_exec.c:\t\t\t\t errmsg(\"lower bound of FOR loop cannot be null\")));\nsrc/pl/plpgsql/src/pl_exec.c:\t\t\t\t errmsg(\"upper bound of FOR loop cannot be null\")));\nsrc/pl/plpgsql/src/pl_exec.c:\t\t\t\t\t errmsg(\"BY value of FOR loop cannot be null\")));\nsrc/pl/plpgsql/src/pl_exec.c:\t\t\t\t errmsg(\"FOREACH expression must not be null\")));\nsrc/pl/plpgsql/src/pl_exec.c:\t\t\t\t errmsg(\"FOREACH loop variable must not be of an array type\")));\nsrc/pl/plpgsql/src/pl_exec.c:\t\t\t\t errmsg(\"RAISE without parameters cannot be used outside an exception handler\")));\nsrc/pl/plpgsql/src/pl_exec.c:\t\t\t\t\t errmsg(\"RAISE statement option cannot be 
null\")));\nsrc/pl/plpgsql/src/pl_exec.c:\t\t\t\t\t\t\t errmsg(\"null value cannot be assigned to variable \\\"%s\\\" declared NOT NULL\",\nsrc/pl/plpgsql/src/pl_exec.c:\t\t\t\t\t\t\t\t errmsg(\"null value cannot be assigned to variable \\\"%s\\\" declared NOT NULL\",\nsrc/pl/plpgsql/src/pl_exec.c:\t\t\t\t\t\t\t\t errmsg(\"array subscript in assignment must not be null\")));\nsrc/pl/plpgsql/src/pl_gram.y:\t\t\t\t\t\t\t\t\t\t errmsg(\"block label \\\"%s\\\" cannot be used in CONTINUE\",\nsrc/pl/plpgsql/src/pl_gram.y:\t\t\t\t\t\t\t\t\t\t errmsg(\"EXIT cannot be used outside a loop, unless it has a label\") :\nsrc/pl/plpgsql/src/pl_gram.y:\t\t\t\t\t\t\t\t\t\t errmsg(\"CONTINUE cannot be used outside a loop\"),\nsrc/pl/plpgsql/src/pl_gram.y:\t\t\t\t\t\t\t errmsg(\"record variable cannot be part of multiple-item INTO list\"),\nsrc/pl/plpgsql/src/pl_handler.c:\t\t\t\tGUC_check_errdetail(\"Key word \\\"%s\\\" cannot be combined with other key words.\", tok);\nsrc/pl/plpython/plpy_exec.c:\t\t\t\t\t\t\t errmsg(\"returned object cannot be iterated\"),\nsrc/pl/tcl/pltcl.c:\t\t\t\t errmsg(\"function \\\"%s\\\" must not be SECURITY DEFINER\",\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Tue, 30 Apr 2019 10:58:13 -0400",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "message style"
},
{
"msg_contents": "Hi,\n\nOn 2019-04-30 10:58:13 -0400, Alvaro Herrera wrote:\n> I have this two message patches that I've been debating with myself\n> about:\n> \n> --- a/src/backend/access/heap/heapam.c\n> +++ b/src/backend/access/heap/heapam.c\n> @@ -1282,7 +1282,7 @@ heap_getnext(TableScanDesc sscan, ScanDirection direction)\n> \tif (unlikely(sscan->rs_rd->rd_tableam != GetHeapamTableAmRoutine()))\n> \t\tereport(ERROR,\n> \t\t\t\t(errcode(ERRCODE_FEATURE_NOT_SUPPORTED),\n> -\t\t\t\t errmsg(\"only heap AM is supported\")));\n> +\t\t\t\t errmsg(\"only heap table access method is supported\")));\n> \n> \n> I think the original is not great\n\nAgreed.\n\n\n> but I'm not sure that the new is much\n> better either. I think this message says \"only AMs that behave using\n> the heapam routines are supported\"; we cannot say use the literal\n> \"heapam\" AM name because, as the comment two lines above says, it's\n> possible to copy the AM with a different name and it would be\n> acceptable.\n\nI'm not sure that's something worth being bothered about - the only\nreason to do that is for testing. I don't think that needs to be\nrefelected in error messages.\n\n\n> OTOH maybe this code will not survive for long, so it\n> doesn't matter much that the message is 100% correct; perhaps we should\n> just change errmsg() to errmsg_internal() and be done with it.\n\nI'd suspect some of them will survive for a while. 
What should a heap\nspecific pageinspect function do if not called for heap etc?\n\n\n> diff --git a/src/backend/access/table/tableamapi.c b/src/backend/access/table/tableamapi.c\n> index 0053dc95cab..c8b7598f785 100644\n> --- a/src/backend/access/table/tableamapi.c\n> +++ b/src/backend/access/table/tableamapi.c\n> @@ -103,7 +103,8 @@ check_default_table_access_method(char **newval, void **extra, GucSource source)\n> {\n> \tif (**newval == '\\0')\n> \t{\n> -\t\tGUC_check_errdetail(\"default_table_access_method may not be empty.\");\n> +\t\tGUC_check_errdetail(\"%s may not be empty.\",\n> +\t\t\t\t\t\t\t\"default_table_access_method\");\n> \t\treturn false;\n> \t}\n> \n> My problem here is not really the replacement of the name to %s, but the\n> \"may not be\" part of it. We don't use \"may not be\" anywhere else; most\n> places seem to use \"foo cannot be X\" and a small number of other places\n> use \"foo must not be Y\". I'm not able to judge which of the two is\n> better (so change all messages to use that form), or if there's a\n> semantic difference and if so which one to use in this case.\n\nNo idea about what's better here either. I don't think there's an\nintentional semantic difference.\n\nThanks for looking at this!\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 30 Apr 2019 08:09:40 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: message style"
},
{
"msg_contents": "Replying to myself to resend to the list, since my previous attempt\nseems to have been eaten by a grue.\n\n\nOn Tue, Apr 30, 2019 at 12:05 PM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> On Tue, Apr 30, 2019 at 10:58 AM Alvaro Herrera\n> <alvherre@2ndquadrant.com> wrote:\n> > My problem here is not really the replacement of the name to %s, but the\n> > \"may not be\" part of it. We don't use \"may not be\" anywhere else; most\n> > places seem to use \"foo cannot be X\" and a small number of other places\n> > use \"foo must not be Y\". I'm not able to judge which of the two is\n> > better (so change all messages to use that form), or if there's a\n> > semantic difference and if so which one to use in this case.\n>\n> The message style guidelines specifically discourage the use of \"may\",\n> IMHO for good reason. \"mumble may not be flidgy\" could be trying to\n> tell you that something is impermissible, as in \"the children may not\n> run in the house.\" But it could also be warning you that something is\n> doubtful, as in \"the children may not receive Easter candy this year\n> because there is a worldwide chocolate shortage.\" Sometimes the same\n> sentence can be read either way, like \"this table may not be\n> truncated,\" which can mean either that TRUNCATE is going to fail if\n> run in the future, or that it is unclear whether TRUNCATE was already\n> run in the past.\n>\n> As far as \"cannot\" and \"must not\" is murkier, but it looks to me as\n> though we prefer \"cannot\" to \"must not\" about 9:1, so most often\n> \"cannot\" is the right thing, but not always. The distinction seems to\n> be that we use \"cannot\" to talk about things that we are unwilling or\n> unable to do in the future, whereas \"must not\" is used to admonish the\n> user about what has already taken place. 
Consider:\n>\n> array must not contain nulls\n> header key must not contain newlines\n> cast function must not return a set\n> interval time zone \\\"%s\\\" must not include months or days\n> function \\\"%s\\\" must not be SECURITY DEFINER\n>\n> vs.\n>\n> cannot drop %s because %s requires it\n> cannot PREPARE a transaction that has manipulated logical replication workers\n> cannot reindex temporary tables of other sessions\n> cannot inherit from partitioned table \\\"%s\\\"\n>\n> The first set of messages are essentially complaints about the past.\n> The array shouldn't have contained nulls, but it did! The header key\n> should not have contained newlines, but it did! The cast function\n> should not return a set, but it does! Hence, we are sad and are\n> throwing an error. The second set are statements that we are\n> unwilling or unable to proceed, but they don't necessarily carry the\n> connotation that there is a problem already in the past. You've just\n> asked for something you are not going to get.\n>\n> I think that principle still leaves some ambiguity. For example, you\n> could phrase the second of the \"cannot\" messages as \"must not try to\n> PREPARE a transaction that has manipulated logical replication\n> workers.\" That's grammatical and everything, but it sounds a bit\n> accusatory, like the user is in trouble or something. I think that's\n> probably why we tend to prefer \"cannot\" in most cases. But sometimes\n> that would lead to a longer or less informative message. For example,\n> you can't just change\n>\n> function \\\"%s\\\" must not be SECURITY DEFINER\n>\n> to\n>\n> function \\\"%s\\\" can not be SECURITY DEFINER\n>\n> ...because the user will rightly respond \"well, I guess it can,\n> because it is.\" We could say\n>\n> can not execute security definer functions from PL/Tcl\n>\n> ...but that sucks because we now have no reasonable place to put the\n> function name. 
We could say\n>\n> can not execute security definer function \\\"%s\\\" from PL/Tcl\n>\n> ...but that also sucks because now the message only says that this one\n> particular security definer function cannot be executed, rather than\n> saying that ALL security definer functions cannot be executed. To\n> really get there, you'd have to do something like\n>\n> function \"\\%s\" cannot be executed by PL/Tcl because it is a security\n> definer function\n>\n> ...which is fine, but kinda long. On the plus side it's more clear\n> about the source of the error (PL/Tcl) than the current message which\n> doesn't state that explicitly, so perhaps it's an improvement anyway,\n> but the general point is that sometimes I think there is no succinct\n> way of expressing the complaint clearly without using \"must not\".\n>\n> --\n> Robert Haas\n> EnterpriseDB: http://www.enterprisedb.com\n> The Enterprise PostgreSQL Company\n\n\n\n--\nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Tue, 30 Apr 2019 20:31:40 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: message style"
},
{
"msg_contents": "While reading another thread that attempted to link to this email, I\ndiscovered that this email never made it to the list archives. I am\ntrying again.\n\nOn Tue, Apr 30, 2019 at 12:05 PM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> On Tue, Apr 30, 2019 at 10:58 AM Alvaro Herrera\n> <alvherre@2ndquadrant.com> wrote:\n> > My problem here is not really the replacement of the name to %s, but the\n> > \"may not be\" part of it. We don't use \"may not be\" anywhere else; most\n> > places seem to use \"foo cannot be X\" and a small number of other places\n> > use \"foo must not be Y\". I'm not able to judge which of the two is\n> > better (so change all messages to use that form), or if there's a\n> > semantic difference and if so which one to use in this case.\n>\n> The message style guidelines specifically discourage the use of \"may\",\n> IMHO for good reason. \"mumble may not be flidgy\" could be trying to\n> tell you that something is impermissible, as in \"the children may not\n> run in the house.\" But it could also be warning you that something is\n> doubtful, as in \"the children may not receive Easter candy this year\n> because there is a worldwide chocolate shortage.\" Sometimes the same\n> sentence can be read either way, like \"this table may not be\n> truncated,\" which can mean either that TRUNCATE is going to fail if\n> run in the future, or that it is unclear whether TRUNCATE was already\n> run in the past.\n>\n> As far as \"cannot\" and \"must not\" is murkier, but it looks to me as\n> though we prefer \"cannot\" to \"must not\" about 9:1, so most often\n> \"cannot\" is the right thing, but not always. The distinction seems to\n> be that we use \"cannot\" to talk about things that we are unwilling or\n> unable to do in the future, whereas \"must not\" is used to admonish the\n> user about what has already taken place. 
Consider:\n>\n> array must not contain nulls\n> header key must not contain newlines\n> cast function must not return a set\n> interval time zone \\\"%s\\\" must not include months or days\n> function \\\"%s\\\" must not be SECURITY DEFINER\n>\n> vs.\n>\n> cannot drop %s because %s requires it\n> cannot PREPARE a transaction that has manipulated logical replication workers\n> cannot reindex temporary tables of other sessions\n> cannot inherit from partitioned table \\\"%s\\\"\n>\n> The first set of messages are essentially complaints about the past.\n> The array shouldn't have contained nulls, but it did! The header key\n> should not have contained newlines, but it did! The cast function\n> should not return a set, but it does! Hence, we are sad and are\n> throwing an error. The second set are statements that we are\n> unwilling or unable to proceed, but they don't necessarily carry the\n> connotation that there is a problem already in the past. You've just\n> asked for something you are not going to get.\n>\n> I think that principle still leaves some ambiguity. For example, you\n> could phrase the second of the \"cannot\" messages as \"must not try to\n> PREPARE a transaction that has manipulated logical replication\n> workers.\" That's grammatical and everything, but it sounds a bit\n> accusatory, like the user is in trouble or something. I think that's\n> probably why we tend to prefer \"cannot\" in most cases. But sometimes\n> that would lead to a longer or less informative message. For example,\n> you can't just change\n>\n> function \\\"%s\\\" must not be SECURITY DEFINER\n>\n> to\n>\n> function \\\"%s\\\" can not be SECURITY DEFINER\n>\n> ...because the user will rightly respond \"well, I guess it can,\n> because it is.\" We could say\n>\n> can not execute security definer functions from PL/Tcl\n>\n> ...but that sucks because we now have no reasonable place to put the\n> function name. 
We could say\n>\n> can not execute security definer function \\\"%s\\\" from PL/Tcl\n>\n> ...but that also sucks because now the message only says that this one\n> particular security definer function cannot be executed, rather than\n> saying that ALL security definer functions cannot be executed. To\n> really get there, you'd have to do something like\n>\n> function \"\\%s\" cannot be executed by PL/Tcl because it is a security\n> definer function\n>\n> ...which is fine, but kinda long. On the plus side it's more clear\n> about the source of the error (PL/Tcl) than the current message which\n> doesn't state that explicitly, so perhaps it's an improvement anyway,\n> but the general point is that sometimes I think there is no succinct\n> way of expressing the complaint clearly without using \"must not\".\n>\n> --\n> Robert Haas\n> EnterpriseDB: http://www.enterprisedb.com\n> The Enterprise PostgreSQL Company\n\n\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Wed, 29 May 2019 17:32:45 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: message style"
}
] |
[
{
"msg_contents": "Hi,\n\nWhile looking at https://www.postgresql.org/message-id/20190430070552.jzqgcy4ihalx7nur%40alap3.anarazel.de\nI noticed that\n\n/*\n * ReindexIndex\n *\t\tRecreate a specific index.\n */\nvoid\nReindexIndex(RangeVar *indexRelation, int options, bool concurrent)\n{\n\tOid\t\t\tindOid;\n\tOid\t\t\theapOid = InvalidOid;\n\tRelation\tirel;\n\tchar\t\tpersistence;\n\n\t/*\n\t * Find and lock index, and check permissions on table; use callback to\n\t * obtain lock on table first, to avoid deadlock hazard. The lock level\n\t * used here must match the index lock obtained in reindex_index().\n\t */\n\tindOid = RangeVarGetRelidExtended(indexRelation,\n\t\t\t\t\t\t\t\t\t concurrent ? ShareUpdateExclusiveLock : AccessExclusiveLock,\n\t\t\t\t\t\t\t\t\t 0,\n\t\t\t\t\t\t\t\t\t RangeVarCallbackForReindexIndex,\n\t\t\t\t\t\t\t\t\t (void *) &heapOid);\n\ndoesn't pass concurrent-ness to RangeVarCallbackForReindexIndex(). Which\nthen goes on to lock the table\n\nstatic void\nRangeVarCallbackForReindexIndex(const RangeVar *relation,\n\t\t\t\t\t\t\t\tOid relId, Oid oldRelId, void *arg)\n\n\t\tif (OidIsValid(*heapOid))\n\t\t\tLockRelationOid(*heapOid, ShareLock);\n\nwithout knowing that it should use ShareUpdateExclusive. Which\ne.g. ReindexTable knows:\n\n\t/* The lock level used here should match reindex_relation(). */\n\theapOid = RangeVarGetRelidExtended(relation,\n\t\t\t\t\t\t\t\t\t concurrent ? 
ShareUpdateExclusiveLock : ShareLock,\n\t\t\t\t\t\t\t\t\t 0,\n\t\t\t\t\t\t\t\t\t RangeVarCallbackOwnsTable, NULL);\n\nso there's a lock upgrade hazard.\n\nCreating a table\nCREATE TABLE blarg(id serial primary key);\nand then using pgbench to reindex it:\nREINDEX INDEX CONCURRENTLY blarg_pkey;\n\nindeed proves that there's a problem:\n\n2019-04-30 08:12:58.679 PDT [30844][7/925] ERROR: 40P01: deadlock detected\n2019-04-30 08:12:58.679 PDT [30844][7/925] DETAIL: Process 30844 waits for ShareUpdateExclusiveLock on relation 50661 of database 13408; blocked by process 30848.\n\tProcess 30848 waits for ShareUpdateExclusiveLock on relation 50667 of database 13408; blocked by process 30844.\n\tProcess 30844: REINDEX INDEX CONCURRENTLY blarg_pkey;\n\tProcess 30848: REINDEX INDEX CONCURRENTLY blarg_pkey;\n2019-04-30 08:12:58.679 PDT [30844][7/925] HINT: See server log for query details.\n2019-04-30 08:12:58.679 PDT [30844][7/925] LOCATION: DeadLockReport, deadlock.c:1140\n2019-04-30 08:12:58.679 PDT [30844][7/925] STATEMENT: REINDEX INDEX CONCURRENTLY blarg_pkey;\n\nI assume the fix would be to pass a struct {LOCKMODE lockmode; Oid\nheapOid;} to RangeVarCallbackForReindexIndex().\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 30 Apr 2019 08:17:35 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Heap lock levels for REINDEX INDEX CONCURRENTLY not quite right?"
},
{
"msg_contents": "On 2019-04-30 17:17, Andres Freund wrote:\n> \tindOid = RangeVarGetRelidExtended(indexRelation,\n> \t\t\t\t\t\t\t\t\t concurrent ? ShareUpdateExclusiveLock : AccessExclusiveLock,\n> \t\t\t\t\t\t\t\t\t 0,\n> \t\t\t\t\t\t\t\t\t RangeVarCallbackForReindexIndex,\n> \t\t\t\t\t\t\t\t\t (void *) &heapOid);\n> \n> doesn't pass concurrent-ness to RangeVarCallbackForReindexIndex(). Which\n> then goes on to lock the table\n> \n> static void\n> RangeVarCallbackForReindexIndex(const RangeVar *relation,\n> \t\t\t\t\t\t\t\tOid relId, Oid oldRelId, void *arg)\n> \n> \t\tif (OidIsValid(*heapOid))\n> \t\t\tLockRelationOid(*heapOid, ShareLock);\n> \n> without knowing that it should use ShareUpdateExclusive. Which\n> e.g. ReindexTable knows:\n> \n> \t/* The lock level used here should match reindex_relation(). */\n> \theapOid = RangeVarGetRelidExtended(relation,\n> \t\t\t\t\t\t\t\t\t concurrent ? ShareUpdateExclusiveLock : ShareLock,\n> \t\t\t\t\t\t\t\t\t 0,\n> \t\t\t\t\t\t\t\t\t RangeVarCallbackOwnsTable, NULL);\n> \n> so there's a lock upgrade hazard.\n\nConfirmed.\n\nWhat seems weird to me is that the existing callback argument heapOid\nisn't used at all. It seems to have been like that since the original\ncommit of the callback infrastructure. Therefore also, this code\n\n if (relId != oldRelId && OidIsValid(oldRelId))\n {\n /* lock level here should match reindex_index() heap lock */\n UnlockRelationOid(*heapOid, ShareLock);\n\nin RangeVarCallbackForReindexIndex() can't ever do anything useful.\n\nPatch to remove the unused code attached; but needs some checking for\nthis dubious conditional block.\n\nThoughts?\n\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services",
"msg_date": "Thu, 2 May 2019 10:44:44 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Heap lock levels for REINDEX INDEX CONCURRENTLY not quite right?"
},
{
"msg_contents": "On 2019-05-02 10:44:44 +0200, Peter Eisentraut wrote:\n> On 2019-04-30 17:17, Andres Freund wrote:\n> > \tindOid = RangeVarGetRelidExtended(indexRelation,\n> > \t\t\t\t\t\t\t\t\t concurrent ? ShareUpdateExclusiveLock : AccessExclusiveLock,\n> > \t\t\t\t\t\t\t\t\t 0,\n> > \t\t\t\t\t\t\t\t\t RangeVarCallbackForReindexIndex,\n> > \t\t\t\t\t\t\t\t\t (void *) &heapOid);\n> > \n> > doesn't pass concurrent-ness to RangeVarCallbackForReindexIndex(). Which\n> > then goes on to lock the table\n> > \n> > static void\n> > RangeVarCallbackForReindexIndex(const RangeVar *relation,\n> > \t\t\t\t\t\t\t\tOid relId, Oid oldRelId, void *arg)\n> > \n> > \t\tif (OidIsValid(*heapOid))\n> > \t\t\tLockRelationOid(*heapOid, ShareLock);\n> > \n> > without knowing that it should use ShareUpdateExclusive. Which\n> > e.g. ReindexTable knows:\n> > \n> > \t/* The lock level used here should match reindex_relation(). */\n> > \theapOid = RangeVarGetRelidExtended(relation,\n> > \t\t\t\t\t\t\t\t\t concurrent ? ShareUpdateExclusiveLock : ShareLock,\n> > \t\t\t\t\t\t\t\t\t 0,\n> > \t\t\t\t\t\t\t\t\t RangeVarCallbackOwnsTable, NULL);\n> > \n> > so there's a lock upgrade hazard.\n> \n> Confirmed.\n> \n> What seems weird to me is that the existing callback argument heapOid\n> isn't used at all. It seems to have been like that since the original\n> commit of the callback infrastructure. Therefore also, this code\n\nHm? But that's a different callback from the one used from\nreindex_index()? reindex_relation() uses the\nRangeVarCallbackOwnsTable() callback and passes in NULL as the argument,\nwhereas reindex_index() passses in RangeVarCallbackForReindexIndex() and\npasses in &heapOid?\n\nAnd RangeVarCallbackForReindexIndex() pretty clearly sets it *heapOid:\n\n\t\t * Lock level here should match reindex_index() heap lock. 
If the OID\n\t\t * isn't valid, it means the index as concurrently dropped, which is\n\t\t * not a problem for us; just return normally.\n\t\t */\n\t\t*heapOid = IndexGetRelation(relId, true);\n\n\n> Patch to remove the unused code attached; but needs some checking for\n> this dubious conditional block.\n> \n> Thoughts?\n\nI might miss something here, and it's actually unused. But if so the fix\nwould be to make it being used, because it's actually\nimportant. Otherwise ReindexIndex() becomes racy or has even more\ndeadlock hazards.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 2 May 2019 07:33:57 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: Heap lock levels for REINDEX INDEX CONCURRENTLY not quite right?"
},
{
"msg_contents": "On 2019-05-02 16:33, Andres Freund wrote:\n> And RangeVarCallbackForReindexIndex() pretty clearly sets it *heapOid:\n> \n> \t\t * Lock level here should match reindex_index() heap lock. If the OID\n> \t\t * isn't valid, it means the index as concurrently dropped, which is\n> \t\t * not a problem for us; just return normally.\n> \t\t */\n> \t\t*heapOid = IndexGetRelation(relId, true);\n\nIt sets it but uses it only internally. There is no code path that\npasses in a non-zero heapOid, and there is no code path that does\nanything with the heapOid passed back out.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Thu, 2 May 2019 22:39:15 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Heap lock levels for REINDEX INDEX CONCURRENTLY not quite right?"
},
{
"msg_contents": "Hi,\n\nOn 2019-05-02 22:39:15 +0200, Peter Eisentraut wrote:\n> On 2019-05-02 16:33, Andres Freund wrote:\n> > And RangeVarCallbackForReindexIndex() pretty clearly sets it *heapOid:\n> > \n> > \t\t * Lock level here should match reindex_index() heap lock. If the OID\n> > \t\t * isn't valid, it means the index as concurrently dropped, which is\n> > \t\t * not a problem for us; just return normally.\n> > \t\t */\n> > \t\t*heapOid = IndexGetRelation(relId, true);\n> \n> It sets it but uses it only internally. There is no code path that\n> passes in a non-zero heapOid, and there is no code path that does\n> anything with the heapOid passed back out.\n\nRangeVarGetRelidExtended() can call the callback multiple times, if\nthere are any concurrent schema changes. That's why it's unlocking the\npreviously locked heap oid.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 2 May 2019 13:42:05 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: Heap lock levels for REINDEX INDEX CONCURRENTLY not quite right?"
},
{
"msg_contents": "On 2019-05-02 22:42, Andres Freund wrote:\n> RangeVarGetRelidExtended() can call the callback multiple times, if\n> there are any concurrent schema changes. That's why it's unlocking the\n> previously locked heap oid.\n\nAh that explains it then.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Fri, 3 May 2019 09:33:41 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Heap lock levels for REINDEX INDEX CONCURRENTLY not quite right?"
},
{
"msg_contents": "On 2019-05-02 10:44, Peter Eisentraut wrote:\n>> so there's a lock upgrade hazard.\n> Confirmed.\n\nHere is a patch along the lines of your sketch. I cleaned up the\nvariable naming a bit too.\n\nREINDEX CONCURRENTLY is still deadlock prone because of\nWaitForOlderSnapshots(), so this doesn't actually fix your test case,\nbut that seems unrelated to this particular issue.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services",
"msg_date": "Fri, 3 May 2019 09:37:07 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Heap lock levels for REINDEX INDEX CONCURRENTLY not quite right?"
},
{
"msg_contents": "Hi,\n\nOn 2019-05-03 09:37:07 +0200, Peter Eisentraut wrote:\n> REINDEX CONCURRENTLY is still deadlock prone because of\n> WaitForOlderSnapshots(), so this doesn't actually fix your test case,\n> but that seems unrelated to this particular issue.\n\nRight.\n\nI've not tested the change, but it looks reasonable to me. The change\nof moving the logic the reset of *heapOid to the unlock perhaps is\ndebatable, but I think it's OK.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Fri, 3 May 2019 08:23:21 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: Heap lock levels for REINDEX INDEX CONCURRENTLY not quite right?"
},
{
"msg_contents": "On Fri, May 03, 2019 at 08:23:21AM -0700, Andres Freund wrote:\n> I've not tested the change, but it looks reasonable to me. The change\n> of moving the logic the reset of *heapOid to the unlock perhaps is\n> debatable, but I think it's OK.\n\nI have not checked the patch in details yet, but it strikes me that\nwe should have an isolation test case which does the following:\n- Take a lock on the table created, without committing yet the\ntransaction where the lock is taken.\n- Run two REINDEX CONCURRENTLY in two other sessions.\n- Commit the first transaction.\nThe result should be no deadlocks happening in the two sessions\nrunning the reindex. I can see the deadlock easily with three psql\nsessions, running manually the queries.\n--\nMichael",
"msg_date": "Sat, 4 May 2019 21:59:20 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Heap lock levels for REINDEX INDEX CONCURRENTLY not quite right?"
},
{
"msg_contents": "On Sat, May 04, 2019 at 09:59:20PM +0900, Michael Paquier wrote:\n> The result should be no deadlocks happening in the two sessions\n> running the reindex. I can see the deadlock easily with three psql\n> sessions, running manually the queries.\n\n+ * If the OID isn't valid, it means the index as concurrently dropped,\n+ * which is not a problem for us; just return normally.\nTypo here s/as/is/.\n\nI have looked closer at the patch and the change proposed looks good\nto me.\n\nNow, what do we do about the potential deadlock issues in\nWaitForOlderSnapshots? The attached is an isolation test able to\nreproduce the deadlock within WaitForOlderSnapshots() with two\nparallel REINDEX CONCURRENTLY. I'd like to think that the best way to\ndo that would be to track in vacuumFlags the backends running a\nREINDEX and just exclude them from GetCurrentVirtualXIDs() because\nwe don't actually care about missing index entries in this case like\nVACUUM. But it looks also to me that is issue is broader and goes\ndown to utility commands which can take a lock on a table which cannot\nbe run in transaction blocks, hence code paths used by CREATE INDEX\nCONCURRENTLY and DROP INDEX CONCURRENTLY could also cause a similar\ndeadlock, no?\n--\nMichael",
"msg_date": "Tue, 7 May 2019 12:07:56 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Heap lock levels for REINDEX INDEX CONCURRENTLY not quite right?"
},
{
"msg_contents": "On Tue, May 07, 2019 at 12:07:56PM +0900, Michael Paquier wrote:\n> Now, what do we do about the potential deadlock issues in\n> WaitForOlderSnapshots? The attached is an isolation test able to\n> reproduce the deadlock within WaitForOlderSnapshots() with two\n> parallel REINDEX CONCURRENTLY. I'd like to think that the best way to\n> do that would be to track in vacuumFlags the backends running a\n> REINDEX and just exclude them from GetCurrentVirtualXIDs() because\n> we don't actually care about missing index entries in this case like\n> VACUUM. But it looks also to me that is issue is broader and goes\n> down to utility commands which can take a lock on a table which cannot\n> be run in transaction blocks, hence code paths used by CREATE INDEX\n> CONCURRENTLY and DROP INDEX CONCURRENTLY could also cause a similar\n> deadlock, no?\n\nMore to the point, one can just do that without REINDEX:\n- session 1:\ncreate table aa (a int);\nbegin;\nlock aa in row exclusive mode;\n- session 2:\ncreate index concurrently aai on aa(a); --blocks\n- session 3:\ncreate index concurrently aai2 on aa(a); --blocks\n- session 1:\ncommit;\n\nThen session 2 deadlocks while session 3 finishes correctly. I don't\nknow if this is a class of problems we'd want to address for v12, but\nif we do then CIC (and DROP INDEX CONCURRENTLY?) could benefit from\nit.\n--\nMichael",
"msg_date": "Tue, 7 May 2019 12:25:43 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Heap lock levels for REINDEX INDEX CONCURRENTLY not quite right?"
},
{
"msg_contents": "Hi,\n\nOn 2019-05-07 12:25:43 +0900, Michael Paquier wrote:\n> On Tue, May 07, 2019 at 12:07:56PM +0900, Michael Paquier wrote:\n> > Now, what do we do about the potential deadlock issues in\n> > WaitForOlderSnapshots? The attached is an isolation test able to\n> > reproduce the deadlock within WaitForOlderSnapshots() with two\n> > parallel REINDEX CONCURRENTLY. I'd like to think that the best way to\n> > do that would be to track in vacuumFlags the backends running a\n> > REINDEX and just exclude them from GetCurrentVirtualXIDs() because\n> > we don't actually care about missing index entries in this case like\n> > VACUUM. But it looks also to me that is issue is broader and goes\n> > down to utility commands which can take a lock on a table which cannot\n> > be run in transaction blocks, hence code paths used by CREATE INDEX\n> > CONCURRENTLY and DROP INDEX CONCURRENTLY could also cause a similar\n> > deadlock, no?\n> \n> More to the point, one can just do that without REINDEX:\n> - session 1:\n> create table aa (a int);\n> begin;\n> lock aa in row exclusive mode;\n> - session 2:\n> create index concurrently aai on aa(a); --blocks\n> - session 3:\n> create index concurrently aai2 on aa(a); --blocks\n> - session 1:\n> commit;\n> \n> Then session 2 deadlocks while session 3 finishes correctly. I don't\n> know if this is a class of problems we'd want to address for v12, but\n> if we do then CIC (and DROP INDEX CONCURRENTLY?) could benefit from\n> it.\n\nThis seems like a pre-existing issue to me. We probably should improve\nthat, but I don't think it has to be tied to 12.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 7 May 2019 09:04:28 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: Heap lock levels for REINDEX INDEX CONCURRENTLY not quite right?"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2019-05-07 12:25:43 +0900, Michael Paquier wrote:\n>> Then session 2 deadlocks while session 3 finishes correctly. I don't\n>> know if this is a class of problems we'd want to address for v12, but\n>> if we do then CIC (and DROP INDEX CONCURRENTLY?) could benefit from\n>> it.\n\n> This seems like a pre-existing issue to me. We probably should improve\n> that, but I don't think it has to be tied to 12.\n\nYeah. CREATE INDEX CONCURRENTLY has always had a deadlock hazard,\nso it's hardly surprising that REINDEX CONCURRENTLY does too.\nI don't think that fixing that is in-scope for v12, even if we had\nan idea how to do it, which we don't.\n\nWe do need to fix the wrong-lock-level problem of course, but\nthat seems straightforward.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 07 May 2019 18:45:36 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Heap lock levels for REINDEX INDEX CONCURRENTLY not quite right?"
},
{
"msg_contents": "On Tue, May 07, 2019 at 06:45:36PM -0400, Tom Lane wrote:\n> Yeah. CREATE INDEX CONCURRENTLY has always had a deadlock hazard,\n> so it's hardly surprising that REINDEX CONCURRENTLY does too.\n> I don't think that fixing that is in-scope for v12, even if we had\n> an idea how to do it, which we don't.\n\nThe most straight-forward approach I can think of would be to\ndetermine if non-transactional commands taking a lock on a table can\nbe safely skipped or not when checking for older snapshots than the\nminimum where the index is marked as valid. That's quite complex to\ntarget v12, so I agree to keep it out of the stability work.\n\n> We do need to fix the wrong-lock-level problem of course, but\n> that seems straightforward.\n\nSure.\n--\nMichael",
"msg_date": "Wed, 8 May 2019 08:43:11 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Heap lock levels for REINDEX INDEX CONCURRENTLY not quite right?"
},
{
"msg_contents": "On 2019-05-07 05:07, Michael Paquier wrote:\n> On Sat, May 04, 2019 at 09:59:20PM +0900, Michael Paquier wrote:\n>> The result should be no deadlocks happening in the two sessions\n>> running the reindex. I can see the deadlock easily with three psql\n>> sessions, running manually the queries.\n> \n> + * If the OID isn't valid, it means the index as concurrently dropped,\n> + * which is not a problem for us; just return normally.\n> Typo here s/as/is/.\n> \n> I have looked closer at the patch and the change proposed looks good\n> to me.\n\ncommitted\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Wed, 8 May 2019 14:33:39 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Heap lock levels for REINDEX INDEX CONCURRENTLY not quite right?"
}
] |
[
{
"msg_contents": "I'd have thought that disabling enable_partition_pruning would,\num, disable partition pruning. It does not:\n\nregression=# create table p (a int) partition by list (a);\nCREATE TABLE\nregression=# create table p1 partition of p for values in (1);\nCREATE TABLE\nregression=# create table p2 partition of p for values in (2);\nCREATE TABLE\nregression=# explain select * from p1 where a = 3; \n QUERY PLAN \n----------------------------------------------------\n Seq Scan on p1 (cost=0.00..41.88 rows=13 width=4)\n Filter: (a = 3)\n(2 rows)\n\nregression=# set enable_partition_pruning TO off;\nSET\nregression=# explain select * from p1 where a = 3;\n QUERY PLAN \n----------------------------------------------------\n Seq Scan on p1 (cost=0.00..41.88 rows=13 width=4)\n Filter: (a = 3)\n(2 rows)\n\n\nThe fact that we fail to prune the first child is a separate issue\ndriven by some ruleutils.c limitations, cf\nhttps://www.postgresql.org/message-id/flat/001001d4f44b$2a2cca50$7e865ef0$@lab.ntt.co.jp\n\nMy point here is that the second EXPLAIN should have shown scanning\nboth partitions, shouldn't it?\n\n(v11 behaves the same as HEAD here; didn't try v10.)\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 30 Apr 2019 14:35:51 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Turning off enable_partition_pruning doesn't"
},
{
"msg_contents": "On 2019-Apr-30, Tom Lane wrote:\n\n> regression=# explain select * from p1 where a = 3; \n\nBut you're reading from the partition, not from the partitioned table ...\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Tue, 30 Apr 2019 14:43:59 -0400",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Turning off enable_partition_pruning doesn't"
},
{
"msg_contents": "Alvaro Herrera <alvherre@2ndquadrant.com> writes:\n> On 2019-Apr-30, Tom Lane wrote:\n>> regression=# explain select * from p1 where a = 3; \n\n> But you're reading from the partition, not from the partitioned table ...\n\nArgh! Where's my brown paper bag?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 30 Apr 2019 15:10:26 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Turning off enable_partition_pruning doesn't"
}
] |
[
{
"msg_contents": "Hi Bruce\n\nI saw this commit;\ncommit ad23adc5a169b114f9ff325932cbf2ce1c5e69c1\n|Author: Bruce Momjian <bruce@momjian.us>\n|Date: Tue Apr 30 14:06:57 2019 -0400\n|\n| doc: improve PG 12 to_timestamp()/to_date() wording\n\nwhich cleans up language added at cf984672.\n\nCan I suggest this additional change, which is updated and extracted from my\nlarger set of documentation fixes.\n\nJustin\n\ndiff --git a/doc/src/sgml/func.sgml b/doc/src/sgml/func.sgml\nindex 96fafdd..b420585 100644\n--- a/doc/src/sgml/func.sgml\n+++ b/doc/src/sgml/func.sgml\n@@ -6400,20 +6400,20 @@ SELECT regexp_match('abc01234xyz', '(?:(.*?)(\\d+)(.*)){1,1}');\n </para>\n <para>\n If <literal>FX</literal> is specified, a separator in the template string\n- matches exactly one character in input string. Notice we don't insist the\n- input string character be the same as the template string separator.\n+ matches exactly one character in the input string. But note that the\n+ input string character is not required to be the same as the separator from the template string.\n For example, <literal>to_timestamp('2000/JUN', 'FXYYYY MON')</literal>\n works, but <literal>to_timestamp('2000/JUN', 'FXYYYY MON')</literal>\n- returns an error because the second template string space is consumed\n- by the letter <literal>J</literal> in the input string.\n+ returns an error because the second space in the template string consumes\n+ the letter <literal>M</literal> from the input string.\n </para>\n </listitem>\n \n <listitem>\n <para>\n A <literal>TZH</literal> template pattern can match a signed number.\n- Without the <literal>FX</literal> option, it can lead to ambiguity in\n- interpretation of the minus sign, which can also be interpreted as a separator.\n+ Without the <literal>FX</literal> option, minus signs may be ambiguous,\n+ and could be interpreted as a separator.\n This ambiguity is resolved as follows: If the number of separators before\n <literal>TZH</literal> in the template string 
is less than the number of\n separators before the minus sign in the input string, the minus sign",
"msg_date": "Tue, 30 Apr 2019 13:36:36 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: doc: improve PG 12 to_timestamp()/to_date() wording"
},
{
"msg_contents": "Hi!\n\nI'd like to add couple of comments from my side.\n\nOn Tue, Apr 30, 2019 at 9:36 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> I saw this commit;\n> commit ad23adc5a169b114f9ff325932cbf2ce1c5e69c1\n> |Author: Bruce Momjian <bruce@momjian.us>\n> |Date: Tue Apr 30 14:06:57 2019 -0400\n> |\n> | doc: improve PG 12 to_timestamp()/to_date() wording\n>\n> which cleans up language added at cf984672.\n>\n> Can I suggest this additional change, which is updated and extracted from my\n> larger set of documentation fixes.\n>\n> Justin\n>\n> diff --git a/doc/src/sgml/func.sgml b/doc/src/sgml/func.sgml\n> index 96fafdd..b420585 100644\n> --- a/doc/src/sgml/func.sgml\n> +++ b/doc/src/sgml/func.sgml\n> @@ -6400,20 +6400,20 @@ SELECT regexp_match('abc01234xyz', '(?:(.*?)(\\d+)(.*)){1,1}');\n> </para>\n> <para>\n> If <literal>FX</literal> is specified, a separator in the template string\n> - matches exactly one character in input string. Notice we don't insist the\n> - input string character be the same as the template string separator.\n> + matches exactly one character in the input string. But note that the\n> + input string character is not required to be the same as the separator from the template string.\n\nLooks good for me.\n\n> For example, <literal>to_timestamp('2000/JUN', 'FXYYYY MON')</literal>\n> works, but <literal>to_timestamp('2000/JUN', 'FXYYYY MON')</literal>\n> - returns an error because the second template string space is consumed\n> - by the letter <literal>J</literal> in the input string.\n> + returns an error because the second space in the template string consumes\n> + the letter <literal>M</literal> from the input string.\n> </para>\n> </listitem>\n\nWhy <literal>M</literal>? There is no letter \"M\" is input string.\nThe issue here is that we already consumed \"J\" from \"JUN\" and trying\nto match \"UN\" to \"MON\". So, I think we should live\n<literal>J</literal> here. 
The rest of this change looks good.\n\n> <listitem>\n> <para>\n> A <literal>TZH</literal> template pattern can match a signed number.\n> - Without the <literal>FX</literal> option, it can lead to ambiguity in\n> - interpretation of the minus sign, which can also be interpreted as a separator.\n> + Without the <literal>FX</literal> option, minus signs may be ambiguous,\n> + and could be interpreted as a separator.\n> This ambiguity is resolved as follows: If the number of separators before\n> <literal>TZH</literal> in the template string is less than the number of\n> separators before the minus sign in the input string, the minus sign\n\nLooks good for me.\n\n------\nAlexander Korotkov\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company\n\n\n",
"msg_date": "Tue, 30 Apr 2019 21:48:14 +0300",
"msg_from": "Alexander Korotkov <a.korotkov@postgrespro.ru>",
"msg_from_op": false,
"msg_subject": "Re: doc: improve PG 12 to_timestamp()/to_date() wording"
},
{
"msg_contents": "On Tue, Apr 30, 2019 at 09:48:14PM +0300, Alexander Korotkov wrote:\n> I'd like to add couple of comments from my side.\n\n> > - returns an error because the second template string space is consumed\n> > - by the letter <literal>J</literal> in the input string.\n> > + returns an error because the second space in the template string consumes\n> > + the letter <literal>M</literal> from the input string.\n> \n> Why <literal>M</literal>? There is no letter \"M\" is input string.\n> The issue here is that we already consumed \"J\" from \"JUN\" and trying\n> to match \"UN\" to \"MON\". So, I think we should live\n> <literal>J</literal> here. The rest of this change looks good.\n\nSeems like I confused myself while resolving rebase conflict.\n\nThanks for checking.\n\nJustin\n\n\n",
"msg_date": "Tue, 30 Apr 2019 19:14:04 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: doc: improve PG 12 to_timestamp()/to_date() wording"
},
{
"msg_contents": "On Tue, Apr 30, 2019 at 07:14:04PM -0500, Justin Pryzby wrote:\n> On Tue, Apr 30, 2019 at 09:48:14PM +0300, Alexander Korotkov wrote:\n> > I'd like to add couple of comments from my side.\n> \n> > > - returns an error because the second template string space is consumed\n> > > - by the letter <literal>J</literal> in the input string.\n> > > + returns an error because the second space in the template string consumes\n> > > + the letter <literal>M</literal> from the input string.\n> > \n> > Why <literal>M</literal>? There is no letter \"M\" is input string.\n> > The issue here is that we already consumed \"J\" from \"JUN\" and trying\n> > to match \"UN\" to \"MON\". So, I think we should live\n> > <literal>J</literal> here. The rest of this change looks good.\n> \n> Seems like I confused myself while resolving rebase conflict.\n> \n> Thanks for checking.\n\nFind attached updated patch, which seems to still be needed.\n\nThis was subsumed and now extracted from a larger patch, from which Michael at\none point applied a few hunks.\nI have some minor updates based on review from Andres, but there didn't seem to\nbe much interest so I haven't pursued it.\nhttps://www.postgresql.org/message-id/20190520182001.GA25675%40telsasoft.com\n\nJustin",
"msg_date": "Sat, 6 Jul 2019 15:24:25 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: doc: improve PG 12 to_timestamp()/to_date() wording"
},
{
"msg_contents": "On Sat, Jul 6, 2019 at 03:24:25PM -0500, Justin Pryzby wrote:\n> On Tue, Apr 30, 2019 at 07:14:04PM -0500, Justin Pryzby wrote:\n> > On Tue, Apr 30, 2019 at 09:48:14PM +0300, Alexander Korotkov wrote:\n> > > I'd like to add couple of comments from my side.\n> > \n> > > > - returns an error because the second template string space is consumed\n> > > > - by the letter <literal>J</literal> in the input string.\n> > > > + returns an error because the second space in the template string consumes\n> > > > + the letter <literal>M</literal> from the input string.\n> > > \n> > > Why <literal>M</literal>? There is no letter \"M\" is input string.\n> > > The issue here is that we already consumed \"J\" from \"JUN\" and trying\n> > > to match \"UN\" to \"MON\". So, I think we should live\n> > > <literal>J</literal> here. The rest of this change looks good.\n> > \n> > Seems like I confused myself while resolving rebase conflict.\n> > \n> > Thanks for checking.\n> \n> Find attached updated patch, which seems to still be needed.\n> \n> This was subsumed and now extracted from a larger patch, from which Michael at\n> one point applied a few hunks.\n> I have some minor updates based on review from Andres, but there didn't seem to\n> be much interest so I haven't pursued it.\n> https://www.postgresql.org/message-id/20190520182001.GA25675%40telsasoft.com\n\nPatch applied back through PG 12. Thanks.\n\n-- \n Bruce Momjian <bruce@momjian.us> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n+ As you are, so once was I. As I am, so you will be. +\n+ Ancient Roman grave inscription +\n\n\n",
"msg_date": "Mon, 8 Jul 2019 23:04:19 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: doc: improve PG 12 to_timestamp()/to_date() wording"
}
] |
[
{
"msg_contents": "Comment for table_complete_speculative() says\n\n/*\n * Complete \"speculative insertion\" started in the same transaction.\nIf\n * succeeded is true, the tuple is fully inserted, if false, it's\nremoved.\n */\nstatic inline void\ntable_complete_speculative(Relation rel, TupleTableSlot *slot,\n uint32 specToken, bool succeeded)\n{\n rel->rd_tableam->tuple_complete_speculative(rel, slot, specToken,\n succeeded);\n}\n\nbut code really refers to succeeded as failure. Since that argument is\npassed as specConflict, means conflict happened and hence the tuple\nshould be removed. It would be better to fix the code to match the\ncomment as in AM layer its better to deal with succeeded to finish the\ninsertion and not other way round.\n\ndiff --git a/src/backend/access/heap/heapam_handler.c\nb/src/backend/access/heap/heapam_handler.c\nindex 4d179881f27..241639cfc20 100644\n--- a/src/backend/access/heap/heapam_handler.c\n+++ b/src/backend/access/heap/heapam_handler.c\n@@ -282,7 +282,7 @@ heapam_tuple_complete_speculative(Relation\nrelation, TupleTableSlot *slot,\n HeapTuple tuple = ExecFetchSlotHeapTuple(slot, true, &shouldFree);\n\n /* adjust the tuple's state accordingly */\n- if (!succeeded)\n+ if (succeeded)\n heap_finish_speculative(relation, &slot->tts_tid);\n else\n heap_abort_speculative(relation, &slot->tts_tid);\ndiff --git a/src/backend/executor/nodeModifyTable.c\nb/src/backend/executor/nodeModifyTable.c\nindex 444c0c05746..d545bbce8a2 100644\n--- a/src/backend/executor/nodeModifyTable.c\n+++ b/src/backend/executor/nodeModifyTable.c\n@@ -556,7 +556,7 @@ ExecInsert(ModifyTableState *mtstate,\n\n /* adjust the tuple's state accordingly */\n table_complete_speculative(resultRelationDesc, slot,\n-\n specToken, specConflict);\n+\n specToken, !specConflict);\n\n /*\n * Wake up anyone waiting for our decision.\nThey will re-check\n\n- Ashwin and Melanie\n\n\n",
"msg_date": "Tue, 30 Apr 2019 11:53:38 -0700",
"msg_from": "Ashwin Agrawal <aagrawal@pivotal.io>",
"msg_from_op": true,
"msg_subject": "Match table_complete_speculative() code to comment"
},
{
"msg_contents": "Hi,\n\nOn 2019-04-30 11:53:38 -0700, Ashwin Agrawal wrote:\n> Comment for table_complete_speculative() says\n> \n> /*\n> * Complete \"speculative insertion\" started in the same transaction.\n> If\n> * succeeded is true, the tuple is fully inserted, if false, it's\n> removed.\n> */\n> static inline void\n> table_complete_speculative(Relation rel, TupleTableSlot *slot,\n> uint32 specToken, bool succeeded)\n> {\n> rel->rd_tableam->tuple_complete_speculative(rel, slot, specToken,\n> succeeded);\n> }\n> \n> but code really refers to succeeded as failure. Since that argument is\n> passed as specConflict, means conflict happened and hence the tuple\n> should be removed. It would be better to fix the code to match the\n> comment as in AM layer its better to deal with succeeded to finish the\n> insertion and not other way round.\n> \n> diff --git a/src/backend/access/heap/heapam_handler.c\n> b/src/backend/access/heap/heapam_handler.c\n> index 4d179881f27..241639cfc20 100644\n> --- a/src/backend/access/heap/heapam_handler.c\n> +++ b/src/backend/access/heap/heapam_handler.c\n> @@ -282,7 +282,7 @@ heapam_tuple_complete_speculative(Relation\n> relation, TupleTableSlot *slot,\n> HeapTuple tuple = ExecFetchSlotHeapTuple(slot, true, &shouldFree);\n> \n> /* adjust the tuple's state accordingly */\n> - if (!succeeded)\n> + if (succeeded)\n> heap_finish_speculative(relation, &slot->tts_tid);\n> else\n> heap_abort_speculative(relation, &slot->tts_tid);\n> diff --git a/src/backend/executor/nodeModifyTable.c\n> b/src/backend/executor/nodeModifyTable.c\n> index 444c0c05746..d545bbce8a2 100644\n> --- a/src/backend/executor/nodeModifyTable.c\n> +++ b/src/backend/executor/nodeModifyTable.c\n> @@ -556,7 +556,7 @@ ExecInsert(ModifyTableState *mtstate,\n> \n> /* adjust the tuple's state accordingly */\n> table_complete_speculative(resultRelationDesc, slot,\n> -\n> specToken, specConflict);\n> +\n> specToken, !specConflict);\n\nAnd pushed, as 
https://git.postgresql.org/gitweb/?p=postgresql.git;a=commit;h=aa4b8c61d2cd57b53be03defb04d59b232a0e150\nwith the part that wasn't covered by tests now covered by\nhttp://git.postgresql.org/gitweb/?p=postgresql.git;a=commit;h=08e2edc0767ab6e619970f165cb34d4673105f23\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 14 May 2019 12:22:42 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Match table_complete_speculative() code to comment"
}
] |
[
{
"msg_contents": "Today, while poking around the table_complete_speculative code which Ashwin\nmentioned in [1], we were trying to understand when exactly we would\ncomplete a\nspeculative insert by aborting.\n\nWe added a logging message to heapam_tuple_complete_speculative before it\ncalls\nheap_abort_speculative and ran the regression and isolation tests to see\nwhat\ntest failed so we knew how to exercise this codepath.\nNo tests failed, so we spent some time trying to understand when 'succeeded'\nwould be true coming into heap_tuple_complete_speculative.\n\nEventually, we figured out that if one transaction speculatively inserts a\ntuple into a table with a unique index and then pauses before inserting the\nvalue into the index, and while it is paused, another transaction\nsuccessfully\ninserts a value which would conflict with that value, it would result in an\naborted speculative insertion.\n\nt1(id,val)\nunique index t1(id)\n\ns1: insert into t1 values(1, 'someval') on conflict(id) do update set val =\n'someotherval';\ns1: pause in ExecInsert before calling ExecInsertIndexTuples\ns2: insert into t1 values(1, 'someval');\ns2: continue\n\nWe don't know of a way to add this scenario to the current isolation\nframework.\n\nCan anyone think of a good way to put this codepath under test?\n\n- Melanie & Ashwin\n\n[1]\nhttps://www.postgresql.org/message-id/CALfoeitk7-TACwYv3hCw45FNPjkA86RfXg4iQ5kAOPhR%2BF1Y4w%40mail.gmail.com\n\nToday, while poking around the table_complete_speculative code which Ashwinmentioned in [1], we were trying to understand when exactly we would complete aspeculative insert by aborting.We added a logging message to heapam_tuple_complete_speculative before it callsheap_abort_speculative and ran the regression and isolation tests to see whattest failed so we knew how to exercise this codepath.No tests failed, so we spent some time trying to understand when 'succeeded'would be true coming into heap_tuple_complete_speculative.Eventually, we 
figured out that if one transaction speculatively inserts atuple into a table with a unique index and then pauses before inserting thevalue into the index, and while it is paused, another transaction successfullyinserts a value which would conflict with that value, it would result in anaborted speculative insertion.t1(id,val)unique index t1(id)s1: insert into t1 values(1, 'someval') on conflict(id) do update set val = 'someotherval';s1: pause in ExecInsert before calling ExecInsertIndexTupless2: insert into t1 values(1, 'someval');s2: continueWe don't know of a way to add this scenario to the current isolation framework.Can anyone think of a good way to put this codepath under test?- Melanie & Ashwin[1] https://www.postgresql.org/message-id/CALfoeitk7-TACwYv3hCw45FNPjkA86RfXg4iQ5kAOPhR%2BF1Y4w%40mail.gmail.com",
"msg_date": "Tue, 30 Apr 2019 17:15:55 -0700",
"msg_from": "Melanie Plageman <melanieplageman@gmail.com>",
"msg_from_op": true,
"msg_subject": "Adding a test for speculative insert abort case"
},
{
"msg_contents": "Hi,\n\nOn 2019-04-30 17:15:55 -0700, Melanie Plageman wrote:\n> Today, while poking around the table_complete_speculative code which Ashwin\n> mentioned in [1], we were trying to understand when exactly we would\n> complete a\n> speculative insert by aborting.\n\n(FWIW, it's on my todo queue to look at this)\n\n\n> We added a logging message to heapam_tuple_complete_speculative before it\n> calls\n> heap_abort_speculative and ran the regression and isolation tests to see\n> what\n> test failed so we knew how to exercise this codepath.\n> No tests failed, so we spent some time trying to understand when 'succeeded'\n> would be true coming into heap_tuple_complete_speculative.\n> \n> Eventually, we figured out that if one transaction speculatively inserts a\n> tuple into a table with a unique index and then pauses before inserting the\n> value into the index, and while it is paused, another transaction\n> successfully\n> inserts a value which would conflict with that value, it would result in an\n> aborted speculative insertion.\n> \n> t1(id,val)\n> unique index t1(id)\n> \n> s1: insert into t1 values(1, 'someval') on conflict(id) do update set val =\n> 'someotherval';\n> s1: pause in ExecInsert before calling ExecInsertIndexTuples\n> s2: insert into t1 values(1, 'someval');\n> s2: continue\n> \n> We don't know of a way to add this scenario to the current isolation\n> framework.\n\n> Can anyone think of a good way to put this codepath under test?\n\nNot easily so - that's why the ON CONFLICT patch didn't add code\ncoverage for it :(. I wonder if you could whip something up by having\nanother non-unique expression index, where the expression acquires a\nadvisory lock? If that advisory lock where previously acquired by\nanother session, that should allow to write a reliable isolation test?\n\nAlternatively, as a fallback, there's a short pgbench test, I wonder if we\ncould just adapt that to use ON CONFLICT UPDATE?\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 30 Apr 2019 17:22:23 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Adding a test for speculative insert abort case"
},
{
"msg_contents": "On Tue, Apr 30, 2019 at 5:22 PM Andres Freund <andres@anarazel.de> wrote:\n\n>\n> Not easily so - that's why the ON CONFLICT patch didn't add code\n> coverage for it :(. I wonder if you could whip something up by having\n> another non-unique expression index, where the expression acquires a\n> advisory lock? If that advisory lock where previously acquired by\n> another session, that should allow to write a reliable isolation test?\n>\n>\nSo, I took a look at one of the existing tests that does something like what\nyou mentioned and tried the following:\n----------\ncreate table t1(key int, val text);\ncreate unique index t1_uniq_idx on t1(key);\ncreate or replace function t1_lock_func(int) returns int immutable language\nsql AS\n'select pg_advisory_xact_lock_shared(1); select $1';\ncreate index t1_lock_idx ON t1(t1_lock_func(key));\n----------\ns1:\nbegin isolation level read committed;\ninsert into t1 values(1, 'someval');\ns2:\nset default_transaction_isolation = 'read committed';\ninsert into t1 values(1, 'anyval') on conflict(key) do update set val =\n'updatedval';\n----------\n\nSo, the above doesn't work because s2 waits to acquire the lock in the first\nphase of the speculative insert -- when it is just checking the index,\nbefore\ninserting to the table and before inserting to the index.\n\nThen when the s1 is committed, we won't execute the speculative insert code\nat\nall and will go into ExecOnConflictUpdate instead.\n\nMaybe I just need a different kind of advisory lock to allow\nExecCheckIndexConstraints to be able to check the index here. 
I figured it\nis a\nread operation, so a shared advisory lock should be okay, but it seems like\nit\nis not okay\n\nWithout knowing any of the context, on an initial pass of debugging, I did\nnotice that, in the initial check of the index by s2, XactLockTableWait is\ncalled with reason_wait as XLTW_InsertIndex (even though we are just trying\nto\ncheck it, so maybe it knows our intentions:))\n\nIs there something I can do in the test to allow my check to go\nthrough but the insert to have to wait?\n\n-- \nMelanie Plageman\n\nOn Tue, Apr 30, 2019 at 5:22 PM Andres Freund <andres@anarazel.de> wrote:\nNot easily so - that's why the ON CONFLICT patch didn't add code\ncoverage for it :(. I wonder if you could whip something up by having\nanother non-unique expression index, where the expression acquires a\nadvisory lock? If that advisory lock where previously acquired by\nanother session, that should allow to write a reliable isolation test?\nSo, I took a look at one of the existing tests that does something like whatyou mentioned and tried the following:----------create table t1(key int, val text);create unique index t1_uniq_idx on t1(key);create or replace function t1_lock_func(int) returns int immutable language sql AS'select pg_advisory_xact_lock_shared(1); select $1';create index t1_lock_idx ON t1(t1_lock_func(key));----------s1:begin isolation level read committed;insert into t1 values(1, 'someval');s2:set default_transaction_isolation = 'read committed';insert into t1 values(1, 'anyval') on conflict(key) do update set val = 'updatedval';----------So, the above doesn't work because s2 waits to acquire the lock in the firstphase of the speculative insert -- when it is just checking the index, beforeinserting to the table and before inserting to the index.Then when the s1 is committed, we won't execute the speculative insert code atall and will go into ExecOnConflictUpdate instead.Maybe I just need a different kind of advisory lock to allowExecCheckIndexConstraints 
to be able to check the index here. I figured it is aread operation, so a shared advisory lock should be okay, but it seems like itis not okayWithout knowing any of the context, on an initial pass of debugging, I didnotice that, in the initial check of the index by s2, XactLockTableWait iscalled with reason_wait as XLTW_InsertIndex (even though we are just trying tocheck it, so maybe it knows our intentions:))Is there something I can do in the test to allow my check to gothrough but the insert to have to wait? -- Melanie Plageman",
"msg_date": "Tue, 30 Apr 2019 18:34:42 -0700",
"msg_from": "Melanie Plageman <melanieplageman@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Adding a test for speculative insert abort case"
},
{
"msg_contents": "On Tue, Apr 30, 2019 at 5:16 PM Melanie Plageman\n<melanieplageman@gmail.com> wrote:\n> Can anyone think of a good way to put this codepath under test?\n\nDuring the initial development of ON CONFLICT, speculative insertion\nitself was tested using custom stress testing that you can still get\nhere:\n\nhttps://github.com/petergeoghegan/jjanes_upsert\n\nI'm not sure that this is something that you can adopt, but I\ncertainly found it very useful at the time. It tests whether or not\nthere is agreement among concurrent speculative inserters, and whether\nor not there are \"unprincipled deadlocks\" (user hostile deadlocks that\ncannot be fixed by reordering something in application code).\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Tue, 30 Apr 2019 18:43:08 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: Adding a test for speculative insert abort case"
},
{
"msg_contents": "Hi,\n\nOn April 30, 2019 6:43:08 PM PDT, Peter Geoghegan <pg@bowt.ie> wrote:\n>On Tue, Apr 30, 2019 at 5:16 PM Melanie Plageman\n><melanieplageman@gmail.com> wrote:\n>> Can anyone think of a good way to put this codepath under test?\n>\n>During the initial development of ON CONFLICT, speculative insertion\n>itself was tested using custom stress testing that you can still get\n>here:\n>\n>https://github.com/petergeoghegan/jjanes_upsert\n>\n>I'm not sure that this is something that you can adopt, but I\n>certainly found it very useful at the time. It tests whether or not\n>there is agreement among concurrent speculative inserters, and whether\n>or not there are \"unprincipled deadlocks\" (user hostile deadlocks that\n>cannot be fixed by reordering something in application code).\n\nI think we want a deterministic case. I recall asking for that back then...\n\nAndres\n-- \nSent from my Android device with K-9 Mail. Please excuse my brevity.\n\n\n",
"msg_date": "Tue, 30 Apr 2019 19:09:51 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Adding a test for speculative insert abort case"
},
{
"msg_contents": "On Wed, May 1, 2019 at 12:16 PM Melanie Plageman\n<melanieplageman@gmail.com> wrote:\n> s1: insert into t1 values(1, 'someval') on conflict(id) do update set val = 'someotherval';\n> s1: pause in ExecInsert before calling ExecInsertIndexTuples\n> s2: insert into t1 values(1, 'someval');\n> s2: continue\n>\n> We don't know of a way to add this scenario to the current isolation framework.\n>\n> Can anyone think of a good way to put this codepath under test?\n\nHi Melanie,\n\nI think it'd be nice to have a set of macros that can create wait\npoints in the C code that isolation tests can control, in a special\nbuild. Perhaps there could be shm hash table of named wait points in\nshared memory; if DEBUG_WAIT_POINT(\"foo\") finds that \"foo\" is not\npresent, it continues, but if it finds an entry it waits for it to go\naway. Then isolation tests could add/remove names and signal a\ncondition variable to release waiters.\n\nI contemplated that while working on SKIP LOCKED, which had a bunch of\nweird edge cases that I tested by inserting throw-away wait-point code\nlike this:\n\nhttps://www.postgresql.org/message-id/CADLWmXXss83oiYD0pn_SfQfg%2ByNEpPbPvgDb8w6Fh--jScSybA%40mail.gmail.com\n\n-- \nThomas Munro\nhttps://enterprisedb.com\n\n\n",
"msg_date": "Wed, 1 May 2019 14:13:46 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Adding a test for speculative insert abort case"
},
{
"msg_contents": "On Tue, Apr 30, 2019 at 7:14 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n\n> I think it'd be nice to have a set of macros that can create wait\n> points in the C code that isolation tests can control, in a special\n> build. Perhaps there could be shm hash table of named wait points in\n> shared memory; if DEBUG_WAIT_POINT(\"foo\") finds that \"foo\" is not\n> present, it continues, but if it finds an entry it waits for it to go\n> away. Then isolation tests could add/remove names and signal a\n> condition variable to release waiters.\n>\n> I contemplated that while working on SKIP LOCKED, which had a bunch of\n> weird edge cases that I tested by inserting throw-away wait-point code\n> like this:\n>\n>\n> https://www.postgresql.org/message-id/CADLWmXXss83oiYD0pn_SfQfg%2ByNEpPbPvgDb8w6Fh--jScSybA%40mail.gmail.com\n>\n> Yes, I agree it would be nice to have a framework like this.\n\nGreenplum actually has a fault injection framework that, I believe, works\nsimilarly to what you are describing -- i.e. sets a variable in shared\nmemory.\nThere is an extension, gp_inject_fault, which allows you to set the faults.\nAndreas Scherbaum wrote a blog post about how to use it [1].\n\nThe Greenplum implementation is not documented particularly well in the\ncode,\nbut, it is something that folks working on Greenplum have talked about\nmodifying\nand proposing to Postgres.\n\n[1] http://engineering.pivotal.io/post/testing_greenplum_database_using_fault_injection/\n\n\n-- \nMelanie Plageman\n\nOn Tue, Apr 30, 2019 at 7:14 PM Thomas Munro <thomas.munro@gmail.com> wrote:I think it'd be nice to have a set of macros that can create wait\npoints in the C code that isolation tests can control, in a special\nbuild. Perhaps there could be shm hash table of named wait points in\nshared memory; if DEBUG_WAIT_POINT(\"foo\") finds that \"foo\" is not\npresent, it continues, but if it finds an entry it waits for it to go\naway. 
Then isolation tests could add/remove names and signal a\ncondition variable to release waiters.\n\nI contemplated that while working on SKIP LOCKED, which had a bunch of\nweird edge cases that I tested by inserting throw-away wait-point code\nlike this:\n\nhttps://www.postgresql.org/message-id/CADLWmXXss83oiYD0pn_SfQfg%2ByNEpPbPvgDb8w6Fh--jScSybA%40mail.gmail.com\nYes, I agree it would be nice to have a framework like this.Greenplum actually has a fault injection framework that, I believe, workssimilarly to what you are describing -- i.e. sets a variable in shared memory.There is an extension, gp_inject_fault, which allows you to set the faults.Andreas Scherbaum wrote a blog post about how to use it [1].The Greenplum implementation is not documented particularly well in the code,but, it is something that folks working on Greenplum have talked about modifyingand proposing to Postgres.[1] http://engineering.pivotal.io/post/testing_greenplum_database_using_fault_injection/ -- Melanie Plageman",
"msg_date": "Wed, 1 May 2019 11:18:57 -0700",
"msg_from": "Melanie Plageman <melanieplageman@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Adding a test for speculative insert abort case"
},
{
"msg_contents": "Hi,\n\nOn 2019-04-30 18:34:42 -0700, Melanie Plageman wrote:\n> On Tue, Apr 30, 2019 at 5:22 PM Andres Freund <andres@anarazel.de> wrote:\n>\n> >\n> > Not easily so - that's why the ON CONFLICT patch didn't add code\n> > coverage for it :(. I wonder if you could whip something up by having\n> > another non-unique expression index, where the expression acquires a\n> > advisory lock? If that advisory lock where previously acquired by\n> > another session, that should allow to write a reliable isolation test?\n> >\n> >\n> So, I took a look at one of the existing tests that does something like what\n> you mentioned and tried the following:\n> ----------\n> create table t1(key int, val text);\n> create unique index t1_uniq_idx on t1(key);\n> create or replace function t1_lock_func(int) returns int immutable language\n> sql AS\n> 'select pg_advisory_xact_lock_shared(1); select $1';\n> create index t1_lock_idx ON t1(t1_lock_func(key));\n> ----------\n> s1:\n> begin isolation level read committed;\n> insert into t1 values(1, 'someval');\n> s2:\n> set default_transaction_isolation = 'read committed';\n> insert into t1 values(1, 'anyval') on conflict(key) do update set val =\n> 'updatedval';\n> ----------\n>\n> So, the above doesn't work because s2 waits to acquire the lock in the first\n> phase of the speculative insert -- when it is just checking the index,\n> before\n> inserting to the table and before inserting to the index.\n\nCouldn't that be addressed by having t1_lock_func() acquire two locks?\nOne for blocking during the initial index probe, and one for the\nspeculative insertion?\n\nI'm imagining something like\n\nif (pg_try_advisory_xact_lock(1))\n pg_advisory_xact_lock(2);\nelse\n pg_advisory_xact_lock(1);\n\nin t1_lock_func. 
If you then make the session something roughly like\n\ns1: pg_advisory_xact_lock(1);\ns1: pg_advisory_xact_lock(2);\n\ns2: upsert t1 <blocking for 1>\ns1: pg_advisory_xact_unlock(1);\ns2: <continuing>\ns2: <blocking for 2>\ns1: insert into t1 values(1, 'someval');\ns1: pg_advisory_xact_unlock(2);\ns2: <continuing>\ns2: spec-conflict\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 1 May 2019 11:41:48 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Adding a test for speculative insert abort case"
},
{
"msg_contents": "Hi,\n\nOn 2019-05-01 11:41:48 -0700, Andres Freund wrote:\n> I'm imagining something like\n> \n> if (pg_try_advisory_xact_lock(1))\n> pg_advisory_xact_lock(2);\n> else\n> pg_advisory_xact_lock(1);\n> \n> in t1_lock_func. If you then make the session something roughly like\n> \n> s1: pg_advisory_xact_lock(1);\n> s1: pg_advisory_xact_lock(2);\n> \n> s2: upsert t1 <blocking for 1>\n> s1: pg_advisory_xact_unlock(1);\n> s2: <continuing>\n> s2: <blocking for 2>\n> s1: insert into t1 values(1, 'someval');\n> s1: pg_advisory_xact_unlock(2);\n> s2: <continuing>\n> s2: spec-conflict\n\nNeeded to be slightly more complicated than that, but not that much. See\nthe attached test. What do you think?\n\nI think we should apply something like this (minus the WARNING, of\ncourse). It's a bit complicated, but it seems worth covering this\nspecial case.\n\nGreetings,\n\nAndres Freund",
"msg_date": "Fri, 10 May 2019 14:40:38 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Adding a test for speculative insert abort case"
},
{
"msg_contents": "Hi,\n\nOn 2019-05-10 14:40:38 -0700, Andres Freund wrote:\n> On 2019-05-01 11:41:48 -0700, Andres Freund wrote:\n> > I'm imagining something like\n> > \n> > if (pg_try_advisory_xact_lock(1))\n> > pg_advisory_xact_lock(2);\n> > else\n> > pg_advisory_xact_lock(1);\n> > \n> > in t1_lock_func. If you then make the session something roughly like\n> > \n> > s1: pg_advisory_xact_lock(1);\n> > s1: pg_advisory_xact_lock(2);\n> > \n> > s2: upsert t1 <blocking for 1>\n> > s1: pg_advisory_xact_unlock(1);\n> > s2: <continuing>\n> > s2: <blocking for 2>\n> > s1: insert into t1 values(1, 'someval');\n> > s1: pg_advisory_xact_unlock(2);\n> > s2: <continuing>\n> > s2: spec-conflict\n> \n> Needed to be slightly more complicated than that, but not that much. See\n> the attached test. What do you think?\n> \n> I think we should apply something like this (minus the WARNING, of\n> course). It's a bit complicated, but it seems worth covering this\n> special case.\n\nAnd pushed. Let's see what the buildfarm says.\n\nRegards,\n\nAndres\n\n\n",
"msg_date": "Tue, 14 May 2019 12:19:14 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Adding a test for speculative insert abort case"
},
{
"msg_contents": "On Tue, May 14, 2019 at 12:19 PM Andres Freund <andres@anarazel.de> wrote:\n\n> Hi,\n>\n> On 2019-05-10 14:40:38 -0700, Andres Freund wrote:\n> > On 2019-05-01 11:41:48 -0700, Andres Freund wrote:\n> > > I'm imagining something like\n> > >\n> > > if (pg_try_advisory_xact_lock(1))\n> > > pg_advisory_xact_lock(2);\n> > > else\n> > > pg_advisory_xact_lock(1);\n> > >\n> > > in t1_lock_func. If you then make the session something roughly like\n> > >\n> > > s1: pg_advisory_xact_lock(1);\n> > > s1: pg_advisory_xact_lock(2);\n> > >\n> > > s2: upsert t1 <blocking for 1>\n> > > s1: pg_advisory_xact_unlock(1);\n> > > s2: <continuing>\n> > > s2: <blocking for 2>\n> > > s1: insert into t1 values(1, 'someval');\n> > > s1: pg_advisory_xact_unlock(2);\n> > > s2: <continuing>\n> > > s2: spec-conflict\n> >\n> > Needed to be slightly more complicated than that, but not that much. See\n> > the attached test. What do you think?\n> >\n> > I think we should apply something like this (minus the WARNING, of\n> > course). It's a bit complicated, but it seems worth covering this\n> > special case.\n>\n> And pushed. Let's see what the buildfarm says.\n>\n> Regards,\n>\n> Andres\n>\n\nSo, I recognize this has already been merged. 
However, after reviewing the\ntest,\nI believe there is a typo in the second permutation.\n\n# Test that speculative locks are correctly acquired and released, s2\n# inserts, s1 updates.\n\nI think you meant\n\n# Test that speculative locks are correctly acquired and released, s1\n# inserts, s2 updates.\n\nThough, I'm actually not sure how this permutation is exercising different\ncode than the first permutation.\n\nAlso, it would make the test easier to understand for me if, for instances\nof the\nword \"lock\" in the test description and comments, you specified locktype --\ne.g.\nadvisory lock.\nI got confused between the speculative lock, the object locks on the index\nwhich\nare required for probing or inserting into the index, and the advisory\nlocks.\n\nBelow is a potential re-wording of one of the permutations that is more\nexplicit\nand more clear to me as a reader.\n\n# Test that speculative locks are correctly acquired and released, s2\n# inserts, s1 updates.\npermutation\n # acquire a number of advisory locks, to control execution flow - the\n # blurt_and_lock function acquires advisory locks that allow us to\n # continue after a) the optimistic conflict probe b) after the\n # insertion of the speculative tuple.\n\n \"controller_locks\"\n \"controller_show\"\n # Both sessions will be waiting on advisory locks\n \"s1_upsert\" \"s2_upsert\"\n \"controller_show\"\n # Switch both sessions to wait on the other advisory lock next time\n \"controller_unlock_1_1\" \"controller_unlock_2_1\"\n # Allow both sessions to do the optimistic conflict probe and do the\n # speculative insertion into the table\n # They will then be waiting on another advisory lock when they attempt to\n # update the index\n \"controller_unlock_1_3\" \"controller_unlock_2_3\"\n \"controller_show\"\n # Allow the second session to finish insertion (complete speculative)\n \"controller_unlock_2_2\"\n # This should now show a successful insertion\n \"controller_show\"\n # Allow the first 
session to finish insertion (abort speculative)\n \"controller_unlock_1_2\"\n # This should now show a successful UPSERT\n \"controller_show\"\n\nI was also wondering: Is it possible that one of the \"controller_unlock_*\"\nfunctions will get called before the session with the upsert has had a\nchance to\nmove forward in its progress and be waiting on that lock?\nThat is, given that we don't check that the sessions are waiting on the\nlocks\nbefore unlocking them, is there a race condition?\n\nI noticed that there is not a test case which would cover the speculative\nwait\ncodepath. This seems much more challenging, however, it does seem like a\nworthwhile test to have.\n\n-- \nMelanie Plageman\n\nOn Tue, May 14, 2019 at 12:19 PM Andres Freund <andres@anarazel.de> wrote:Hi,\n\nOn 2019-05-10 14:40:38 -0700, Andres Freund wrote:\n> On 2019-05-01 11:41:48 -0700, Andres Freund wrote:\n> > I'm imagining something like\n> > \n> > if (pg_try_advisory_xact_lock(1))\n> > pg_advisory_xact_lock(2);\n> > else\n> > pg_advisory_xact_lock(1);\n> > \n> > in t1_lock_func. If you then make the session something roughly like\n> > \n> > s1: pg_advisory_xact_lock(1);\n> > s1: pg_advisory_xact_lock(2);\n> > \n> > s2: upsert t1 <blocking for 1>\n> > s1: pg_advisory_xact_unlock(1);\n> > s2: <continuing>\n> > s2: <blocking for 2>\n> > s1: insert into t1 values(1, 'someval');\n> > s1: pg_advisory_xact_unlock(2);\n> > s2: <continuing>\n> > s2: spec-conflict\n> \n> Needed to be slightly more complicated than that, but not that much. See\n> the attached test. What do you think?\n> \n> I think we should apply something like this (minus the WARNING, of\n> course). It's a bit complicated, but it seems worth covering this\n> special case.\n\nAnd pushed. Let's see what the buildfarm says.\n\nRegards,\n\nAndres\nSo, I recognize this has already been merged. 
However, after reviewing the test,I believe there is a typo in the second permutation.# Test that speculative locks are correctly acquired and released, s2# inserts, s1 updates.I think you meant# Test that speculative locks are correctly acquired and released, s1# inserts, s2 updates.Though, I'm actually not sure how this permutation is exercising differentcode than the first permutation.Also, it would make the test easier to understand for me if, for instances of theword \"lock\" in the test description and comments, you specified locktype -- e.g.advisory lock.I got confused between the speculative lock, the object locks on the index whichare required for probing or inserting into the index, and the advisory locks.Below is a potential re-wording of one of the permutations that is more explicitand more clear to me as a reader.# Test that speculative locks are correctly acquired and released, s2# inserts, s1 updates.permutation # acquire a number of advisory locks, to control execution flow - the # blurt_and_lock function acquires advisory locks that allow us to # continue after a) the optimistic conflict probe b) after the # insertion of the speculative tuple. 
\"controller_locks\" \"controller_show\" # Both sessions will be waiting on advisory locks \"s1_upsert\" \"s2_upsert\" \"controller_show\" # Switch both sessions to wait on the other advisory lock next time \"controller_unlock_1_1\" \"controller_unlock_2_1\" # Allow both sessions to do the optimistic conflict probe and do the # speculative insertion into the table # They will then be waiting on another advisory lock when they attempt to # update the index \"controller_unlock_1_3\" \"controller_unlock_2_3\" \"controller_show\" # Allow the second session to finish insertion (complete speculative) \"controller_unlock_2_2\" # This should now show a successful insertion \"controller_show\" # Allow the first session to finish insertion (abort speculative) \"controller_unlock_1_2\" # This should now show a successful UPSERT \"controller_show\"I was also wondering: Is it possible that one of the \"controller_unlock_*\"functions will get called before the session with the upsert has had a chance tomove forward in its progress and be waiting on that lock?That is, given that we don't check that the sessions are waiting on the locksbefore unlocking them, is there a race condition?I noticed that there is not a test case which would cover the speculative waitcodepath. This seems much more challenging, however, it does seem like aworthwhile test to have.-- Melanie Plageman",
"msg_date": "Wed, 15 May 2019 18:34:15 -0700",
"msg_from": "Melanie Plageman <melanieplageman@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Adding a test for speculative insert abort case"
},
{
"msg_contents": "Hi,\n\nOn 2019-05-15 18:34:15 -0700, Melanie Plageman wrote:\n> So, I recognize this has already been merged. However, after reviewing the\n> test,\n> I believe there is a typo in the second permutation.\n> \n> # Test that speculative locks are correctly acquired and released, s2\n> # inserts, s1 updates.\n> \n> I think you meant\n> \n> # Test that speculative locks are correctly acquired and released, s1\n> # inserts, s2 updates.\n\nHm, yea.\n\n\n> Though, I'm actually not sure how this permutation is exercising differen.\n> code than the first permutation.\n\nI was basically just trying to make sure that there's a sensible result\nindependent of which transaction \"wins\", while keeping the command-start\norder the same. Probably not crucial, but seemed like a reasonable\naddition.\n\n\n> Also, it would make the test easier to understand for me if, for instances\n> of the\n> word \"lock\" in the test description and comments, you specified locktype --\n> e.g.\n> advisory lock.\n> I got confused between the speculative lock, the object locks on the index\n> which\n> are required for probing or inserting into the index, and the advisory\n> locks.\n> \n> Below is a potential re-wording of one of the permutations that is more\n> explicit\n> and more clear to me as a reader.\n\nMinor gripe: For the future, it's easier to such changes as a patch as\nwell - otherwise others need to move it to the file and diff it to\ncomment on the changes.\n\n\n> # Test that speculative locks are correctly acquired and released, s2\n> # inserts, s1 updates.\n> permutation\n> # acquire a number of advisory locks, to control execution flow - the\n> # blurt_and_lock function acquires advisory locks that allow us to\n> # continue after a) the optimistic conflict probe b) after the\n> # insertion of the speculative tuple.\n> \n> \"controller_locks\"\n> \"controller_show\"\n> # Both sessions will be waiting on advisory locks\n> \"s1_upsert\" \"s2_upsert\"\n> 
\"controller_show\"\n> # Switch both sessions to wait on the other advisory lock next time\n> \"controller_unlock_1_1\" \"controller_unlock_2_1\"\n> # Allow both sessions to do the optimistic conflict probe and do the\n> # speculative insertion into the table\n> # They will then be waiting on another advisory lock when they attempt to\n> # update the index\n> \"controller_unlock_1_3\" \"controller_unlock_2_3\"\n> \"controller_show\"\n> # Allow the second session to finish insertion (complete speculative)\n> \"controller_unlock_2_2\"\n> # This should now show a successful insertion\n> \"controller_show\"\n> # Allow the first session to finish insertion (abort speculative)\n> \"controller_unlock_1_2\"\n> # This should now show a successful UPSERT\n> \"controller_show\"\n\n\n> I was also wondering: Is it possible that one of the\n> \"controller_unlock_*\" functions will get called before the session\n> with the upsert has had a chance to move forward in its progress and\n> be waiting on that lock? That is, given that we don't check that the\n> sessions are waiting on the locks before unlocking them, is there a\n> race condition?\n\nIsolationtester only switches between commands when either the command\nfinished, or once it's know to be waiting for a lock. Therefore I don't\nthink this race exists? That logic is in the if (flags & STEP_NONBLOCK)\nblock in isolationtester.c:try_complete_step().\n\nDoes that make sense? Or did I misunderstand your concern?\n\n\n> I noticed that there is not a test case which would cover the speculative\n> wait\n> codepath. This seems much more challenging, however, it does seem like a\n> worthwhile test to have.\n\nShouldn't be that hard to create, I think. I think acquiring another\nlock in a second, non-unique, expression index, ought to do the trick?\nIt probably has to be created after the unique index (so it's later in\nthe\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 15 May 2019 18:50:50 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Adding a test for speculative insert abort case"
},
{
"msg_contents": "On Wed, May 15, 2019 at 6:50 PM Andres Freund <andres@anarazel.de> wrote:\n\n> > Also, it would make the test easier to understand for me if, for\n> instances\n> > of the\n> > word \"lock\" in the test description and comments, you specified locktype\n> --\n> > e.g.\n> > advisory lock.\n> > I got confused between the speculative lock, the object locks on the\n> index\n> > which\n> > are required for probing or inserting into the index, and the advisory\n> > locks.\n> >\n> > Below is a potential re-wording of one of the permutations that is more\n> > explicit\n> > and more clear to me as a reader.\n>\n> Minor gripe: For the future, it's easier to such changes as a patch as\n> well - otherwise others need to move it to the file and diff it to\n> comment on the changes.\n>\n>\nWill do--attached, though the wording is a rough suggestion.\n\n> I was also wondering: Is it possible that one of the\n> > \"controller_unlock_*\" functions will get called before the session\n> > with the upsert has had a chance to move forward in its progress and\n> > be waiting on that lock? That is, given that we don't check that the\n> > sessions are waiting on the locks before unlocking them, is there a\n> > race condition?\n>\n> Isolationtester only switches between commands when either the command\n> finished, or once it's know to be waiting for a lock. Therefore I don't\n> think this race exists? That logic is in the if (flags & STEP_NONBLOCK)\n> block in isolationtester.c:try_complete_step().\n>\n> Does that make sense? Or did I misunderstand your concern?\n>\n>\nI see. I didn't know what the blocking/waiting logic was in the isolation\nframework. Nevermind, then.\n\n\n>\n> > I noticed that there is not a test case which would cover the speculative\n> > wait\n> > codepath. This seems much more challenging, however, it does seem like a\n> > worthwhile test to have.\n>\n> Shouldn't be that hard to create, I think. 
I think acquiring another\n> lock in a second, non-unique, expression index, ought to do the trick?\n> It probably has to be created after the unique index (so it's later in\n> the\n>\n>\nI would think that the sequence would be s1 and s2 probe the index, s1 and\ns2\ninsert into the table, s1 updates the index but does not complete the\nspeculative insert and clear the token (pause before\ntable_complete_speculative). s2 is in speculative wait when attempting to\nupdate\nthe index.\n\nSomething like\n\npermutation\n \"controller_locks\"\n \"controller_show\"\n \"s1_upsert\" \"s2_upsert\"\n \"controller_show\"\n \"controller_unlock_1_1\" \"controller_unlock_2_1\"\n \"controller_unlock_1_3\" \"controller_unlock_2_3\"\n \"controller_unlock_1_2\"\n \"s1_magically_pause_before_complete_speculative\"\n # put s2 in speculative wait\n \"controller_unlock_2_2\"\n \"s1_magically_unpause_before_complete_speculative\"\n\nSo, how would another lock on another index keep s1 from clearing the\nspeculative token after it has updated the index?\n\n-- \nMelanie Plageman",
"msg_date": "Wed, 15 May 2019 20:35:49 -0700",
"msg_from": "Melanie Plageman <melanieplageman@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Adding a test for speculative insert abort case"
},
{
"msg_contents": "Hi,\n\nOn 2019-05-15 20:35:49 -0700, Melanie Plageman wrote:\n> > > I noticed that there is not a test case which would cover the speculative\n> > > wait\n> > > codepath. This seems much more challenging, however, it does seem like a\n> > > worthwhile test to have.\n> >\n> > Shouldn't be that hard to create, I think. I think acquiring another\n> > lock in a second, non-unique, expression index, ought to do the trick?\n> > It probably has to be created after the unique index (so it's later in\n> > the\n> >\n> >\n> I would think that the sequence would be s1 and s2 probe the index, s1 and\n> s2\n> insert into the table, s1 updates the index but does not complete the\n> speculative insert and clear the token (pause before\n> table_complete_speculative). s2 is in speculative wait when attempting to\n> update\n> the index.\n> \n> Something like\n> \n> permutation\n> \"controller_locks\"\n> \"controller_show\"\n> \"s1_upsert\" \"s2_upsert\"\n> \"controller_show\"\n> \"controller_unlock_1_1\" \"controller_unlock_2_1\"\n> \"controller_unlock_1_3\" \"controller_unlock_2_3\"\n> \"controller_unlock_1_2\"\n> \"s1_magically_pause_before_complete_speculative\"\n> # put s2 in speculative wait\n> \"controller_unlock_2_2\"\n> \"s1_magically_unpause_before_complete_speculative\"\n> \n> So, how would another lock on another index keep s1 from clearing the\n> speculative token after it has updated the index?\n\nIf there were a second index on upserttest, something like CREATE INDEX\nON upserttest((blurt_and_lock2(key))); and blurt_and_lock2 acquired a\nlock on (current_setting('spec.session')::int, 4), ISTM you could cause\na block to happen after the first index (the unique one, used for ON\nCONFLICT) successfully created the index entry, but before\ncomplete_speculative is called. Shouldn't that fulfil your\ns1_magically_pause_before_complete_speculative goal? 
The\ncontroller_locks would only acquire the (1, 4) lock, thereby *not*\nblocking s2 (or you could just release the lock in a separate step).\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 15 May 2019 20:46:37 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Adding a test for speculative insert abort case"
},
{
"msg_contents": "On Wed, May 15, 2019 at 8:36 PM Melanie Plageman <melanieplageman@gmail.com>\nwrote:\n\n>\n> On Wed, May 15, 2019 at 6:50 PM Andres Freund <andres@anarazel.de> wrote:\n>\n>>\n>> > I noticed that there is not a test case which would cover the\n>> speculative\n>> > wait\n>> > codepath. This seems much more challenging, however, it does seem like a\n>> > worthwhile test to have.\n>>\n>> Shouldn't be that hard to create, I think. I think acquiring another\n>> lock in a second, non-unique, expression index, ought to do the trick?\n>> It probably has to be created after the unique index (so it's later in\n>> the\n>>\n>> I would think that the sequence would be s1 and s2 probe the index, s1\n> and s2\n> insert into the table, s1 updates the index but does not complete the\n> speculative insert and clear the token (pause before\n> table_complete_speculative). s2 is in speculative wait when attempting to\n> update\n> the index.\n>\n> Something like\n>\n> permutation\n> \"controller_locks\"\n> \"controller_show\"\n> \"s1_upsert\" \"s2_upsert\"\n> \"controller_show\"\n> \"controller_unlock_1_1\" \"controller_unlock_2_1\"\n> \"controller_unlock_1_3\" \"controller_unlock_2_3\"\n> \"controller_unlock_1_2\"\n> \"s1_magically_pause_before_complete_speculative\"\n> # put s2 in speculative wait\n> \"controller_unlock_2_2\"\n> \"s1_magically_unpause_before_complete_speculative\"\n>\n> So, how would another lock on another index keep s1 from clearing the\n> speculative token after it has updated the index?\n>\n\nThe second index would help to hold the session after inserting the tuple\nin unique index but before completing the speculative insert. Hence, helps\nto create the condition easily. I believe order of index insertion is\nhelping here that unique index is inserted and then non-unique index is\ninserted too.\n\nAttaching patch with the test using the idea Andres mentioned and it works\nto excercise the speculative wait.",
"msg_date": "Wed, 15 May 2019 22:32:34 -0700",
"msg_from": "Ashwin Agrawal <aagrawal@pivotal.io>",
"msg_from_op": false,
"msg_subject": "Re: Adding a test for speculative insert abort case"
},
{
"msg_contents": "On Wed, May 15, 2019 at 10:32 PM Ashwin Agrawal <aagrawal@pivotal.io> wrote:\n\n>\n> The second index would help to hold the session after inserting the tuple\n> in unique index but before completing the speculative insert. Hence, helps\n> to create the condition easily. I believe order of index insertion is\n> helping here that unique index is inserted and then non-unique index is\n> inserted too.\n>\n>\nOh, cool. I didn't know that execution order would be guaranteed for which\nindex\nto insert into first.\n\n\n> Attaching patch with the test using the idea Andres mentioned and it works\n> to excercise the speculative wait.\n>\n>\nIt looks good.\nI thought it would be helpful to mention why you have s1 create the\nnon-unique\nindex after the permutation has begun. You don't want this index to\ninfluence\nthe behavior of the other permutations--this part makes sense. However, why\nhave\ns1 do it instead of the controller?\n\nI added a couple suggested changes to the comments in the permutation in the\nversion in the patch I attached. Note that I did not update the answer\nfiles.\n(These suggested changes to comments are distinct from and would be in\naddition to the suggestions I had for the wording of the comments overall\nin the\nabove email I sent).\n\n-- \nMelanie Plageman",
"msg_date": "Thu, 16 May 2019 13:59:47 -0700",
"msg_from": "Melanie Plageman <melanieplageman@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Adding a test for speculative insert abort case"
},
{
"msg_contents": "Hi,\n\nOn 2019-05-16 13:59:47 -0700, Melanie Plageman wrote:\n> On Wed, May 15, 2019 at 10:32 PM Ashwin Agrawal <aagrawal@pivotal.io> wrote:\n> \n> >\n> > The second index would help to hold the session after inserting the tuple\n> > in unique index but before completing the speculative insert. Hence, helps\n> > to create the condition easily. I believe order of index insertion is\n> > helping here that unique index is inserted and then non-unique index is\n> > inserted too.\n> >\n> >\n> Oh, cool. I didn't know that execution order would be guaranteed for which\n> index\n> to insert into first.\n\nIt's not *strictly* speaking *always* well defined. The list of indexes\nis sorted by the oid of the index - so once created, it's\nconsistent. But when the oid assignment wraps around, it'd be the other\nway around. But I think it's ok to disregard that - it'll never happen\nin regression tests run against a new cluster, and you'd have to run\ntests against an installed cluster for a *LONG* time for a *tiny* window\nwhere the wraparound would happen precisely between the creation of the\ntwo indexes.\n\nMakes sense?\n\nI guess we could make that case a tiny bit easier to diagnose in the\nextremely unlikely case it happens by having a step that outputs\nSELECT 'index_a'::regclass::int8 < 'index_b'::regclass::int8;\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 16 May 2019 14:03:41 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Adding a test for speculative insert abort case"
},
{
"msg_contents": "On Thu, May 16, 2019 at 2:03 PM Andres Freund <andres@anarazel.de> wrote:\n\n> Hi,\n>\n> On 2019-05-16 13:59:47 -0700, Melanie Plageman wrote:\n> > On Wed, May 15, 2019 at 10:32 PM Ashwin Agrawal <aagrawal@pivotal.io>\n> wrote:\n> >\n> > >\n> > > The second index would help to hold the session after inserting the\n> tuple\n> > > in unique index but before completing the speculative insert. Hence,\n> helps\n> > > to create the condition easily. I believe order of index insertion is\n> > > helping here that unique index is inserted and then non-unique index is\n> > > inserted too.\n> > >\n> > >\n> > Oh, cool. I didn't know that execution order would be guaranteed for\n> which\n> > index\n> > to insert into first.\n>\n> It's not *strictly* speaking *always* well defined. The list of indexes\n> is sorted by the oid of the index - so once created, it's\n> consistent. But when the oid assignment wraps around, it'd be the other\n> way around. But I think it's ok to disregard that - it'll never happen\n> in regression tests run against a new cluster, and you'd have to run\n> tests against an installed cluster for a *LONG* time for a *tiny* window\n> where the wraparound would happen precisely between the creation of the\n> two indexes.\n>\n> Makes sense?\n>\n\nYep, thanks.\n\n\n> I guess we could make that case a tiny bit easier to diagnose in the\n> extremely unlikely case it happens by having a step that outputs\n> SELECT 'index_a'::regclass::int8 < 'index_b'::regclass::int8;\n>\n>\nGood idea.\nI squashed the changes I suggested in previous emails, Ashwin's patch, my\nsuggested updates to that patch, and the index order check all into one\nupdated\npatch attached.\n\n-- \nMelanie Plageman",
"msg_date": "Thu, 16 May 2019 20:46:11 -0700",
"msg_from": "Melanie Plageman <melanieplageman@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Adding a test for speculative insert abort case"
},
{
"msg_contents": "On Thu, May 16, 2019 at 8:46 PM Melanie Plageman <melanieplageman@gmail.com>\nwrote:\n\n>\n> I squashed the changes I suggested in previous emails, Ashwin's patch, my\n> suggested updates to that patch, and the index order check all into one\n> updated\n> patch attached.\n>\n>\nI realized that the numbers at the front probably indicate which patch it\nis in\na patch set and not the version, so, if that is the case, a renamed patch --\nsecond version but the only patch needed if you are applying to master.\nIs this right?\n\n-- \nMelanie Plageman",
"msg_date": "Fri, 17 May 2019 09:23:53 -0700",
"msg_from": "Melanie Plageman <melanieplageman@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Adding a test for speculative insert abort case"
},
{
"msg_contents": "On 2019-May-17, Melanie Plageman wrote:\n\n> I realized that the numbers at the front probably indicate which patch\n> it is in a patch set and not the version, so, if that is the case, a\n> renamed patch -- second version but the only patch needed if you are\n> applying to master. Is this right?\n\nThat's correct. I suggest that \"git format-patch -vN origin/master\",\nwhere the N is the version you're currently posting, generates good\npatch files to attach in email.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Fri, 17 May 2019 13:30:28 -0400",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Adding a test for speculative insert abort case"
},
{
"msg_contents": "On Thu, May 16, 2019 at 8:46 PM Melanie Plageman <melanieplageman@gmail.com>\nwrote:\n\n>\n> Good idea.\n> I squashed the changes I suggested in previous emails, Ashwin's patch, my\n> suggested updates to that patch, and the index order check all into one\n> updated\n> patch attached.\n>\n>\nI've updated this patch to make it apply on master cleanly. Thanks to\nAlvaro for format-patch suggestion.\n\nThe first patch in the set adds the speculative wait case discussed\nabove from Ashwin's patch.\n\nThe second patch in the set is another suggestion I have. I noticed\nthat the insert-conflict-toast test mentions that it is \"not\nguaranteed to lead to a failed speculative insertion\" and, since it\nseems to be testing the speculative abort but with TOAST tables, I\nthought it might work to kill that spec file and move that test case\ninto insert-conflict-specconflict so the test can utilize the existing\nadvisory locks being used for the other tests in that file to make it\ndeterministic which session succeeds in inserting the tuple.\n\n-- \nMelanie Plageman",
"msg_date": "Wed, 5 Jun 2019 15:49:47 -0700",
"msg_from": "Melanie Plageman <melanieplageman@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Adding a test for speculative insert abort case"
},
{
"msg_contents": "Hi,\n\nOn 2019-06-05 15:49:47 -0700, Melanie Plageman wrote:\n> On Thu, May 16, 2019 at 8:46 PM Melanie Plageman <melanieplageman@gmail.com>\n> wrote:\n> \n> >\n> > Good idea.\n> > I squashed the changes I suggested in previous emails, Ashwin's patch, my\n> > suggested updates to that patch, and the index order check all into one\n> > updated\n> > patch attached.\n> >\n> >\n> I've updated this patch to make it apply on master cleanly. Thanks to\n> Alvaro for format-patch suggestion.\n\nPlanning to push this, now that v12 is branched off. But only to master, I\ndon't think it's worth backpatching at the moment.\n\n\n> The second patch in the set is another suggestion I have. I noticed\n> that the insert-conflict-toast test mentions that it is \"not\n> guaranteed to lead to a failed speculative insertion\" and, since it\n> seems to be testing the speculative abort but with TOAST tables, I\n> thought it might work to kill that spec file and move that test case\n> into insert-conflict-specconflict so the test can utilize the existing\n> advisory locks being used for the other tests in that file to make it\n> deterministic which session succeeds in inserting the tuple.\n\nSeems like a good plan.\n> diff --git a/src/test/isolation/specs/insert-conflict-specconflict.spec b/src/test/isolation/specs/insert-conflict-specconflict.spec\n> index 3a70484fc2..7f29fb9d02 100644\n> --- a/src/test/isolation/specs/insert-conflict-specconflict.spec\n> +++ b/src/test/isolation/specs/insert-conflict-specconflict.spec\n> @@ -10,7 +10,7 @@ setup\n> {\n> CREATE OR REPLACE FUNCTION blurt_and_lock(text) RETURNS text IMMUTABLE LANGUAGE plpgsql AS $$\n> BEGIN\n> - RAISE NOTICE 'called for %', $1;\n> + RAISE NOTICE 'blurt_and_lock() called for %', $1;\n> \n> \t-- depending on lock state, wait for lock 2 or 3\n> IF pg_try_advisory_xact_lock(current_setting('spec.session')::int, 1) THEN\n> @@ -23,9 +23,16 @@ setup\n> RETURN $1;\n> END;$$;\n> \n> + CREATE OR REPLACE FUNCTION 
blurt_and_lock2(text) RETURNS text IMMUTABLE LANGUAGE plpgsql AS $$\n> + BEGIN\n> + RAISE NOTICE 'blurt_and_lock2() called for %', $1;\n> + PERFORM pg_advisory_xact_lock(current_setting('spec.session')::int, 4);\n> + RETURN $1;\n> + END;$$;\n> +\n\nAny chance for a bit more descriptive naming than *2? I can live with\nit, but ...\n\n\n> +step \"controller_print_speculative_locks\" { SELECT locktype,classid,objid,mode,granted FROM pg_locks WHERE locktype='speculative\n> +token' ORDER BY granted; }\n\nI think showing the speculative locks is possibly going to be unreliable\n- the release time of speculative locks is IIRC not that reliable. I\nthink it could e.g. happen that speculative locks are held longer\nbecause autovacuum spawned an analyze in the background.\n\n\n> + # Should report s1 is waiting on speculative lock\n> + \"controller_print_speculative_locks\"\n\nHm, I might be missing something, but I don't think it currently\ndoes. Looking at the expected file:\n\n+step controller_print_speculative_locks: SELECT locktype,classid,objid,mode,granted FROM pg_locks WHERE locktype='speculative\n+token' ORDER BY granted;\n+locktype classid objid mode granted \n+\n\nAnd if it showed something, it'd make the test not work, because\nclassid/objid aren't necessarily going to be the same from test to test.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 24 Jul 2019 11:48:06 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Adding a test for speculative insert abort case"
},
{
"msg_contents": "On Wed, Jul 24, 2019 at 11:48 AM Andres Freund <andres@anarazel.de> wrote:\n\n> > diff --git a/src/test/isolation/specs/insert-conflict-specconflict.spec\n> b/src/test/isolation/specs/insert-conflict-specconflict.spec\n> > index 3a70484fc2..7f29fb9d02 100644\n> > --- a/src/test/isolation/specs/insert-conflict-specconflict.spec\n> > +++ b/src/test/isolation/specs/insert-conflict-specconflict.spec\n> > @@ -10,7 +10,7 @@ setup\n> > {\n> > CREATE OR REPLACE FUNCTION blurt_and_lock(text) RETURNS text\n> IMMUTABLE LANGUAGE plpgsql AS $$\n> > BEGIN\n> > - RAISE NOTICE 'called for %', $1;\n> > + RAISE NOTICE 'blurt_and_lock() called for %', $1;\n> >\n> > -- depending on lock state, wait for lock 2 or 3\n> > IF\n> pg_try_advisory_xact_lock(current_setting('spec.session')::int, 1) THEN\n> > @@ -23,9 +23,16 @@ setup\n> > RETURN $1;\n> > END;$$;\n> >\n> > + CREATE OR REPLACE FUNCTION blurt_and_lock2(text) RETURNS text\n> IMMUTABLE LANGUAGE plpgsql AS $$\n> > + BEGIN\n> > + RAISE NOTICE 'blurt_and_lock2() called for %', $1;\n> > + PERFORM\n> pg_advisory_xact_lock(current_setting('spec.session')::int, 4);\n> > + RETURN $1;\n> > + END;$$;\n> > +\n>\n> Any chance for a bit more descriptive naming than *2? I can live with\n> it, but ...\n>\n>\nTaylor Vesely and I paired on updating this test, and, it became clear\nthat the way that the steps and functions are named makes it very\ndifficult to understand what the test is doing. 
That is, I helped\nwrite this test and, after a month away, I could no longer understand\nwhat it was doing at all.\n\nWe changed the text of the blurts to \"acquiring advisory lock ...\"\nfrom \"blocking\" because we realized that this would print even when\nthe lock was acquired immediately successfully, which is a little\nmisleading for the reader.\n\nHe's taking a stab at some renaming/refactoring to make it more clear\n(including renaming blurt_and_lock2())\n\n\n>\n> > +step \"controller_print_speculative_locks\" { SELECT\n> locktype,classid,objid,mode,granted FROM pg_locks WHERE\n> locktype='speculative\n> > +token' ORDER BY granted; }\n>\n> I think showing the speculative locks is possibly going to be unreliable\n> - the release time of speculative locks is IIRC not that reliable. I\n> think it could e.g. happen that speculative locks are held longer\n> because autovacuum spawned an analyze in the background.\n>\n>\nI actually think having the \"controller_print_speculative_locks\"\nwouldn't be a problem because we have not released the advisory lock\non 4 in s2 that allows it to complete its speculative insertion and so\ns1 will still be in speculative wait.\n\nThe step that might be a problem if autovacuum delays release of the\nspeculative locks is the \"controller_show\" step, because, at that\npoint, if the lock wasn't released, then s1 would still be waiting and\nwouldn't have updated.\n\n\n>\n> > + # Should report s1 is waiting on speculative lock\n> > + \"controller_print_speculative_locks\"\n>\n> Hm, I might be missing something, but I don't think it currently\n> does. Looking at the expected file:\n>\n\n+step controller_print_speculative_locks: SELECT\n> locktype,classid,objid,mode,granted FROM pg_locks WHERE\n> locktype='speculative\n> +token' ORDER BY granted;\n> +locktype classid objid mode granted\n>\n> +\n>\n>\nOops! 
due to an errant newline, the query wasn't correct.\n\n\n> And if it showed something, it'd make the test not work, because\n> classid/objid aren't necessarily going to be the same from test to test.\n>\n>\nGood point. In the attached patch, classid/objid columns are removed\nfrom the SELECT list.\n\nMelanie & Taylor",
"msg_date": "Wed, 7 Aug 2019 13:47:17 -0700",
"msg_from": "Melanie Plageman <melanieplageman@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Adding a test for speculative insert abort case"
},
{
"msg_contents": "On Wed, Aug 7, 2019 at 1:47 PM Melanie Plageman <melanieplageman@gmail.com>\nwrote:\n\n>\n>\n> On Wed, Jul 24, 2019 at 11:48 AM Andres Freund <andres@anarazel.de> wrote:\n>\n>> > diff --git a/src/test/isolation/specs/insert-conflict-specconflict.spec\n>> b/src/test/isolation/specs/insert-conflict-specconflict.spec\n>> > index 3a70484fc2..7f29fb9d02 100644\n>> > --- a/src/test/isolation/specs/insert-conflict-specconflict.spec\n>> > +++ b/src/test/isolation/specs/insert-conflict-specconflict.spec\n>> > @@ -10,7 +10,7 @@ setup\n>> > {\n>> > CREATE OR REPLACE FUNCTION blurt_and_lock(text) RETURNS text\n>> IMMUTABLE LANGUAGE plpgsql AS $$\n>> > BEGIN\n>> > - RAISE NOTICE 'called for %', $1;\n>> > + RAISE NOTICE 'blurt_and_lock() called for %', $1;\n>> >\n>> > -- depending on lock state, wait for lock 2 or 3\n>> > IF\n>> pg_try_advisory_xact_lock(current_setting('spec.session')::int, 1) THEN\n>> > @@ -23,9 +23,16 @@ setup\n>> > RETURN $1;\n>> > END;$$;\n>> >\n>> > + CREATE OR REPLACE FUNCTION blurt_and_lock2(text) RETURNS text\n>> IMMUTABLE LANGUAGE plpgsql AS $$\n>> > + BEGIN\n>> > + RAISE NOTICE 'blurt_and_lock2() called for %', $1;\n>> > + PERFORM\n>> pg_advisory_xact_lock(current_setting('spec.session')::int, 4);\n>> > + RETURN $1;\n>> > + END;$$;\n>> > +\n>>\n>> Any chance for a bit more descriptive naming than *2? I can live with\n>> it, but ...\n>>\n>>\n> Taylor Vesely and I paired on updating this test, and, it became clear\n> that the way that the steps and functions are named makes it very\n> difficult to understand what the test is doing. 
That is, I helped\n> write this test and, after a month away, I could no longer understand\n> what it was doing at all.\n>\n> We changed the text of the blurts to \"acquiring advisory lock ...\"\n> from \"blocking\" because we realized that this would print even when\n> the lock was acquired immediately successfully, which is a little\n> misleading for the reader.\n>\n> He's taking a stab at some renaming/refactoring to make it more clear\n> (including renaming blurt_and_lock2())\n>\n\nSo, Taylor and I had hoped to rename the steps to something more specific\nthat\ntold the story of what this test is doing and made it more clear.\nUnfortunately,\nour attempt to do that didn't work and made step re-use very difficult.\nAlas, we decided the original names were less confusing.\n\nMy idea for renaming blurt_and_lock2() was actually to rename\nblurt_and_lock()\nto blurt_and_lock_123() -- since it always takes a lock on 1,2, or 3.\nThen, I could name the second function, which locks 4, blurt_and_lock_4().\nWhat do you think?\n\nI've attached a rebased patch updated with the new function names.\n\n\n>\n>\n>>\n>> > +step \"controller_print_speculative_locks\" { SELECT\n>> locktype,classid,objid,mode,granted FROM pg_locks WHERE\n>> locktype='speculative\n>> > +token' ORDER BY granted; }\n>>\n>> I think showing the speculative locks is possibly going to be unreliable\n>> - the release time of speculative locks is IIRC not that reliable. I\n>> think it could e.g. 
happen that speculative locks are held longer\n>> because autovacuum spawned an analyze in the background.\n>>\n>>\n> I actually think having the \"controller_print_speculative_locks\"\n> wouldn't be a problem because we have not released the advisory lock\n> on 4 in s2 that allows it to complete its speculative insertion and so\n> s1 will still be in speculative wait.\n>\n> The step that might be a problem if autovacuum delays release of the\n> speculative locks is the \"controller_show\" step, because, at that\n> point, if the lock wasn't released, then s1 would still be waiting and\n> wouldn't have updated.\n>\n\nSo, what should we do about this? Do you agree that the \"controller_show\"\nstep\nwould be a problem?\n\n-- \nMelanie Plageman",
"msg_date": "Wed, 21 Aug 2019 13:59:00 -0700",
"msg_from": "Melanie Plageman <melanieplageman@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Adding a test for speculative insert abort case"
},
{
"msg_contents": "On Wed, Aug 21, 2019 at 01:59:00PM -0700, Melanie Plageman wrote:\n> So, what should we do about this? Do you agree that the \"controller_show\"\n> step would be a problem?\n\nAndres, it seems to me that this is waiting some input from you.\n--\nMichael",
"msg_date": "Mon, 11 Nov 2019 17:41:10 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Adding a test for speculative insert abort case"
},
{
"msg_contents": "Hi,\n\nOn 2019-08-21 13:59:00 -0700, Melanie Plageman wrote:\n> >> > +step \"controller_print_speculative_locks\" { SELECT\n> >> locktype,classid,objid,mode,granted FROM pg_locks WHERE\n> >> locktype='speculative\n> >> > +token' ORDER BY granted; }\n> >>\n> >> I think showing the speculative locks is possibly going to be unreliable\n> >> - the release time of speculative locks is IIRC not that reliable. I\n> >> think it could e.g. happen that speculative locks are held longer\n> >> because autovacuum spawned an analyze in the background.\n> >>\n> >>\n> > I actually think having the \"controller_print_speculative_locks\"\n> > wouldn't be a problem because we have not released the advisory lock\n> > on 4 in s2 that allows it to complete its speculative insertion and so\n> > s1 will still be in speculative wait.\n\nHm. At the very least it'd have to be restricted to only match locks in\nthe same database - e.g. for parallel installcheck it is common for\nthere to be other concurrent tests. I'll add that when committing, no\nneed for a new version.\n\nI'm also a bit concerned that adding the pg_locks query would mean we can't\nrun the test in parallel with others, if we ever finally get around to\nadding a parallel isolationtester schedule (which is really needed, it's\ntoo slow as is).\nhttps://postgr.es/m/20180124231006.z7spaz5gkzbdvob5@alvherre.pgsql\nBut I guess we'll just deal with not running this test in parallel.\n\n\n> > The step that might be a problem if autovacuum delays release of the\n> > speculative locks is the \"controller_show\" step, because, at that\n> > point, if the lock wasn't released, then s1 would still be waiting and\n> > wouldn't have updated.\n> >\n> So, what should we do about this? Do you agree that the \"controller_show\"\n> step would be a problem?\n\nIt hasn't caused failures so far, I think. Or are you saying you think\nit's more likely to cause failures in the added tests?\n\nHad planned to commit now, but I'm not able to think through the state\ntransitions at this hour, apparently :(. I'll try to do it tomorrow\nmorning.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 6 Feb 2020 23:02:58 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Adding a test for speculative insert abort case"
},
{
"msg_contents": "Hi,\n\nI'm currently fighting with a race I'm observing in about 1/4 of the\nruns. I get:\n@@ -361,16 +361,18 @@\n locktype mode granted\n\n speculative tokenShareLock f\n speculative tokenExclusiveLock t\n step controller_unlock_2_4: SELECT pg_advisory_unlock(2, 4);\n pg_advisory_unlock\n\n t\n s1: NOTICE: blurt_and_lock_123() called for k1 in session 1\n s1: NOTICE: acquiring advisory lock on 2\n+s1: NOTICE: blurt_and_lock_123() called for k1 in session 1\n+s1: NOTICE: acquiring advisory lock on 2\n step s1_upsert: <... completed>\n step s2_upsert: <... completed>\n step controller_show: SELECT * FROM upserttest;\n key data\n\n k1 inserted s2 with conflict update s1\n(this is the last permutation)\n\nThe issue is basically that s1 goes through the\ncheck_exclusion_or_unique_constraint() conflict check twice.\n\nI added a bit of debugging information (using fprintf, with elog it was\nmuch harder to hit):\n\nSuccess:\n2020-02-07 16:14:56.501 PST [1167003][5/1254:8465] CONTEXT: PL/pgSQL function blurt_and_lock_123(text) line 7 at RAISE\n1167003: acquiring xact lock\n2020-02-07 16:14:56.512 PST [1167001][3/0:0] DEBUG: bind <unnamed> to isolationtester_waiting\n2020-02-07 16:14:56.522 PST [1167001][3/0:0] DEBUG: bind <unnamed> to isolationtester_waiting\n1167002: releasing xact lock 2 3\n1167004: acquired xact lock\n1167004: xid 8466 acquiring 5\n2020-02-07 16:14:56.523 PST [1167004][6/1014:8466] LOG: INSERT @ 0/9CC70F0: - Heap/INSERT: off 2 flags 0x0C\n2020-02-07 16:14:56.523 PST [1167004][6/1014:8466] NOTICE: blurt_and_lock_123() called for k1 in session 2\n2020-02-07 16:14:56.523 PST [1167004][6/1014:8466] CONTEXT: PL/pgSQL function blurt_and_lock_123(text) line 3 at RAISE\n2020-02-07 16:14:56.523 PST [1167004][6/1014:8466] NOTICE: acquiring advisory lock on 2\n2020-02-07 16:14:56.523 PST [1167004][6/1014:8466] CONTEXT: PL/pgSQL function blurt_and_lock_123(text) line 7 at RAISE\n1167004: acquiring xact lock\n2020-02-07 16:14:56.533 PST 
[1167001][3/0:0] DEBUG: bind <unnamed> to isolationtester_waiting\n2020-02-07 16:14:56.544 PST [1167001][3/0:0] DEBUG: bind <unnamed> to isolationtester_waiting\n2020-02-07 16:14:56.555 PST [1167001][3/0:0] DEBUG: bind <unnamed> to isolationtester_waiting\n2020-02-07 16:14:56.565 PST [1167001][3/0:0] DEBUG: bind <unnamed> to isolationtester_waiting\n2020-02-07 16:14:56.576 PST [1167001][3/0:0] DEBUG: bind <unnamed> to isolationtester_waiting\n2020-02-07 16:14:56.587 PST [1167001][3/0:0] DEBUG: bind <unnamed> to isolationtester_waiting\n1167002: releasing xact lock 2 2\n1167004: acquired xact lock\n2020-02-07 16:14:56.588 PST [1167004][6/1014:8466] LOG: INSERT @ 0/9CC7150: - Btree/NEWROOT: lev 0\n2020-02-07 16:14:56.588 PST [1167004][6/1014:8466] LOG: INSERT @ 0/9CC7190: - Btree/INSERT_LEAF: off 1\n2020-02-07 16:14:56.588 PST [1167004][6/1014:8466] NOTICE: blurt_and_lock_4() called for k1 in session 2\n2020-02-07 16:14:56.588 PST [1167004][6/1014:8466] CONTEXT: PL/pgSQL function blurt_and_lock_4(text) line 3 at RAISE\n2020-02-07 16:14:56.588 PST [1167004][6/1014:8466] NOTICE: acquiring advisory lock on 4\n2020-02-07 16:14:56.588 PST [1167004][6/1014:8466] CONTEXT: PL/pgSQL function blurt_and_lock_4(text) line 4 at RAISE\n1167004: acquiring xact lock\n2020-02-07 16:14:56.598 PST [1167001][3/0:0] DEBUG: bind <unnamed> to isolationtester_waiting\n2020-02-07 16:14:56.609 PST [1167001][3/0:0] DEBUG: bind <unnamed> to isolationtester_waiting\n2020-02-07 16:14:56.620 PST [1167001][3/0:0] DEBUG: bind <unnamed> to isolationtester_waiting\n2020-02-07 16:14:56.630 PST [1167001][3/0:0] DEBUG: bind <unnamed> to isolationtester_waiting\n1167002: releasing xact lock 1 2\n1167003: acquired xact lock\n2020-02-07 16:14:56.631 PST [1167003][5/1254:8465] LOG: INSERT @ 0/9CC71D0: - Btree/INSERT_LEAF: off 1\n2020-02-07 16:14:56.631 PST [1167003][5/1254:8465] NOTICE: blurt_and_lock_4() called for k1 in session 1\n2020-02-07 16:14:56.631 PST [1167003][5/1254:8465] CONTEXT: PL/pgSQL 
function blurt_and_lock_4(text) line 3 at RAISE\n2020-02-07 16:14:56.631 PST [1167003][5/1254:8465] NOTICE: acquiring advisory lock on 4\n2020-02-07 16:14:56.631 PST [1167003][5/1254:8465] CONTEXT: PL/pgSQL function blurt_and_lock_4(text) line 4 at RAISE\n1167003: acquiring xact lock\n1167003: acquired xact lock\n2020-02-07 16:14:56.632 PST [1167003][5/1254:8465] LOG: INSERT @ 0/9CC7230: - Btree/NEWROOT: lev 0\n2020-02-07 16:14:56.632 PST [1167003][5/1254:8465] LOG: INSERT @ 0/9CC7270: - Btree/INSERT_LEAF: off 1\n2020-02-07 16:14:56.632 PST [1167003][5/1254:8465] LOG: INSERT @ 0/9CC72A8: - Heap/DELETE: off 1 flags 0x08\n1167003: xid 8465 releasing lock 5\n1167003: retry due to conflict\n2020-02-07 16:14:56.632 PST [1167003][5/1254:8465] NOTICE: blurt_and_lock_123() called for k1 in session 1\n2020-02-07 16:14:56.632 PST [1167003][5/1254:8465] CONTEXT: PL/pgSQL function blurt_and_lock_123(text) line 3 at RAISE\n2020-02-07 16:14:56.632 PST [1167003][5/1254:8465] NOTICE: acquiring advisory lock on 2\n2020-02-07 16:14:56.632 PST [1167003][5/1254:8465] CONTEXT: PL/pgSQL function blurt_and_lock_123(text) line 7 at RAISE\n1167003: acquiring xact lock\n1167003: acquired xact lock\n2020-02-07 16:14:56.632 PST [1167003][5/1254:8465] NOTICE: blurt_and_lock_123() called for k1 in session 1\n2020-02-07 16:14:56.632 PST [1167003][5/1254:8465] CONTEXT: PL/pgSQL function blurt_and_lock_123(text) line 3 at RAISE\n2020-02-07 16:14:56.632 PST [1167003][5/1254:8465] NOTICE: acquiring advisory lock on 2\n2020-02-07 16:14:56.632 PST [1167003][5/1254:8465] CONTEXT: PL/pgSQL function blurt_and_lock_123(text) line 7 at RAISE\n1167003: acquiring xact lock\n1167003: acquired xact lock\n1167003: xid 8465 waiting xwait 8466 (xmin 8466 xmax 0) spec 5\n2020-02-07 16:14:56.642 PST [1167001][3/0:0] DEBUG: bind <unnamed> to isolationtester_waiting\n2020-02-07 16:14:56.653 PST [1167001][3/0:0] DEBUG: bind <unnamed> to isolationtester_waiting\n2020-02-07 16:14:56.667 PST [1167001][3/0:0] DEBUG: bind 
<unnamed> to isolationtester_waiting\n2020-02-07 16:14:56.677 PST [1167001][3/0:0] DEBUG: bind <unnamed> to isolationtester_waiting\n1167002: releasing xact lock 2 4\n1167004: acquired xact lock\n2020-02-07 16:14:56.678 PST [1167004][6/1014:8466] LOG: INSERT @ 0/9CC72E8: - Btree/INSERT_LEAF: off 2\n2020-02-07 16:14:56.678 PST [1167004][6/1014:8466] LOG: INSERT @ 0/9CC7318: - Heap/HEAP_CONFIRM: off 2\n1167004: xid 8466 releasing lock 5\n2020-02-07 16:14:56.678 PST [1167004][6/1014:8466] LOG: INSERT @ 0/9CC7348: - Transaction/COMMIT: 2020-02-07 16:14:56.678602-08\n2020-02-07 16:14:56.678 PST [1167004][6/1014:8466] LOG: xlog flush request 0/9CC7348; write 0/9CBDF58; flush 0/9CBDF58\n2020-02-07 16:14:56.678 PST [1167004][6/1014:8466] STATEMENT: INSERT INTO upserttest(key, data) VALUES('k1', 'inserted s2') ON CONFLICT (blurt_and_lock_123(key)) DO UPDATE SET data = upserttest.data || ' with conflict update s2';\n2020-02-07 16:14:56.678 PST [1167003][5/1254:8465] NOTICE: blurt_and_lock_123() called for k1 in session 1\n2020-02-07 16:14:56.678 PST [1167003][5/1254:8465] CONTEXT: PL/pgSQL function blurt_and_lock_123(text) line 3 at RAISE\n2020-02-07 16:14:56.678 PST [1167003][5/1254:8465] NOTICE: acquiring advisory lock on 2\n2020-02-07 16:14:56.678 PST [1167003][5/1254:8465] CONTEXT: PL/pgSQL function blurt_and_lock_123(text) line 7 at RAISE\n1167003: acquiring xact lock\n1167003: acquired xact lock\n2020-02-07 16:14:56.678 PST [1167003][5/1254:8465] LOG: INSERT @ 0/9CC7380: - Heap/LOCK: off 2: xid 8465: flags 0x00 LOCK_ONLY EXCL_LOCK\n2020-02-07 16:14:56.679 PST [1167003][5/1254:8465] LOG: INSERT @ 0/9CC73F0: - Heap/HOT_UPDATE: off 2 xmax 8465 flags 0x10 ; new off 3 xmax 8465\n2020-02-07 16:14:56.679 PST [1167003][5/1254:8465] LOG: INSERT @ 0/9CC7420: - Transaction/COMMIT: 2020-02-07 16:14:56.679085-08\n2020-02-07 16:14:56.679 PST [1167003][5/1254:8465] LOG: xlog flush request 0/9CC7420; write 0/9CC7060; flush 0/9CC7060\n2020-02-07 16:14:56.679 PST [1167003][5/1254:8465] 
STATEMENT: INSERT INTO upserttest(key, data) VALUES('k1', 'inserted s1') ON CONFLICT (blurt_and_lock_123(key)) DO UPDATE SET data = upserttest.data || ' with conflict update s1';\n\nfail:\n1167056: releasing xact lock 2 3\n1167058: acquired xact lock\n1167058: xid 8490 acquiring 5\n2020-02-07 16:16:43.990 PST [1167058][6/11:8490] LOG: INSERT @ 0/9D2D1D8: - Heap/INSERT: off 2 flags 0x0C\n2020-02-07 16:16:43.990 PST [1167058][6/11:8490] NOTICE: blurt_and_lock_123() called for k1 in session 2\n2020-02-07 16:16:43.990 PST [1167058][6/11:8490] CONTEXT: PL/pgSQL function blurt_and_lock_123(text) line 3 at RAISE\n2020-02-07 16:16:43.990 PST [1167058][6/11:8490] NOTICE: acquiring advisory lock on 2\n2020-02-07 16:16:43.990 PST [1167058][6/11:8490] CONTEXT: PL/pgSQL function blurt_and_lock_123(text) line 7 at RAISE\n1167058: acquiring xact lock\n2020-02-07 16:16:44.000 PST [1167055][3/0:0] DEBUG: bind <unnamed> to isolationtester_waiting\n2020-02-07 16:16:44.011 PST [1167055][3/0:0] DEBUG: bind <unnamed> to isolationtester_waiting\n2020-02-07 16:16:44.022 PST [1167055][3/0:0] DEBUG: bind <unnamed> to isolationtester_waiting\n2020-02-07 16:16:44.033 PST [1167055][3/0:0] DEBUG: bind <unnamed> to isolationtester_waiting\n2020-02-07 16:16:44.044 PST [1167055][3/0:0] DEBUG: bind <unnamed> to isolationtester_waiting\n2020-02-07 16:16:44.054 PST [1167055][3/0:0] DEBUG: bind <unnamed> to isolationtester_waiting\n1167056: releasing xact lock 2 2\n1167058: acquired xact lock\n2020-02-07 16:16:44.055 PST [1167058][6/11:8490] LOG: INSERT @ 0/9D2D238: - Btree/NEWROOT: lev 0\n2020-02-07 16:16:44.055 PST [1167058][6/11:8490] LOG: INSERT @ 0/9D2D278: - Btree/INSERT_LEAF: off 1\n2020-02-07 16:16:44.056 PST [1167058][6/11:8490] NOTICE: blurt_and_lock_4() called for k1 in session 2\n2020-02-07 16:16:44.056 PST [1167058][6/11:8490] CONTEXT: PL/pgSQL function blurt_and_lock_4(text) line 3 at RAISE\n2020-02-07 16:16:44.056 PST [1167058][6/11:8490] NOTICE: acquiring advisory lock on 4\n2020-02-07 
16:16:44.056 PST [1167058][6/11:8490] CONTEXT: PL/pgSQL function blurt_and_lock_4(text) line 4 at RAISE\n1167058: acquiring xact lock\n2020-02-07 16:16:44.066 PST [1167055][3/0:0] DEBUG: bind <unnamed> to isolationtester_waiting\n2020-02-07 16:16:44.076 PST [1167055][3/0:0] DEBUG: bind <unnamed> to isolationtester_waiting\n2020-02-07 16:16:44.087 PST [1167055][3/0:0] DEBUG: bind <unnamed> to isolationtester_waiting\n2020-02-07 16:16:44.098 PST [1167055][3/0:0] DEBUG: bind <unnamed> to isolationtester_waiting\n1167056: releasing xact lock 1 2\n1167057: acquired xact lock\n2020-02-07 16:16:44.099 PST [1167057][5/13:8489] LOG: INSERT @ 0/9D2D2B8: - Btree/INSERT_LEAF: off 1\n2020-02-07 16:16:44.099 PST [1167057][5/13:8489] NOTICE: blurt_and_lock_4() called for k1 in session 1\n2020-02-07 16:16:44.099 PST [1167057][5/13:8489] CONTEXT: PL/pgSQL function blurt_and_lock_4(text) line 3 at RAISE\n2020-02-07 16:16:44.099 PST [1167057][5/13:8489] NOTICE: acquiring advisory lock on 4\n2020-02-07 16:16:44.099 PST [1167057][5/13:8489] CONTEXT: PL/pgSQL function blurt_and_lock_4(text) line 4 at RAISE\n1167057: acquiring xact lock\n1167057: acquired xact lock\n2020-02-07 16:16:44.100 PST [1167057][5/13:8489] LOG: INSERT @ 0/9D2D318: - Btree/NEWROOT: lev 0\n2020-02-07 16:16:44.100 PST [1167057][5/13:8489] LOG: INSERT @ 0/9D2D358: - Btree/INSERT_LEAF: off 1\n2020-02-07 16:16:44.100 PST [1167057][5/13:8489] LOG: INSERT @ 0/9D2D390: - Heap/DELETE: off 1 flags 0x08\n1167057: xid 8489 releasing lock 5\n1167057: retry due to conflict\n2020-02-07 16:16:44.100 PST [1167057][5/13:8489] NOTICE: blurt_and_lock_123() called for k1 in session 1\n2020-02-07 16:16:44.100 PST [1167057][5/13:8489] CONTEXT: PL/pgSQL function blurt_and_lock_123(text) line 3 at RAISE\n2020-02-07 16:16:44.100 PST [1167057][5/13:8489] NOTICE: acquiring advisory lock on 2\n2020-02-07 16:16:44.100 PST [1167057][5/13:8489] CONTEXT: PL/pgSQL function blurt_and_lock_123(text) line 7 at RAISE\n1167057: acquiring xact 
lock\n1167057: acquired xact lock\n2020-02-07 16:16:44.100 PST [1167057][5/13:8489] NOTICE: blurt_and_lock_123() called for k1 in session 1\n2020-02-07 16:16:44.100 PST [1167057][5/13:8489] CONTEXT: PL/pgSQL function blurt_and_lock_123(text) line 3 at RAISE\n2020-02-07 16:16:44.100 PST [1167057][5/13:8489] NOTICE: acquiring advisory lock on 2\n2020-02-07 16:16:44.100 PST [1167057][5/13:8489] CONTEXT: PL/pgSQL function blurt_and_lock_123(text) line 7 at RAISE\n1167057: acquiring xact lock\n1167057: acquired xact lock\n1167057: xid 8489 waiting xwait 8490 (xmin 8490 xmax 0) spec 5\n2020-02-07 16:16:44.110 PST [1167055][3/0:0] DEBUG: bind <unnamed> to isolationtester_waiting\n2020-02-07 16:16:44.121 PST [1167055][3/0:0] DEBUG: bind <unnamed> to isolationtester_waiting\n2020-02-07 16:16:44.135 PST [1167055][3/0:0] DEBUG: bind <unnamed> to isolationtester_waiting\n2020-02-07 16:16:44.145 PST [1167055][3/0:0] DEBUG: bind <unnamed> to isolationtester_waiting\n1167056: releasing xact lock 2 4\n1167058: acquired xact lock\n2020-02-07 16:16:44.146 PST [1167058][6/11:8490] LOG: INSERT @ 0/9D2D3D0: - Btree/INSERT_LEAF: off 2\n2020-02-07 16:16:44.146 PST [1167058][6/11:8490] LOG: INSERT @ 0/9D2D400: - Heap/HEAP_CONFIRM: off 2\n1167058: xid 8490 releasing lock 5\n2020-02-07 16:16:44.146 PST [1167058][6/11:8490] LOG: INSERT @ 0/9D2D430: - Transaction/COMMIT: 2020-02-07 16:16:44.146767-08\n2020-02-07 16:16:44.146 PST [1167058][6/11:8490] LOG: xlog flush request 0/9D2D430; write 0/9D24058; flush 0/9D24058\n2020-02-07 16:16:44.146 PST [1167058][6/11:8490] STATEMENT: INSERT INTO upserttest(key, data) VALUES('k1', 'inserted s2') ON CONFLICT (blurt_and_lock_123(key)) DO UPDATE SET data = upserttest.data || ' with conflict update s2';\n2020-02-07 16:16:44.146 PST [1167057][5/13:8489] NOTICE: blurt_and_lock_123() called for k1 in session 1\n2020-02-07 16:16:44.146 PST [1167057][5/13:8489] CONTEXT: PL/pgSQL function blurt_and_lock_123(text) line 3 at RAISE\n2020-02-07 16:16:44.146 PST 
[1167057][5/13:8489] NOTICE: acquiring advisory lock on 2\n2020-02-07 16:16:44.146 PST [1167057][5/13:8489] CONTEXT: PL/pgSQL function blurt_and_lock_123(text) line 7 at RAISE\n1167057: acquiring xact lock\n1167057: acquired xact lock\n1167057: xid 8489 waiting xwait 8490 (xmin 8490 xmax 0) spec 0\n2020-02-07 16:16:44.147 PST [1167057][5/13:8489] NOTICE: blurt_and_lock_123() called for k1 in session 1\n2020-02-07 16:16:44.147 PST [1167057][5/13:8489] CONTEXT: PL/pgSQL function blurt_and_lock_123(text) line 3 at RAISE\n2020-02-07 16:16:44.147 PST [1167057][5/13:8489] NOTICE: acquiring advisory lock on 2\n2020-02-07 16:16:44.147 PST [1167057][5/13:8489] CONTEXT: PL/pgSQL function blurt_and_lock_123(text) line 7 at RAISE\n1167057: acquiring xact lock\n1167057: acquired xact lock\n2020-02-07 16:16:44.147 PST [1167057][5/13:8489] LOG: INSERT @ 0/9D2D468: - Heap/LOCK: off 2: xid 8489: flags 0x00 LOCK_ONLY EXCL_LOCK\n2020-02-07 16:16:44.147 PST [1167057][5/13:8489] LOG: INSERT @ 0/9D2D4D8: - Heap/HOT_UPDATE: off 2 xmax 8489 flags 0x10 ; new off 3 xmax 8489\n2020-02-07 16:16:44.147 PST [1167057][5/13:8489] LOG: INSERT @ 0/9D2D508: - Transaction/COMMIT: 2020-02-07 16:16:44.147348-08\n2020-02-07 16:16:44.147 PST [1167057][5/13:8489] LOG: xlog flush request 0/9D2D508; write 0/9D2D148; flush 0/9D2D148\n2020-02-07 16:16:44.147 PST [1167057][5/13:8489] STATEMENT: INSERT INTO upserttest(key, data) VALUES('k1', 'inserted s1') ON CONFLICT (blurt_and_lock_123(key)) DO UPDATE SET data = upserttest.data || ' with conflict update s1';\n\n\nThe important bit is here the different \"xid .* waiting xwait .* spec*\"\nlines. 
In the success case we see:\n\n1167003: xid 8465 waiting xwait 8466 (xmin 8466 xmax 0) spec 5\n1167002: releasing xact lock 2 4\n1167004: acquired xact lock\n1167004: xid 8466 releasing lock 5\n2020-02-07 16:14:56.678 PST [1167004][6/1014:8466] LOG: INSERT @ 0/9CC7348: - Transaction/COMMIT: 2020-02-07 16:14:56.678602-08\n1167003: acquired xact lock\n\nIn the failing case we see:\n1167057: xid 8489 waiting xwait 8490 (xmin 8490 xmax 0) spec 5\n1167056: releasing xact lock 2 4\n1167058: acquired xact lock\n1167058: xid 8490 releasing lock 5\n2020-02-07 16:16:44.146 PST [1167058][6/11:8490] LOG: INSERT @ 0/9D2D430: - Transaction/COMMIT: 2020-02-07 16:16:44.146767-08\n1167057: xid 8489 waiting xwait 8490 (xmin 8490 xmax 0) spec 0\n1167057: acquired xact lock\n\n\nI think the issue here is that what determines whether s1 can finish its\ncheck_exclusion_or_unique_constraint() check with one retry is whether\nit does the tuple visibility test before s2's transaction has\nactually marked itself as visible (note that ProcArrayEndTransaction is\nafter RecordTransactionCommit logging the COMMIT above).\n\nI think the fix is quite easy: Ensure that there *always* will be the\nsecond wait iteration on the transaction (in addition to the already\nalways existing wait on the speculative token). Which is just adding\ns2_begin s2_commit steps. Simple, but took me a few hours to\nunderstand :/.\n\nI've attached that portion of my changes. Will interrupt scheduled\nprogramming for a bit of exercise now.\n\nGreetings,\n\nAndres Freund",
"msg_date": "Fri, 7 Feb 2020 16:40:46 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Adding a test for speculative insert abort case"
},
{
"msg_contents": "Hi,\n\nOn 2020-02-07 16:40:46 -0800, Andres Freund wrote:\n> I'm currently fighting with a race I'm observing in about 1/4 of the\n> runs. [...]\n> I think the issue here is that determines whether s1 can finish its\n> check_exclusion_or_unique_constraint() check with one retry is whether\n> it reaches it does the tuple visibility test before s2's transaction has\n> actually marked itself as visible (note that ProcArrayEndTransaction is\n> after RecordTransactionCommit logging the COMMIT above).\n> \n> I think the fix is quite easy: Ensure that there *always* will be the\n> second wait iteration on the transaction (in addition to the already\n> always existing wait on the speculative token). Which is just adding\n> s2_begin s2_commit steps. Simple, but took me a few hours to\n> understand :/.\n> \n> I've attached that portion of my changes. Will interrupt scheduled\n> programming for a bit of exercise now.\n\nI've pushed this now. Thanks for the patch, and the review!\n\nI additionally restricted the controller_print_speculative_locks step to\nthe current database and made a bunch of debug output more\nprecise. Survived ~150 runs locally.\n\nLets see what the buildfarm says...\n\nRegards,\n\nAndres\n\n\n",
"msg_date": "Tue, 11 Feb 2020 16:45:32 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Adding a test for speculative insert abort case"
},
{
"msg_contents": "On Tue, Feb 11, 2020 at 4:45 PM Andres Freund <andres@anarazel.de> wrote:\n\n>\n> I additionally restricted the controller_print_speculative_locks step to\n> the current database and made a bunch of debug output more\n> precise. Survived ~150 runs locally.\n>\n> Lets see what the buildfarm says...\n>\n>\nThanks so much for finishing the patch and checking for race\nconditions!\n\n-- \nMelanie Plageman",
"msg_date": "Tue, 11 Feb 2020 17:20:05 -0800",
"msg_from": "Melanie Plageman <melanieplageman@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Adding a test for speculative insert abort case"
},
{
"msg_contents": "On Wed, Feb 12, 2020 at 6:50 AM Melanie Plageman\n<melanieplageman@gmail.com> wrote:\n>\n>\n> On Tue, Feb 11, 2020 at 4:45 PM Andres Freund <andres@anarazel.de> wrote:\n>>\n>>\n>> I additionally restricted the controller_print_speculative_locks step to\n>> the current database and made a bunch of debug output more\n>> precise. Survived ~150 runs locally.\n>>\n>> Lets see what the buildfarm says...\n>>\n>\n> Thanks so much for finishing the patch and checking for race\n> conditions!\n>\n\nCan we change the status of CF entry for this patch [1] to committed\nor is there any work pending?\n\n\n[1] - https://commitfest.postgresql.org/27/2200/\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 24 Mar 2020 18:03:57 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Adding a test for speculative insert abort case"
},
{
"msg_contents": "On 2020-03-24 18:03:57 +0530, Amit Kapila wrote:\n> Can we change the status of CF entry for this patch [1] to committed\n> or is there any work pending?\n\nDone!\n\n\n",
"msg_date": "Tue, 24 Mar 2020 12:13:46 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Adding a test for speculative insert abort case"
}
] |
[
{
"msg_contents": "Hi,\n\nI see there's an ongoing discussion about race conditions in walreceiver\nblocking shutdown, so let me start a new thread about a race condition in\nwalsender during shutdown.\n\nThe symptoms are rather simple - 'pg_ctl -m fast shutdown' gets stuck,\nwaiting for walsender processes to catch-up and terminate indefinitely.\n\nThe reason for that is pretty simple - the walsenders are doing logical\ndecoding, and during shutdown they're waiting for WalSndCaughtUp=true.\nBut per XLogSendLogical() this only happens if\n\n if (logical_decoding_ctx->reader->EndRecPtr >= GetFlushRecPtr())\n {\n WalSndCaughtUp = true;\n ...\n }\n\nThat is, we need to get beyond GetFlushRecPtr(). But that may never\nhappen, because there may be an incomplete (and unflushed) page in WAL\nbuffers, not forced out by any transaction. So if there's a WAL record\noverflowing to the unflushed page, the walsender will never catch up.\n\nNow, this situation is apparently expected, because WalSndWaitForWal()\ndoes this:\n\n /*\n * If we're shutting down, trigger pending WAL to be written out,\n * otherwise we'd possibly end up waiting for WAL that never gets\n * written, because walwriter has shut down already.\n */\n if (got_STOPPING)\n XLogBackgroundFlush();\n\nbut unfortunately that does not actually do anything, because right at\nthe very beginning XLogBackgroundFlush() does this:\n\n /* back off to last completed page boundary */\n WriteRqst.Write -= WriteRqst.Write % XLOG_BLCKSZ;\n\nThat is, it intentionally ignores the incomplete page, which means the\nwalsender can't read the record and reach GetFlushRecPtr().\n\nXLogBackgroundFlush() has done this since (at least) 2007, so how come we\nnever had issues with this before?\n\nFirstly, walsenders used for physical replication don't have this issue,\nbecause they don't need to decode the WAL record overflowing to the\nincomplete/unflushed page. 
So it seems only walsenders used for logical\ndecoding are vulnerable to this.\n\nSecondly, this seems to happen only when a bit of WAL is generated just\nat the right moment during shutdown. I have initially ran into this issue\nwith the failover slots (i.e. the patch that was committed and reverted\nin the 9.6 cycle, IIRC), which is doing exactly this - it's writing the\nslot positions into WAL during shutdown. Which made it pretty trivial to\ntrigger this issue.\n\nBut I don't think we're safe without the failover slots patch, because\nany output plugin can do something similar - say, LogLogicalMessage() or\nsomething like that. I'm not aware of a plugin doing such things, but I\ndon't think it's illegal / prohibited either. (Of course, plugins that\ngenerate WAL won't be useful for decoding on standby in the future.)\n\nSo what I think we should do is to tweak XLogBackgroundFlush() so that\nduring shutdown it skips the back-off to page boundary, and flushes even\nthe last piece of WAL. There are only two callers anyway, so something\nlike XLogBackgroundFlush(bool) would be simple enough.\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services \n\n\n",
"msg_date": "Wed, 1 May 2019 02:28:45 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "walsender vs. XLogBackgroundFlush during shutdown"
},
{
"msg_contents": " Hi Tomas,\n\nOn Wed, 1 May 2019 at 02:28, Tomas Vondra <tomas.vondra@2ndquadrant.com> wrote:\n\n> I see there's an ongoing discussion about race conditions in walreceiver\n> blocking shutdown, so let me start a new thread about a race condition in\n> walsender during shutdown.\n>\n> The symptoms are rather simple - 'pg_ctl -m fast shutdown' gets stuck,\n> waiting for walsender processes to catch-up and terminate indefinitely.\n\nI can confirm, during the past couple of years we observed such a\nproblem a few times and this is really annoying.\n\n> The reason for that is pretty simple - the walsenders are doing logical\n> decoding, and during shutdown they're waiting for WalSndCaughtUp=true.\n> But per XLogSendLogical() this only happens if\n>\n> if (logical_decoding_ctx->reader->EndRecPtr >= GetFlushRecPtr())\n> {\n> WalSndCaughtUp = true;\n> ...\n> }\n\nAfter a couple of days investigating and debugging I came to a\nslightly different conclusion, WalSndCaughtUp is set to true in my\ncase.\nSince I am mostly a python person, I decided to use psycopg2 for my\ninvestigation. I took an example from\nhttp://initd.org/psycopg/docs/advanced.html#logical-replication-quick-start\nas a starting point, created a slot and started the script.\nI wasn't running any transactions, therefore the DemoConsumer.__call__\nwas never executed and cursor.send_feedback(flush_lsn=msg.data_start)\nwas never called either. Basically, the only thing the psycopg2\ninternals were doing was periodically sending keepalive messages or\nreplying to keepalives sent by postgres. 
In the postgres debug log\nthey are visible as:\n\n2019-05-01 12:58:32.785 CEST [13939] DEBUG: write 0/0 flush 0/0 apply 0/0\n\nIf you try to do a fast shutdown of postgres while the script is\nrunning, it will never finish, and in the postgres log you will get\nindefinite stream of messages:\n2019-05-01 13:00:02.880 CEST [13939] DEBUG: write 0/0 flush 0/0 apply 0/0\n2019-05-01 13:00:02.880 CEST [13939] DEBUG: sending replication keepalive\n2019-05-01 13:00:02.880 CEST [13939] DEBUG: write 0/0 flush 0/0 apply 0/0\n2019-05-01 13:00:02.880 CEST [13939] DEBUG: sending replication keepalive\n2019-05-01 13:00:02.881 CEST [13939] DEBUG: write 0/0 flush 0/0 apply 0/0\n2019-05-01 13:00:02.881 CEST [13939] DEBUG: sending replication keepalive\n2019-05-01 13:00:02.881 CEST [13939] DEBUG: write 0/0 flush 0/0 apply 0/0\n2019-05-01 13:00:02.881 CEST [13939] DEBUG: sending replication keepalive\n2019-05-01 13:00:02.881 CEST [13939] DEBUG: write 0/0 flush 0/0 apply 0/0\n2019-05-01 13:00:02.881 CEST [13939] DEBUG: sending replication keepalive\n2019-05-01 13:00:02.881 CEST [13939] DEBUG: write 0/0 flush 0/0 apply 0/0\n\nActually, the same problem will happen even in the case when the\nconsumer script receives some message, but not very intensively, but\nit is just a bit harder to reproduce it.\n\nIf you attach to the walsender with gdb, you'll see the following picture:\n(gdb) bt\n#0 0x00007fd6623d296a in __libc_send (fd=8, buf=0x55cb958dca08,\nlen=94, flags=0) at ../sysdeps/unix/sysv/linux/send.c:28\n#1 0x000055cb93aa7ce9 in secure_raw_write (port=0x55cb958d71e0,\nptr=0x55cb958dca08, len=94) at be-secure.c:318\n#2 0x000055cb93aa7b87 in secure_write (port=0x55cb958d71e0,\nptr=0x55cb958dca08, len=94) at be-secure.c:265\n#3 0x000055cb93ab6bf9 in internal_flush () at pqcomm.c:1433\n#4 0x000055cb93ab6b89 in socket_flush () at pqcomm.c:1409\n#5 0x000055cb93dac30b in send_message_to_frontend\n(edata=0x55cb942b4380 <errordata>) at elog.c:3317\n#6 0x000055cb93da8973 in EmitErrorReport () 
at elog.c:1481\n#7 0x000055cb93da5abf in errfinish (dummy=0) at elog.c:481\n#8 0x000055cb93da852d in elog_finish (elevel=13, fmt=0x55cb93f32de3\n\"sending replication keepalive\") at elog.c:1376\n#9 0x000055cb93bcae71 in WalSndKeepalive (requestReply=true) at\nwalsender.c:3358\n#10 0x000055cb93bca062 in WalSndDone (send_data=0x55cb93bc9e29\n<XLogSendLogical>) at walsender.c:2872\n#11 0x000055cb93bc9155 in WalSndLoop (send_data=0x55cb93bc9e29\n<XLogSendLogical>) at walsender.c:2194\n#12 0x000055cb93bc7b11 in StartLogicalReplication (cmd=0x55cb95931cc0)\nat walsender.c:1109\n#13 0x000055cb93bc83d6 in exec_replication_command\n(cmd_string=0x55cb958b2360 \"START_REPLICATION SLOT \\\"test\\\" LOGICAL\n0/00000000\") at walsender.c:1541\n#14 0x000055cb93c31653 in PostgresMain (argc=1, argv=0x55cb958deb68,\ndbname=0x55cb958deb48 \"postgres\", username=0x55cb958deb28\n\"akukushkin\") at postgres.c:4178\n#15 0x000055cb93b95185 in BackendRun (port=0x55cb958d71e0) at postmaster.c:4361\n#16 0x000055cb93b94824 in BackendStartup (port=0x55cb958d71e0) at\npostmaster.c:4033\n#17 0x000055cb93b90ccd in ServerLoop () at postmaster.c:1706\n#18 0x000055cb93b90463 in PostmasterMain (argc=3, argv=0x55cb958ac710)\nat postmaster.c:1379\n#19 0x000055cb93abb08e in main (argc=3, argv=0x55cb958ac710) at main.c:228\n(gdb) f 10\n#10 0x000055cb93bca062 in WalSndDone (send_data=0x55cb93bc9e29\n<XLogSendLogical>) at walsender.c:2872\n2872 WalSndKeepalive(true);\n(gdb) p WalSndCaughtUp\n$1 = true\n(gdb) p *MyWalSnd\n$2 = {pid = 21845, state = WALSNDSTATE_STREAMING, sentPtr = 23586168,\nneedreload = false, write = 0, flush = 23586112, apply = 0, writeLag =\n-1, flushLag = -1, applyLag = -1, mutex = 0 '\\000', latch =\n0x7fd66096b594, sync_standby_priority = 0}\n\nAs you can see, the value of WalSndCaughtUp is set to true! 
The\nshutdown never finishes because the value of sentPtr is higher than\nvalues of MyWalSnd->flush or MyWalSnd->write:\n(gdb) l 2858\n2853 /*\n2854 * To figure out whether all WAL has successfully been\nreplicated, check\n2855 * flush location if valid, write otherwise. Tools\nlike pg_receivewal will\n2856 * usually (unless in synchronous mode) return an\ninvalid flush location.\n2857 */\n2858 replicatedPtr = XLogRecPtrIsInvalid(MyWalSnd->flush) ?\n2859 MyWalSnd->write : MyWalSnd->flush;\n2860\n2861 if (WalSndCaughtUp && sentPtr == replicatedPtr &&\n2862 !pq_is_send_pending())\n2863 {\n2864 /* Inform the standby that XLOG streaming is done */\n2865 EndCommand(\"COPY 0\", DestRemote);\n2866 pq_flush();\n2867\n2868 proc_exit(0);\n2869 }\n\nWhat is more interesting, if one is using pg_recvlogical, it is not\npossible to reproduce the issue. That happens because pg_recvlogical\nsends the response on keepalive messages by sending the flush location\nequal to walEnd which it got from keepalive.\n\nAs a next logical step I tried to do the same in python with psycopg2.\nUnfortunately, the keepalive functionality is hidden in the C code and\nit is not possible to change it without recompiling the psycopg2, but\nthere is an asynchronous interface available:\nhttp://initd.org/psycopg/docs/extras.html#psycopg2.extras.ReplicationCursor.wal_end.\nI just had to do one minor adjustment:\n\n try:\n sel = select([cur], [], [], max(0, timeout))\n if not any(sel):\n- cur.send_feedback() # timed out, send keepalive message\n+ cur.send_feedback(flush_lsn=cur.wal_end) # timed out,\nsend keepalive message\n except InterruptedError:\n pass # recalculate timeout and continue\n\nwal_end is a property of the cursor object and it is updated not\nonly for every message received, but also from keepalive messages.\n\nAll above text probably looks like a brain dump, but I don't 
think\nthat it conflicts with Tomas's findings; it rather complements them.\nI am very glad that now I know how to mitigate the problem on the\nclient side, but IMHO it is also very important to fix the server\nbehavior if it is ever possible.\n\nRegards,\n--\nAlexander Kukushkin\n\n\n",
"msg_date": "Wed, 1 May 2019 14:13:10 +0200",
"msg_from": "Alexander Kukushkin <cyberdemn@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: walsender vs. XLogBackgroundFlush during shutdown"
},
{
"msg_contents": "On Wed, May 01, 2019 at 02:13:10PM +0200, Alexander Kukushkin wrote:\n> Hi Tomas,\n>\n>On Wed, 1 May 2019 at 02:28, Tomas Vondra <tomas.vondra@2ndquadrant.com> wrote:\n>\n>> I see there's an ongoing discussion about race conditions in walreceiver\n>> blocking shutdown, so let me start a new thread about a race condition in\n>> walsender during shutdown.\n>>\n>> The symptoms are rather simple - 'pg_ctl -m fast shutdown' gets stuck,\n>> waiting for walsender processes to catch-up and terminate indefinitely.\n>\n>I can confirm, during the past couple of years we observed such a\n>problem a few times and this is really annoying.\n>\n>> The reason for that is pretty simple - the walsenders are doing logical\n>> decoding, and during shutdown they're waiting for WalSndCaughtUp=true.\n>> But per XLogSendLogical() this only happens if\n>>\n>> if (logical_decoding_ctx->reader->EndRecPtr >= GetFlushRecPtr())\n>> {\n>> WalSndCaughtUp = true;\n>> ...\n>> }\n>\n>After a couple of days investigating and debugging I came to a\n>slightly different conclusion, WalSndCaughtUp is set to true in my\n>case.\n>Since I am mostly a python person, I decided to use psycopg2 for my\n>investigation. I took an example from\n>http://initd.org/psycopg/docs/advanced.html#logical-replication-quick-start\n>as a starting point, created a slot and started the script.\n>I wasn't running any transactions, therefore the DemoConsumer.__call__\n>was never executed and cursor.send_feedback(flush_lsn=msg.data_start)\n>was never called either. Basically, the only what the psycopg2\n>internals was doing - periodically sending keepalive messages or\n>replying to keepalives sent by postgres.\n\nOK, so that seems like a separate issue, somewhat unrelated to the issue I\nreported. 
And I'm not sure it's a walsender issue - it seems it might be a\npsycopg2 issue in not reporting the flush properly, no?\n\n>Actually, the same problem will happen even in the case when the\n>consumer script receives some message, but not very intensively, but\n>it is just a bit harder to reproduce it.\n>\n>If you attach to the walsender with gdb, you'll see the following picture:\n>(gdb) bt\n>#0 0x00007fd6623d296a in __libc_send (fd=8, buf=0x55cb958dca08,\n>len=94, flags=0) at ../sysdeps/unix/sysv/linux/send.c:28\n>#1 0x000055cb93aa7ce9 in secure_raw_write (port=0x55cb958d71e0,\n>ptr=0x55cb958dca08, len=94) at be-secure.c:318\n>#2 0x000055cb93aa7b87 in secure_write (port=0x55cb958d71e0,\n>ptr=0x55cb958dca08, len=94) at be-secure.c:265\n>#3 0x000055cb93ab6bf9 in internal_flush () at pqcomm.c:1433\n>#4 0x000055cb93ab6b89 in socket_flush () at pqcomm.c:1409\n>#5 0x000055cb93dac30b in send_message_to_frontend\n>(edata=0x55cb942b4380 <errordata>) at elog.c:3317\n>#6 0x000055cb93da8973 in EmitErrorReport () at elog.c:1481\n>#7 0x000055cb93da5abf in errfinish (dummy=0) at elog.c:481\n>#8 0x000055cb93da852d in elog_finish (elevel=13, fmt=0x55cb93f32de3\n>\"sending replication keepalive\") at elog.c:1376\n>#9 0x000055cb93bcae71 in WalSndKeepalive (requestReply=true) at\n>walsender.c:3358\n\nIs it stuck in the send() call forever, or did you happen to grab\nthis backtrace?\n\n>\n>All above text probably looks like a brain dump, but I don't think\n>that it conflicts with Tomas's findings it rather compliments them.\n>I am very glad that now I know how to mitigate the problem on the\n>client side, but IMHO it is also very important to fix the server\n>behavior if it is ever possible.\n>\n\nI think having a report of an issue, with a way to reproduce it is a first\n(and quite important) step. So thanks for doing that.\n\nThat being said, I think those are two separate issues, with different\ncauses and likely different fixes. 
I don't think fixing the xlog flush\nwill resolve your issue, and vice versa.\n\nFWIW attached is a patch that I used to reliably trigger the xlog flush\nissue - it simply adds LogLogicalMessage() to commit handler in the\nbuilt-in output plugin. So all you need to do is create a subscription,\nstart generating commit and trigger a restart. The message is 8kB, so it's\ndefinitely long enough to overflow to the next xlog page.\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services",
"msg_date": "Wed, 1 May 2019 17:02:20 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: walsender vs. XLogBackgroundFlush during shutdown"
},
{
"msg_contents": "Hi,\n\nOn 2019-05-01 02:28:45 +0200, Tomas Vondra wrote:\n> The reason for that is pretty simple - the walsenders are doing logical\n> decoding, and during shutdown they're waiting for WalSndCaughtUp=true.\n> But per XLogSendLogical() this only happens if\n> \n> if (logical_decoding_ctx->reader->EndRecPtr >= GetFlushRecPtr())\n> {\n> WalSndCaughtUp = true;\n> ...\n> }\n> \n> That is, we need to get beyong GetFlushRecPtr(). But that may never\n> happen, because there may be incomplete (and unflushed) page in WAL\n> buffers, not forced out by any transaction. So if there's a WAL record\n> overflowing to the unflushed page, the walsender will never catch up.\n> \n> Now, this situation is apparently expected, because WalSndWaitForWal()\n> does this:\n> \n> /*\n> * If we're shutting down, trigger pending WAL to be written out,\n> * otherwise we'd possibly end up waiting for WAL that never gets\n> * written, because walwriter has shut down already.\n> */\n> if (got_STOPPING)\n> XLogBackgroundFlush();\n> \n> but unfortunately that does not actually do anything, because right at\n> the very beginning XLogBackgroundFlush() does this:\n> \n> /* back off to last completed page boundary */\n> WriteRqst.Write -= WriteRqst.Write % XLOG_BLCKSZ;\n\n> That is, it intentionally ignores the incomplete page, which means the\n> walsender can't read the record and reach GetFlushRecPtr().\n> \n> XLogBackgroundFlush() does this since (at least) 2007, so how come we\n> never had issues with this before?\n\nI assume that's because of the following logic:\n\t/* if we have already flushed that far, consider async commit records */\n\tif (WriteRqst.Write <= LogwrtResult.Flush)\n\t{\n\t\tSpinLockAcquire(&XLogCtl->info_lck);\n\t\tWriteRqst.Write = XLogCtl->asyncXactLSN;\n\t\tSpinLockRelease(&XLogCtl->info_lck);\n\t\tflexible = false;\t\t/* ensure it all gets written */\n\t}\n\nand various pieces of the code doing XLogSetAsyncXactLSN() to force\nflushing. 
I wonder if the issue is that we're better at avoiding\nunnecessary WAL to be written due to\n6ef2eba3f57f17960b7cd4958e18aa79e357de2f\n\n\n> But I don't think we're safe without the failover slots patch, because\n> any output plugin can do something similar - say, LogLogicalMessage() or\n> something like that. I'm not aware of a plugin doing such things, but I\n> don't think it's illegal / prohibited either. (Of course, plugins that\n> generate WAL won't be useful for decoding on standby in the future.)\n\nFWIW, I'd consider such an output plugin outright broken.\n\n\n> So what I think we should do is to tweak XLogBackgroundFlush() so that\n> during shutdown it skips the back-off to page boundary, and flushes even\n> the last piece of WAL. There are only two callers anyway, so something\n> like XLogBackgroundFlush(bool) would be simple enough.\n\nI think it then just ought to be a normal XLogFlush(). I.e. something\nalong the lines of:\n\n\t\t/*\n\t\t * If we're shutting down, trigger pending WAL to be written out,\n\t\t * otherwise we'd possibly end up waiting for WAL that never gets\n\t\t * written, because walwriter has shut down already.\n\t\t */\n\t\tif (got_STOPPING && !RecoveryInProgress())\n\t\t\tXLogFlush(GetXLogInsertRecPtr());\n\n\t\t/* Update our idea of the currently flushed position. */\n\t\tif (!RecoveryInProgress())\n\t\t\tRecentFlushPtr = GetFlushRecPtr();\n\t\telse\n\t\t\tRecentFlushPtr = GetXLogReplayRecPtr(NULL);\n\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 1 May 2019 08:53:15 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: walsender vs. XLogBackgroundFlush during shutdown"
},
{
"msg_contents": "On Wed, May 01, 2019 at 08:53:15AM -0700, Andres Freund wrote:\n>Hi,\n>\n>On 2019-05-01 02:28:45 +0200, Tomas Vondra wrote:\n>> The reason for that is pretty simple - the walsenders are doing logical\n>> decoding, and during shutdown they're waiting for WalSndCaughtUp=true.\n>> But per XLogSendLogical() this only happens if\n>>\n>> if (logical_decoding_ctx->reader->EndRecPtr >= GetFlushRecPtr())\n>> {\n>> WalSndCaughtUp = true;\n>> ...\n>> }\n>>\n>> That is, we need to get beyong GetFlushRecPtr(). But that may never\n>> happen, because there may be incomplete (and unflushed) page in WAL\n>> buffers, not forced out by any transaction. So if there's a WAL record\n>> overflowing to the unflushed page, the walsender will never catch up.\n>>\n>> Now, this situation is apparently expected, because WalSndWaitForWal()\n>> does this:\n>>\n>> /*\n>> * If we're shutting down, trigger pending WAL to be written out,\n>> * otherwise we'd possibly end up waiting for WAL that never gets\n>> * written, because walwriter has shut down already.\n>> */\n>> if (got_STOPPING)\n>> XLogBackgroundFlush();\n>>\n>> but unfortunately that does not actually do anything, because right at\n>> the very beginning XLogBackgroundFlush() does this:\n>>\n>> /* back off to last completed page boundary */\n>> WriteRqst.Write -= WriteRqst.Write % XLOG_BLCKSZ;\n>\n>> That is, it intentionally ignores the incomplete page, which means the\n>> walsender can't read the record and reach GetFlushRecPtr().\n>>\n>> XLogBackgroundFlush() does this since (at least) 2007, so how come we\n>> never had issues with this before?\n>\n>I assume that's because of the following logic:\n>\t/* if we have already flushed that far, consider async commit records */\n>\tif (WriteRqst.Write <= LogwrtResult.Flush)\n>\t{\n>\t\tSpinLockAcquire(&XLogCtl->info_lck);\n>\t\tWriteRqst.Write = XLogCtl->asyncXactLSN;\n>\t\tSpinLockRelease(&XLogCtl->info_lck);\n>\t\tflexible = false;\t\t/* ensure it all gets written 
*/\n>\t}\n>\n>and various pieces of the code doing XLogSetAsyncXactLSN() to force\n>flushing. I wonder if the issue is that we're better at avoiding\n>unnecessary WAL to be written due to\n>6ef2eba3f57f17960b7cd4958e18aa79e357de2f\n>\n\nI don't think so, because (a) there are no async commits involved, and (b)\nwe originally ran into the issue on 9.6 and 6ef2eba3f57f1 is only in 10+.\n\n>\n>> But I don't think we're safe without the failover slots patch, because\n>> any output plugin can do something similar - say, LogLogicalMessage() or\n>> something like that. I'm not aware of a plugin doing such things, but I\n>> don't think it's illegal / prohibited either. (Of course, plugins that\n>> generate WAL won't be useful for decoding on standby in the future.)\n>\n>FWIW, I'd consider such an output plugin outright broken.\n>\n\nWhy? Is that prohibited somewhere, either explicitly or implicitly? I do\nsee obvious issues with generating WAL from a plugin (infinite cycle and so\non), but I suppose that's more a regular coding issue than something that\nwould make all plugins doing that broken.\n\nFWIW I don't see any particular need to generate WAL from output plugins,\nI mentioned it mostly just as a convenient way to trigger the issue. I\nsuppose there are other ways to generate a bit of WAL during shutdown.\n\n>\n>> So what I think we should do is to tweak XLogBackgroundFlush() so that\n>> during shutdown it skips the back-off to page boundary, and flushes even\n>> the last piece of WAL. There are only two callers anyway, so something\n>> like XLogBackgroundFlush(bool) would be simple enough.\n>\n>I think it then just ought to be a normal XLogFlush(). I.e. 
something\n>along the lines of:\n>\n>\t\t/*\n>\t\t * If we're shutting down, trigger pending WAL to be written out,\n>\t\t * otherwise we'd possibly end up waiting for WAL that never gets\n>\t\t * written, because walwriter has shut down already.\n>\t\t */\n>\t\tif (got_STOPPING && !RecoveryInProgress())\n>\t\t\tXLogFlush(GetXLogInsertRecPtr());\n>\n>\t\t/* Update our idea of the currently flushed position. */\n>\t\tif (!RecoveryInProgress())\n>\t\t\tRecentFlushPtr = GetFlushRecPtr();\n>\t\telse\n>\t\t\tRecentFlushPtr = GetXLogReplayRecPtr(NULL);\n>\n\nPerhaps. That would work too, I guess.\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n\n",
"msg_date": "Wed, 1 May 2019 18:46:20 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: walsender vs. XLogBackgroundFlush during shutdown"
},
{
"msg_contents": "Hi,\n\nOn Wed, 1 May 2019 at 17:02, Tomas Vondra <tomas.vondra@2ndquadrant.com> wrote:\n\n> OK, so that seems like a separate issue, somewhat unrelated to the issue I\n> reported. And I'm not sure it's a walsender issue - it seems it might be a\n> psycopg2 issue in not reporting the flush properly, no?\n\nAgree, it is a different issue, but I am unsure what to blame,\npostgres or psycopg2.\nRight now in psycopg2 we confirm more or less every XLogData\nmessage, but at the same time the LSN on the primary is moving forward and\nwe get updates with keepalive messages.\nI perfectly understand the need to periodically send updates of flush\n= walEnd (which comes from the keepalive). It might happen that there is\nno transaction activity but WAL is still generated, and as a result the\nreplication slot will prevent WAL from being cleaned up.\nFrom the client side perspective, it confirmed everything that it\nshould, but from the postgres side, this is not enough to shut down\ncleanly. 
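The difference between the two reply styles can be sketched with a toy model of the feedback loop (pure Python, made-up names, not psycopg2 internals):

```python
def rounds_until_shutdown(sent_ptr, last_xlogdata_lsn, echo_wal_end,
                          max_rounds=10):
    """Count keepalive rounds until the reported flush catches sentPtr."""
    flush = last_xlogdata_lsn
    for rnd in range(1, max_rounds + 1):
        if flush == sent_ptr:          # the walsender's shutdown condition
            return rnd
        wal_end = sent_ptr             # keepalive carries the current position
        if echo_wal_end:
            flush = wal_end            # pg_recvlogical-style reply
        # otherwise the client keeps re-sending the stale XLogData position
    return None                        # never converges

print(rounds_until_shutdown(23586168, 23586112, echo_wal_end=True))   # 2
print(rounds_until_shutdown(23586168, 23586112, echo_wal_end=False))  # None
```

With the stale reply the reported flush never reaches sentPtr, which is exactly the loop seen during shutdown.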
Maybe it is possible to change the check (sentPtr ==\nreplicatedPtr) to something like (lastMsgSentPtr <= replicatedPtr) or\nwould it be unsafe?\n\n> >Actually, the same problem will happen even in the case when the\n> >consumer script receives some message, but not very intensively, but\n> >it is just a bit harder to reproduce it.\n> >\n> >If you attach to the walsender with gdb, you'll see the following picture:\n> >(gdb) bt\n> >#0 0x00007fd6623d296a in __libc_send (fd=8, buf=0x55cb958dca08,\n> >len=94, flags=0) at ../sysdeps/unix/sysv/linux/send.c:28\n> >#1 0x000055cb93aa7ce9 in secure_raw_write (port=0x55cb958d71e0,\n> >ptr=0x55cb958dca08, len=94) at be-secure.c:318\n> >#2 0x000055cb93aa7b87 in secure_write (port=0x55cb958d71e0,\n> >ptr=0x55cb958dca08, len=94) at be-secure.c:265\n> >#3 0x000055cb93ab6bf9 in internal_flush () at pqcomm.c:1433\n> >#4 0x000055cb93ab6b89 in socket_flush () at pqcomm.c:1409\n> >#5 0x000055cb93dac30b in send_message_to_frontend\n> >(edata=0x55cb942b4380 <errordata>) at elog.c:3317\n> >#6 0x000055cb93da8973 in EmitErrorReport () at elog.c:1481\n> >#7 0x000055cb93da5abf in errfinish (dummy=0) at elog.c:481\n> >#8 0x000055cb93da852d in elog_finish (elevel=13, fmt=0x55cb93f32de3\n> >\"sending replication keepalive\") at elog.c:1376\n> >#9 0x000055cb93bcae71 in WalSndKeepalive (requestReply=true) at\n> >walsender.c:3358\n>\n> Is it stuck in the send() call forever, or did you happen to grab\n> this bracktrace?\n\nNo, it didn't get stuck there. During the shutdown postgres starts sending\na few thousand keepalive messages per second and receives back just as many\nfeedback messages, therefore the chances of interrupting it somewhere in\nthe send are quite high.\nThe loop never breaks because psycopg2 is always replying with the\nsame flush as the very last time, which was set during processing of\nthe XLogData message.\n\n> I think having a report of an issue, with a way to reproduce it is a first\n> (and quite important) step. 
So thanks for doing that.\n>\n> That being said, I think those are two separate issues, with different\n> causes and likely different fixes. I don't think fixing the xlog flush\n> will resolve your issue, and vice versa.\n\nAgree, these are different issues.\n\nRegards,\n--\nAlexander Kukushkin\n\n\n",
"msg_date": "Wed, 1 May 2019 19:12:52 +0200",
"msg_from": "Alexander Kukushkin <cyberdemn@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: walsender vs. XLogBackgroundFlush during shutdown"
},
{
"msg_contents": "On Wed, May 01, 2019 at 07:12:52PM +0200, Alexander Kukushkin wrote:\n>Hi,\n>\n>On Wed, 1 May 2019 at 17:02, Tomas Vondra <tomas.vondra@2ndquadrant.com> wrote:\n>\n>> OK, so that seems like a separate issue, somewhat unrelated to the issue I\n>> reported. And I'm not sure it's a walsender issue - it seems it might be a\n>> psycopg2 issue in not reporting the flush properly, no?\n>\n>Agree, it is a different issue, but I am unsure what to blame,\n>postgres or psycopg2.\n>Right now in the psycopg2 we confirm more or less every XLogData\n>message, but at the same time LSN on the primary is moving forward and\n>we get updates with keepalive messages.\n>I perfectly understand the need to periodically send updates of flush\n>= walEnd (which comes from keepalive). It might happen that there is\n>no transaction activity but WAL is still generated and as a result\n>replication slot will prevent WAL from being cleaned up.\n>From the client side perspective, it confirmed everything that it\n>should, but from the postgres side, this is not enough to shut down\n>cleanly. Maybe it is possible to change the check (sentPtr ==\n>replicatedPtr) to something like (lastMsgSentPtr <= replicatedPtr) or\n>it would be unsafe?\n>\n\nI don't know.\n\nIn general I think it's a bit strange that we're waiting for walsender\nprocesses to catch up even in fast shutdown mode, instead of just aborting\nthem like other backends. But I assume there are reasons for that. 
OTOH it\nmakes us vulnerable to issues like this, when a (presumably) misbehaving\ndownstream prevents a shutdown.\n\n>> >ptr=0x55cb958dca08, len=94) at be-secure.c:318\n>> >#2 0x000055cb93aa7b87 in secure_write (port=0x55cb958d71e0,\n>> >ptr=0x55cb958dca08, len=94) at be-secure.c:265\n>> >#3 0x000055cb93ab6bf9 in internal_flush () at pqcomm.c:1433\n>> >#4 0x000055cb93ab6b89 in socket_flush () at pqcomm.c:1409\n>> >#5 0x000055cb93dac30b in send_message_to_frontend\n>> >(edata=0x55cb942b4380 <errordata>) at elog.c:3317\n>> >#6 0x000055cb93da8973 in EmitErrorReport () at elog.c:1481\n>> >#7 0x000055cb93da5abf in errfinish (dummy=0) at elog.c:481\n>> >#8 0x000055cb93da852d in elog_finish (elevel=13, fmt=0x55cb93f32de3\n>> >\"sending replication keepalive\") at elog.c:1376\n>> >#9 0x000055cb93bcae71 in WalSndKeepalive (requestReply=true) at\n>> >walsender.c:3358\n>>\n>> Is it stuck in the send() call forever, or did you happen to grab\n>> this bracktrace?\n>\n>No, it didn't stuck there. During the shutdown postgres starts sending\n>a few thousand keepalive messages per second and receives back so many\n>feedback message, therefore the chances of interrupting somewhere in\n>the send are quite high.\n\nUh, that seems a bit broken, perhaps?\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n\n",
"msg_date": "Thu, 2 May 2019 14:35:45 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: walsender vs. XLogBackgroundFlush during shutdown"
},
{
"msg_contents": "On Wed, 1 May 2019 at 01:28, Tomas Vondra <tomas.vondra@2ndquadrant.com>\nwrote:\n\n\n> Now, this situation is apparently expected, because WalSndWaitForWal()\n> does this:\n>\n> /*\n> * If we're shutting down, trigger pending WAL to be written out,\n> * otherwise we'd possibly end up waiting for WAL that never gets\n> * written, because walwriter has shut down already.\n> */\n> if (got_STOPPING)\n> XLogBackgroundFlush();\n>\n> but unfortunately that does not actually do anything, because right at\n> the very beginning XLogBackgroundFlush() does this:\n>\n> /* back off to last completed page boundary */\n> WriteRqst.Write -= WriteRqst.Write % XLOG_BLCKSZ;\n>\n> That is, it intentionally ignores the incomplete page, which means the\n> walsender can't read the record and reach GetFlushRecPtr().\n>\n> XLogBackgroundFlush() does this since (at least) 2007, so how come we\n> never had issues with this before?\n>\n\nYeh, not quite what I originally wrote for that.\n\nI think the confusion is that XLogBackgroundFlush() doesn't do quite what\nit says.\n\nXLogWrite() kinda does with its \"flexible\" parameter. 
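As a rough sketch of what such a flag would mean (hypothetical Python, mirroring the semantics of XLogWrite()'s parameter rather than any real function; XLOG_BLCKSZ assumed 8192 as in a default build):

```python
XLOG_BLCKSZ = 8192  # default WAL block size

def flush_target(write_rqst, flexible=True):
    # flexible: stop at the last completed page boundary, as
    # XLogBackgroundFlush() does today; non-flexible: flush through the
    # exact request, incomplete last page included
    if flexible:
        write_rqst -= write_rqst % XLOG_BLCKSZ
    return write_rqst

# A record ending 100 bytes into the fourth page:
lsn = 3 * XLOG_BLCKSZ + 100
print(flush_target(lsn, flexible=True))   # 24576 -- the tail is skipped
print(flush_target(lsn, flexible=False))  # 24676 -- the tail is flushed
```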
So I suggest we do\nthe same on XLogBackgroundFlush() so callers can indicate whether they want\nit to be flexible or not.\n\n-- \nSimon Riggs http://www.2ndQuadrant.com/\n<http://www.2ndquadrant.com/>\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services",
"msg_date": "Thu, 2 May 2019 15:00:44 +0100",
"msg_from": "Simon Riggs <simon@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: walsender vs. XLogBackgroundFlush during shutdown"
},
{
"msg_contents": "On Thu, 2 May 2019 at 14:35, Tomas Vondra <tomas.vondra@2ndquadrant.com> wrote:\n> >From the client side perspective, it confirmed everything that it\n> >should, but from the postgres side, this is not enough to shut down\n> >cleanly. Maybe it is possible to change the check (sentPtr ==\n> >replicatedPtr) to something like (lastMsgSentPtr <= replicatedPtr) or\n> >it would be unsafe?\n>\n> I don't know.\n>\n> In general I think it's a bit strange that we're waiting for walsender\n> processes to catch up even in fast shutdown mode, instead of just aborting\n> them like other backends. But I assume there are reasons for that. OTOH it\n> makes us vulnerable to issues like this, when a (presumably) misbehaving\n> downstream prevents a shutdown.\n\nIMHO waiting until the remote side has received and flushed all changes is the\nright strategy, but physical and logical replication should be handled\nslightly differently.\nFor physical replication we want to make sure that the remote side\nreceived and flushed all changes, otherwise in case of a switchover we\nwon't be able to join the former primary as a new standby.\nThe logical replication case is a bit different. I think we can safely\nshut down the walsender when the client has confirmed the last XLogData\nmessage, while now we are waiting until the client confirms the wal_end\nreceived in the keepalive message. If we shut down the walsender too early,\nand do a switchover, the client might miss some events, because\nlogical slots are not replicated :(\n\n\n> >No, it didn't stuck there. 
During the shutdown postgres starts sending\n> >a few thousand keepalive messages per second and receives back so many\n> >feedback message, therefore the chances of interrupting somewhere in\n> >the send are quite high.\n>\n> Uh, that seems a bit broken, perhaps?\n\nIndeed, this is broken psycopg2 behavior :(\nI am thinking about submitting a patch fixing it.\n\nActually I quickly skimmed through the pgjdbc logical replication\nsource code and example\nhttps://jdbc.postgresql.org/documentation/head/replication.html and I\nthink that it will also cause problems with the shutdown.\n\nRegards,\n--\nAlexander Kukushkin\n\n\n",
"msg_date": "Sun, 5 May 2019 17:31:35 +0200",
"msg_from": "Alexander Kukushkin <cyberdemn@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: walsender vs. XLogBackgroundFlush during shutdown"
}
] |
[
{
"msg_contents": "Hello,\n\nCurrently PostgreSQL doesn't support full text search natively for many\nAsian languages such as Chinese, Japanese and others. These languages are\nused by a large portion of the population of the world.\n\nThe two key modules that could be modified to support Asian languages are\nthe full text search module (including tsvector) and pg_trgm.\n\nI would like to propose that this support be added to PostgreSQL.\n\nFor full text search, PostgreSQL could add a new parser (\nhttps://www.postgresql.org/docs/9.2/textsearch-parsers.html) that\nimplements ICU word tokenization. This should be a lot easier than\nbefore now that PostgreSQL itself already includes ICU dependencies for\nother things.\n\nThen allow the ICU parser to be chosen at run-time (via a run-time config\nor an option to to_tsvector). That is all that is needed to support full\ntext search for many more Asian languages natively in PostgreSQL such as\nChinese, Japanese and Thai.\n\nFor example Elastic Search implements this using its ICU Tokenizer plugin:\nhttps://www.elastic.co/guide/en/elasticsearch/guide/current/icu-tokenizer.html\n\nSome information about the related APIs in ICU for this is at:\nhttp://userguide.icu-project.org/boundaryanalysis\n\nAnother simple improvement that would give another option for searching for\nAsian languages is to add a run-time setting for pg_trgm that would tell it\nto not drop non-ascii characters, as currently it only indexes ascii\ncharacters and thus all Asian language characters are dropped.\n\nI emphasize 'run-time setting' because when using PostgreSQL via a\nDatabase-As-A-Service service provider, most of the time it is not possible\nto change the config files, recompile sources, or add any new extensions.\n\nPostgreSQL is an awesome project and probably the best RDBMS right now. 
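To make the pg_trgm half of this concrete, here is a toy trigram extractor in Python (following the word-padding rule described in the pg_trgm docs -- two leading blanks, one trailing blank per word; the real extension also splits on non-word characters and its handling of non-alphanumerics depends on build flags). Keeping non-ASCII characters yields perfectly usable trigrams for Thai or Japanese text:

```python
def trigrams(word):
    # pad as pg_trgm does: two blanks before the word, one after
    padded = "  " + word.lower() + " "
    return {padded[i:i + 3] for i in range(len(padded) - 2)}

print(sorted(trigrams("cat")))
# ['  c', ' ca', 'at ', 'cat'] -- same set as pg_trgm's show_trgm('cat')

# With non-ASCII characters kept, Thai text works the same way:
print(sorted(trigrams("แมว")))  # Thai for "cat"
```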
I\nhope the maintainers consider this suggestion.\n\nBest Regards,\nChanon",
"msg_date": "Wed, 1 May 2019 08:55:50 +0700",
"msg_from": "Chanon Sajjamanochai <chanon.s@gmail.com>",
"msg_from_op": true,
"msg_subject": "PostgreSQL Asian language support for full text search using ICU (and\n also updating pg_trgm)"
},
{
"msg_contents": "[redirected to hackers list since I think this topic is related to\nadding a new PostgreSQL feature.]\n\nI think there's no doubt that it would be nice if PostgreSQL natively\nsupported Asian languages. As a first step, I briefly tested the ICU\ntokenizer (ubrk_open and other functions) with Japanese, the only\nAsian language I understand. The result was a little bit different\nfrom the most popular Japanese tokenizer \"Mecab\" [1], but it seems I\ncan live with that as long as it's used for full text search\npurposes. Of course more tests would be needed though.\n\nIn addition to the accuracy of tokenizing, performance is of course\nimportant. This needs more work.\n\nI think the same studies would be needed for other Asian languages. Hope\nsomeone who is familiar with other Asian languages volunteers to do\nthe task.\n\n[1] https://taku910.github.io/mecab/\n\nBest regards,\n--\nTatsuo Ishii\nSRA OSS, Inc. Japan\nEnglish: http://www.sraoss.co.jp/index_en.php\nJapanese: http://www.sraoss.co.jp\n\nFrom: Chanon Sajjamanochai <chanon.s@gmail.com>\nSubject: PostgreSQL Asian language support for full text search using ICU (and also updating pg_trgm)\nDate: Wed, 1 May 2019 08:55:50 +0700\nMessage-ID: <CAEV3FNPU8hU_hi=0+QNAbEkc-uO8-K9PB3aAChdmcCyPfWX6rg@mail.gmail.com>\n\n> Hello,\n> \n> Currently PostgreSQL doesn't support full text search natively for many\n> Asian languages such as Chinese, Japanese and others. These languages are\n> used by a large portion of the population of the world.\n> \n> The two key modules that could be modified to support Asian languages are\n> the full text search module (including tsvector) and pg_trgm.\n> \n> I would like to propose that this support be added to PostgreSQL.\n> \n> For full text search, PostgreSQL could add a new parser (\n> https://www.postgresql.org/docs/9.2/textsearch-parsers.html) that\n> implements ICU word tokenization. 
This should be a lot more easier than\n> before now that PostgreSQL itself already includes ICU dependencies for\n> other things.\n> \n> Then allow the ICU parser to be chosen at run-time (via a run-time config\n> or an option to to_tsvector). That is all that is needed to support full\n> text search for many more Asian languages natively in PostgreSQL such as\n> Chinese, Japanese and Thai.\n> \n> For example Elastic Search implements this using its ICU Tokenizer plugin:\n> https://www.elastic.co/guide/en/elasticsearch/guide/current/icu-tokenizer.html\n> \n> Some information about the related APIs in ICU for this are at:\n> http://userguide.icu-project.org/boundaryanalysis\n> \n> Another simple improvement that would give another option for searching for\n> Asian languages is to add a run-time setting for pg_trgm that would tell it\n> to not drop non-ascii characters, as currently it only indexes ascii\n> characters and thus all Asian language characters are dropped.\n> \n> I emphasize 'run-time setting' because when using PostgreSQL via a\n> Database-As-A-Service service provider, most of the time it is not possible\n> to change the config files, recompile sources, or add any new extensions.\n> \n> PostgreSQL is an awesome project and probably the best RDBMS right now. I\n> hope the maintainers consider this suggestion.\n> \n> Best Regards,\n> Chanon\n\n\n",
"msg_date": "Thu, 02 May 2019 10:07:11 +0900 (JST)",
"msg_from": "Tatsuo Ishii <ishii@sraoss.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL Asian language support for full text search using\n ICU (and also updating pg_trgm)"
}
] |
[
{
"msg_contents": "I don't think the changes made in PG 12 are documented accurately. It\ncurrently says:\n\n\tto_timestamp and to_date matches any single separator in the input\n\tstring or is skipped\n\nHowever, I think it is more accurate to say _multiple_ whitespace can\nalso be matched by a single separator:\n\n\tSELECT to_timestamp('%1976','_YYYY');\n\t to_timestamp\n\t------------------------\n\t 1976-01-01 00:00:00-05\n\t\n\tSELECT to_timestamp('%%1976','_YYYY');\n\tERROR: invalid value \"%197\" for \"YYYY\"\n\tDETAIL: Value must be an integer.\n\t\n\t-- two spaces\n-->\tSELECT to_timestamp(' 1976','_YYYY');\n\t to_timestamp\n\t------------------------\n\t 1976-01-01 00:00:00-05\n\n-->\tSELECT to_timestamp(E'\\t\\t\\t1976','_YYYY');\n\t to_timestamp\n\t------------------------\n\t 1976-01-01 00:00:00-05\n\nProposed patch attached.\n\n-- \n Bruce Momjian <bruce@momjian.us> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n+ As you are, so once was I. As I am, so you will be. +\n+ Ancient Roman grave inscription +",
"msg_date": "Wed, 1 May 2019 09:38:56 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": true,
"msg_subject": "to_timestamp docs"
},
{
"msg_contents": "Bruce Momjian <bruce@momjian.us> writes:\n> I don't think the changes made in PG 12 are documented accurately.\n\nThat code is swapped out of my head at the moment, but it looks\nto me like the para before the one you changed is where we discuss\nthe behavior for whitespace. I'm not sure that this change is\nright, or an improvement, in the context of both paras.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 01 May 2019 10:01:50 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: to_timestamp docs"
},
{
"msg_contents": "On Wed, May 1, 2019 at 10:01:50AM -0400, Tom Lane wrote:\n> Bruce Momjian <bruce@momjian.us> writes:\n> > I don't think the changes made in PG 12 are documented accurately.\n> \n> That code is swapped out of my head at the moment, but it looks\n> to me like the para before the one you changed is where we discuss\n> the behavior for whitespace. I'm not sure that this change is\n> right, or an improvement, in the context of both paras.\n\nThanks. I think I see the sentence you are thinking of:\n\n <function>to_timestamp</function> and <function>to_date</function>\n skip multiple blank spaces at the beginning of the input string\n and around date and time values unless the <literal>FX</literal>\n option is used.\n\nHowever, first, it is unclear what 'skip' means here, i.e., does it mean\nmultiple blank spaces become a single space, or they are ignored. That\nshould be clarified, though I am unclear if that matters based on how\nseparators are handled. Also, I think \"blank spaces\" should be\n\"whitespace\".\n\nSecond, I see inconsistent behaviour around the use of FX for various\npatterns, e.g.:\n\n\tSELECT to_timestamp('5 1976','FXDD_FXYYYY');\n\t to_timestamp\n\t------------------------\n\t 1976-01-05 00:00:00-05\n\t\n\tSELECT to_timestamp('JUL JUL','FXMON_FXMON');\n\t to_timestamp\n\t---------------------------------\n\t 0001-07-01 00:00:00-04:56:02 BC\n\n\tSELECT to_timestamp('JUL JUL','FXMON_FXMON');\n\tERROR: invalid value \" \" for \"MON\"\n\tDETAIL: The given value did not match any of the allowed values for this field.\n\nIt seems DD and YYYY (as numerics?) in FX mode eat trailing whitespace,\nwhile MON does not? 
Also, I used these queries to determine it is\n\"trailing\" whitespace that \"FXMON\" controls:\n\n\tSELECT to_timestamp('JUL JUL JUL','MON_FXMON_MON');\n\t to_timestamp\n\t---------------------------------\n\t 0001-07-01 00:00:00-04:56:02 BC\n\t\n\tSELECT to_timestamp('JUL JUL JUL','MON_FXMON_MON');\n\tERROR: invalid value \" J\" for \"MON\"\n\tDETAIL: The given value did not match any of the allowed values for this field.\n\nOnce we figure out how it is behaving I think we can pull together the\nFX text above to reference the separator text below.\n\n-- \n Bruce Momjian <bruce@momjian.us> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n+ As you are, so once was I. As I am, so you will be. +\n+ Ancient Roman grave inscription +\n\n\n",
"msg_date": "Wed, 1 May 2019 11:04:53 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": true,
"msg_subject": "Re: to_timestamp docs"
},
{
"msg_contents": "Hello,\n\nOn Wed, May 1, 2019 at 6:05 PM Bruce Momjian <bruce@momjian.us> wrote:\n> Thanks. I think I see the sentence you are thinking of:\n>\n> <function>to_timestamp</function> and <function>to_date</function>\n> skip multiple blank spaces at the beginning of the input string\n> and around date and time values unless the <literal>FX</literal>\n> option is used.\n>\n> However, first, it is unclear what 'skip' means here, i.e., does it mean\n> multiple blank spaces become a single space, or they are ignored.\n\nI worked at to_timestamp some time ago. In this case multiple bank spaces at\nthe beginning should be ignored.\n\n> Second, I see inconsistent behaviour around the use of FX for various\n> patterns, e.g.:\n>\n> SELECT to_timestamp('5 1976','FXDD_FXYYYY');\n> to_timestamp\n> ------------------------\n> 1976-01-05 00:00:00-05\n\nHm, I think strspace_len() is partly to blame here, which is called by\nfrom_char_parse_int_len():\n\n/*\n * Skip any whitespace before parsing the integer.\n */\n*src += strspace_len(*src);\n\nBut even if you remove this line of code then strtol() will eat\nsurvived whitespaces:\n\nresult = strtol(init, src, 10);\n\nNot sure if we need some additional checks here if FX is set.\n\n> It seems DD and YYYY (as numerics?) in FX mode eat trailing whitespace,\n> while MON does not? Also, I used these queries to determine it is\n> \"trailing\" whitespace that \"FXMON\" controls:\n>\n> SELECT to_timestamp('JUL JUL JUL','MON_FXMON_MON');\n> to_timestamp\n> ---------------------------------\n> 0001-07-01 00:00:00-04:56:02 BC\n>\n> SELECT to_timestamp('JUL JUL JUL','MON_FXMON_MON');\n> ERROR: invalid value \" J\" for \"MON\"\n> DETAIL: The given value did not match any of the allowed values for this field.\n\nThe problem here is that you need to specify FX only once and at beginning of\nthe format string. 
It is stated in the documentation:\n\n\"FX must be specified as the first item in the template.\"\n\nIt works globally (but only for remaining string if you don't put it\nat the beginning)\nand you can set it only once. For example:\n\n=# SELECT to_timestamp('JUL JUL JUL','FXMON_MON_MON');\nERROR: invalid value \" J\" for \"MON\"\nDETAIL: The given value did not match any of the allowed values for this field.\n\n-- \nArthur Zakirov\nPostgres Professional: http://www.postgrespro.com\nRussian Postgres Company\n\n\n",
"msg_date": "Wed, 1 May 2019 23:20:05 +0300",
"msg_from": "Arthur Zakirov <a.zakirov@postgrespro.ru>",
"msg_from_op": false,
"msg_subject": "Re: to_timestamp docs"
},
{
"msg_contents": "On Wed, May 1, 2019 at 11:20:05PM +0300, Arthur Zakirov wrote:\n> Hello,\n> \n> On Wed, May 1, 2019 at 6:05 PM Bruce Momjian <bruce@momjian.us> wrote:\n> > Thanks. I think I see the sentence you are thinking of:\n> >\n> > <function>to_timestamp</function> and <function>to_date</function>\n> > skip multiple blank spaces at the beginning of the input string\n> > and around date and time values unless the <literal>FX</literal>\n> > option is used.\n> >\n> > However, first, it is unclear what 'skip' means here, i.e., does it mean\n> > multiple blank spaces become a single space, or they are ignored.\n> \n> I worked at to_timestamp some time ago. In this case multiple bank spaces at\n> the beginning should be ignored.\n\nOK.\n\n> > Second, I see inconsistent behaviour around the use of FX for various\n> > patterns, e.g.:\n> >\n> > SELECT to_timestamp('5 1976','FXDD_FXYYYY');\n> > to_timestamp\n> > ------------------------\n> > 1976-01-05 00:00:00-05\n> \n> Hm, I think strspace_len() is partly to blame here, which is called by\n> from_char_parse_int_len():\n> \n> /*\n> * Skip any whitespace before parsing the integer.\n> */\n> *src += strspace_len(*src);\n> \n> But even if you remove this line of code then strtol() will eat\n> survived whitespaces:\n> \n> result = strtol(init, src, 10);\n> \n> Not sure if we need some additional checks here if FX is set.\n\nYes, I suspected it was part of the input function, but it seems it is\ndone in two places. It seems we need the opposite of strspace_len() in\nthat place to throw an error if we are in FX mode.\n\n> The problem here is that you need to specify FX only once and at beginning of\n> the format string. 
It is stated in the documentation:\n> \n> \"FX must be specified as the first item in the template.\"\n\nUh, FX certainly changes behavior if it isn't the first thing in the\nformat string.\n\n> It works globally (but only for remaining string if you don't put it\n> at the beginning)\n\nUh, then the documentation is wrong?\n\n-- \n Bruce Momjian <bruce@momjian.us> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n+ As you are, so once was I. As I am, so you will be. +\n+ Ancient Roman grave inscription +\n\n\n",
"msg_date": "Wed, 1 May 2019 16:41:41 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": true,
"msg_subject": "Re: to_timestamp docs"
},
{
"msg_contents": "On Wed, May 1, 2019 at 11:20 PM Arthur Zakirov <a.zakirov@postgrespro.ru> wrote:\n> Hello,\n>\n> On Wed, May 1, 2019 at 6:05 PM Bruce Momjian <bruce@momjian.us> wrote:\n> > Thanks. I think I see the sentence you are thinking of:\n> >\n> > <function>to_timestamp</function> and <function>to_date</function>\n> > skip multiple blank spaces at the beginning of the input string\n> > and around date and time values unless the <literal>FX</literal>\n> > option is used.\n> >\n> > However, first, it is unclear what 'skip' means here, i.e., does it mean\n> > multiple blank spaces become a single space, or they are ignored.\n>\n> I worked at to_timestamp some time ago. In this case multiple bank spaces at\n> the beginning should be ignored.\n>\n> > Second, I see inconsistent behaviour around the use of FX for various\n> > patterns, e.g.:\n> >\n> > SELECT to_timestamp('5 1976','FXDD_FXYYYY');\n> > to_timestamp\n> > ------------------------\n> > 1976-01-05 00:00:00-05\n>\n> Hm, I think strspace_len() is partly to blame here, which is called by\n> from_char_parse_int_len():\n>\n> /*\n> * Skip any whitespace before parsing the integer.\n> */\n> *src += strspace_len(*src);\n>\n> But even if you remove this line of code then strtol() will eat\n> survived whitespaces:\n>\n> result = strtol(init, src, 10);\n>\n> Not sure if we need some additional checks here if FX is set.\n\nI'd like to add that this behavior is not new in 12. It was the same before.\n\n> > It seems DD and YYYY (as numerics?) in FX mode eat trailing whitespace,\n> > while MON does not? 
Also, I used these queries to determine it is\n> > \"trailing\" whitespace that \"FXMON\" controls:\n> >\n> > SELECT to_timestamp('JUL JUL JUL','MON_FXMON_MON');\n> > to_timestamp\n> > ---------------------------------\n> > 0001-07-01 00:00:00-04:56:02 BC\n> >\n> > SELECT to_timestamp('JUL JUL JUL','MON_FXMON_MON');\n> > ERROR: invalid value \" J\" for \"MON\"\n> > DETAIL: The given value did not match any of the allowed values for this field.\n>\n> The problem here is that you need to specify FX only once and at beginning of\n> the format string. It is stated in the documentation:\n>\n> \"FX must be specified as the first item in the template.\"\n>\n> It works globally (but only for remaining string if you don't put it\n> at the beginning)\n> and you can set it only once. For example:\n>\n> =# SELECT to_timestamp('JUL JUL JUL','FXMON_MON_MON');\n> ERROR: invalid value \" J\" for \"MON\"\n> DETAIL: The given value did not match any of the allowed values for this field.\n\nActually, FX takes effect on subsequent format patterns. This is not\ndocumented, but it copycats Oracle behavior. Sure, normally FX should\nbe specified as the first item. We could document current behavior or\nrestrict specifying FX not as first item. This is also not new in 12,\nso documenting current behavior is better for compatibility.\n\n------\nAlexander Korotkov\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company\n\n\n",
"msg_date": "Thu, 2 May 2019 00:49:23 +0300",
"msg_from": "Alexander Korotkov <a.korotkov@postgrespro.ru>",
"msg_from_op": false,
"msg_subject": "Re: to_timestamp docs"
},
{
"msg_contents": "On Thu, May 2, 2019 at 12:49:23AM +0300, Alexander Korotkov wrote:\n> On Wed, May 1, 2019 at 11:20 PM Arthur Zakirov <a.zakirov@postgrespro.ru> wrote:\n> > Hello,\n> > Not sure if we need some additional checks here if FX is set.\n> \n> I'd like to add that this behavior is not new in 12. It was the same before.\n\nAgreed, but since we are looking at it, let's document it.\n\n> > > It seems DD and YYYY (as numerics?) in FX mode eat trailing whitespace,\n> > > while MON does not? Also, I used these queries to determine it is\n> > > \"trailing\" whitespace that \"FXMON\" controls:\n> > >\n> > > SELECT to_timestamp('JUL JUL JUL','MON_FXMON_MON');\n> > > to_timestamp\n> > > ---------------------------------\n> > > 0001-07-01 00:00:00-04:56:02 BC\n> > >\n> > > SELECT to_timestamp('JUL JUL JUL','MON_FXMON_MON');\n> > > ERROR: invalid value \" J\" for \"MON\"\n> > > DETAIL: The given value did not match any of the allowed values for this field.\n> >\n> > The problem here is that you need to specify FX only once and at beginning of\n> > the format string. It is stated in the documentation:\n> >\n> > \"FX must be specified as the first item in the template.\"\n> >\n> > It works globally (but only for remaining string if you don't put it\n> > at the beginning)\n> > and you can set it only once. For example:\n> >\n> > =# SELECT to_timestamp('JUL JUL JUL','FXMON_MON_MON');\n> > ERROR: invalid value \" J\" for \"MON\"\n> > DETAIL: The given value did not match any of the allowed values for this field.\n> \n> Actually, FX takes effect on subsequent format patterns. This is not\n> documented, but it copycats Oracle behavior. Sure, normally FX should\n> be specified as the first item. We could document current behavior or\n> restrict specifying FX not as first item. This is also not new in 12,\n> so documenting current behavior is better for compatibility.\n\nAgreed. 
Since is it pre-12 behavior, I suggest we just document it and\nnot change it.\n\n-- \n Bruce Momjian <bruce@momjian.us> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n+ As you are, so once was I. As I am, so you will be. +\n+ Ancient Roman grave inscription +\n\n\n",
"msg_date": "Wed, 1 May 2019 18:02:40 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": true,
"msg_subject": "Re: to_timestamp docs"
},
{
"msg_contents": "On Thu, May 2, 2019 at 12:49 AM Alexander Korotkov\n<a.korotkov@postgrespro.ru> wrote:\n> > It works globally (but only for remaining string if you don't put it\n> > at the beginning)\n> > and you can set it only once. For example:\n> >\n> > =# SELECT to_timestamp('JUL JUL JUL','FXMON_MON_MON');\n> > ERROR: invalid value \" J\" for \"MON\"\n> > DETAIL: The given value did not match any of the allowed values for this field.\n>\n> Actually, FX takes effect on subsequent format patterns. This is not\n> documented, but it copycats Oracle behavior. Sure, normally FX should\n> be specified as the first item. We could document current behavior or\n> restrict specifying FX not as first item. This is also not new in 12,\n> so documenting current behavior is better for compatibility.\n\nI went to Oracle's documentation. It seems that the behavior is\nslightly different.\nTheir documentation says:\n\n\"A modifier can appear in a format model more than once. In such a case,\neach subsequent occurrence toggles the effects of the modifier. Its effects are\nenabled for the portion of the model following its first occurrence, and then\ndisabled for the portion following its second, and then reenabled for\nthe portion\nfollowing its third, and so on.\"\n\nIn PostgreSQL one cannot disable exact mode using second FX. I think we\nshouldn't add some restriction for FX. Instead PostgreSQL's documentation\ncan be fixed. And current explanation in the documentation might be wrong as\nBruce pointed.\n\n-- \nArthur Zakirov\nPostgres Professional: http://www.postgrespro.com\nRussian Postgres Company\n\n\n",
"msg_date": "Thu, 2 May 2019 01:03:38 +0300",
"msg_from": "Arthur Zakirov <a.zakirov@postgrespro.ru>",
"msg_from_op": false,
"msg_subject": "Re: to_timestamp docs"
},
{
"msg_contents": "On Thu, May 2, 2019 at 1:03 AM Arthur Zakirov <a.zakirov@postgrespro.ru> wrote:\n> On Thu, May 2, 2019 at 12:49 AM Alexander Korotkov\n> <a.korotkov@postgrespro.ru> wrote:\n> > Actually, FX takes effect on subsequent format patterns. This is not\n> > documented, but it copycats Oracle behavior. Sure, normally FX should\n> > be specified as the first item. We could document current behavior or\n> > restrict specifying FX not as first item. This is also not new in 12,\n> > so documenting current behavior is better for compatibility.\n>\n> I went to Oracle's documentation. It seems that the behavior is\n> slightly different.\n> Their documentation says:\n>\n> \"A modifier can appear in a format model more than once. In such a case,\n> each subsequent occurrence toggles the effects of the modifier. Its effects are\n> enabled for the portion of the model following its first occurrence, and then\n> disabled for the portion following its second, and then reenabled for\n> the portion\n> following its third, and so on.\"\n\nWhat about the patch I attached? It fixes the explanation of FX option a little.\n\n-- \nArthur Zakirov\nPostgres Professional: http://www.postgrespro.com\nRussian Postgres Company",
"msg_date": "Sat, 4 May 2019 19:11:21 +0300",
"msg_from": "Arthur Zakirov <a.zakirov@postgrespro.ru>",
"msg_from_op": false,
"msg_subject": "Re: to_timestamp docs"
}
] |
[
{
"msg_contents": "With the new pg_upgrade --clone, if we are going to end up throwing the\nerror \"file cloning not supported on this platform\" (which seems to depend\nonly on ifdefs) I think we should throw it first thing, before any other\nchecks are done and certainly before pg_dump gets run.\n\nThis might result in some small amount of code duplication, but I think it\nwould be worth the cost.\n\nFor cases where we might throw \"could not clone file between old and new\ndata directories\", I wonder if we shouldn't do some kind of dummy copy to\ncatch that error earlier, as well. Maybe that one is not worth it.\n\nCheers,\n\nJeff\n\nWith the new pg_upgrade --clone, if we are going to end up throwing the error \"file cloning not supported on this platform\" (which seems to depend only on ifdefs) I think we should throw it first thing, before any other checks are done and certainly before pg_dump gets run.This might result in some small amount of code duplication, but I think it would be worth the cost.For cases where we might throw \"could not clone file between old and new data directories\", I wonder if we shouldn't do some kind of dummy copy to catch that error earlier, as well. Maybe that one is not worth it.Cheers,Jeff",
"msg_date": "Wed, 1 May 2019 16:10:34 -0400",
"msg_from": "Jeff Janes <jeff.janes@gmail.com>",
"msg_from_op": true,
"msg_subject": "pg_upgrade --clone error checking"
},
{
"msg_contents": "On 2019-05-01 22:10, Jeff Janes wrote:\n> With the new pg_upgrade --clone, if we are going to end up throwing the\n> error \"file cloning not supported on this platform\" (which seems to\n> depend only on ifdefs) I think we should throw it first thing, before\n> any other checks are done and certainly before pg_dump gets run.\n\nCould you explain in more detail what command you are running, what\nmessages you are getting, and what you would like to see instead?\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Thu, 2 May 2019 17:57:36 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade --clone error checking"
},
{
"msg_contents": "On Thu, May 2, 2019 at 11:57 AM Peter Eisentraut <\npeter.eisentraut@2ndquadrant.com> wrote:\n\n> On 2019-05-01 22:10, Jeff Janes wrote:\n> > With the new pg_upgrade --clone, if we are going to end up throwing the\n> > error \"file cloning not supported on this platform\" (which seems to\n> > depend only on ifdefs) I think we should throw it first thing, before\n> > any other checks are done and certainly before pg_dump gets run.\n>\n> Could you explain in more detail what command you are running, what\n> messages you are getting, and what you would like to see instead?\n>\n\nI'm running:\n\npg_upgrade --clone -b /home/jjanes/pgsql/REL9_6_12/bin/ -B\n/home/jjanes/pgsql/origin_jit/bin/ -d /home/jjanes/pgsql/data_96/ -D\n/home/jjanes/pgsql/data_clone/\n\nAnd I get:\n\nPerforming Consistency Checks\n-----------------------------\nChecking cluster versions ok\nChecking database user is the install user ok\nChecking database connection settings ok\nChecking for prepared transactions ok\nChecking for reg* data types in user tables ok\nChecking for contrib/isn with bigint-passing mismatch ok\nChecking for tables WITH OIDS ok\nChecking for invalid \"unknown\" user columns ok\nCreating dump of global objects ok\nCreating dump of database schemas\n ok\nChecking for presence of required libraries ok\n\nfile cloning not supported on this platform\nFailure, exiting\n\nI think the error message wording is OK, I think it should be thrown\nearlier, before the \"Creating dump of database schemas\" (which can take a\nlong time), and preferably before either database is even started. 
So\nideally it would be something like:\n\n\nPerforming Consistency Checks\n-----------------------------\nChecking cluster versions\nChecking file cloning support\nFile cloning not supported on this platform\nFailure, exiting\n\n\nWhen something is doomed to fail, we should report the failure as early as\nfeasibly detectable.\n\nCheers,\n\nJeff\n\nOn Thu, May 2, 2019 at 11:57 AM Peter Eisentraut <peter.eisentraut@2ndquadrant.com> wrote:On 2019-05-01 22:10, Jeff Janes wrote:\n> With the new pg_upgrade --clone, if we are going to end up throwing the\n> error \"file cloning not supported on this platform\" (which seems to\n> depend only on ifdefs) I think we should throw it first thing, before\n> any other checks are done and certainly before pg_dump gets run.\n\nCould you explain in more detail what command you are running, what\nmessages you are getting, and what you would like to see instead?I'm running:pg_upgrade --clone -b /home/jjanes/pgsql/REL9_6_12/bin/ -B /home/jjanes/pgsql/origin_jit/bin/ -d /home/jjanes/pgsql/data_96/ -D /home/jjanes/pgsql/data_clone/And I get:Performing Consistency Checks-----------------------------Checking cluster versions okChecking database user is the install user okChecking database connection settings okChecking for prepared transactions okChecking for reg* data types in user tables okChecking for contrib/isn with bigint-passing mismatch okChecking for tables WITH OIDS okChecking for invalid \"unknown\" user columns okCreating dump of global objects okCreating dump of database schemas okChecking for presence of required libraries okfile cloning not supported on this platformFailure, exitingI think the error message wording is OK, I think it should be thrown earlier, before the \"Creating dump of database schemas\" (which can take a long time), and preferably before either database is even started. 
So ideally it would be something like:Performing Consistency Checks-----------------------------Checking cluster versionsChecking file cloning support File cloning not supported on this platformFailure, exitingWhen something is doomed to fail, we should report the failure as early as feasibly detectable.Cheers,Jeff",
"msg_date": "Thu, 2 May 2019 12:24:17 -0400",
"msg_from": "Jeff Janes <jeff.janes@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: pg_upgrade --clone error checking"
},
{
"msg_contents": "On 2019-May-02, Jeff Janes wrote:\n\n> I think the error message wording is OK, I think it should be thrown\n> earlier, before the \"Creating dump of database schemas\" (which can take a\n> long time), and preferably before either database is even started. So\n> ideally it would be something like:\n> \n> \n> Performing Consistency Checks\n> -----------------------------\n> Checking cluster versions\n> Checking file cloning support\n> File cloning not supported on this platform\n> Failure, exiting\n> \n> \n> When something is doomed to fail, we should report the failure as early as\n> feasibly detectable.\n\nI agree -- this check should be done before checking the database\ncontents. Maybe even before \"Checking cluster versions\".\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Thu, 2 May 2019 12:28:49 -0400",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade --clone error checking"
},
{
"msg_contents": "On Thu, May 2, 2019 at 12:28 PM Alvaro Herrera <alvherre@2ndquadrant.com>\nwrote:\n\n> On 2019-May-02, Jeff Janes wrote:\n>\n> >\n> > When something is doomed to fail, we should report the failure as early\n> as\n> > feasibly detectable.\n>\n> I agree -- this check should be done before checking the database\n> contents. Maybe even before \"Checking cluster versions\".\n>\n\nIt looks like it was designed for early checking, it just wasn't placed\nearly enough. So changing it is pretty easy, as check_file_clone does not\nneed to be invented, and there is no additional code duplication over what\nwas already there.\n\nThis patch moves the checking to near the beginning.\n\nIt carries the --link mode checking along with it. That should be done as\nwell, and doing it as a separate patch would make both patches uglier.\n\nCheers,\n\nJeff",
"msg_date": "Thu, 2 May 2019 14:03:23 -0400",
"msg_from": "Jeff Janes <jeff.janes@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: pg_upgrade --clone error checking"
},
{
"msg_contents": "On 2019-05-02 20:03, Jeff Janes wrote:\n> It looks like it was designed for early checking, it just wasn't placed\n> early enough. So changing it is pretty easy, as check_file_clone does\n> not need to be invented, and there is no additional code duplication\n> over what was already there.\n> \n> This patch moves the checking to near the beginning.\n\nI think the reason it was ordered that way is that it wants to do all\nthe checks of the old cluster before doing any checks touching the new\ncluster.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Fri, 3 May 2019 09:53:13 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade --clone error checking"
},
{
"msg_contents": "On Fri, May 3, 2019 at 3:53 AM Peter Eisentraut <\npeter.eisentraut@2ndquadrant.com> wrote:\n\n> On 2019-05-02 20:03, Jeff Janes wrote:\n> > It looks like it was designed for early checking, it just wasn't placed\n> > early enough. So changing it is pretty easy, as check_file_clone does\n> > not need to be invented, and there is no additional code duplication\n> > over what was already there.\n> >\n> > This patch moves the checking to near the beginning.\n>\n> I think the reason it was ordered that way is that it wants to do all\n> the checks of the old cluster before doing any checks touching the new\n> cluster.\n>\n\nBut is there a reason to want to do that? I understand we don't want to\nkeep starting and stopping the clusters needlessly, so we should do\neverything we can in one before moving to the other. But for checks that\ndon't need a running cluster, why would it matter? The existence and\ncontents of PG_VERSION of the new cluster directory is already checked at\nthe very beginning (and even tries to start it up and shuts it down again\nif a pid file also exists), so there is precedence for touching the new\ncluster directory at the filesystem level early (albeit in a readonly\nmanner) and if a pid file exists then doing even more than that. I didn't\nmove check_file_clone to before the liveness check is done, out of a\nabundance of caution. But creating a transient file with a name of no\nsignificance (\"PG_VERSION.clonetest\") in a cluster that is not even running\nseems like a very low risk thing to do. The pay off is that we get an\ninevitable error message much sooner.\n\nCheers,\n\nJeff\n\nOn Fri, May 3, 2019 at 3:53 AM Peter Eisentraut <peter.eisentraut@2ndquadrant.com> wrote:On 2019-05-02 20:03, Jeff Janes wrote:\n> It looks like it was designed for early checking, it just wasn't placed\n> early enough. 
So changing it is pretty easy, as check_file_clone does\n> not need to be invented, and there is no additional code duplication\n> over what was already there.\n> \n> This patch moves the checking to near the beginning.\n\nI think the reason it was ordered that way is that it wants to do all\nthe checks of the old cluster before doing any checks touching the new\ncluster.But is there a reason to want to do that? I understand we don't want to keep starting and stopping the clusters needlessly, so we should do everything we can in one before moving to the other. But for checks that don't need a running cluster, why would it matter? The existence and contents of PG_VERSION of the new cluster directory is already checked at the very beginning (and even tries to start it up and shuts it down again if a pid file also exists), so there is precedence for touching the new cluster directory at the filesystem level early (albeit in a readonly manner) and if a pid file exists then doing even more than that. I didn't move check_file_clone to before the liveness check is done, out of a abundance of caution. But creating a transient file with a name of no significance (\"PG_VERSION.clonetest\") in a cluster that is not even running seems like a very low risk thing to do. The pay off is that we get an inevitable error message much sooner.Cheers,Jeff",
"msg_date": "Fri, 3 May 2019 09:01:00 -0400",
"msg_from": "Jeff Janes <jeff.janes@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: pg_upgrade --clone error checking"
}
] |
[
{
"msg_contents": "Why do these two queries produce different results?\n\nvik=# select random(), random(), random() from generate_series(1, 5);\n random | random | random\n-------------------+-------------------+-------------------\n 0.47517032455653 | 0.631991865579039 | 0.985628996044397\n 0.341754949185997 | 0.304212234914303 | 0.545252074021846\n 0.684523592237383 | 0.595671262592077 | 0.560677206143737\n 0.352716268971562 | 0.131561728194356 | 0.399888414423913\n 0.877433629240841 | 0.543397729285061 | 0.133583522867411\n(5 rows)\n\nvik=# select random(), random(), random() from generate_series(1, 5)\norder by random();\n random | random | random\n-------------------+-------------------+-------------------\n 0.108651491813362 | 0.108651491813362 | 0.108651491813362\n 0.178489942103624 | 0.178489942103624 | 0.178489942103624\n 0.343531942460686 | 0.343531942460686 | 0.343531942460686\n 0.471797252073884 | 0.471797252073884 | 0.471797252073884\n 0.652373222634196 | 0.652373222634196 | 0.652373222634196\n(5 rows)\n\nObviously I'm not talking about the actual values, but the fact that\nwhen the volatile function is put in the ORDER BY clause, it seems to\nget called just once per row rather than each time like the first query.\n\nIs this as designed? It's certainly unexpected, and my initial reaction\nis undesirable.\n-- \nVik Fearing +33 6 46 75 15 36\nhttp://2ndQuadrant.fr PostgreSQL : Expertise, Formation et Support\n\n\n",
"msg_date": "Thu, 2 May 2019 11:04:52 +0200",
"msg_from": "Vik Fearing <vik.fearing@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Volatile function weirdness"
},
{
"msg_contents": "On Thu, May 2, 2019 at 11:05 AM Vik Fearing <vik.fearing@2ndquadrant.com> wrote:\n>\n> Why do these two queries produce different results?\n\nSee https://www.postgresql.org/message-id/30382.1537932940@sss.pgh.pa.us\n\n\n",
"msg_date": "Thu, 2 May 2019 11:11:32 +0200",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Volatile function weirdness"
}
] |
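The explanation in the linked message is that an ORDER BY expression matching an output column's expression is sorted on that output column, so random() is evaluated once per row for the sort key and the matching output columns all end up referring to that single value. A hedged illustration, worth verifying on your own server: making the sort expression syntactically different keeps it from being matched to the output columns.

```sql
-- The sort key is now a different expression tree from the output
-- columns, so it should get its own hidden sort column while the three
-- random() calls in the SELECT list are evaluated independently.
SELECT random(), random(), random()
FROM generate_series(1, 5)
ORDER BY random() + 0;
```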
[
{
"msg_contents": "Hi all,\n\nA while back I posted a patch[1] to change the order of resowner\ncleanup so that DSM handles are released last. That's useful for the\nerror cleanup path on Windows, when a SharedFileSet is cleaned up (a\nmechanism that's used by parallel CREATE INDEX and parallel hash join,\nfor spilling files to disk under a temporary directory, with automatic\ncleanup). Previously we believed that it was OK to unlink things that\nother processes might have currently open as long as you use the\nFILE_SHARE_DELETE flag, but that turned out not to be the full story:\nyou can unlink files that someone has open, but you can't unlink the\ndirectory that contains them! Hence the desire to reverse the\nclean-up order.\n\nIt didn't seem worth the risk of back-patching the change, because the\nonly consequence is a confusing message that appears somewhere near\nthe real error:\n\nLOG: could not rmdir directory\n\"base/pgsql_tmp/pgsql_tmp5088.0.sharedfileset\": Directory not empty\n\nI suppose we probably should make the change to 12 though: then owners\nof extensions that use DSM detach hooks (if there any such extensions)\nwill have a bit of time to get used to the new order during the beta\nperiod. I'll need to find someone to test this with a fault injection\nscenario on Windows before committing it, but wanted to sound out the\nlist for any objections to this late change?\n\n[1] https://www.postgresql.org/message-id/CAEepm%3D2ikUtjmiJ18bTnwaeUBoiYN%3DwMDSdhU1jy%3D8WzNhET-Q%40mail.gmail.com\n\n-- \nThomas Munro\nhttps://enterprisedb.com\n\n\n",
"msg_date": "Thu, 2 May 2019 22:30:12 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Fixing order of resowner cleanup in 12, for Windows"
},
{
"msg_contents": "Thomas Munro <thomas.munro@gmail.com> writes:\n> A while back I posted a patch[1] to change the order of resowner\n> cleanup so that DSM handles are released last. That's useful for the\n> error cleanup path on Windows, when a SharedFileSet is cleaned up (a\n> mechanism that's used by parallel CREATE INDEX and parallel hash join,\n> for spilling files to disk under a temporary directory, with automatic\n> cleanup).\n\nI guess what I'm wondering is if there are any potential negative\nconsequences, ie code that won't work if we change the order like this.\nI'm finding it hard to visualize what that would be, but then again\nthis failure mode wasn't obvious either.\n\n> I suppose we probably should make the change to 12 though: then owners\n> of extensions that use DSM detach hooks (if there any such extensions)\n> will have a bit of time to get used to the new order during the beta\n> period. I'll need to find someone to test this with a fault injection\n> scenario on Windows before committing it, but wanted to sound out the\n> list for any objections to this late change?\n\nSince we haven't started beta yet, I don't see a reason not to change\nit. Worst case is that it causes problems and we revert it.\n\nI concur with not back-patching, in any case.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 02 May 2019 10:15:11 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Fixing order of resowner cleanup in 12, for Windows"
},
{
"msg_contents": "On Fri, May 3, 2019 at 2:15 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Thomas Munro <thomas.munro@gmail.com> writes:\n> > A while back I posted a patch[1] to change the order of resowner\n> > cleanup so that DSM handles are released last. That's useful for the\n> > error cleanup path on Windows, when a SharedFileSet is cleaned up (a\n> > mechanism that's used by parallel CREATE INDEX and parallel hash join,\n> > for spilling files to disk under a temporary directory, with automatic\n> > cleanup).\n>\n> I guess what I'm wondering is if there are any potential negative\n> consequences, ie code that won't work if we change the order like this.\n> I'm finding it hard to visualize what that would be, but then again\n> this failure mode wasn't obvious either.\n\nI can't think of anything in core. The trouble here is that we're\ntalking about hypothetical out-of-tree code that could want to plug in\ndetach hooks to do anything at all, so it's hard to say. One idea\nthat occurred to me is that if someone comes up with a genuine need to\nrun arbitrary callbacks before locks are released (for example), we\ncould provide a way to be called in all three phases and receive the\nphase, though admittedly in this case FileClose() is in the same phase\nas I'm proposing to put dsm_detach(), so there is an ordering\nrequirement that might require more fine grained phases. I don't\nknow.\n\n> > I suppose we probably should make the change to 12 though: then owners\n> > of extensions that use DSM detach hooks (if there any such extensions)\n> > will have a bit of time to get used to the new order during the beta\n> > period. I'll need to find someone to test this with a fault injection\n> > scenario on Windows before committing it, but wanted to sound out the\n> > list for any objections to this late change?\n>\n> Since we haven't started beta yet, I don't see a reason not to change\n> it. 
Worst case is that it causes problems and we revert it.\n>\n> I concur with not back-patching, in any case.\n\nHere's a way to produce an error which might produce the log message\non Windows. Does anyone want to try it?\n\npostgres=# create table foo as select generate_series(1, 10000000)::int i;\nSELECT 10000000\npostgres=# set synchronize_seqscans = off;\nSET\npostgres=# create index on foo ((1 / (5000000 - i)));\npsql: ERROR: division by zero\npostgres=# create index on foo ((1 / (5000000 - i)));\npsql: ERROR: division by zero\npostgres=# create index on foo ((1 / (5000000 - i)));\npsql: ERROR: division by zero\nCONTEXT: parallel worker\n\n(If you don't turn sync scan off, it starts scanning from where it\nleft off last time and then fails immediately, which may interfere\nwith the experiment if you run it more than once, I'm not sure).\n\nIf it does produce the log message, then the attached patch should\nmake it go away.\n\n-- \nThomas Munro\nhttps://enterprisedb.com",
"msg_date": "Mon, 6 May 2019 10:13:06 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Fixing order of resowner cleanup in 12, for Windows"
},
{
"msg_contents": "Thomas Munro <thomas.munro@gmail.com> writes:\n> If it does produce the log message, then the attached patch should\n> make it go away.\n\nOne thing I don't care for about this patch is that the original code\nlooked like it didn't matter what order we did the resource releases in,\nand the patched code still looks like that. You're not doing future\nhackers any service by failing to include a comment that explains that\nDSM detach MUST BE LAST, and explaining why. Even with that, I'd only\nrate it about a 75% chance that somebody won't try to add their new\nresource type at the end --- but with no comment, the odds they'll\nget it right are indistinguishable from zero.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 05 May 2019 19:07:52 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Fixing order of resowner cleanup in 12, for Windows"
},
{
"msg_contents": "On Mon, May 6, 2019 at 11:07 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> One thing I don't care for about this patch is that the original code\n> looked like it didn't matter what order we did the resource releases in,\n> and the patched code still looks like that. You're not doing future\n> hackers any service by failing to include a comment that explains that\n> DSM detach MUST BE LAST, and explaining why. Even with that, I'd only\n> rate it about a 75% chance that somebody won't try to add their new\n> resource type at the end --- but with no comment, the odds they'll\n> get it right are indistinguishable from zero.\n\nOk, here's a version that provides a specific reason (the Windows file\nhandle thing) and also a more general reasoning: we don't really want\nextension (or core) authors writing callbacks that depend on eg pins\nor locks or whatever else being still held when they run, because\nthat's fragile, so calling them last is the best and most conservative\nchoice. I think if someone does come with legitimate reasons to want\nthat, we should discuss it then, and perhaps consider something a bit\nlike the ResourceRelease_callbacks list: its callbacks are invoked for\neach phase.\n\n\n--\nThomas Munro\nhttps://enterprisedb.com",
"msg_date": "Mon, 6 May 2019 12:56:46 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Fixing order of resowner cleanup in 12, for Windows"
},
{
"msg_contents": "Thomas Munro <thomas.munro@gmail.com> writes:\n> On Mon, May 6, 2019 at 11:07 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> ... You're not doing future\n>> hackers any service by failing to include a comment that explains that\n>> DSM detach MUST BE LAST, and explaining why.\n\n> Ok, here's a version that provides a specific reason (the Windows file\n> handle thing) and also a more general reasoning: we don't really want\n> extension (or core) authors writing callbacks that depend on eg pins\n> or locks or whatever else being still held when they run, because\n> that's fragile, so calling them last is the best and most conservative\n> choice.\n\nLGTM.\n\n> ... I think if someone does come with legitimate reasons to want\n> that, we should discuss it then, and perhaps consider something a bit\n> like the ResourceRelease_callbacks list: its callbacks are invoked for\n> each phase.\n\nHmm, now that you mention it: this bit at the very end\n\n\t/* Let add-on modules get a chance too */\n\tfor (item = ResourceRelease_callbacks; item; item = item->next)\n\t\titem->callback(phase, isCommit, isTopLevel, item->arg);\n\nseems kind of misplaced given this discussion. Should we not run that\n*first*, before we release core resources for the same phase? It's\na lot more plausible that extension resources depend on core resources\nthan vice versa.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 05 May 2019 23:44:47 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Fixing order of resowner cleanup in 12, for Windows"
},
{
"msg_contents": "On Mon, May 6, 2019 at 3:44 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Thomas Munro <thomas.munro@gmail.com> writes:\n> > On Mon, May 6, 2019 at 11:07 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >> ... You're not doing future\n> >> hackers any service by failing to include a comment that explains that\n> >> DSM detach MUST BE LAST, and explaining why.\n>\n> > Ok, here's a version that provides a specific reason (the Windows file\n> > handle thing) and also a more general reasoning: we don't really want\n> > extension (or core) authors writing callbacks that depend on eg pins\n> > or locks or whatever else being still held when they run, because\n> > that's fragile, so calling them last is the best and most conservative\n> > choice.\n>\n> LGTM.\n\nCool. I'll wait a bit to see if we can get confirmation from a\nWindows hacker that it does what I claim. Or maybe I should try to\ncome up with a regression test that exercises it without having to\ncreate a big table.\n\n> > ... I think if someone does come with legitimate reasons to want\n> > that, we should discuss it then, and perhaps consider something a bit\n> > like the ResourceRelease_callbacks list: its callbacks are invoked for\n> > each phase.\n>\n> Hmm, now that you mention it: this bit at the very end\n>\n> /* Let add-on modules get a chance too */\n> for (item = ResourceRelease_callbacks; item; item = item->next)\n> item->callback(phase, isCommit, isTopLevel, item->arg);\n>\n> seems kind of misplaced given this discussion. Should we not run that\n> *first*, before we release core resources for the same phase? It's\n> a lot more plausible that extension resources depend on core resources\n> than vice versa.\n\nNot sure. Changing the meaning of the existing callbacks from last\nto first in each phase seems a bit unfriendly. 
If it's useful to be\nable to run a callback before RESOURCE_RELEASE_BEFORE_LOCKS, perhaps\nwe need a new phase that comes before that?\n\n-- \nThomas Munro\nhttps://enterprisedb.com\n\n\n",
"msg_date": "Mon, 6 May 2019 16:47:40 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Fixing order of resowner cleanup in 12, for Windows"
},
{
"msg_contents": "On Mon, May 6, 2019 at 3:43 AM Thomas Munro <thomas.munro@gmail.com> wrote:\n>\n> On Fri, May 3, 2019 at 2:15 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > Thomas Munro <thomas.munro@gmail.com> writes:\n> > > A while back I posted a patch[1] to change the order of resowner\n> > > cleanup so that DSM handles are released last. That's useful for the\n> > > error cleanup path on Windows, when a SharedFileSet is cleaned up (a\n> > > mechanism that's used by parallel CREATE INDEX and parallel hash join,\n> > > for spilling files to disk under a temporary directory, with automatic\n> > > cleanup).\n> >\n> > I guess what I'm wondering is if there are any potential negative\n> > consequences, ie code that won't work if we change the order like this.\n> > I'm finding it hard to visualize what that would be, but then again\n> > this failure mode wasn't obvious either.\n>\n> I can't think of anything in core. The trouble here is that we're\n> talking about hypothetical out-of-tree code that could want to plug in\n> detach hooks to do anything at all, so it's hard to say. One idea\n> that occurred to me is that if someone comes up with a genuine need to\n> run arbitrary callbacks before locks are released (for example), we\n> could provide a way to be called in all three phases and receive the\n> phase, though admittedly in this case FileClose() is in the same phase\n> as I'm proposing to put dsm_detach(), so there is an ordering\n> requirement that might require more fine grained phases. I don't\n> know.\n>\n> > > I suppose we probably should make the change to 12 though: then owners\n> > > of extensions that use DSM detach hooks (if there any such extensions)\n> > > will have a bit of time to get used to the new order during the beta\n> > > period. 
I'll need to find someone to test this with a fault injection\n> > > scenario on Windows before committing it, but wanted to sound out the\n> > > list for any objections to this late change?\n> >\n> > Since we haven't started beta yet, I don't see a reason not to change\n> > it. Worst case is that it causes problems and we revert it.\n> >\n> > I concur with not back-patching, in any case.\n>\n> Here's a way to produce an error which might produce the log message\n> on Windows. Does anyone want to try it?\n>\n\nI can give it a try.\n\n> postgres=# create table foo as select generate_series(1, 10000000)::int i;\n> SELECT 10000000\n> postgres=# set synchronize_seqscans = off;\n> SET\n> postgres=# create index on foo ((1 / (5000000 - i)));\n> psql: ERROR: division by zero\n> postgres=# create index on foo ((1 / (5000000 - i)));\n> psql: ERROR: division by zero\n> postgres=# create index on foo ((1 / (5000000 - i)));\n> psql: ERROR: division by zero\n> CONTEXT: parallel worker\n>\n> (If you don't turn sync scan off, it starts scanning from where it\n> left off last time and then fails immediately, which may interfere\n> with the experiment if you run it more than once, I'm not sure).\n>\n> If it does produce the log message, then the attached patch should\n> make it go away.\n>\n\nAre you referring to log message \"LOG: could not rmdir directory\n\"base/pgsql_tmp/pgsql_tmp3692.0.sharedfileset\": Directory not empty\"?\nIf so, I am getting it both before and after your patch.\n\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 6 May 2019 14:56:38 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Fixing order of resowner cleanup in 12, for Windows"
},
{
"msg_contents": "On Mon, May 6, 2019 at 9:26 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> On Mon, May 6, 2019 at 3:43 AM Thomas Munro <thomas.munro@gmail.com> wrote:\n> > Here's a way to produce an error which might produce the log message\n> > on Windows. Does anyone want to try it?\n>\n> I can give it a try.\n\nThanks!\n\n> > If it does produce the log message, then the attached patch should\n> > make it go away.\n>\n> Are you referring to log message \"LOG: could not rmdir directory\n> \"base/pgsql_tmp/pgsql_tmp3692.0.sharedfileset\": Directory not empty\"?\n\nYes.\n\n> If so, I am getting it both before and after your patch.\n\nHuh. I thought the only problem here was the phenomenon demonstrated\nby [1]. I'm a bit stumped... if we've closed all the handles in every\nbackend before detaching, and then the last to detach unlinks all the\nfiles first and then the directory, how can we get that error?\n\n[1] https://www.postgresql.org/message-id/CAEepm%3D2rH_V5by1kH1Q1HZWPFj%3D4ykjU4JcyoKMNVT6Jh8Q_Rw%40mail.gmail.com\n\n\n--\nThomas Munro\nhttps://enterprisedb.com\n\n\n",
"msg_date": "Mon, 6 May 2019 23:11:17 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Fixing order of resowner cleanup in 12, for Windows"
},
{
"msg_contents": "On Mon, May 6, 2019 at 4:41 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n>\n> On Mon, May 6, 2019 at 9:26 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> > If so, I am getting it both before and after your patch.\n>\n> Huh. I thought the only problem here was the phenomenon demonstrated\n> by [1]. I'm a bit stumped... if we've closed all the handles in every\n> backend before detaching, and then the last to detach unlinks all the\n> files first and then the directory, how can we get that error?\n>\n\nYeah, I am also not sure what caused that and I have verified it two\ntimes to ensure that I have not made any mistake. I can try once\nagain tomorrow after adding some debug messages, but if someone else\ncan also once confirm the behavior, it would be good.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 6 May 2019 17:21:20 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Fixing order of resowner cleanup in 12, for Windows"
},
{
"msg_contents": "On Thu, May 2, 2019 at 10:15 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Thomas Munro <thomas.munro@gmail.com> writes:\n> > A while back I posted a patch[1] to change the order of resowner\n> > cleanup so that DSM handles are released last. That's useful for the\n> > error cleanup path on Windows, when a SharedFileSet is cleaned up (a\n> > mechanism that's used by parallel CREATE INDEX and parallel hash join,\n> > for spilling files to disk under a temporary directory, with automatic\n> > cleanup).\n>\n> I guess what I'm wondering is if there are any potential negative\n> consequences, ie code that won't work if we change the order like this.\n> I'm finding it hard to visualize what that would be, but then again\n> this failure mode wasn't obvious either.\n\nI have a thought about this. It seems to me that when it comes to\nbackend-private memory, we release it even later: aborting the\ntransaction does nothing, and we do it only later when we clean up the\ntransaction. So I wonder whether we're going to find that we actually\nwant to postpone reclaiming dynamic shared memory for even longer than\nthis change would do. But in general, I think we've already\nestablished the principle that releasing memory needs to happen last,\nbecause every other resource that you might be using is tracked using\ndata structures that are, uh, stored in memory. Therefore I suspect\nthat this change is going in the right direction.\n\nTo put that another way, the issue here is that the removal of the\nfiles can't happen after the cleanup of the memory that tells us which\nfiles to remove. If we had the corresponding problem for the\nnon-parallel case, it would mean that we were deleting the\ntransaction's memory context before we finished releasing all the\nresources managed by the transaction's resowner, which would be\ninsane. 
I believe I put the call to release DSM segments where it is\non the theory that \"we should release dynamic shared memory as early\nas possible because freeing memory is good,\" completely failing to\ntake into account that this was not at all like what we do for\nbackend-private memory.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Mon, 6 May 2019 12:14:55 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Fixing order of resowner cleanup in 12, for Windows"
},
{
"msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> I have a thought about this. It seems to me that when it comes to\n> backend-private memory, we release it even later: aborting the\n> transaction does nothing, and we do it only later when we clean up the\n> transaction. So I wonder whether we're going to find that we actually\n> want to postpone reclaiming dynamic shared memory for even longer than\n> this change would do. But in general, I think we've already\n> established the principle that releasing memory needs to happen last,\n> because every other resource that you might be using is tracked using\n> data structures that are, uh, stored in memory. Therefore I suspect\n> that this change is going in the right direction.\n\nHmm. That argument suggests that DSM cleanup shouldn't be part of\nresowner cleanup at all, but should be handled as a bespoke, late\nstep in transaction cleanup, as memory-context release is.\n\nNot sure if that's going too far or not. It would definitely be a big\nchange in environment for DSM-cleanup hooks.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 06 May 2019 13:32:14 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Fixing order of resowner cleanup in 12, for Windows"
},
{
"msg_contents": "On Mon, May 6, 2019 at 1:32 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Robert Haas <robertmhaas@gmail.com> writes:\n> > I have a thought about this. It seems to me that when it comes to\n> > backend-private memory, we release it even later: aborting the\n> > transaction does nothing, and we do it only later when we clean up the\n> > transaction. So I wonder whether we're going to find that we actually\n> > want to postpone reclaiming dynamic shared memory for even longer than\n> > this change would do. But in general, I think we've already\n> > established the principle that releasing memory needs to happen last,\n> > because every other resource that you might be using is tracked using\n> > data structures that are, uh, stored in memory. Therefore I suspect\n> > that this change is going in the right direction.\n>\n> Hmm. That argument suggests that DSM cleanup shouldn't be part of\n> resowner cleanup at all, but should be handled as a bespoke, late\n> step in transaction cleanup, as memory-context release is.\n>\n> Not sure if that's going too far or not. It would definitely be a big\n> change in environment for DSM-cleanup hooks.\n\nRight. That's why I favor applying the change to move DSM cleanup to\nthe end for now, and seeing how that goes. It could be that we'll\neventually discover that doing it before all of the AtEOXact_BLAH\nfunctions have had a short at doing their thing is still too early,\nbut the only concrete problem that we know about right now can be\nsolved by this much-less-invasive change.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Mon, 6 May 2019 13:47:16 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Fixing order of resowner cleanup in 12, for Windows"
},
{
"msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> Right. That's why I favor applying the change to move DSM cleanup to\n> the end for now, and seeing how that goes. It could be that we'll\n> eventually discover that doing it before all of the AtEOXact_BLAH\n> functions have had a short at doing their thing is still too early,\n> but the only concrete problem that we know about right now can be\n> solved by this much-less-invasive change.\n\nBut Amit's results say that this *doesn't* fix the problem that we know\nabout. I suspect the reason is exactly that we need to run AtEOXact_Files\nor the like before closing DSM. But we should get some Windows developer\nto trace through this and identify the cause for-sure before we go\ndesigning an invasive fix.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 06 May 2019 13:58:25 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Fixing order of resowner cleanup in 12, for Windows"
},
{
"msg_contents": "On Mon, May 6, 2019 at 1:58 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Robert Haas <robertmhaas@gmail.com> writes:\n> > Right. That's why I favor applying the change to move DSM cleanup to\n> > the end for now, and seeing how that goes. It could be that we'll\n> > eventually discover that doing it before all of the AtEOXact_BLAH\n> > functions have had a short at doing their thing is still too early,\n> > but the only concrete problem that we know about right now can be\n> > solved by this much-less-invasive change.\n>\n> But Amit's results say that this *doesn't* fix the problem that we know\n> about. I suspect the reason is exactly that we need to run AtEOXact_Files\n> or the like before closing DSM. But we should get some Windows developer\n> to trace through this and identify the cause for-sure before we go\n> designing an invasive fix.\n\nHuh, OK.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Mon, 6 May 2019 14:08:18 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Fixing order of resowner cleanup in 12, for Windows"
},
{
"msg_contents": "On Tue, May 7, 2019 at 6:08 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> On Mon, May 6, 2019 at 1:58 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > Robert Haas <robertmhaas@gmail.com> writes:\n> > > Right. That's why I favor applying the change to move DSM cleanup to\n> > > the end for now, and seeing how that goes. It could be that we'll\n> > > eventually discover that doing it before all of the AtEOXact_BLAH\n> > > functions have had a short at doing their thing is still too early,\n> > > but the only concrete problem that we know about right now can be\n> > > solved by this much-less-invasive change.\n> >\n> > But Amit's results say that this *doesn't* fix the problem that we know\n> > about. I suspect the reason is exactly that we need to run AtEOXact_Files\n> > or the like before closing DSM. But we should get some Windows developer\n> > to trace through this and identify the cause for-sure before we go\n> > designing an invasive fix.\n>\n> Huh, OK.\n\nThe reason the patch didn't solve the problem is that\nAtEOXact_Parallel() calls DestroyParallelContext(). So DSM segments\nthat happen to belong to ParallelContext objects are already gone by\nthe time resowner.c gets involved.\n\n-- \nThomas Munro\nhttps://enterprisedb.com\n\n\n",
"msg_date": "Thu, 9 May 2019 22:23:25 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Fixing order of resowner cleanup in 12, for Windows"
},
{
"msg_contents": "On Thu, May 9, 2019 at 10:23 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> The reason the patch didn't solve the problem is that\n> AtEOXact_Parallel() calls DestroyParallelContext(). So DSM segments\n> that happen to belong to ParallelContext objects are already gone by\n> the time resowner.c gets involved.\n\nThis was listed as an open item for PostgreSQL 12, but I'm going to\nmove it to \"older bugs\". I want to fix it, but now that I understand\nwhat's wrong, it's a slightly bigger design issue than I'm game to try\nto fix right now.\n\nThis means that 12, like 11, will be capable of leaking empty\ntemporary directories on Windows whenever an error is raised in the\nmiddle of parallel CREATE INDEX or multi-batch Parallel Hash Join.\nThe directories are eventually unlinked at restart, and at least\nthe (potentially large) files inside the directory are unlinked on\nabort. I think we can live with that for a bit longer.\n\n\n--\nThomas Munro\nhttps://enterprisedb.com\n\n\n",
"msg_date": "Fri, 17 May 2019 13:42:15 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Fixing order of resowner cleanup in 12, for Windows"
}
] |
[
{
"msg_contents": "https://www.postgresql.org/docs/current/rangetypes.html#RANGETYPES-INFINITE has:\n\n Also, some element types have a notion of “infinity”, but that is just\n another value so far as the range type mechanisms are concerned.\n For example, in timestamp ranges, [today,] means the same thing as [today,).\n But [today,infinity] means something different from [today,infinity) —\n the latter excludes the special timestamp value infinity.\n\nThis does not work as expected for ranges with discrete base types,\nnotably daterange:\n\ntest=> SELECT '[2000-01-01,infinity]'::daterange;\n daterange \n-----------------------\n [2000-01-01,infinity)\n(1 row)\n\ntest=> SELECT '(-infinity,2000-01-01)'::daterange;\n daterange \n------------------------\n [-infinity,2000-01-01)\n(1 row)\n\nThis is because \"daterange_canonical\" makes no difference for 'infinity',\nand adding one to infinity does not change the value.\n\nI propose the attached patch which fixes the problem.\n\nYours,\nLaurenz Albe\n\n\n\n",
"msg_date": "Thu, 02 May 2019 14:40:52 +0200",
"msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>",
"msg_from_op": true,
"msg_subject": "Bad canonicalization for dateranges with 'infinity' bounds"
},
{
"msg_contents": "I wrote:\n> I propose the attached patch which fixes the problem.\n\nI forgot to attach the patch. Here it is.\n\nYours,\nLaurenz Albe",
"msg_date": "Thu, 02 May 2019 14:49:23 +0200",
"msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>",
"msg_from_op": true,
"msg_subject": "Re: Bad canonicalization for dateranges with 'infinity' bounds"
},
{
"msg_contents": "On Fri, May 3, 2019 at 12:49 AM Laurenz Albe <laurenz.albe@cybertec.at> wrote:\n> > I propose the attached patch which fixes the problem.\n\nHi Laurenz,\n\nI agree that the patch makes the code match the documentation. The\ndocumented behaviour seems to make more sense than the code, since\nunpatched master gives this nonsense result when it flips the\ninclusive flag but doesn't adjust the value (because it can't):\n\npostgres=# select '(-infinity,infinity]'::daterange @> 'infinity'::date;\n ?column?\n----------\n f\n(1 row)\n\n- if (!upper.infinite && upper.inclusive)\n+ if (!(upper.infinite || DATE_NOT_FINITE(upper.val)) && upper.inclusive)\n\nEven though !(X || Y) is equivalent to !X && !Y, by my reading of\nrange_in(), lower.value can be uninitialised when lower.infinite is\ntrue, and it's also a bit hard to read IMHO, so I'd probably write\nthat as !upper.infinite && !DATE_NOT_FINITE(upper.val) &&\nupper.inclusive. I don't think it can affect the result but it might\nupset Valgrind or similar.\n\n-- \nThomas Munro\nhttps://enterprisedb.com\n\n\n",
"msg_date": "Sun, 14 Jul 2019 00:44:39 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Bad canonicalization for dateranges with 'infinity' bounds"
},
{
"msg_contents": "On Sun, Jul 14, 2019 at 12:44 AM Thomas Munro <thomas.munro@gmail.com> wrote:\n> Even though !(X || Y) is equivalent to !X && !Y, by my reading of\n> range_in(), lower.value can be uninitialised when lower.infinite is\n> true, and it's also a bit hard to read IMHO, so I'd probably write\n> that as !upper.infinite && !DATE_NOT_FINITE(upper.val) &&\n> upper.inclusive. I don't think it can affect the result but it might\n> upset Valgrind or similar.\n\nI take back the bit about reading an uninitialised value (X || Y\ndoesn't access Y if X is true... duh), but I still think the other way\nof putting it is a bit easier to read. YMMV.\n\nGenerally, +1 for this patch. I'll wait a couple of days for more\nfeedback to appear.\n\n-- \nThomas Munro\nhttps://enterprisedb.com\n\n\n",
"msg_date": "Sun, 14 Jul 2019 15:27:47 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Bad canonicalization for dateranges with 'infinity' bounds"
},
{
"msg_contents": "On Sun, 2019-07-14 at 15:27 +1200, Thomas Munro wrote:\n> I take back the bit about reading an uninitialised value (X || Y\n> doesn't access Y if X is true... duh), but I still think the other\n> way\n> of putting it is a bit easier to read. YMMV.\n> \n> Generally, +1 for this patch. I'll wait a couple of days for more\n> feedback to appear.\n\nI went ahead and committed this using Thomas's suggestion to remove the\nparentheses.\n\nThanks!\n\nRegards,\n\tJeff Davis\n\n\n\n\n",
"msg_date": "Thu, 18 Jul 2019 13:56:16 -0700",
"msg_from": "Jeff Davis <pgsql@j-davis.com>",
"msg_from_op": false,
"msg_subject": "Re: Bad canonicalization for dateranges with 'infinity' bounds"
},
{
"msg_contents": "Jeff Davis <pgsql@j-davis.com> writes:\n> I went ahead and committed this using Thomas's suggestion to remove the\n> parentheses.\n\nThe commit message claims this was back-patched, but I see no back-patch?\n\n(The commit message doesn't seem to have made it to the pgsql-committers\nlist either, but that's probably an independent issue.)\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 18 Jul 2019 17:36:35 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Bad canonicalization for dateranges with 'infinity' bounds"
},
{
"msg_contents": "On Thu, 2019-07-18 at 13:56 -0700, Jeff Davis wrote:\n> I went ahead and committed this using Thomas's suggestion to remove the\n> parentheses.\n\nThanks for the review and the commit!\n\nYours,\nLaurenz Albe\n\n\n\n",
"msg_date": "Fri, 19 Jul 2019 00:02:59 +0200",
"msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>",
"msg_from_op": true,
"msg_subject": "Re: Bad canonicalization for dateranges with 'infinity' bounds"
},
{
"msg_contents": "On Thu, 2019-07-18 at 17:36 -0400, Tom Lane wrote:\n> The commit message claims this was back-patched, but I see no back-\n> patch?\n\nSorry, I noticed an issue after pushing: we were passing a datum\ndirectly to DATE_NOT_FINITE, when we should have called\nDatumGetDateADT() first. I ran through the tests again and now pushed\nto all branches.\n\n> (The commit message doesn't seem to have made it to the pgsql-\n> committers\n> list either, but that's probably an independent issue.)\n\nI was curious about that as well.\n\nRegards,\n\tJeff Davis\n\n\n\n\n",
"msg_date": "Thu, 18 Jul 2019 17:17:03 -0700",
"msg_from": "Jeff Davis <pgsql@j-davis.com>",
"msg_from_op": false,
"msg_subject": "Re: Bad canonicalization for dateranges with 'infinity' bounds"
},
{
"msg_contents": "On Thu, Jul 18, 2019 at 05:36:35PM -0400, Tom Lane wrote:\n> Jeff Davis <pgsql@j-davis.com> writes:\n> > I went ahead and committed this using Thomas's suggestion to remove the\n> > parentheses.\n> \n> The commit message claims this was back-patched, but I see no back-patch?\n> \n> (The commit message doesn't seem to have made it to the pgsql-committers\n> list either, but that's probably an independent issue.)\n\nREL_12_STABLE has been missed in the set of branches patched. Could\nyou fix that as well (including the extra fix b0a7e0f0)? \n--\nMichael",
"msg_date": "Fri, 19 Jul 2019 09:19:19 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Bad canonicalization for dateranges with 'infinity' bounds"
},
{
"msg_contents": "Greetings,\n\n* Jeff Davis (pgsql@j-davis.com) wrote:\n> On Thu, 2019-07-18 at 17:36 -0400, Tom Lane wrote:\n> > (The commit message doesn't seem to have made it to the pgsql-\n> > committers\n> > list either, but that's probably an independent issue.)\n> \n> I was curious about that as well.\n\nThe whitelists we put in place expire after a certain period of time\n(iirc, it's 1 year currently) and then your posts end up getting\nmoderated.\n\nIf you register that address as an alternate for you, you should be\nable to post with it without needing to be on a whitelist.\n\nThanks,\n\nStephen",
"msg_date": "Sun, 21 Jul 2019 14:25:29 -0400",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: Bad canonicalization for dateranges with 'infinity' bounds"
}
] |
[
{
"msg_contents": "In view of the REINDEX-on-pg_class kerfuffle that we're currently\nsorting through, I was very glad to see that the concurrent reindex\ncode doesn't even try:\n\nregression=# reindex index concurrently pg_class_oid_index;\npsql: ERROR: concurrent reindex is not supported for catalog relations\nregression=# reindex table concurrently pg_class; \npsql: ERROR: concurrent index creation on system catalog tables is not supported\n\nIt'd be nice though if those error messages gave the impression of having\nbeen written on the same planet.\n\n(It might be worth comparing wording of other errors-in-common between\nwhat are evidently two completely different code paths...)\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 02 May 2019 10:06:42 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Inconsistent error message wording for REINDEX CONCURRENTLY"
},
{
"msg_contents": "On Thu, May 02, 2019 at 10:06:42AM -0400, Tom Lane wrote:\n> In view of the REINDEX-on-pg_class kerfuffle that we're currently\n> sorting through, I was very glad to see that the concurrent reindex\n> code doesn't even try:\n> \n> regression=# reindex index concurrently pg_class_oid_index;\n> psql: ERROR: concurrent reindex is not supported for catalog relations\n> regression=# reindex table concurrently pg_class; \n> psql: ERROR: concurrent index creation on system catalog tables is not supported\n> \n> It'd be nice though if those error messages gave the impression of having\n> been written on the same planet.\n\nWe could do a larger brush-up of error messages in this area, as these\nare full sentences which is not a style allowed, no? The second error\nmessage can be used as well by both CREATE INDEX CONCURRENTLY and\nREINDEX CONCURRENTLY, but not the first one, so the first one needs to\nbe more generic than the second one. How about the following changes\nfor at least these two?\n\"cannot use REINDEX CONCURRENTLY on system catalogs\"\n\"cannot create index on system catalog concurrently\"\n\nThen we have some other messages in index.c which could be cleaned\nup.. For example at the beginning of index_constraint_create(), there\nare two them, but there is much more which could be improved. Do you\nthink this is worth having a look and fixing?\n--\nMichael",
"msg_date": "Sat, 4 May 2019 17:55:43 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Inconsistent error message wording for REINDEX CONCURRENTLY"
},
{
"msg_contents": "Michael Paquier <michael@paquier.xyz> writes:\n> On Thu, May 02, 2019 at 10:06:42AM -0400, Tom Lane wrote:\n>> regression=# reindex index concurrently pg_class_oid_index;\n>> psql: ERROR: concurrent reindex is not supported for catalog relations\n>> regression=# reindex table concurrently pg_class; \n>> psql: ERROR: concurrent index creation on system catalog tables is not supported\n>> \n>> It'd be nice though if those error messages gave the impression of having\n>> been written on the same planet.\n\n> We could do a larger brush-up of error messages in this area, as these\n> are full sentences which is not a style allowed, no?\n\nI wouldn't object to either one in isolation, it's the inconsistency\nthat irks me.\n\n> How about the following changes\n> for at least these two?\n> \"cannot use REINDEX CONCURRENTLY on system catalogs\"\n> \"cannot create index on system catalog concurrently\"\n\nI'd suggest something like \"cannot reindex a system catalog concurrently\"\nfor both cases. The \"cannot create index\" wording doesn't seem to me to\nbe very relevant, because if you try that you'll get\n\nregression=# create index on pg_class(relchecks);\npsql: ERROR: permission denied: \"pg_class\" is a system catalog\n\n> Then we have some other messages in index.c which could be cleaned\n> up.. For example at the beginning of index_constraint_create(), there\n> are two them, but there is much more which could be improved. Do you\n> think this is worth having a look and fixing?\n\nI'm not excited about rewording longstanding errors. These two are\nnew though (aren't they?)\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 04 May 2019 11:00:11 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Inconsistent error message wording for REINDEX CONCURRENTLY"
},
{
"msg_contents": "On Sat, May 04, 2019 at 11:00:11AM -0400, Tom Lane wrote:\n> I'm not excited about rewording longstanding errors. These two are\n> new though (aren't they?)\n\nThe message you are referring to in index_create() has been introduced\nas of e093dcdd with the introduction of CREATE INDEX CONCURRENTLY, and\nit can be perfectly hit without REINDEX:\n=# show allow_system_table_mods;\n allow_system_table_mods\n-------------------------\n on\n(1 row)\n=# create index CONCURRENTLY popo on pg_class (relname);\nERROR: 0A000: concurrent index creation on system catalog tables is\nnot supported\nLOCATION: index_create, index.c:830\n\nSo I don't agree with switching the existing error message in\nindex_create(). What we could do instead is to add a REINDEX-specific\nerror in ReindexRelationConcurrently() as done for index relkinds,\nusing your proposed wording.\n\nWhat do you think about something like the attached then? HEAD does\nnot check after system indexes with REINDEX INDEX CONCURRENTLY, and I\nhave moved all the catalog-related tests to reindex_catalog.sql.\n--\nMichael",
"msg_date": "Sun, 5 May 2019 23:16:45 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Inconsistent error message wording for REINDEX CONCURRENTLY"
},
{
"msg_contents": "Michael Paquier <michael@paquier.xyz> writes:\n> The message you are referring to in index_create() has been introduced\n> as of e093dcdd with the introduction of CREATE INDEX CONCURRENTLY, and\n> it can be perfectly hit without REINDEX:\n> =# show allow_system_table_mods;\n> allow_system_table_mods\n> -------------------------\n> on\n> (1 row)\n\nOh, yeah, if you do that you can get to it.\n\n> What do you think about something like the attached then? HEAD does\n> not check after system indexes with REINDEX INDEX CONCURRENTLY, and I\n> have moved all the catalog-related tests to reindex_catalog.sql.\n\nOK as far as the wording goes, but now that I look at the specific tests\nthat are being applied, they seem rather loony, as well as inconsistent\nbetween the two cases. IsSystemRelation *sounds* like the right thing,\nbut it's not, because it forbids user-relation toast tables which seems\nlike a restriction we need not make. I think IsCatalogRelation is the\ntest we actually want there. In the other place, checking\nIsSystemNamespace isn't even approximately the correct way to proceed,\nsince it fails to reject reindexing system catalogs' toast tables.\nWe should be doing the equivalent of IsCatalogRelation there too.\n(It's a bit of a pain that catalog.c doesn't offer a function that\nmakes that determination from just an OID. Should we add one?\nThere might be other callers for it.)\n\nI concur that we shouldn't need a separate check for relisshared,\nsince all shared rels should be system catalogs.\n\nI'm not sure I'd move these error-case tests to reindex_catalog.sql ---\nbear in mind that later today, that test is either going away entirely\nor at least not getting run by default anymore.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 05 May 2019 13:32:39 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Inconsistent error message wording for REINDEX CONCURRENTLY"
},
{
"msg_contents": "I wrote:\n> Michael Paquier <michael@paquier.xyz> writes:\n>> What do you think about something like the attached then? HEAD does\n>> not check after system indexes with REINDEX INDEX CONCURRENTLY, and I\n>> have moved all the catalog-related tests to reindex_catalog.sql.\n\n> OK as far as the wording goes, but now that I look at the specific tests\n> that are being applied, they seem rather loony, as well as inconsistent\n> between the two cases. IsSystemRelation *sounds* like the right thing,\n> but it's not, because it forbids user-relation toast tables which seems\n> like a restriction we need not make. I think IsCatalogRelation is the\n> test we actually want there. In the other place, checking\n> IsSystemNamespace isn't even approximately the correct way to proceed,\n> since it fails to reject reindexing system catalogs' toast tables.\n> We should be doing the equivalent of IsCatalogRelation there too.\n> (It's a bit of a pain that catalog.c doesn't offer a function that\n> makes that determination from just an OID. Should we add one?\n> There might be other callers for it.)\n\nAfter looking around a bit, I propose that we invent\n\"IsCatalogRelationOid(Oid reloid)\" (not wedded to that name), which\nis a wrapper around IsCatalogClass() that does the needful syscache\nlookup for you. Aside from this use-case, it could be used in\nsepgsql/dml.c, which I see is also using\nIsSystemNamespace(get_rel_namespace(oid)) for the wrong thing.\n\nI'm also thinking that it'd be a good idea to rename IsSystemNamespace\nto IsCatalogNamespace. 
The existing naming is totally confusing given\nthat it doesn't square with the distinction between IsSystemRelation\nand IsCatalogRelation (ie that the former includes user toast tables).\nThere are only five external callers of it, and per this discussion\nat least two of them are wrong anyway.\n\nI was thinking about also proposing that we rename IsSystemRelation\nto IsCatalogOrToastRelation (likewise for IsSystemClass), which would\nbe clearer as to its semantics. However, after looking through the\ncode, it seems that 90% of the callers are using those functions to\ndecide whether to apply !allow_system_table_mods restrictions, and\nindeed it's likely that some of the other 10% are wrong and should be\ntesting IsCatalogRelation/Class instead. So unless we want to rename\nthat GUC, I think the existing names of these functions are fine but\nwe should adjust their comments to explain that this is the primary\nif not sole use-case. Another idea is to make IsSystemRelation/Class\nbe macros for IsCatalogOrToastRelation/Class, with the intention that\nwe use the former names specifically for allow_system_table_mods tests\nand the latter names for anything else that happens to really want\nthose semantics.\n\nThere's some other cleanup I want to do in catalog.c --- many of the\ncomments are desperately in need of copy-editing, to start with ---\nbut I don't think any of the rest of it would be controversial.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 05 May 2019 17:45:53 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Inconsistent error message wording for REINDEX CONCURRENTLY"
},
{
"msg_contents": "On Sun, May 05, 2019 at 05:45:53PM -0400, Tom Lane wrote:\n> In the other place, checking IsSystemNamespace isn't even\n> approximately the correct way to proceed, since it fails to reject\n> reindexing system catalogs' toast tables.\n\nGood point. I overlooked that part. It is easy enough to have a test\nwhich fails for a catalog index, a catalog table, a toast table on a\nsystem catalog and a toast index on a system catalog. However I don't\nsee a way to test directly that a toast relation or index on a\nnon-catalog relation works as we cannot run REINDEX CONCURRENTLY\nwithin a function, and it is not possible to save the toast relation\nname as a psql variable. Perhaps somebody has a trick?\n\n> After looking around a bit, I propose that we invent\n> \"IsCatalogRelationOid(Oid reloid)\" (not wedded to that name), which\n> is a wrapper around IsCatalogClass() that does the needful syscache\n> lookup for you. Aside from this use-case, it could be used in\n> sepgsql/dml.c, which I see is also using\n> IsSystemNamespace(get_rel_namespace(oid)) for the wrong thing.\n\nHmmm. A wrapper on top of IsCatalogClass() implies that we would need\nto open the related relation directly in the new function so as it is\npossible to grab its pg_class entry. We could imply that the function\ntakes a ShareLock all the time, but that's not going to be true most\nof the time and the recent discussions around lock upgrades stress me\na bit, and I'd rather not add new race conditions or upgrade hazards.\nWe should have an extra argument with the lock mode, but we have\nnothing in catalog.c of that kind, and that does not feel consistent\nwith the current interface. At the end I have made the choice to not\nreinvent the world, and just get a Relation from the parent table when\nlooking after an index relkind so as IsCatalogRelation() is used for\nthe check.\n\nWhat do you think about the updated patch attached? 
I have removed\nthe tests from reindex_catalog.sql, and added more coverage into\ncreate_index.sql.\n--\nMichael",
"msg_date": "Tue, 7 May 2019 16:50:14 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Inconsistent error message wording for REINDEX CONCURRENTLY"
},
{
"msg_contents": "I wrote:\n> After looking around a bit, I propose that we invent\n> \"IsCatalogRelationOid(Oid reloid)\" (not wedded to that name), which\n> is a wrapper around IsCatalogClass() that does the needful syscache\n> lookup for you. Aside from this use-case, it could be used in\n> sepgsql/dml.c, which I see is also using\n> IsSystemNamespace(get_rel_namespace(oid)) for the wrong thing.\n\n> I'm also thinking that it'd be a good idea to rename IsSystemNamespace\n> to IsCatalogNamespace. The existing naming is totally confusing given\n> that it doesn't square with the distinction between IsSystemRelation\n> and IsCatalogRelation (ie that the former includes user toast tables).\n> There are only five external callers of it, and per this discussion\n> at least two of them are wrong anyway.\n\nAfter studying the callers of these catalog.c functions for awhile,\nI realized that IsCatalogRelation/Class are really fundamentally wrong,\nand have been so for a long time. The reason is that while they will\nreturn FALSE for tables in the information_schema, they will return\nTRUE for toast tables attached to the information_schema tables.\n(They're toast tables, and they have OIDs below FirstNormalObjectId,\nso there you have it.) This is wrong on its face: if those tables don't\nneed to be protected as catalogs, why should their TOAST appendages\nneed it? Moreover, if you drop and recreate information_schema, you'll\nstart getting different behavior for them, which is even sillier.\n\nI was driven to this realization by the following very confused (and\nconfusing) bit in ReindexMultipleTables:\n\n /*\n * Skip system tables that index_create() would reject to index\n * concurrently. 
XXX We need the additional check for\n * FirstNormalObjectId to skip information_schema tables, because\n * IsCatalogClass() here does not cover information_schema, but the\n * check in index_create() will error on the TOAST tables of\n * information_schema tables.\n */\n if (concurrent &&\n (IsCatalogClass(relid, classtuple) || relid < FirstNormalObjectId))\n {\n\nThat's nothing but a hack, and the reason it's necessary is that\nindex_create will throw error if IsCatalogRelation is true, which\nit will be for information_schema TOAST tables --- but not for their\nparent tables that are being examined here.\n\nAfter looking around, it seems to me that the correct definition for\nIsCatalogRelation is just \"is the OID less than FirstBootstrapObjectId?\".\nCurrently we could actually restrict it to \"less than\nFirstGenbkiObjectId\", because all the catalogs, indexes, and TOAST tables\nhave hand-assigned OIDs --- but perhaps someday we'll let genbki.pl\nassign some of those OIDs, so I prefer the weaker constraint. In any\ncase, this gives us a correct separation between objects that are\ntraceable to the bootstrap data and those that are created by plain SQL\nlater in initdb.\n\nWith this, the Form_pg_class argument to IsCatalogClass becomes\nvestigial. I'm tempted to get rid of that function altogether in\nfavor of direct calls to IsCatalogRelationOid, but haven't done so\nin the attached.\n\nComments?\n\n\t\t\tregards, tom lane",
"msg_date": "Tue, 07 May 2019 17:19:38 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Inconsistent error message wording for REINDEX CONCURRENTLY"
},
{
"msg_contents": "On Tue, May 07, 2019 at 05:19:38PM -0400, Tom Lane wrote:\n> That's nothing but a hack, and the reason it's necessary is that\n> index_create will throw error if IsCatalogRelation is true, which\n> it will be for information_schema TOAST tables --- but not for their\n> parent tables that are being examined here.\n\nOh. Good catch. That's indeed crazy now that I look closer at that.\n\n> After looking around, it seems to me that the correct definition for\n> IsCatalogRelation is just \"is the OID less than FirstBootstrapObjectId?\".\n> Currently we could actually restrict it to \"less than\n> FirstGenbkiObjectId\", because all the catalogs, indexes, and TOAST tables\n> have hand-assigned OIDs --- but perhaps someday we'll let genbki.pl\n> assign some of those OIDs, so I prefer the weaker constraint. In any\n> case, this gives us a correct separation between objects that are\n> traceable to the bootstrap data and those that are created by plain SQL\n> later in initdb.\n> \n> With this, the Form_pg_class argument to IsCatalogClass becomes\n> vestigial. I'm tempted to get rid of that function altogether in\n> favor of direct calls to IsCatalogRelationOid, but haven't done so\n> in the attached.\n\nI think that removing entirely IsCatalogClass() is just better as if\nany extension uses this routine, then it could potentially simplify\nits code because needing Form_pg_class means usually opening a\nRelation, and this can be removed.\n\nWith IsCatalogClass() removed, the only dependency with Form_pg_class\ncomes from IsToastClass() which is not used at all except in\nIsSystemClass(). Wouldn't it be better to remove entirely\nIsToastClass() and switch IsSystemClass() to use a namespace OID\ninstead of Form_pg_class?\n\nWith your patch, ReindexRelationConcurrently() does not complain for\nREINDEX TABLE CONCURRENTLY for a catalog table and would trigger the\nerror from index_create(), which is at the origin of this thread. 
The\ncheck with IsSharedRelation() for REINDEX INDEX CONCURRENTLY is\nuseless and the error message generated for IsCatalogRelationOid()\nstill needs to be improved. Would you prefer to include those changes\nin your patch? Or should I work on top of what you are proposing\n(your patch does not include negative tests for toast index and\ntables on catalogs either).\n--\nMichael",
"msg_date": "Wed, 8 May 2019 16:58:53 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Inconsistent error message wording for REINDEX CONCURRENTLY"
},
{
"msg_contents": "Michael Paquier <michael@paquier.xyz> writes:\n> On Tue, May 07, 2019 at 05:19:38PM -0400, Tom Lane wrote:\n>> With this, the Form_pg_class argument to IsCatalogClass becomes\n>> vestigial. I'm tempted to get rid of that function altogether in\n>> favor of direct calls to IsCatalogRelationOid, but haven't done so\n>> in the attached.\n\n> I think that removing entirely IsCatalogClass() is just better as if\n> any extension uses this routine, then it could potentially simplify\n> its code because needing Form_pg_class means usually opening a\n> Relation, and this can be removed.\n\nYeah, it's clearly easier to use without the extra argument.\n\n> With IsCatalogClass() removed, the only dependency with Form_pg_class\n> comes from IsToastClass() which is not used at all except in\n> IsSystemClass(). Wouldn't it be better to remove entirely\n> IsToastClass() and switch IsSystemClass() to use a namespace OID\n> instead of Form_pg_class?\n\nNot sure. The way it's defined has the advantage of being more\nindependent of exactly what the implementation of the \"is a toast table\"\ncheck is. Also, I looked around to see if any callers could really be\nsimplified if they only had to pass the table OID, and didn't find much;\nalmost all of them are looking at the pg_class tuple themselves, typically\nto check the relkind too. So we'd not make any net savings in syscache\nlookups by changing IsSystemClass's API. I'm kind of inclined to leave\nit alone.\n\n> With your patch, ReindexRelationConcurrently() does not complain for\n> REINDEX TABLE CONCURRENTLY for a catalog table and would trigger the\n> error from index_create(), which is at the origin of this thread. The\n> check with IsSharedRelation() for REINDEX INDEX CONCURRENTLY is\n> useless and the error message generated for IsCatalogRelationOid()\n> still needs to be improved. Would you prefer to include those changes\n> in your patch? 
Or should I work on top of what you are proposing\n> (your patch does not include negative tests for toast index and\n> tables on catalogs either).\n\nYes, we still need to do your patch on top of this one (or really\neither order would do). I think keeping them separate is good.\n\nBTW, when I was looking at this I got dissatisfied about another\naspect of the wording of the relevant error messages: a lot of them\nare like, for example\n\n errmsg(\"cannot reindex concurrently this type of relation\")));\n\nWhile that matches the command syntax we're using, it's just horrid\nEnglish grammar. Better would be\n\n errmsg(\"cannot reindex this type of relation concurrently\")));\n\nCan we change that while we're at it?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 08 May 2019 08:31:54 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Inconsistent error message wording for REINDEX CONCURRENTLY"
},
{
"msg_contents": "On Wed, May 08, 2019 at 08:31:54AM -0400, Tom Lane wrote:\n> Michael Paquier <michael@paquier.xyz> writes:\n>> With IsCatalogClass() removed, the only dependency with Form_pg_class\n>> comes from IsToastClass() which is not used at all except in\n>> IsSystemClass(). Wouldn't it be better to remove entirely\n>> IsToastClass() and switch IsSystemClass() to use a namespace OID\n>> instead of Form_pg_class?\n> \n> Not sure. The way it's defined has the advantage of being more\n> independent of exactly what the implementation of the \"is a toast table\"\n> check is. Also, I looked around to see if any callers could really be\n> simplified if they only had to pass the table OID, and didn't find much;\n> almost all of them are looking at the pg_class tuple themselves, typically\n> to check the relkind too. So we'd not make any net savings in syscache\n> lookups by changing IsSystemClass's API. I'm kind of inclined to leave\n> it alone.\n\nHmm. Okay. It would have been nice to remove completely the\ndependency to Form_pg_class from this set of APIs, but I can see your\npoint.\n\n> Yes, we still need to do your patch on top of this one (or really\n> either order would do). I think keeping them separate is good.\n\nOkay, glad to hear. That's what I wanted to do.\n\n> While that matches the command syntax we're using, it's just horrid\n> English grammar. Better would be\n> \n> errmsg(\"cannot reindex this type of relation concurrently\")));\n> \n> Can we change that while we're at it?\n\nNo problem to do that. I'll brush up all that once you commit the\nfirst piece you have come up with, and reuse the new API of catalog.c\nyou are introducing based on the table OID.\n--\nMichael",
"msg_date": "Wed, 8 May 2019 22:05:38 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Inconsistent error message wording for REINDEX CONCURRENTLY"
},
{
"msg_contents": "Michael Paquier <michael@paquier.xyz> writes:\n> No problem to do that. I'll brush up all that once you commit the\n> first piece you have come up with, and reuse the new API of catalog.c\n> you are introducing based on the table OID.\n\nPushed my stuff, have at it.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 08 May 2019 23:28:35 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Inconsistent error message wording for REINDEX CONCURRENTLY"
},
{
"msg_contents": "On Wed, May 08, 2019 at 11:28:35PM -0400, Tom Lane wrote:\n> Michael Paquier <michael@paquier.xyz> writes:\n>> No problem to do that. I'll brush up all that once you commit the\n>> first piece you have come up with, and reuse the new API of catalog.c\n>> you are introducing based on the table OID.\n> \n> Pushed my stuff, have at it.\n\nThanks. Attached is what I get to after scanning the error messages\nin indexcmds.c and index.c. Perhaps you have more comments about it?\n--\nMichael",
"msg_date": "Thu, 9 May 2019 13:11:54 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Inconsistent error message wording for REINDEX CONCURRENTLY"
},
{
"msg_contents": "Michael Paquier <michael@paquier.xyz> writes:\n> On Wed, May 08, 2019 at 11:28:35PM -0400, Tom Lane wrote:\n>> Pushed my stuff, have at it.\n\n> Thanks. Attached is what I get to after scanning the error messages\n> in indexcmds.c and index.c. Perhaps you have more comments about it?\n\nLGTM, thanks.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 09 May 2019 14:08:39 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Inconsistent error message wording for REINDEX CONCURRENTLY"
},
{
"msg_contents": "On Thu, May 09, 2019 at 02:08:39PM -0400, Tom Lane wrote:\n> LGTM, thanks.\n\nThanks for double-checking, committed. I am closing the open item.\n--\nMichael",
"msg_date": "Fri, 10 May 2019 08:20:02 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Inconsistent error message wording for REINDEX CONCURRENTLY"
},
{
"msg_contents": "On 2019-May-09, Michael Paquier wrote:\n\n> On Wed, May 08, 2019 at 11:28:35PM -0400, Tom Lane wrote:\n> > Michael Paquier <michael@paquier.xyz> writes:\n> >> No problem to do that. I'll brush up all that once you commit the\n> >> first piece you have come up with, and reuse the new API of catalog.c\n> >> you are introducing based on the table OID.\n> > \n> > Pushed my stuff, have at it.\n> \n> Thanks. Attached is what I get to after scanning the error messages\n> in indexcmds.c and index.c. Perhaps you have more comments about it?\n\nI do :-) There are a couple of \"is not supported\" messages that are\nannoyingly similar but different:\ngit grep --show-function 'reindex.*supported' -- *.c\n\nsrc/backend/commands/indexcmds.c=ReindexMultipleTables(const char *objectName, ReindexObjectType objectKind,\nsrc/backend/commands/indexcmds.c: errmsg(\"concurrent reindex of system catalogs is not supported\")));\nsrc/backend/commands/indexcmds.c: errmsg(\"concurrent reindex is not supported for catalog relations, skipping all\")));\nsrc/backend/commands/indexcmds.c=ReindexRelationConcurrently(Oid relationOid, int options)\nsrc/backend/commands/indexcmds.c: errmsg(\"concurrent reindex is not supported for shared relations\")));\nsrc/backend/commands/indexcmds.c: errmsg(\"concurrent reindex is not supported for catalog relations\")));\n\nIt seems strange to have some cases say \"cannot do foo\" and other cases\nsay \"foo is not supported\". However, I think having\nReindexMultipleTables say \"cannot reindex a system catalog\" would be\nslightly wrong (since we're not reindexing one but many) -- so it would\nhave to be \"cannot reindex system catalogs\". And in order to avoid\nhaving two messages that are essentially identical except in number, I\npropose to change the others to use the plural too. 
So the one you just\ncommitted\n\n> +\t\t\t\t/* A system catalog cannot be reindexed concurrently */\n> +\t\t\t\tif (IsCatalogRelationOid(relationOid))\n> +\t\t\t\t\tereport(ERROR,\n> +\t\t\t\t\t\t\t(errcode(ERRCODE_FEATURE_NOT_SUPPORTED),\n> +\t\t\t\t\t\t\t errmsg(\"cannot reindex a system catalog concurrently\")));\n\nwould become \"cannot reindex system catalogs concurrently\", identical to\nthe one in ReindexMultipleTables.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Tue, 14 May 2019 11:32:52 -0400",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Inconsistent error message wording for REINDEX CONCURRENTLY"
},
{
"msg_contents": "On Tue, May 14, 2019 at 11:32:52AM -0400, Alvaro Herrera wrote:\n> I do :-)\n\nAnd actually I am happy to see somebody raising that point. The\ncurrent log messages are quite inconsistent for a couple of years now\nbut I did not bother changing anything other than the new strings per\nthe feedback I got until, well, yesterday.\n\n> It seems strange to have some cases say \"cannot do foo\" and other cases\n> say \"foo is not supported\". However, I think having\n> ReindexMultipleTables say \"cannot reindex a system catalog\" would be\n> slightly wrong (since we're not reindexing one but many) -- so it would\n> have to be \"cannot reindex system catalogs\". And in order to avoid\n> having two messages that are essentially identical except in number, I\n> propose to change the others to use the plural too. So the one you just\n> committed\n> \n> would become \"cannot reindex system catalogs concurrently\", identical to\n> the one in ReindexMultipleTables.\n\nThere are also a couple of similar, much older, error messages in\nindex_create() for concurrent creation. Do you think that these\nshould be changed? I can see benefits for translators to unify things\na bit more, but these do not directly apply to REINDEX, and all\nmessages are a bit different depending on the context. One argument\nto change them is that they don't comply with the project style as\nthey use full sentences.\n\nPerhaps something like the attached for the REINDEX portion would make\nthe world a better place? What do you think?\n--\nMichael",
"msg_date": "Wed, 15 May 2019 11:17:51 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Inconsistent error message wording for REINDEX CONCURRENTLY"
},
{
"msg_contents": "On Wed, May 15, 2019 at 11:17:51AM +0900, Michael Paquier wrote:\n> Perhaps something like the attached for the REINDEX portion would make\n> the world a better place? What do you think?\n\nAlvaro, do you have extra thoughts about this patch improving the\nerror message consistency for REINDEX CONCURRENTLY. I quite like the\nsuggestions you made and this makes the error strings more\nproject-like, so I would like to apply it.\n--\nMichael",
"msg_date": "Mon, 27 May 2019 10:54:18 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Inconsistent error message wording for REINDEX CONCURRENTLY"
},
{
"msg_contents": "On 2019-May-27, Michael Paquier wrote:\n\n> On Wed, May 15, 2019 at 11:17:51AM +0900, Michael Paquier wrote:\n> > Perhaps something like the attached for the REINDEX portion would make\n> > the world a better place? What do you think?\n> \n> Alvaro, do you have extra thoughts about this patch improving the\n> error message consistency for REINDEX CONCURRENTLY. I quite like the\n> suggestions you made and this makes the error strings more\n> project-like, so I would like to apply it.\n\nI wonder if we really want to abolish all distinction between \"cannot do\nX\" and \"Y is not supported\". I take the former to mean that the\noperation is impossible to do for some reason, while the latter means we\njust haven't implemented it yet and it seems likely to get implemented\nin a reasonable timeframe. See some excellent commentary about about\nthe \"can not\" wording at\nhttps://postgr.es/m/CA+TgmoYS8jKhETyhGYTYMcbvGPwYY=qA6yYp9B47MX7MweE25w@mail.gmail.com\n\nI notice your patch changes \"catalog relations\" to \"system catalogs\".\nI think we predominantly prefer the latter, so that part of your change\nseems OK. (In passing, I noticed we have a couple of places using\n\"system catalog tables\", which is weird.)\n\nI think reindexing system catalogs concurrently is a complex enough\nundertaking that implementing it is far enough in the future that the\n\"cannot\" wording is okay; but reindexing partitioned tables is not so\nobviously out of the question. We do have \"is not yet implemented\" in a\ncouple of other places, so all things considered I'm not so sure about\nchanging that one to \"cannot\".\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Mon, 27 May 2019 00:20:58 -0400",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Inconsistent error message wording for REINDEX CONCURRENTLY"
},
{
"msg_contents": "On Mon, May 27, 2019 at 12:20:58AM -0400, Alvaro Herrera wrote:\n> I wonder if we really want to abolish all distinction between \"cannot do\n> X\" and \"Y is not supported\". I take the former to mean that the\n> operation is impossible to do for some reason, while the latter means we\n> just haven't implemented it yet and it seems likely to get implemented\n> in a reasonable timeframe. See some excellent commentary about about\n> the \"can not\" wording at\n> https://postgr.es/m/CA+TgmoYS8jKhETyhGYTYMcbvGPwYY=qA6yYp9B47MX7MweE25w@mail.gmail.com\n\nIncorrect URL?\n\n> I notice your patch changes \"catalog relations\" to \"system catalogs\".\n> I think we predominantly prefer the latter, so that part of your change\n> seems OK. (In passing, I noticed we have a couple of places using\n> \"system catalog tables\", which is weird.)\n\nGood point. These are not new though, so I would prefer not touch\nthose parts for this patch.\nsrc/backend/catalog/index.c: errmsg(\"user-defined\nindexes on system catalog tables are not supported\")));\nsrc/backend/catalog/index.c: errmsg(\"concurrent index\ncreation on system catalog tables is not supported\")));\nsrc/backend/catalog/index.c: errmsg(\"user-defined\nindexes on system catalog tables are not supported\")));\nsrc/backend/parser/parse_clause.c: errmsg(\"ON CONFLICT\nis not supported with system catalog tables\"),\n\n> I think reindexing system catalogs concurrently is a complex enough\n> undertaking that implementing it is far enough in the future that the\n> \"cannot\" wording is okay; but reindexing partitioned tables is not so\n> obviously out of the question.\n\nI am not sure that we actually can without much complication, as\ntechnically locks on catalogs may get released before commit if I\nrecall correctly.\n\n> We do have \"is not yet implemented\" in a\n> couple of other places, so all things considered I'm not so sure about\n> changing that one to \"cannot\".\n\nOkay. 
I can live with this difference. Not changing the string in\nReindexRelationConcurrently() has the merit to be consistent with the\nexisting ones in reindex_relation() and ReindexPartitionedIndex().\nPlease find attached an updated version. What do you think?\n--\nMichael",
"msg_date": "Mon, 27 May 2019 17:02:36 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Inconsistent error message wording for REINDEX CONCURRENTLY"
},
{
"msg_contents": "On Mon, May 27, 2019 at 4:02 AM Michael Paquier <michael@paquier.xyz> wrote:\n> On Mon, May 27, 2019 at 12:20:58AM -0400, Alvaro Herrera wrote:\n> > I wonder if we really want to abolish all distinction between \"cannot do\n> > X\" and \"Y is not supported\". I take the former to mean that the\n> > operation is impossible to do for some reason, while the latter means we\n> > just haven't implemented it yet and it seems likely to get implemented\n> > in a reasonable timeframe. See some excellent commentary about about\n> > the \"can not\" wording at\n> > https://postgr.es/m/CA+TgmoYS8jKhETyhGYTYMcbvGPwYY=qA6yYp9B47MX7MweE25w@mail.gmail.com\n>\n> Incorrect URL?\n\nThat's one of my messages that never made it through to the list.\n\nTry http://postgr.es/m/CA+TgmoZ0HZuLGVLkF_LRTNYDijic4nqd-EpCDf_NgtMksfNL1g@mail.gmail.com\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Wed, 29 May 2019 17:34:20 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Inconsistent error message wording for REINDEX CONCURRENTLY"
},
{
"msg_contents": "On 2019-May-27, Michael Paquier wrote:\n\n> On Mon, May 27, 2019 at 12:20:58AM -0400, Alvaro Herrera wrote:\n\n> > I notice your patch changes \"catalog relations\" to \"system catalogs\".\n> > I think we predominantly prefer the latter, so that part of your change\n> > seems OK. (In passing, I noticed we have a couple of places using\n> > \"system catalog tables\", which is weird.)\n> \n> Good point. These are not new though, so I would prefer not touch\n> those parts for this patch.\n\nSure.\n\n> > We do have \"is not yet implemented\" in a\n> > couple of other places, so all things considered I'm not so sure about\n> > changing that one to \"cannot\".\n> \n> Okay. I can live with this difference. Not changing the string in\n> ReindexRelationConcurrently() has the merit to be consistent with the\n> existing ones in reindex_relation() and ReindexPartitionedIndex().\n> Please find attached an updated version. What do you think?\n\nLooks good.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Wed, 19 Jun 2019 23:29:37 -0400",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Inconsistent error message wording for REINDEX CONCURRENTLY"
},
{
"msg_contents": "On Wed, Jun 19, 2019 at 11:29:37PM -0400, Alvaro Herrera wrote:\n> Looks good.\n\nThanks for the review, and reminding me about it :)\n\nWhile on it, I have removed some comments around the error messages\nbecause they actually don't bring more information.\n--\nMichael",
"msg_date": "Thu, 20 Jun 2019 13:32:34 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Inconsistent error message wording for REINDEX CONCURRENTLY"
}
]
[
{
"msg_contents": "In the past week, four different buildfarm members have shown\nnon-reproducing segfaults in the \"select infinite_recurse()\"\ntest case, rather than the expected detection of stack overrun\nbefore we get to the point of a segfault.\n\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=bonito&dt=2019-05-01%2023%3A05%3A36\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=takin&dt=2019-05-01%2008%3A16%3A48\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=buri&dt=2019-04-27%2023%3A54%3A46\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=demoiselle&dt=2019-04-27%2014%3A55%3A52\n\nThey're all on HEAD, and they all look like\n\n2019-05-01 23:11:00.145 UTC [13933:65] LOG: server process (PID 17161) was terminated by signal 11: Segmentation fault\n2019-05-01 23:11:00.145 UTC [13933:66] DETAIL: Failed process was running: select infinite_recurse();\n\nI scraped the buildfarm database and verified that there are no similar\nfailures for at least three months back; nor, offhand, can I remember ever\nhaving seen this test fail in many years. So it seems we broke something\nrecently. No idea what though.\n\n(Another possibility, seeing that these are all members of Mark's PPC64\nflotilla, is that there's some common misconfiguration --- but it's hard\nto credit that such a problem would only affect HEAD not the back\nbranches.)\n\nThoughts?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 02 May 2019 11:02:03 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Why is infinite_recurse test suddenly failing?"
},
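The failing test recurses through a SQL function until the backend's stack-depth guard raises a clean "stack depth limit exceeded" error; the bug reports above are about the backend segfaulting before that guard fires. The expected behavior can be sketched as a rough Python analogue (a sketch only — the server implements this in C via check_stack_depth(), and the names below are just illustrative):

```python
import sys

def infinite_recurse(depth=0):
    """Recurse until the runtime's depth limit stops us with a clean
    error rather than a crash -- the behavior the regression test
    expects from the backend."""
    try:
        return infinite_recurse(depth + 1)
    except RecursionError:
        # PostgreSQL would raise "stack depth limit exceeded" here
        return depth

sys.setrecursionlimit(2000)   # rough analogue of the max_stack_depth GUC
frames = infinite_recurse()
print(frames)
```

The buildfarm failures are interesting precisely because the process died with SIGSEGV instead of reaching the equivalent of the except branch.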
{
"msg_contents": "Hi,\n\nOn 2019-05-02 11:02:03 -0400, Tom Lane wrote:\n> In the past week, four different buildfarm members have shown\n> non-reproducing segfaults in the \"select infinite_recurse()\"\n> test case, rather than the expected detection of stack overrun\n> before we get to the point of a segfault.\n\nI was just staring at bonito's failure in confusion.\n\n\n> https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=bonito&dt=2019-05-01%2023%3A05%3A36\n> https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=takin&dt=2019-05-01%2008%3A16%3A48\n> https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=buri&dt=2019-04-27%2023%3A54%3A46\n> https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=demoiselle&dt=2019-04-27%2014%3A55%3A52\n> \n> They're all on HEAD, and they all look like\n> \n> 2019-05-01 23:11:00.145 UTC [13933:65] LOG: server process (PID 17161) was terminated by signal 11: Segmentation fault\n> 2019-05-01 23:11:00.145 UTC [13933:66] DETAIL: Failed process was running: select infinite_recurse();\n> \n> I scraped the buildfarm database and verified that there are no similar\n> failures for at least three months back; nor, offhand, can I remember ever\n> having seen this test fail in many years. So it seems we broke something\n> recently. No idea what though.\n\nI can't recall any recent changes to relevant area of code.\n\n\n> (Another possibility, seeing that these are all members of Mark's PPC64\n> flotilla, is that there's some common misconfiguration --- but it's hard\n> to credit that such a problem would only affect HEAD not the back\n> branches.)\n\nHm, I just noticed:\n 'HEAD' => [\n 'force_parallel_mode = regress'\n ]\n\non all those animals. So it's not necessarily the case that HEAD and\nbackbranch runs are behaving all that identical. Note that isn't a\nrecent config change, so it's not an explanation as to why they started\nto fail only recently.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 2 May 2019 08:38:51 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Why is infinite_recurse test suddenly failing?"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> Hm, I just noticed:\n> 'HEAD' => [\n> 'force_parallel_mode = regress'\n> ]\n\nOooh, I didn't see that.\n\n> on all those animals. So it's not necessarily the case that HEAD and\n> backbranch runs are behaving all that identical. Note that isn't a\n> recent config change, so it's not an explanation as to why they started\n> to fail only recently.\n\nNo, but it does point at another area of the code in which a relevant\nchange could've occurred.\n\nWhile we're looking at this --- Mark, if you could install gdb\non your buildfarm hosts, that would be really handy. I think that's\nthe only extra thing the buildfarm script needs to extract stack\ntraces from core dumps. We'd likely already know where the problem\nis if we had a stack trace ...\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 02 May 2019 11:45:34 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Why is infinite_recurse test suddenly failing?"
},
{
"msg_contents": "On Thu, May 02, 2019 at 11:45:34AM -0400, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > Hm, I just noticed:\n> > 'HEAD' => [\n> > 'force_parallel_mode = regress'\n> > ]\n> \n> Oooh, I didn't see that.\n> \n> > on all those animals. So it's not necessarily the case that HEAD and\n> > backbranch runs are behaving all that identical. Note that isn't a\n> > recent config change, so it's not an explanation as to why they started\n> > to fail only recently.\n> \n> No, but it does point at another area of the code in which a relevant\n> change could've occurred.\n> \n> While we're looking at this --- Mark, if you could install gdb\n> on your buildfarm hosts, that would be really handy. I think that's\n> the only extra thing the buildfarm script needs to extract stack\n> traces from core dumps. We'd likely already know where the problem\n> is if we had a stack trace ...\n\nOk, I think I have gdb installed now...\n\nRegards,\nMark\n\n-- \nMark Wong\n2ndQuadrant - PostgreSQL Solutions for the Enterprise\nhttps://www.2ndQuadrant.com/\n\n\n",
"msg_date": "Fri, 3 May 2019 10:08:59 -0700",
"msg_from": "Mark Wong <mark@2ndQuadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Why is infinite_recurse test suddenly failing?"
},
{
"msg_contents": "Mark Wong <mark@2ndquadrant.com> writes:\n> On Thu, May 02, 2019 at 11:45:34AM -0400, Tom Lane wrote:\n>> While we're looking at this --- Mark, if you could install gdb\n>> on your buildfarm hosts, that would be really handy. I think that's\n>> the only extra thing the buildfarm script needs to extract stack\n>> traces from core dumps. We'd likely already know where the problem\n>> is if we had a stack trace ...\n\n> Ok, I think I have gdb installed now...\n\nThanks!\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 03 May 2019 14:19:20 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Why is infinite_recurse test suddenly failing?"
},
{
"msg_contents": "Hi,\n\nOn 2019-05-03 10:08:59 -0700, Mark Wong wrote:\n> Ok, I think I have gdb installed now...\n\nThanks! Any chance you could turn on force_parallel_mode for the other\nbranches it applies to too? Makes it easier to figure out whether\nbreakage is related to that, or independent.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Fri, 3 May 2019 11:45:33 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Why is infinite_recurse test suddenly failing?"
},
{
"msg_contents": "We just got another one of these, on yet another member of Mark's\nppc64 armada:\n\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=shoveler&dt=2019-05-10%2014%3A04%3A34\n\nNow we have a stack trace (thanks Mark!), but it is pretty unsurprising:\n\nCore was generated by `postgres: debian regression [local] SELECT '.\nProgram terminated with signal SIGSEGV, Segmentation fault.\n#0 sysmalloc (nb=8208, av=0x3fff916e0d28 <main_arena>) at malloc.c:2748\n2748\tmalloc.c: No such file or directory.\n#0 sysmalloc (nb=8208, av=0x3fff916e0d28 <main_arena>) at malloc.c:2748\n#1 0x00003fff915bedc8 in _int_malloc (av=0x3fff916e0d28 <main_arena>, bytes=8192) at malloc.c:3865\n#2 0x00003fff915c1064 in __GI___libc_malloc (bytes=8192) at malloc.c:2928\n#3 0x00000000106acfd8 in AllocSetContextCreateInternal (parent=0x1000babdad0, name=0x1085508c \"inline_function\", minContextSize=<optimized out>, initBlockSize=<optimized out>, maxBlockSize=8388608) at aset.c:477\n#4 0x00000000103d5e00 in inline_function (funcid=20170, result_type=<optimized out>, result_collid=<optimized out>, input_collid=<optimized out>, funcvariadic=<optimized out>, func_tuple=<optimized out>, context=0x3fffe3da15d0, args=<optimized out>) at clauses.c:4459\n#5 simplify_function (funcid=<optimized out>, result_type=<optimized out>, result_typmod=<optimized out>, result_collid=<optimized out>, input_collid=<optimized out>, args_p=<optimized out>, funcvariadic=<optimized out>, process_args=<optimized out>, allow_non_const=<optimized out>, context=<optimized out>) at clauses.c:4040\n#6 0x00000000103d2e74 in eval_const_expressions_mutator (node=0x1000babe968, context=0x3fffe3da15d0) at clauses.c:2474\n#7 0x00000000103511bc in expression_tree_mutator (node=<optimized out>, mutator=0x103d2b10 <eval_const_expressions_mutator>, context=0x3fffe3da15d0) at nodeFuncs.c:2893\n#8 0x00000000103d2cbc in eval_const_expressions_mutator (node=0x1000babe9c0, context=0x3fffe3da15d0) at clauses.c:3606\n#9 
0x00000000103510c8 in expression_tree_mutator (node=<optimized out>, mutator=<optimized out>, context=<optimized out>) at nodeFuncs.c:2942\n#10 0x00000000103d2cbc in eval_const_expressions_mutator (node=0x1000babea40, context=0x3fffe3da15d0) at clauses.c:3606\n#11 0x00000000103d2ae8 in eval_const_expressions (root=<optimized out>, node=<optimized out>) at clauses.c:2266\n#12 0x00000000103b6264 in preprocess_expression (root=0x1000babee18, expr=<optimized out>, kind=1) at planner.c:1087\n#13 0x00000000103b496c in subquery_planner (glob=<optimized out>, parse=<optimized out>, parent_root=<optimized out>, hasRecursion=<optimized out>, tuple_fraction=<optimized out>) at planner.c:769\n#14 0x00000000103b3c58 in standard_planner (parse=<optimized out>, cursorOptions=<optimized out>, boundParams=<optimized out>) at planner.c:406\n#15 0x00000000103b3a68 in planner (parse=<optimized out>, cursorOptions=<optimized out>, boundParams=<optimized out>) at planner.c:275\n#16 0x00000000104cc2cc in pg_plan_query (querytree=0x1000babe7f8, cursorOptions=256, boundParams=0x0) at postgres.c:878\n#17 0x00000000102ef850 in init_execution_state (lazyEvalOK=<optimized out>, queryTree_list=<optimized out>, fcache=<optimized out>) at functions.c:507\n#18 init_sql_fcache (finfo=<optimized out>, collation=<optimized out>, lazyEvalOK=<optimized out>) at functions.c:770\n#19 fmgr_sql (fcinfo=<optimized out>) at functions.c:1053\n#20 0x00000000102cef24 in ExecInterpExpr (state=<optimized out>, econtext=<optimized out>, isnull=<optimized out>) at execExprInterp.c:625\n#21 0x00000000102cddb8 in ExecInterpExprStillValid (state=0x1000bab41e8, econtext=0x1000bab3ed8, isNull=<optimized out>) at execExprInterp.c:1769\n#22 0x0000000010314f10 in ExecEvalExprSwitchContext (state=0x1000bab41e8, econtext=0x1000bab3ed8, isNull=<optimized out>) at ../../../src/include/executor/executor.h:307\n#23 ExecProject (projInfo=0x1000bab41e0) at ../../../src/include/executor/executor.h:341\n#24 ExecResult 
(pstate=<optimized out>) at nodeResult.c:136\n#25 0x00000000102e319c in ExecProcNodeFirst (node=0x1000bab3dc0) at execProcnode.c:445\n#26 0x00000000102d9c94 in ExecProcNode (node=<optimized out>) at ../../../src/include/executor/executor.h:239\n#27 ExecutePlan (estate=<optimized out>, planstate=<optimized out>, use_parallel_mode=false, operation=<optimized out>, numberTuples=<optimized out>, direction=<optimized out>, dest=<optimized out>, execute_once=<optimized out>, sendTuples=<optimized out>) at execMain.c:1648\n#28 standard_ExecutorRun (queryDesc=<optimized out>, direction=<optimized out>, count=<optimized out>, execute_once=<optimized out>) at execMain.c:365\n#29 0x00000000102d9ac8 in ExecutorRun (queryDesc=<optimized out>, direction=<optimized out>, count=<optimized out>, execute_once=<optimized out>) at execMain.c:309\n#30 0x00000000102efe84 in postquel_getnext (es=<optimized out>, fcache=<optimized out>) at functions.c:867\n#31 fmgr_sql (fcinfo=<optimized out>) at functions.c:1153\n#32 0x00000000102cef24 in ExecInterpExpr (state=<optimized out>, econtext=<optimized out>, isnull=<optimized out>) at execExprInterp.c:625\n#33 0x00000000102cddb8 in ExecInterpExprStillValid (state=0x1000baa8158, econtext=0x1000baa7e48, isNull=<optimized out>) at execExprInterp.c:1769\n#34 0x0000000010314f10 in ExecEvalExprSwitchContext (state=0x1000baa8158, econtext=0x1000baa7e48, isNull=<optimized out>) at ../../../src/include/executor/executor.h:307\n#35 ExecProject (projInfo=0x1000baa8150) at ../../../src/include/executor/executor.h:341\n#36 ExecResult (pstate=<optimized out>) at nodeResult.c:136\n#37 0x00000000102e319c in ExecProcNodeFirst (node=0x1000baa7d30) at execProcnode.c:445\n\n... 
lots and lots of repetitions ...\n\n#11809 0x00000000102e319c in ExecProcNodeFirst (node=0x10008c01e90) at execProcnode.c:445\n#11810 0x00000000102d9c94 in ExecProcNode (node=<optimized out>) at ../../../src/include/executor/executor.h:239\n#11811 ExecutePlan (estate=<optimized out>, planstate=<optimized out>, use_parallel_mode=false, operation=<optimized out>, numberTuples=<optimized out>, direction=<optimized out>, dest=<optimized out>, execute_once=<optimized out>, sendTuples=<optimized out>) at execMain.c:1648\n#11812 standard_ExecutorRun (queryDesc=<optimized out>, direction=<optimized out>, count=<optimized out>, execute_once=<optimized out>) at execMain.c:365\n#11813 0x00000000102d9ac8 in ExecutorRun (queryDesc=<optimized out>, direction=<optimized out>, count=<optimized out>, execute_once=<optimized out>) at execMain.c:309\n#11814 0x00000000104d39ec in PortalRunSelect (portal=0x10008be9de8, forward=<optimized out>, count=0, dest=<optimized out>) at pquery.c:929\n#11815 0x00000000104d34c0 in PortalRun (portal=0x10008be9de8, count=<optimized out>, isTopLevel=<optimized out>, run_once=<optimized out>, dest=<optimized out>, altdest=<optimized out>, completionTag=0x3fffe3ecd6c0 \"\") at pquery.c:770\n#11816 0x00000000104d1bc4 in exec_simple_query (query_string=<optimized out>) at postgres.c:1215\n#11817 0x00000000104ced50 in PostgresMain (argc=<optimized out>, argv=<optimized out>, dbname=<optimized out>, username=<optimized out>) at postgres.c:4249\n#11818 0x00000000104110fc in BackendRun (port=<optimized out>) at postmaster.c:4430\n#11819 BackendStartup (port=<optimized out>) at postmaster.c:4121\n#11820 ServerLoop () at postmaster.c:1704\n#11821 PostmasterMain (argc=<optimized out>, argv=<optimized out>) at postmaster.c:1377\n#11822 0x000000001034def4 in main (argc=8, argv=0x10008b7efb0) at main.c:228\n\n\nSo that lets out any theory that somehow we're getting into a weird\ncontrol path that misses calling check_stack_depth;\nexpression_tree_mutator does so 
for one, and it was called just nine\nstack frames down from the crash.\n\nCasting about for possible explanations, I notice that the failure\nseems to have occurred at a nesting depth of 982 SQL-function calls\n((11809 - 25)/12). I'd previously scraped the buildfarm database\nto find out what nesting depths we normally get to before erroring\nout, by counting the number of \"infinite_recurse\" context messages\nin the postmaster log. Here's the entry for shoveler:\n\n shoveler | 2019-05-03 14:19:19 | 1674\n\nSo this failed at substantially less stack depth than it's successfully\nconsumed in other runs, and *very* substantially less stack than ought\nto be there, considering we pay attention to getrlimit in setting\nmax_stack_depth and add quite a generous amount of slop too.\n\nI am wondering if, somehow, the stack depth limit seen by the postmaster\nsometimes doesn't apply to its children. That would be pretty wacko\nkernel behavior, especially if it's only intermittently true.\nBut we're running out of other explanations.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 10 May 2019 11:38:57 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Why is infinite_recurse test suddenly failing?"
},
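The arithmetic above — max_stack_depth derived from the process stack rlimit, minus generous slop — can be checked from any Unix process. A minimal sketch (not server code) of reading the same limit the postmaster consults:

```python
import resource

# Read the stack rlimit that PostgreSQL consults when validating the
# max_stack_depth setting; the server keeps max_stack_depth well below the
# soft limit, which is why crashing at *less* depth than usual is surprising.
soft, hard = resource.getrlimit(resource.RLIMIT_STACK)
if soft == resource.RLIM_INFINITY:
    print("stack soft limit: unlimited")
else:
    print("stack soft limit: %d bytes" % soft)
```

If the children somehow ran under a smaller rlimit than the postmaster saw at startup, a limit derived this way would be too optimistic, matching the observed symptom.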
{
"msg_contents": "Hi,\n\nOn 2019-05-10 11:38:57 -0400, Tom Lane wrote:\n> Core was generated by `postgres: debian regression [local] SELECT '.\n> Program terminated with signal SIGSEGV, Segmentation fault.\n> #0 sysmalloc (nb=8208, av=0x3fff916e0d28 <main_arena>) at malloc.c:2748\n> 2748\tmalloc.c: No such file or directory.\n> #0 sysmalloc (nb=8208, av=0x3fff916e0d28 <main_arena>) at malloc.c:2748\n> #1 0x00003fff915bedc8 in _int_malloc (av=0x3fff916e0d28 <main_arena>, bytes=8192) at malloc.c:3865\n> #2 0x00003fff915c1064 in __GI___libc_malloc (bytes=8192) at malloc.c:2928\n> #3 0x00000000106acfd8 in AllocSetContextCreateInternal (parent=0x1000babdad0, name=0x1085508c \"inline_function\", minContextSize=<optimized out>, initBlockSize=<optimized out>, maxBlockSize=8388608) at aset.c:477\n> #4 0x00000000103d5e00 in inline_function (funcid=20170, result_type=<optimized out>, result_collid=<optimized out>, input_collid=<optimized out>, funcvariadic=<optimized out>, func_tuple=<optimized out>, context=0x3fffe3da15d0, args=<optimized out>) at clauses.c:4459\n> #5 simplify_function (funcid=<optimized out>, result_type=<optimized out>, result_typmod=<optimized out>, result_collid=<optimized out>, input_collid=<optimized out>, args_p=<optimized out>, funcvariadic=<optimized out>, process_args=<optimized out>, allow_non_const=<optimized out>, context=<optimized out>) at clauses.c:4040\n> #6 0x00000000103d2e74 in eval_const_expressions_mutator (node=0x1000babe968, context=0x3fffe3da15d0) at clauses.c:2474\n> #7 0x00000000103511bc in expression_tree_mutator (node=<optimized out>, mutator=0x103d2b10 <eval_const_expressions_mutator>, context=0x3fffe3da15d0) at nodeFuncs.c:2893\n\n\n> So that lets out any theory that somehow we're getting into a weird\n> control path that misses calling check_stack_depth;\n> expression_tree_mutator does so for one, and it was called just nine\n> stack frames down from the crash.\n\nRight. 
There's plenty places checking it...\n\n\n> I am wondering if, somehow, the stack depth limit seen by the postmaster\n> sometimes doesn't apply to its children. That would be pretty wacko\n> kernel behavior, especially if it's only intermittently true.\n> But we're running out of other explanations.\n\nI wonder if this is a SIGSEGV that actually signals an OOM\nsituation. Linux, if it can't actually extend the stack on-demand due to\nOOM, sends a SIGSEGV. The signal has that information, but\nunfortunately the buildfarm code doesn't print it. p $_siginfo would\nshow us some of that...\n\nMark, how tight is the memory on that machine? Does dmesg have any other\ninformation (often segfaults are logged by the kernel with the code\nIIRC).\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Fri, 10 May 2019 11:27:07 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Why is infinite_recurse test suddenly failing?"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2019-05-10 11:38:57 -0400, Tom Lane wrote:\n>> I am wondering if, somehow, the stack depth limit seen by the postmaster\n>> sometimes doesn't apply to its children. That would be pretty wacko\n>> kernel behavior, especially if it's only intermittently true.\n>> But we're running out of other explanations.\n\n> I wonder if this is a SIGSEGV that actually signals an OOM\n> situation. Linux, if it can't actually extend the stack on-demand due to\n> OOM, sends a SIGSEGV. The signal has that information, but\n> unfortunately the buildfarm code doesn't print it. p $_siginfo would\n> show us some of that...\n\n> Mark, how tight is the memory on that machine? Does dmesg have any other\n> information (often segfaults are logged by the kernel with the code\n> IIRC).\n\nIt does sort of smell like a resource exhaustion problem, especially\nif all these buildfarm animals are VMs running on the same underlying\nplatform. But why would that manifest as \"you can't have a measly two\nmegabytes of stack\" and not as any other sort of OOM symptom?\n\nMark, if you don't mind modding your local copies of the buildfarm\nscript, I think what Andres is asking for is a pretty trivial addition\nin PGBuild/Utils.pm's sub get_stack_trace:\n\n\tmy $cmdfile = \"./gdbcmd\";\n\tmy $handle;\n\topen($handle, '>', $cmdfile) || die \"opening $cmdfile: $!\";\n\tprint $handle \"bt\\n\";\n+\tprint $handle \"p $_siginfo\\n\";\n\tclose($handle);\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 10 May 2019 15:35:17 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Why is infinite_recurse test suddenly failing?"
},
{
"msg_contents": "\nOn 5/10/19 3:35 PM, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n>> On 2019-05-10 11:38:57 -0400, Tom Lane wrote:\n>>> I am wondering if, somehow, the stack depth limit seen by the postmaster\n>>> sometimes doesn't apply to its children. That would be pretty wacko\n>>> kernel behavior, especially if it's only intermittently true.\n>>> But we're running out of other explanations.\n>> I wonder if this is a SIGSEGV that actually signals an OOM\n>> situation. Linux, if it can't actually extend the stack on-demand due to\n>> OOM, sends a SIGSEGV. The signal has that information, but\n>> unfortunately the buildfarm code doesn't print it. p $_siginfo would\n>> show us some of that...\n>> Mark, how tight is the memory on that machine? Does dmesg have any other\n>> information (often segfaults are logged by the kernel with the code\n>> IIRC).\n> It does sort of smell like a resource exhaustion problem, especially\n> if all these buildfarm animals are VMs running on the same underlying\n> platform. But why would that manifest as \"you can't have a measly two\n> megabytes of stack\" and not as any other sort of OOM symptom?\n>\n> Mark, if you don't mind modding your local copies of the buildfarm\n> script, I think what Andres is asking for is a pretty trivial addition\n> in PGBuild/Utils.pm's sub get_stack_trace:\n>\n> \tmy $cmdfile = \"./gdbcmd\";\n> \tmy $handle;\n> \topen($handle, '>', $cmdfile) || die \"opening $cmdfile: $!\";\n> \tprint $handle \"bt\\n\";\n> +\tprint $handle \"p $_siginfo\\n\";\n> \tclose($handle);\n>\n> \t\t\t\n\n\nI think we'll need to write that as:\n\n\n print $handle 'p $_siginfo',\"\\n\";\n\n\nAs you have it written perl will try to interpolate a variable called\n$_siginfo.\n\n\ncheers\n\n\nandrew\n\n\n\n\n\n\n",
"msg_date": "Fri, 10 May 2019 17:26:43 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: Why is infinite_recurse test suddenly failing?"
},
{
"msg_contents": "On Fri, May 10, 2019 at 05:26:43PM -0400, Andrew Dunstan wrote:\n> I think we'll need to write that as:\n> \n> print $handle 'p $_siginfo',\"\\n\";\n> \n> As you have it written perl will try to interpolate a variable called\n> $_siginfo.\n\nAnything in double quotes with a dollar sign would be interpreted as a\nvariable, and single quotes make that safe.\n--\nMichael",
"msg_date": "Sun, 12 May 2019 17:57:23 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Why is infinite_recurse test suddenly failing?"
},
{
"msg_contents": "\nOn 5/12/19 4:57 AM, Michael Paquier wrote:\n> On Fri, May 10, 2019 at 05:26:43PM -0400, Andrew Dunstan wrote:\n>> I think we'll need to write that as:\n>>\n>>     print $handle 'p $_siginfo',\"\\n\";\n>>\n>> As you have it written perl will try to interpolate a variable called\n>> $_siginfo.\n> Anything in double quotes with a dollar sign would be interpreted as a\n> variable, and single quotes make that safe.\n\n\nYes, that's why I did it that way.\n\n\ncheers\n\n\nandrew\n\n\n\n",
"msg_date": "Sun, 12 May 2019 09:33:16 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: Why is infinite_recurse test suddenly failing?"
},
{
"msg_contents": "On Fri, May 03, 2019 at 11:45:33AM -0700, Andres Freund wrote:\n> Hi,\n> \n> On 2019-05-03 10:08:59 -0700, Mark Wong wrote:\n> > Ok, I think I have gdb installed now...\n> \n> Thanks! Any chance you could turn on force_parallel_mode for the other\n> branches it applies to too? Makes it easier to figure out whether\n> breakage is related to that, or independent.\n\nSlowly catching up on my collection of ppc64le animals...\n\nI still need to upgrade the build farm client (v8) on:\n* dhole\n* vulpes\n* wobbegong\n* cuon\n* batfish\n* devario\n* cardinalfish\n\nThe following I've enabled force_parallel_mode for HEAD, 11, 10, and\n9.6:\n\n* buri\n* urocryon\n* ayu\n* shoveler\n* chimaera\n* bonito\n* takin\n* bufflehead\n* elasmobranch\n* demoiselle\n* cavefish\n\nRegards,\nMark\n\n\n",
"msg_date": "Tue, 14 May 2019 07:31:40 -0700",
"msg_from": "Mark Wong <mark@2ndQuadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Why is infinite_recurse test suddenly failing?"
},
{
"msg_contents": "Mark Wong <mark@2ndQuadrant.com> writes:\n> The following I've enabled force_parallel_mode for HEAD, 11, 10, and\n> 9.6:\n\nThanks Mark!\n\nIn theory, the stack trace we now have from shoveler proves that parallel\nmode has nothing to do with this, because the crash happens during\nplanning (specifically, inlining SQL functions). I wonder though if\nit's possible that previous force_parallel_mode queries have had some\nundesirable effect on the process's stack allocation. Just grasping\nat straws, because it's sure not clear what's going wrong.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 14 May 2019 10:40:17 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Why is infinite_recurse test suddenly failing?"
},
{
"msg_contents": "Mark Wong <mark@2ndQuadrant.com> writes:\n> Slowly catching up on my collection of ppc64le animals...\n\nOh, btw ... you didn't answer my question from before: are these animals\nall running on a common platform (and if so, what is that), or are they\nreally different hardware? If they're all VMs on one machine then it\nmight be that there's some common-mode effect from the underlying system.\n\n(Another thing I notice, now, is that these are all Linux variants;\nI'd been thinking you had some BSDen in there too, but now I see\nthat none of those are ppc64. Hm.)\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 14 May 2019 10:52:07 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Why is infinite_recurse test suddenly failing?"
},
{
"msg_contents": "On Fri, May 10, 2019 at 11:27:07AM -0700, Andres Freund wrote:\n> Hi,\n> \n> On 2019-05-10 11:38:57 -0400, Tom Lane wrote:\n> > Core was generated by `postgres: debian regression [local] SELECT '.\n> > Program terminated with signal SIGSEGV, Segmentation fault.\n> > #0 sysmalloc (nb=8208, av=0x3fff916e0d28 <main_arena>) at malloc.c:2748\n> > 2748\tmalloc.c: No such file or directory.\n> > #0 sysmalloc (nb=8208, av=0x3fff916e0d28 <main_arena>) at malloc.c:2748\n> > #1 0x00003fff915bedc8 in _int_malloc (av=0x3fff916e0d28 <main_arena>, bytes=8192) at malloc.c:3865\n> > #2 0x00003fff915c1064 in __GI___libc_malloc (bytes=8192) at malloc.c:2928\n> > #3 0x00000000106acfd8 in AllocSetContextCreateInternal (parent=0x1000babdad0, name=0x1085508c \"inline_function\", minContextSize=<optimized out>, initBlockSize=<optimized out>, maxBlockSize=8388608) at aset.c:477\n> > #4 0x00000000103d5e00 in inline_function (funcid=20170, result_type=<optimized out>, result_collid=<optimized out>, input_collid=<optimized out>, funcvariadic=<optimized out>, func_tuple=<optimized out>, context=0x3fffe3da15d0, args=<optimized out>) at clauses.c:4459\n> > #5 simplify_function (funcid=<optimized out>, result_type=<optimized out>, result_typmod=<optimized out>, result_collid=<optimized out>, input_collid=<optimized out>, args_p=<optimized out>, funcvariadic=<optimized out>, process_args=<optimized out>, allow_non_const=<optimized out>, context=<optimized out>) at clauses.c:4040\n> > #6 0x00000000103d2e74 in eval_const_expressions_mutator (node=0x1000babe968, context=0x3fffe3da15d0) at clauses.c:2474\n> > #7 0x00000000103511bc in expression_tree_mutator (node=<optimized out>, mutator=0x103d2b10 <eval_const_expressions_mutator>, context=0x3fffe3da15d0) at nodeFuncs.c:2893\n> \n> \n> > So that lets out any theory that somehow we're getting into a weird\n> > control path that misses calling check_stack_depth;\n> > expression_tree_mutator does so for one, and it was called just 
nine\n> > stack frames down from the crash.\n> \n> Right. There's plenty places checking it...\n> \n> \n> > I am wondering if, somehow, the stack depth limit seen by the postmaster\n> > sometimes doesn't apply to its children. That would be pretty wacko\n> > kernel behavior, especially if it's only intermittently true.\n> > But we're running out of other explanations.\n> \n> I wonder if this is a SIGSEGV that actually signals an OOM\n> situation. Linux, if it can't actually extend the stack on-demand due to\n> OOM, sends a SIGSEGV. The signal has that information, but\n> unfortunately the buildfarm code doesn't print it. p $_siginfo would\n> show us some of that...\n> \n> Mark, how tight is the memory on that machine?\n\nThere's about 2GB allocated:\n\ndebian@postgresql-debian:~$ cat /proc/meminfo\nMemTotal: 2080704 kB\nMemFree: 1344768 kB\nMemAvailable: 1824192 kB\n\n\nAt the moment it looks like plenty. :) Maybe I should set something up\nto monitor these things.\n\n> Does dmesg have any other\n> information (often segfaults are logged by the kernel with the code\n> IIRC).\n\nIt's been up for about 49 days:\n\ndebian@postgresql-debian:~$ uptime\n 14:54:30 up 49 days, 14:59, 3 users, load average: 0.00, 0.34, 1.04\n\n\nI see one line from dmesg that is related to postgres:\n\n[3939350.616849] postgres[17057]: bad frame in setup_rt_frame: 00003fffe3d9fe00 nip 00003fff915bdba0 lr 00003fff915bde9c\n\n\nBut only that one time in 49 days up. Otherwise I see a half dozen\nhung_task_timeout_secs messages around jdb2 and dhclient.\n\nRegards,\nMark\n\n-- \nMark Wong\n2ndQuadrant - PostgreSQL Solutions for the Enterprise\nhttps://www.2ndQuadrant.com/\n\n\n",
"msg_date": "Tue, 14 May 2019 07:59:01 -0700",
"msg_from": "Mark Wong <mark@2ndQuadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Why is infinite_recurse test suddenly failing?"
},
{
"msg_contents": "On Tue, May 14, 2019 at 10:52:07AM -0400, Tom Lane wrote:\n> Mark Wong <mark@2ndQuadrant.com> writes:\n> > Slowly catching up on my collection of ppc64le animals...\n> \n> Oh, btw ... you didn't answer my question from before: are these animals\n> all running on a common platform (and if so, what is that), or are they\n> really different hardware? If they're all VMs on one machine then it\n> might be that there's some common-mode effect from the underlying system.\n\nSorry, I was almost there. :)\n\nThese systems are provisioned with OpenStack. Additionally, a couple\nmore (cardinalfish, devario) are using docker under that.\n\n> (Another thing I notice, now, is that these are all Linux variants;\n> I'd been thinking you had some BSDen in there too, but now I see\n> that none of those are ppc64. Hm.)\n\nRight, the BSDen I have are on different hardware.\n\nRegards,\nMark\n\n-- \nMark Wong\n2ndQuadrant - PostgreSQL Solutions for the Enterprise\nhttps://www.2ndQuadrant.com/\n\n\n",
"msg_date": "Tue, 14 May 2019 08:12:07 -0700",
"msg_from": "Mark Wong <mark@2ndQuadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Why is infinite_recurse test suddenly failing?"
},
{
"msg_contents": "On Fri, May 10, 2019 at 05:26:43PM -0400, Andrew Dunstan wrote:\n> \n> On 5/10/19 3:35 PM, Tom Lane wrote:\n> > Andres Freund <andres@anarazel.de> writes:\n> >> On 2019-05-10 11:38:57 -0400, Tom Lane wrote:\n> >>> I am wondering if, somehow, the stack depth limit seen by the postmaster\n> >>> sometimes doesn't apply to its children. That would be pretty wacko\n> >>> kernel behavior, especially if it's only intermittently true.\n> >>> But we're running out of other explanations.\n> >> I wonder if this is a SIGSEGV that actually signals an OOM\n> >> situation. Linux, if it can't actually extend the stack on-demand due to\n> >> OOM, sends a SIGSEGV. The signal has that information, but\n> >> unfortunately the buildfarm code doesn't print it. p $_siginfo would\n> >> show us some of that...\n> >> Mark, how tight is the memory on that machine? Does dmesg have any other\n> >> information (often segfaults are logged by the kernel with the code\n> >> IIRC).\n> > It does sort of smell like a resource exhaustion problem, especially\n> > if all these buildfarm animals are VMs running on the same underlying\n> > platform. 
But why would that manifest as \"you can't have a measly two\n> > megabytes of stack\" and not as any other sort of OOM symptom?\n> >\n> > Mark, if you don't mind modding your local copies of the buildfarm\n> > script, I think what Andres is asking for is a pretty trivial addition\n> > in PGBuild/Utils.pm's sub get_stack_trace:\n> >\n> > \tmy $cmdfile = \"./gdbcmd\";\n> > \tmy $handle;\n> > \topen($handle, '>', $cmdfile) || die \"opening $cmdfile: $!\";\n> > \tprint $handle \"bt\\n\";\n> > +\tprint $handle \"p $_siginfo\\n\";\n> > \tclose($handle);\n> >\n> > \t\t\t\n> \n> \n> I think we'll need to write that as:\n> \n> \n>     print $handle 'p $_siginfo',\"\\n\";\n\nOk, I have this added to everyone now.\n\nI think I also have caught up on this thread, but let me know if I\nmissed anything.\n\nRegards,\nMark\n\n-- \nMark Wong\n2ndQuadrant - PostgreSQL Solutions for the Enterprise\nhttps://www.2ndQuadrant.com/\n\n\n",
"msg_date": "Tue, 14 May 2019 08:31:37 -0700",
"msg_from": "Mark Wong <mark@2ndQuadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Why is infinite_recurse test suddenly failing?"
},
{
"msg_contents": "Hi,\n\nOn 2019-05-14 08:31:37 -0700, Mark Wong wrote:\n> Ok, I have this added to everyone now.\n> \n> I think I also have caught up on this thread, but let me know if I\n> missed anything.\n\nI notice https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=demoiselle&dt=2019-05-19%2014%3A22%3A23\nfailed recently, but unfortunately does not appear to have gdb\ninstalled? Or the buildfarm version is too old? Or ulimits are set\nstrictly on a system wide basis?\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Sun, 19 May 2019 14:38:26 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Why is infinite_recurse test suddenly failing?"
},
{
"msg_contents": "On Sun, May 19, 2019 at 02:38:26PM -0700, Andres Freund wrote:\n> Hi,\n> \n> On 2019-05-14 08:31:37 -0700, Mark Wong wrote:\n> > Ok, I have this added to everyone now.\n> > \n> > I think I also have caught up on this thread, but let me know if I\n> > missed anything.\n> \n> I notice https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=demoiselle&dt=2019-05-19%2014%3A22%3A23\n> failed recently, but unfortunately does not appear to have gdb\n> installed? Or the buildfarm version is too old? Or ulimits are set\n> strictly on a system wide basis?\n\nIt looks like I did have gdb on there:\n\nopensuse@postgresql-opensuse-p9:~> gdb --version\nGNU gdb (GDB; openSUSE Leap 15.0) 8.1\n\n\nI'm on v9 of the build-farm here (have it on my todo list to get\neverything up to 10.)\n\n\nI hope nothing is overriding my core size ulimit:\n\nopensuse@postgresql-opensuse-p9:~> ulimit -c\nunlimited\n\n\nThis animal is using clang. I wonder if gdb is disagreeing with the\nclang binaries?\n\nRegards,\nMark\n\n-- \nMark Wong\n2ndQuadrant - PostgreSQL Solutions for the Enterprise\nhttps://www.2ndQuadrant.com/\n\n\n",
"msg_date": "Mon, 20 May 2019 12:15:49 -0700",
"msg_from": "Mark Wong <mark@2ndQuadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Why is infinite_recurse test suddenly failing?"
},
{
"msg_contents": "On Mon, May 20, 2019 at 12:15:49PM -0700, Mark Wong wrote:\n> On Sun, May 19, 2019 at 02:38:26PM -0700, Andres Freund wrote:\n> > Hi,\n> > \n> > On 2019-05-14 08:31:37 -0700, Mark Wong wrote:\n> > > Ok, I have this added to everyone now.\n> > > \n> > > I think I also have caught up on this thread, but let me know if I\n> > > missed anything.\n> > \n> > I notice https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=demoiselle&dt=2019-05-19%2014%3A22%3A23\n> > failed recently, but unfortunately does not appear to have gdb\n> > installed? Or the buildfarm version is too old? Or ulimits are set\n> > strictly on a system wide basis?\n> \n> I'm on v9 of the build-farm here (have it on my todo list to get\n> everything up to 10.)\n\n\nAndrew let me know I need to get on v10. I've upgraded demoiselle, and\nam trying to work through the rest now...\n\nRegards,\nMark\n\n-- \nMark Wong\n2ndQuadrant - PostgreSQL Solutions for the Enterprise\nhttps://www.2ndQuadrant.com/\n\n\n",
"msg_date": "Mon, 20 May 2019 14:21:53 -0700",
"msg_from": "Mark Wong <mark@2ndQuadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Why is infinite_recurse test suddenly failing?"
},
{
"msg_contents": "On Tue, May 21, 2019 at 9:22 AM Mark Wong <mark@2ndquadrant.com> wrote:\n> Andrew let me know I need to get on v10. I've upgraded demoiselle, and\n> am trying to work through the rest now...\n\nHere's another crash like that.\n\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=cavefish&dt=2019-07-13%2003%3A49%3A38\n\n2019-07-13 04:01:23.437 UTC [9365:70] LOG: server process (PID 12951)\nwas terminated by signal 11: Segmentation fault\n2019-07-13 04:01:23.437 UTC [9365:71] DETAIL: Failed process was\nrunning: select infinite_recurse();\n\n-- \nThomas Munro\nhttps://enterprisedb.com\n\n\n",
"msg_date": "Sat, 13 Jul 2019 17:57:24 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Why is infinite_recurse test suddenly failing?"
},
{
"msg_contents": "Thomas Munro <thomas.munro@gmail.com> writes:\n> Here's another crash like that.\n\n> https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=cavefish&dt=2019-07-13%2003%3A49%3A38\n> 2019-07-13 04:01:23.437 UTC [9365:70] LOG: server process (PID 12951)\n> was terminated by signal 11: Segmentation fault\n> 2019-07-13 04:01:23.437 UTC [9365:71] DETAIL: Failed process was\n> running: select infinite_recurse();\n\nIt occurred to me to scrape the buildfarm database for these failures,\nand what I got was\n\n sysname | branch | snapshot | stage | data | architecture\n--------------+---------------+---------------------+-----------------+---------------------------------------------------------------------------------------------------------------+------------------\n demoiselle | HEAD | 2019-04-27 14:55:52 | pg_upgradeCheck | 2019-04-27 15:00:42.736 UTC [1457:66] DETAIL: Failed process was running: select infinite_recurse(); | ppc64le (POWER9) \n buri | HEAD | 2019-04-27 23:54:46 | Check | 2019-04-28 00:01:49.794 UTC [3041:66] DETAIL: Failed process was running: select infinite_recurse(); | ppc64le (POWER9) \n takin | HEAD | 2019-05-01 08:16:48 | pg_upgradeCheck | 2019-05-01 08:23:27.159 UTC [32303:59] DETAIL: Failed process was running: select infinite_recurse(); | ppc64le \n bonito | HEAD | 2019-05-01 23:05:36 | Check | 2019-05-01 23:11:00.145 UTC [13933:66] DETAIL: Failed process was running: select infinite_recurse(); | ppc64le (POWER9) \n shoveler | HEAD | 2019-05-10 14:04:34 | Check | 2019-05-10 14:11:26.833 UTC [13456:73] DETAIL: Failed process was running: select infinite_recurse(); | ppc64le (POWER8) \n demoiselle | HEAD | 2019-05-19 14:22:23 | pg_upgradeCheck | 2019-05-19 14:26:17.002 UTC [23275:80] DETAIL: Failed process was running: select infinite_recurse(); | ppc64le (POWER9) \n vulpes | HEAD | 2019-06-15 09:16:45 | pg_upgradeCheck | 2019-06-15 09:22:22.268 UTC [4885:77] DETAIL: Failed process was running: select infinite_recurse(); | 
ppc64le \n ayu | HEAD | 2019-06-19 22:13:23 | pg_upgradeCheck | 2019-06-19 22:18:16.805 UTC [2708:71] DETAIL: Failed process was running: select infinite_recurse(); | ppc64le (POWER8) \n quokka | HEAD | 2019-07-10 14:20:13 | pg_upgradeCheck | 2019-07-10 15:24:06.102 BST [5d25f4fb.2644:5] DETAIL: Failed process was running: select infinite_recurse(); | ppc64 \n cavefish | HEAD | 2019-07-13 03:49:38 | pg_upgradeCheck | 2019-07-13 04:01:23.437 UTC [9365:71] DETAIL: Failed process was running: select infinite_recurse(); | ppc64le (POWER9) \n pintail | REL_12_STABLE | 2019-07-13 19:36:51 | Check | 2019-07-13 19:39:29.013 UTC [31086:5] DETAIL: Failed process was running: select infinite_recurse(); | ppc64le (POWER9) \n bonito | HEAD | 2019-07-19 23:13:01 | Check | 2019-07-19 23:16:33.330 UTC [24191:70] DETAIL: Failed process was running: select infinite_recurse(); | ppc64le (POWER9) \n takin | HEAD | 2019-07-24 08:24:56 | Check | 2019-07-24 08:28:01.735 UTC [16366:75] DETAIL: Failed process was running: select infinite_recurse(); | ppc64le \n quokka | HEAD | 2019-07-31 02:00:07 | pg_upgradeCheck | 2019-07-31 03:04:04.043 BST [5d40f709.776a:5] DETAIL: Failed process was running: select infinite_recurse(); | ppc64 \n elasmobranch | HEAD | 2019-08-01 03:13:38 | Check | 2019-08-01 03:19:05.394 UTC [22888:62] DETAIL: Failed process was running: select infinite_recurse(); | ppc64le (POWER9) \n buri | HEAD | 2019-08-02 00:10:23 | Check | 2019-08-02 00:17:11.075 UTC [28222:73] DETAIL: Failed process was running: select infinite_recurse(); | ppc64le (POWER9) \n urocryon | HEAD | 2019-08-02 05:43:46 | Check | 2019-08-02 05:51:51.944 UTC [2724:64] DETAIL: Failed process was running: select infinite_recurse(); | ppc64le \n batfish | HEAD | 2019-08-04 19:02:36 | pg_upgradeCheck | 2019-08-04 19:08:11.728 UTC [23899:79] DETAIL: Failed process was running: select infinite_recurse(); | ppc64le \n buri | REL_12_STABLE | 2019-08-07 00:03:29 | pg_upgradeCheck | 2019-08-07 00:11:24.500 UTC 
[1405:5] DETAIL: Failed process was running: select infinite_recurse(); | ppc64le (POWER9) \n quokka | REL_12_STABLE | 2019-08-08 02:43:45 | pg_upgradeCheck | 2019-08-08 03:47:38.115 BST [5d4b8d3f.cdd7:5] DETAIL: Failed process was running: select infinite_recurse(); | ppc64 \n quokka | HEAD | 2019-08-08 14:00:08 | Check | 2019-08-08 15:02:59.770 BST [5d4c2b88.cad9:5] DETAIL: Failed process was running: select infinite_recurse(); | ppc64 \n mereswine | REL_11_STABLE | 2019-08-11 02:10:12 | InstallCheck-C | 2019-08-11 02:36:10.159 PDT [5004:4] DETAIL: Failed process was running: select infinite_recurse(); | ARMv7 \n takin | HEAD | 2019-08-11 08:02:48 | Check | 2019-08-11 08:05:57.789 UTC [11500:67] DETAIL: Failed process was running: select infinite_recurse(); | ppc64le \n mereswine | REL_12_STABLE | 2019-08-11 09:52:46 | pg_upgradeCheck | 2019-08-11 04:21:16.756 PDT [6804:5] DETAIL: Failed process was running: select infinite_recurse(); | ARMv7 \n mereswine | HEAD | 2019-08-11 11:29:27 | pg_upgradeCheck | 2019-08-11 07:15:28.454 PDT [9954:76] DETAIL: Failed process was running: select infinite_recurse(); | ARMv7 \n demoiselle | HEAD | 2019-08-11 14:51:38 | pg_upgradeCheck | 2019-08-11 14:57:29.422 UTC [9436:70] DETAIL: Failed process was running: select infinite_recurse(); | ppc64le (POWER9) \n(26 rows)\n\nThis is from a scan going back 9 months (to mid-December), so the lack of\nany matches before late April is pretty notable: it seems highly probable\nthat some change we made during April is related.\n\nA cursory scan of commits during April finds only one that seems\nconceivably related (though perhaps I just lack enough imagination):\n\nAuthor: Tom Lane <tgl@sss.pgh.pa.us>\nBranch: master Release: REL_12_BR [798070ec0] 2019-04-11 18:16:50 -0400\n\n Re-order some regression test scripts for more parallelism.\n \n Move the strings, numerology, insert, insert_conflict, select and\n errors tests to be parts of nearby parallel groups, instead of\n executing by 
themselves.\n\nSo that leads to the thought that \"the infinite_recurse test is fine\nif it runs by itself, but it tends to fall over if there are\nconcurrently-running backends\". I have absolutely no idea how that\nwould happen on anything that passes for a platform built in this\ncentury. Still, it's a place to start, which we hadn't before.\n\nAlso notable is that we now have a couple of hits on ARM, not\nonly ppc64. Don't know what to make of that.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 15 Aug 2019 01:49:19 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Why is infinite_recurse test suddenly failing?"
},
{
"msg_contents": "On Thu, Aug 15, 2019 at 5:49 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> So that leads to the thought that \"the infinite_recurse test is fine\n> if it runs by itself, but it tends to fall over if there are\n> concurrently-running backends\". I have absolutely no idea how that\n> would happen on anything that passes for a platform built in this\n> century. Still, it's a place to start, which we hadn't before.\n\nHmm. mereswine's recent failure on REL_11_STABLE was running the\nserial schedule.\n\nI read about 3 ways to get SEGV from stack-related faults: you can\nexceed RLIMIT_STACK (the total mapping size) and then you'll get SEGV\n(per man pages), you can access a page that is inside the mapping but\nis beyond the stack pointer (with some tolerance, exact details vary\nby arch), and you can fail to allocate a page due to low memory.\n\nThe first kind of failure doesn't seem right -- we carefully set\nmax_stack_size based on RLIMIT_STACK minus some slop, so that theory\nwould require child processes to have different stack limits than the\npostmaster as you said (perhaps OpenStack, Docker, related tooling or\nconcurrent activity on the host system is capable of changing it?), or\na bug in our slop logic. The second kind of failure would imply that\nwe have a bug -- we're accessing something below the stack pointer --\nbut that doesn't seem right either -- I think various address\nsanitising tools would have told us about that, and it's hard to\nbelieve there is a bug in the powerpc and arm implementation of the\nstack pointer check (see Linux arch/{powerpc,arm}/mm/fault.c). That\nleaves the third explanation, except then I'd expect to see other\nkinds of problems like OOM etc before you get to that stage, and why\njust here? Confused.\n\n> Also notable is that we now have a couple of hits on ARM, not\n> only ppc64. Don't know what to make of that.\n\nYeah, that is indeed interesting.\n\n-- \nThomas Munro\nhttps://enterprisedb.com\n\n\n",
"msg_date": "Thu, 15 Aug 2019 19:17:49 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Why is infinite_recurse test suddenly failing?"
},
{
"msg_contents": "Hello hackers,\n\n15.08.2019 10:17, Thomas Munro wrote:\n> On Thu, Aug 15, 2019 at 5:49 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> So that leads to the thought that \"the infinite_recurse test is fine\n>> if it runs by itself, but it tends to fall over if there are\n>> concurrently-running backends\". I have absolutely no idea how that\n>> would happen on anything that passes for a platform built in this\n>> century. Still, it's a place to start, which we hadn't before.\n> Hmm. mereswin's recent failure on REL_11_STABLE was running the\n> serial schedule.\n>\n> I read about 3 ways to get SEGV from stack-related faults: you can\n> exceed RLIMIT_STACK (the total mapping size) and then you'll get SEGV\n> (per man pages), you can access a page that is inside the mapping but\n> is beyond the stack pointer (with some tolerance, exact details vary\n> by arch), and you can fail to allocate a page due to low memory.\n>\n> The first kind of failure doesn't seem right -- we carefully set\n> max_stack_size based on RLIMIT_STACK minus some slop, so that theory\n> would require child processes to have different stack limits than the\n> postmaster as you said (perhaps OpenStack, Docker, related tooling or\n> concurrent activity on the host system is capable of changing it?), or\n> a bug in our slop logic. The second kind of failure would imply that\n> we have a bug -- we're accessing something below the stack pointer --\n> but that doesn't seem right either -- I think various address\n> sanitising tools would have told us about that, and it's hard to\n> believe there is a bug in the powerpc and arm implementation of the\n> stack pointer check (see Linux arch/{powerpc,arm}/mm/fault.c). That\n> leaves the third explanation, except then I'd expect to see other\n> kinds of problems like OOM etc before you get to that stage, and why\n> just here? Confused.\n>\n>> Also notable is that we now have a couple of hits on ARM, not\n>> only ppc64. 
Don't know what to make of that.\n> Yeah, that is indeed interesting.\n\nExcuse me for reviving this ancient thread, but aforementioned mereswine\nanimal has failed again recently [1]:\n002_pg_upgrade_old_node.log contains:\n2024-06-26 02:49:06.742 PDT [29121:4] LOG: server process (PID 30908) was terminated by signal 9: Killed\n2024-06-26 02:49:06.742 PDT [29121:5] DETAIL: Failed process was running: select infinite_recurse();\n\nI believe this time it's caused by OOM condition and I think this issue\noccurs on armv7 mereswine because 1) armv7 uses the stack very\nefficiently (thanks to 32-bitness and maybe also the Link Register) and\n2) such old machines are usually tight on memory.\n\nI've analyzed buildfarm logs and found from the check stage of that failed run:\nwget [2] -O log\ngrep 'SQL function \"infinite_recurse\" statement 1' log | wc -l\n5818\n(that is, the nesting depth is 5818 levels for a successful run of the test)\n\nFor comparison, mereswine on HEAD [3], [4] shows 5691 levels;\nalimoche (aarch64) on HEAD [5] — 3535;\nlapwing (i686) on HEAD [6] — 5034;\nalligator (x86_64) on HEAD [7] — 3965;\n\nSo it seems to me that unlike [9] this failure may be explained by reaching\nOOM condition.\n\nI have an armv7 device with 2GB RAM that doesn't pass `make check` nor\neven `TESTS=infinite_recurse make -s check-tests` from time to time due to:\n2024-06-28 12:40:49.947 UTC postmaster[20019] LOG: server process (PID 20078) was terminated by signal 11: Segmentation \nfault\n2024-06-28 12:40:49.947 UTC postmaster[20019] DETAIL: Failed process was running: select infinite_recurse();\n...\nUsing host libthread_db library \"/lib/arm-linux-gnueabihf/libthread_db.so.1\".\nCore was generated by `postgres: android regression [local] SELECT '.\nProgram terminated with signal SIGSEGV, Segmentation fault.\n#0 downcase_identifier (ident=0xa006d837 \"infinite_recurse\", len=16, warn=true, truncate=truncate@entry=true)\n at scansup.c:52\n52 result = palloc(len + 1);\n(gdb) p 
$sp\n$1 = (void *) 0xbe9b0020\n\nIt looks more like [9], but I think the OOM effect is OS/kernel dependent.\n\nThough the test passes reliably with lower max_stack_depth values, so I've\nanalyzed how much memory the backend consumes (total size and the size of\nit's largest segment) depending on the value:\n1500kB\nadfe1000 220260K rw--- [ anon ]\n total 419452K\n---\n1600kB\nac7e5000 234748K rw--- [ anon ]\n total 434040\n---\n1700kB\nacf61000 249488K rw--- [ anon ]\n total 448880K\n---\ndefault value (2048kB)\naac65000 300528K rw--- [ anon ]\n total 501424K\n\n(roughly, increasing max_stack_depth by 100kB increases the backend memory\nconsumption by 15MB during the test)\n\nSo I think reducing max_stack_depth for mereswine, say to 1000kB, should\nprevent such failures in the future.\n\n[1] https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=mereswine&dt=2024-06-26%2002%3A10%3A45\n[2] https://buildfarm.postgresql.org/cgi-bin/show_stage_log.pl?nm=mereswine&dt=2024-06-26%2002%3A10%3A45&stg=check&raw=1\n[3] https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=mereswine&dt=2024-06-26%2016%3A48%3A07\n[4] https://buildfarm.postgresql.org/cgi-bin/show_stage_log.pl?nm=mereswine&dt=2024-06-26%2016%3A48%3A07&stg=check&raw=1\n[5] https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=alimoche&dt=2024-06-27%2021%3A55%3A06\n[6] https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=lapwing&dt=2024-06-28%2004%3A12%3A16\n[7] https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=alligator&dt=2024-06-28%2005%3A23%3A19\n[8] https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=ayu&dt=2024-03-29%2013%3A08%3A06\n[9] https://www.postgresql.org/message-id/95461160-1214-4ac4-d65b-086182797b1d%40gmail.com\n\nBest regards,\nAlexander\n\n\n",
"msg_date": "Fri, 28 Jun 2024 17:00:00 +0300",
"msg_from": "Alexander Lakhin <exclusion@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Why is infinite_recurse test suddenly failing?"
}
] |
[
{
"msg_contents": "Hi,\n\nOn PG-head, Some of statistical aggregate function are not giving correct\noutput when enable partitionwise aggregate while same is working on v11.\n\nbelow are some of examples.\n\nCREATE TABLE tbl(a int2,b float4) partition by range(a);\ncreate table tbl_p1 partition of tbl for values from (minvalue) to (0);\ncreate table tbl_p2 partition of tbl for values from (0) to (maxvalue);\ninsert into tbl values (-1,-1),(0,0),(1,1),(2,2);\n\n--when partitionwise aggregate is off\npostgres=# SELECT regr_count(b, a) FROM tbl;\n regr_count\n------------\n 4\n(1 row)\npostgres=# SELECT regr_avgx(b, a), regr_avgy(b, a) FROM tbl;\n regr_avgx | regr_avgy\n-----------+-----------\n 0.5 | 0.5\n(1 row)\npostgres=# SELECT corr(b, a) FROM tbl;\n corr\n------\n 1\n(1 row)\n\n--when partitionwise aggregate is on\npostgres=# SET enable_partitionwise_aggregate = true;\nSET\npostgres=# SELECT regr_count(b, a) FROM tbl;\n regr_count\n------------\n 0\n(1 row)\npostgres=# SELECT regr_avgx(b, a), regr_avgy(b, a) FROM tbl;\n regr_avgx | regr_avgy\n-----------+-----------\n |\n(1 row)\npostgres=# SELECT corr(b, a) FROM tbl;\n corr\n------\n\n(1 row)\n\nThanks & Regards,\nRajkumar Raghuwanshi\nQMG, EnterpriseDB Corporation",
"msg_date": "Fri, 3 May 2019 14:56:04 +0530",
"msg_from": "Rajkumar Raghuwanshi <rajkumar.raghuwanshi@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Statistical aggregate functions are not working with partitionwise\n aggregate"
},
{
"msg_contents": "On Fri, May 3, 2019 at 2:56 PM Rajkumar Raghuwanshi <\nrajkumar.raghuwanshi@enterprisedb.com> wrote:\n\n> Hi,\n>\n> On PG-head, Some of statistical aggregate function are not giving correct\n> output when enable partitionwise aggregate while same is working on v11.\n>\n\nI had a quick look over this and observed that something broken with the\nPARTIAL aggregation.\n\nI can reproduce same issue with the larger dataset which results into\nparallel scan.\n\nCREATE TABLE tbl1(a int2,b float4) partition by range(a);\ncreate table tbl1_p1 partition of tbl1 for values from (minvalue) to (0);\ncreate table tbl1_p2 partition of tbl1 for values from (0) to (maxvalue);\ninsert into tbl1 select i%2, i from generate_series(1, 1000000) i;\n\n# SELECT regr_count(b, a) FROM tbl1;\n regr_count\n------------\n 0\n(1 row)\n\npostgres:5432 [120536]=# explain SELECT regr_count(b, a) FROM tbl1;\n QUERY\nPLAN\n------------------------------------------------------------------------------------------------\n Finalize Aggregate (cost=15418.08..15418.09 rows=1 width=8)\n -> Gather (cost=15417.87..15418.08 rows=2 width=8)\n Workers Planned: 2\n -> Partial Aggregate (cost=14417.87..14417.88 rows=1 width=8)\n -> Parallel Append (cost=0.00..11091.62 rows=443500\nwidth=6)\n -> Parallel Seq Scan on tbl1_p2 (cost=0.00..8850.00\nrows=442500 width=6)\n -> Parallel Seq Scan on tbl1_p1 (cost=0.00..24.12\nrows=1412 width=6)\n(7 rows)\n\npostgres:5432 [120536]=# set max_parallel_workers_per_gather to 0;\nSET\npostgres:5432 [120536]=# SELECT regr_count(b, a) FROM tbl1;\n regr_count\n------------\n 1000000\n(1 row)\n\nAfter looking further, it seems that it got broken by following commit:\n\ncommit a9c35cf85ca1ff72f16f0f10d7ddee6e582b62b8\nAuthor: Andres Freund <andres@anarazel.de>\nDate: Sat Jan 26 14:17:52 2019 -0800\n\n Change function call information to be variable length.\n\n\nThis commit is too big to understand and thus could not get into the excact\ncause.\n\nThanks\n\n\n> below 
are some of examples.\n>\n> CREATE TABLE tbl(a int2,b float4) partition by range(a);\n> create table tbl_p1 partition of tbl for values from (minvalue) to (0);\n> create table tbl_p2 partition of tbl for values from (0) to (maxvalue);\n> insert into tbl values (-1,-1),(0,0),(1,1),(2,2);\n>\n> --when partitionwise aggregate is off\n> postgres=# SELECT regr_count(b, a) FROM tbl;\n> regr_count\n> ------------\n> 4\n> (1 row)\n> postgres=# SELECT regr_avgx(b, a), regr_avgy(b, a) FROM tbl;\n> regr_avgx | regr_avgy\n> -----------+-----------\n> 0.5 | 0.5\n> (1 row)\n> postgres=# SELECT corr(b, a) FROM tbl;\n> corr\n> ------\n> 1\n> (1 row)\n>\n> --when partitionwise aggregate is on\n> postgres=# SET enable_partitionwise_aggregate = true;\n> SET\n> postgres=# SELECT regr_count(b, a) FROM tbl;\n> regr_count\n> ------------\n> 0\n> (1 row)\n> postgres=# SELECT regr_avgx(b, a), regr_avgy(b, a) FROM tbl;\n> regr_avgx | regr_avgy\n> -----------+-----------\n> |\n> (1 row)\n> postgres=# SELECT corr(b, a) FROM tbl;\n> corr\n> ------\n>\n> (1 row)\n>\n> Thanks & Regards,\n> Rajkumar Raghuwanshi\n> QMG, EnterpriseDB Corporation\n>\n\n\n-- \nJeevan Chalke\nTechnical Architect, Product Development\nEnterpriseDB Corporation\nThe Enterprise PostgreSQL Company",
"msg_date": "Fri, 3 May 2019 17:26:45 +0530",
"msg_from": "Jeevan Chalke <jeevan.chalke@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Statistical aggregate functions are not working with\n partitionwise aggregate"
},
{
"msg_contents": "Hi,\nAs this issue is reproducible without partition-wise aggregate also,\nchanging email subject from \"Statistical aggregate functions are not\nworking with partitionwise aggregate \" to \"Statistical aggregate functions\nare not working with PARTIAL aggregation\".\n\noriginal reported test case and discussion can be found at below link.\nhttps://www.postgresql.org/message-id/flat/CAKcux6%3DuZEyWyLw0N7HtR9OBc-sWEFeByEZC7t-KDf15FKxVew%40mail.gmail.com\n\nThanks & Regards,\nRajkumar Raghuwanshi\nQMG, EnterpriseDB Corporation\n\n\nOn Fri, May 3, 2019 at 5:26 PM Jeevan Chalke <jeevan.chalke@enterprisedb.com>\nwrote:\n\n>\n>\n> On Fri, May 3, 2019 at 2:56 PM Rajkumar Raghuwanshi <\n> rajkumar.raghuwanshi@enterprisedb.com> wrote:\n>\n>> Hi,\n>>\n>> On PG-head, Some of statistical aggregate function are not giving correct\n>> output when enable partitionwise aggregate while same is working on v11.\n>>\n>\n> I had a quick look over this and observed that something broken with the\n> PARTIAL aggregation.\n>\n> I can reproduce same issue with the larger dataset which results into\n> parallel scan.\n>\n> CREATE TABLE tbl1(a int2,b float4) partition by range(a);\n> create table tbl1_p1 partition of tbl1 for values from (minvalue) to (0);\n> create table tbl1_p2 partition of tbl1 for values from (0) to (maxvalue);\n> insert into tbl1 select i%2, i from generate_series(1, 1000000) i;\n>\n> # SELECT regr_count(b, a) FROM tbl1;\n> regr_count\n> ------------\n> 0\n> (1 row)\n>\n> postgres:5432 [120536]=# explain SELECT regr_count(b, a) FROM tbl1;\n> QUERY\n> PLAN\n>\n> ------------------------------------------------------------------------------------------------\n> Finalize Aggregate (cost=15418.08..15418.09 rows=1 width=8)\n> -> Gather (cost=15417.87..15418.08 rows=2 width=8)\n> Workers Planned: 2\n> -> Partial Aggregate (cost=14417.87..14417.88 rows=1 width=8)\n> -> Parallel Append (cost=0.00..11091.62 rows=443500\n> width=6)\n> -> Parallel Seq Scan on 
tbl1_p2 (cost=0.00..8850.00\n> rows=442500 width=6)\n> -> Parallel Seq Scan on tbl1_p1 (cost=0.00..24.12\n> rows=1412 width=6)\n> (7 rows)\n>\n> postgres:5432 [120536]=# set max_parallel_workers_per_gather to 0;\n> SET\n> postgres:5432 [120536]=# SELECT regr_count(b, a) FROM tbl1;\n> regr_count\n> ------------\n> 1000000\n> (1 row)\n>\n> After looking further, it seems that it got broken by following commit:\n>\n> commit a9c35cf85ca1ff72f16f0f10d7ddee6e582b62b8\n> Author: Andres Freund <andres@anarazel.de>\n> Date: Sat Jan 26 14:17:52 2019 -0800\n>\n> Change function call information to be variable length.\n>\n>\n> This commit is too big to understand and thus could not get into the\n> excact cause.\n>\n> Thanks\n>\n>\n>> below are some of examples.\n>>\n>> CREATE TABLE tbl(a int2,b float4) partition by range(a);\n>> create table tbl_p1 partition of tbl for values from (minvalue) to (0);\n>> create table tbl_p2 partition of tbl for values from (0) to (maxvalue);\n>> insert into tbl values (-1,-1),(0,0),(1,1),(2,2);\n>>\n>> --when partitionwise aggregate is off\n>> postgres=# SELECT regr_count(b, a) FROM tbl;\n>> regr_count\n>> ------------\n>> 4\n>> (1 row)\n>> postgres=# SELECT regr_avgx(b, a), regr_avgy(b, a) FROM tbl;\n>> regr_avgx | regr_avgy\n>> -----------+-----------\n>> 0.5 | 0.5\n>> (1 row)\n>> postgres=# SELECT corr(b, a) FROM tbl;\n>> corr\n>> ------\n>> 1\n>> (1 row)\n>>\n>> --when partitionwise aggregate is on\n>> postgres=# SET enable_partitionwise_aggregate = true;\n>> SET\n>> postgres=# SELECT regr_count(b, a) FROM tbl;\n>> regr_count\n>> ------------\n>> 0\n>> (1 row)\n>> postgres=# SELECT regr_avgx(b, a), regr_avgy(b, a) FROM tbl;\n>> regr_avgx | regr_avgy\n>> -----------+-----------\n>> |\n>> (1 row)\n>> postgres=# SELECT corr(b, a) FROM tbl;\n>> corr\n>> ------\n>>\n>> (1 row)\n>>\n>> Thanks & Regards,\n>> Rajkumar Raghuwanshi\n>> QMG, EnterpriseDB Corporation\n>>\n>\n>\n> --\n> Jeevan Chalke\n> Technical Architect, Product Development\n> 
EnterpriseDB Corporation\n> The Enterprise PostgreSQL Company\n>\n>\n",
"msg_date": "Tue, 7 May 2019 14:39:55 +0530",
"msg_from": "Rajkumar Raghuwanshi <rajkumar.raghuwanshi@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Statistical aggregate functions are not working with PARTIAL\n aggregation"
},
{
"msg_contents": "Hello.\n\nAt Tue, 7 May 2019 14:39:55 +0530, Rajkumar Raghuwanshi <rajkumar.raghuwanshi@enterprisedb.com> wrote in <CAKcux6=YBMCntcafSs_22dS1ab6mGay_QUaHx-nvg+_FVPMg3Q@mail.gmail.com>\n> Hi,\n> As this issue is reproducible without partition-wise aggregate also,\n> changing email subject from \"Statistical aggregate functions are not\n> working with partitionwise aggregate \" to \"Statistical aggregate functions\n> are not working with PARTIAL aggregation\".\n> \n> original reported test case and discussion can be found at below link.\n> https://www.postgresql.org/message-id/flat/CAKcux6%3DuZEyWyLw0N7HtR9OBc-sWEFeByEZC7t-KDf15FKxVew%40mail.gmail.com\n\nThe immediate reason for the behavior seems to be that\nEEOP_AGG_STRICT_INPUT_CHECK_ARGS regards the non-existent second\nargument as null.\n\nIn the invalid-deserialfn_oid case in ExecBuildAggTrans, args[1] is\ninitialized using the second argument of the function\n(int8pl() in this case), so the correct numTransInputs here is 1,\nnot 2.\n\nThe attached patch makes at least the test case work correctly,\nand this seems to be the only instance of the same issue.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center",
"msg_date": "Tue, 07 May 2019 21:06:24 +0900 (Tokyo Standard Time)",
"msg_from": "Kyotaro HORIGUCHI <horiguchi.kyotaro@lab.ntt.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: Statistical aggregate functions are not working with PARTIAL\n aggregation"
},
{
"msg_contents": "At Tue, 07 May 2019 20:47:28 +0900 (Tokyo Standard Time), Kyotaro HORIGUCHI <horiguchi.kyotaro@lab.ntt.co.jp> wrote in <20190507.204728.233299873.horiguchi.kyotaro@lab.ntt.co.jp>\n> Hello.\n> \n> At Tue, 7 May 2019 14:39:55 +0530, Rajkumar Raghuwanshi <rajkumar.raghuwanshi@enterprisedb.com> wrote in <CAKcux6=YBMCntcafSs_22dS1ab6mGay_QUaHx-nvg+_FVPMg3Q@mail.gmail.com>\n> > Hi,\n> > As this issue is reproducible without partition-wise aggregate also,\n> > changing email subject from \"Statistical aggregate functions are not\n> > working with partitionwise aggregate \" to \"Statistical aggregate functions\n> > are not working with PARTIAL aggregation\".\n> > \n> > original reported test case and discussion can be found at below link.\n> > https://www.postgresql.org/message-id/flat/CAKcux6%3DuZEyWyLw0N7HtR9OBc-sWEFeByEZC7t-KDf15FKxVew%40mail.gmail.com\n> \n> The immediate reason for the behavior seems that\n> EEOP_AGG_STRICT_INPUT_CHECK_ARGS considers non existent second\n> argument as null, which is out of arguments list in\n> trans_fcinfo->args[].\n> \n> The invalid deserialfn_oid case in ExecBuildAggTrans, it\n> initializes args[1] using the second argument of the functoin\n> (int8pl() in the case) so the correct numTransInputs here is 1,\n> not 2.\n> \n> I don't understand this fully but at least the attached patch\n> makes the test case work correctly and this seems to be the only\n> case of this issue.\n\nThis behavior is introduced by 69c3936a14 (in v11). At that time\nFunctionCallInfoData is pallioc0'ed and has fixed length members\narg[6] and argnull[7]. So nulls[1] is always false even if nargs\n= 1 so the issue had not been revealed.\n\nAfter introducing a9c35cf85c (in v12) the same check is done on\nFunctionCallInfoData that has NullableDatum args[] of required\nnumber of elements. 
In that case args[1] is outside the palloc'ed\nmemory, so the issue is now revealed.\n\nOn a second look, it seems to me that the right thing to do here\nis setting numInputs instead of numArguments as numTransInputs in\nthe combining step.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center",
"msg_date": "Wed, 08 May 2019 13:06:36 +0900 (Tokyo Standard Time)",
"msg_from": "Kyotaro HORIGUCHI <horiguchi.kyotaro@lab.ntt.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: Statistical aggregate functions are not working with PARTIAL\n aggregation"
},
{
"msg_contents": "At Wed, 08 May 2019 13:06:36 +0900 (Tokyo Standard Time), Kyotaro HORIGUCHI <horiguchi.kyotaro@lab.ntt.co.jp> wrote in <20190508.130636.184826233.horiguchi.kyotaro@lab.ntt.co.jp>\n> At Tue, 07 May 2019 20:47:28 +0900 (Tokyo Standard Time), Kyotaro HORIGUCHI <horiguchi.kyotaro@lab.ntt.co.jp> wrote in <20190507.204728.233299873.horiguchi.kyotaro@lab.ntt.co.jp>\n> > Hello.\n> > \n> > At Tue, 7 May 2019 14:39:55 +0530, Rajkumar Raghuwanshi <rajkumar.raghuwanshi@enterprisedb.com> wrote in <CAKcux6=YBMCntcafSs_22dS1ab6mGay_QUaHx-nvg+_FVPMg3Q@mail.gmail.com>\n> > > Hi,\n> > > As this issue is reproducible without partition-wise aggregate also,\n> > > changing email subject from \"Statistical aggregate functions are not\n> > > working with partitionwise aggregate \" to \"Statistical aggregate functions\n> > > are not working with PARTIAL aggregation\".\n> > > \n> > > original reported test case and discussion can be found at below link.\n> > > https://www.postgresql.org/message-id/flat/CAKcux6%3DuZEyWyLw0N7HtR9OBc-sWEFeByEZC7t-KDf15FKxVew%40mail.gmail.com\n> > \n> > The immediate reason for the behavior seems that\n> > EEOP_AGG_STRICT_INPUT_CHECK_ARGS considers non existent second\n> > argument as null, which is out of arguments list in\n> > trans_fcinfo->args[].\n> > \n> > The invalid deserialfn_oid case in ExecBuildAggTrans, it\n> > initializes args[1] using the second argument of the functoin\n> > (int8pl() in the case) so the correct numTransInputs here is 1,\n> > not 2.\n> > \n> > I don't understand this fully but at least the attached patch\n> > makes the test case work correctly and this seems to be the only\n> > case of this issue.\n> \n> This behavior is introduced by 69c3936a14 (in v11). At that time\n> FunctionCallInfoData is pallioc0'ed and has fixed length members\n> arg[6] and argnull[7]. 
So nulls[1] is always false even if nargs\n> = 1 so the issue had not been revealed.\n> \n> After introducing a9c35cf85c (in v12) the same check is done on\n> FunctionCallInfoData that has NullableDatum args[] of required\n> number of elements. In that case args[1] is out of palloc'ed\n> memory so this issue has been revealed.\n> \n> In a second look, I seems to me that the right thing to do here\n> is setting numInputs instaed of numArguments to numTransInputs in\n> combining step.\n\nBy the way, as mentioned above, this issue has existed since 11 but\nonly does harm in 12. Is this an open item, or an older bug?\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n\n",
"msg_date": "Wed, 08 May 2019 13:09:23 +0900 (Tokyo Standard Time)",
"msg_from": "Kyotaro HORIGUCHI <horiguchi.kyotaro@lab.ntt.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: Statistical aggregate functions are not working with PARTIAL\n aggregation"
},
{
"msg_contents": "On Wed, May 8, 2019 at 12:09 AM Kyotaro HORIGUCHI\n<horiguchi.kyotaro@lab.ntt.co.jp> wrote:\n> By the way, as mentioned above, this issue exists since 11 but\n> harms at 12. Is this an open item, or older bug?\n\nSounds more like an open item to me.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Wed, 8 May 2019 08:22:45 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Statistical aggregate functions are not working with PARTIAL\n aggregation"
},
{
"msg_contents": "On Wed, May 08, 2019 at 01:09:23PM +0900, Kyotaro HORIGUCHI wrote:\n>At Wed, 08 May 2019 13:06:36 +0900 (Tokyo Standard Time), Kyotaro HORIGUCHI <horiguchi.kyotaro@lab.ntt.co.jp> wrote in <20190508.130636.184826233.horiguchi.kyotaro@lab.ntt.co.jp>\n>> At Tue, 07 May 2019 20:47:28 +0900 (Tokyo Standard Time), Kyotaro HORIGUCHI <horiguchi.kyotaro@lab.ntt.co.jp> wrote in <20190507.204728.233299873.horiguchi.kyotaro@lab.ntt.co.jp>\n>> > Hello.\n>> >\n>> > At Tue, 7 May 2019 14:39:55 +0530, Rajkumar Raghuwanshi <rajkumar.raghuwanshi@enterprisedb.com> wrote in <CAKcux6=YBMCntcafSs_22dS1ab6mGay_QUaHx-nvg+_FVPMg3Q@mail.gmail.com>\n>> > > Hi,\n>> > > As this issue is reproducible without partition-wise aggregate also,\n>> > > changing email subject from \"Statistical aggregate functions are not\n>> > > working with partitionwise aggregate \" to \"Statistical aggregate functions\n>> > > are not working with PARTIAL aggregation\".\n>> > >\n>> > > original reported test case and discussion can be found at below link.\n>> > > https://www.postgresql.org/message-id/flat/CAKcux6%3DuZEyWyLw0N7HtR9OBc-sWEFeByEZC7t-KDf15FKxVew%40mail.gmail.com\n>> >\n>> > The immediate reason for the behavior seems that\n>> > EEOP_AGG_STRICT_INPUT_CHECK_ARGS considers non existent second\n>> > argument as null, which is out of arguments list in\n>> > trans_fcinfo->args[].\n>> >\n>> > The invalid deserialfn_oid case in ExecBuildAggTrans, it\n>> > initializes args[1] using the second argument of the functoin\n>> > (int8pl() in the case) so the correct numTransInputs here is 1,\n>> > not 2.\n>> >\n>> > I don't understand this fully but at least the attached patch\n>> > makes the test case work correctly and this seems to be the only\n>> > case of this issue.\n>>\n>> This behavior is introduced by 69c3936a14 (in v11). At that time\n>> FunctionCallInfoData is pallioc0'ed and has fixed length members\n>> arg[6] and argnull[7]. 
So nulls[1] is always false even if nargs\n>> = 1 so the issue had not been revealed.\n>>\n>> After introducing a9c35cf85c (in v12) the same check is done on\n>> FunctionCallInfoData that has NullableDatum args[] of required\n>> number of elements. In that case args[1] is out of palloc'ed\n>> memory so this issue has been revealed.\n>>\n>> In a second look, I seems to me that the right thing to do here\n>> is setting numInputs instaed of numArguments to numTransInputs in\n>> combining step.\n>\n>By the way, as mentioned above, this issue exists since 11 but\n>harms at 12. Is this an open item, or older bug?\n>\n\nIt is an open item - there's a section for older bugs, but considering\nit's harmless in 11 (at least that's my understanding from the above\ndiscussion) I've added it as a regular open item.\n\nI've linked it to a9c35cf85c, which seems to be the culprit commit.\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services \n\n\n",
"msg_date": "Wed, 8 May 2019 17:30:37 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Statistical aggregate functions are not working with PARTIAL\n aggregation"
},
{
"msg_contents": "Don't we have a build farm animal that runs under valgrind that would\nhave caught this?\n\n\n",
"msg_date": "Wed, 8 May 2019 12:41:31 -0400",
"msg_from": "Greg Stark <stark@mit.edu>",
"msg_from_op": false,
"msg_subject": "Re: Statistical aggregate functions are not working with PARTIAL\n aggregation"
},
{
"msg_contents": "\nOn 5/8/19 12:41 PM, Greg Stark wrote:\n> Don't we have a build farm animal that runs under valgrind that would\n> have caught this?\n>\n>\n\nThere are two animals running under valgrind: lousyjack and skink.\n\n\ncheers\n\n\nandrew\n\n\n-- \nAndrew Dunstan https://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n\n",
"msg_date": "Wed, 8 May 2019 14:56:25 -0400",
"msg_from": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Statistical aggregate functions are not working with PARTIAL\n aggregation"
},
{
"msg_contents": "Hello. There is an unfortunate story behind this issue.\n\nAt Wed, 8 May 2019 14:56:25 -0400, Andrew Dunstan <andrew.dunstan@2ndquadrant.com> wrote in <7969b496-096a-bf9b-2a03-4706baa4c48e@2ndQuadrant.com>\n> \n> On 5/8/19 12:41 PM, Greg Stark wrote:\n> > Don't we have a build farm animal that runs under valgrind that would\n> > have caught this?\n> >\n> >\n> \n> There are two animals running under valgrind: lousyjack and skink.\n\nValgrind doesn't detect the overrunning read because the block\ndoesn't have a 'MEMNOACCESS' region, since the requested size is\njust 64 bytes.\n\nThus the attached patch lets valgrind detect the overrun:\n\n==00:00:00:22.959 20254== VALGRINDERROR-BEGIN\n==00:00:00:22.959 20254== Conditional jump or move depends on uninitialised value(s)\n==00:00:00:22.959 20254== at 0x88A838: ExecInterpExpr (execExprInterp.c:1553)\n==00:00:00:22.959 20254== by 0x88AFD5: ExecInterpExprStillValid (execExprInterp.c:1769)\n==00:00:00:22.959 20254== by 0x8C3503: ExecEvalExprSwitchContext (executor.h:307)\n==00:00:00:22.959 20254== by 0x8C4653: advance_aggregates (nodeAgg.c:679)\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center",
"msg_date": "Thu, 09 May 2019 11:18:12 +0900 (Tokyo Standard Time)",
"msg_from": "Kyotaro HORIGUCHI <horiguchi.kyotaro@lab.ntt.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: Statistical aggregate functions are not working with PARTIAL\n aggregation"
},
{
"msg_contents": "At Thu, 09 May 2019 11:17:46 +0900 (Tokyo Standard Time), Kyotaro HORIGUCHI <horiguchi.kyotaro@lab.ntt.co.jp> wrote in <20190509.111746.217492977.horiguchi.kyotaro@lab.ntt.co.jp>\n> Valgrind doesn't detect the overruning read since the block\n> doesn't has 'MEMNOACCESS' region, since the requested size is\n> just 64 bytes.\n> \n> Thus the attached patch let valgrind detect the overrun.\n\nSo the attached patch makes palloc always attach the MEMNOACCESS\nregion and a sentinel byte. The issue under discussion is also\ndetected with this patch. (But in return, memory usage gets larger.)\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center",
"msg_date": "Thu, 09 May 2019 14:18:14 +0900 (Tokyo Standard Time)",
"msg_from": "Kyotaro HORIGUCHI <horiguchi.kyotaro@lab.ntt.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: Statistical aggregate functions are not working with PARTIAL\n aggregation"
},
{
"msg_contents": "Hi,\n\nOn 2019-05-08 13:06:36 +0900, Kyotaro HORIGUCHI wrote:\n> This behavior is introduced by 69c3936a14 (in v11). At that time\n> FunctionCallInfoData is pallioc0'ed and has fixed length members\n> arg[6] and argnull[7]. So nulls[1] is always false even if nargs\n> = 1 so the issue had not been revealed.\n\n> After introducing a9c35cf85c (in v12) the same check is done on\n> FunctionCallInfoData that has NullableDatum args[] of required\n> number of elements. In that case args[1] is out of palloc'ed\n> memory so this issue has been revealed.\n\n> In a second look, I seems to me that the right thing to do here\n> is setting numInputs instaed of numArguments to numTransInputs in\n> combining step.\n\nYea, to me this just seems a consequence of the wrong\nnumTransInputs. Arguably this is a bug going back to 9.6, where\ncombining aggregates where introduced. It's just that numTransInputs\nisn't used anywhere for combining aggregates, before 11.\n\nIt's documentation says:\n\n\t/*\n\t * Number of aggregated input columns to pass to the transfn. This\n\t * includes the ORDER BY columns for ordered-set aggs, but not for plain\n\t * aggs. (This doesn't count the transition state value!)\n\t */\n\tint\t\t\tnumTransInputs;\n\nwhich IMO is violated by having it set to the plain aggregate's value,\nrather than the combine func.\n\nWhile I agree that fixing numTransInputs is the right way, I'm not\nconvinced the way you did it is the right approach. I'm somewhat\ninclined to think that it's wrong that ExecInitAgg() calls\nbuild_pertrans_for_aggref() with a numArguments that's not actually\nright? Alternatively I think we should just move the numTransInputs\ncomputation into the existing branch around DO_AGGSPLIT_COMBINE.\n\nIt seems pretty clear that this needs to be fixed for v11, it seems too\nfragile to rely on trans_fcinfo->argnull[2] being zero initialized.\n\nI'm less sure about fixing it for 9.6/10. 
There's no use of\nnumTransInputs for combining back then.\n\nDavid, I assume you didn't adjust numTransInput plainly because it\nwasn't needed / you didn't notice? Do you have a preference for a fix?\n\n\n\nIndependent of these changes, some of the code around partial, ordered\nset and polymorphic aggregates really make it hard to understand things:\n\n\t\t/* Detect how many arguments to pass to the finalfn */\n\t\tif (aggform->aggfinalextra)\n\t\t\tperagg->numFinalArgs = numArguments + 1;\n\t\telse\n\t\t\tperagg->numFinalArgs = numDirectArgs + 1;\n\nWhat on earth is that supposed to mean? Sure, the +1 is obvious, but why\nthe different sources for arguments are needed isn't - especially\nbecause numArguments was just calculated with the actual aggregate\ninputs. Nor is aggfinalextra's documentation particularly elucidating:\n\t/* true to pass extra dummy arguments to aggfinalfn */\n\tbool\t\taggfinalextra BKI_DEFAULT(f);\n\nespecially not why aggfinalextra means we have to ignore direct\nargs. Presumably because aggfinalextra just emulates what direct args\ndoes for ordered set args, but we allow both to be set.\n\nSimilarly\n\n\t/* Detect how many arguments to pass to the transfn */\n\tif (AGGKIND_IS_ORDERED_SET(aggref->aggkind))\n\t\tpertrans->numTransInputs = numInputs;\n\telse\n\t\tpertrans->numTransInputs = numArguments;\n\nis hard to understand, without additional comments. One can, looking\naround, infer that it's because ordered set aggs need sort columns\nincluded. But that should just have been mentioned.\n\nAnd to make sense of build_aggregate_transfn_expr()'s treatment of\ndirect args, one has to know that direct args are only possible for\nordered set aggregates. Which IMO is not obvious in nodeAgg.c.\n\n...\n\nI feel this code has become quite creaky in the last few years.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 16 May 2019 20:04:04 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Statistical aggregate functions are not working with PARTIAL\n aggregation"
},
{
"msg_contents": "Hi,\n\nDavid, anyone, any comments?\n\nOn 2019-05-16 20:04:04 -0700, Andres Freund wrote:\n> On 2019-05-08 13:06:36 +0900, Kyotaro HORIGUCHI wrote:\n> > This behavior is introduced by 69c3936a14 (in v11). At that time\n> > FunctionCallInfoData is pallioc0'ed and has fixed length members\n> > arg[6] and argnull[7]. So nulls[1] is always false even if nargs\n> > = 1 so the issue had not been revealed.\n> \n> > After introducing a9c35cf85c (in v12) the same check is done on\n> > FunctionCallInfoData that has NullableDatum args[] of required\n> > number of elements. In that case args[1] is out of palloc'ed\n> > memory so this issue has been revealed.\n> \n> > In a second look, I seems to me that the right thing to do here\n> > is setting numInputs instaed of numArguments to numTransInputs in\n> > combining step.\n> \n> Yea, to me this just seems a consequence of the wrong\n> numTransInputs. Arguably this is a bug going back to 9.6, where\n> combining aggregates where introduced. It's just that numTransInputs\n> isn't used anywhere for combining aggregates, before 11.\n> \n> It's documentation says:\n> \n> \t/*\n> \t * Number of aggregated input columns to pass to the transfn. This\n> \t * includes the ORDER BY columns for ordered-set aggs, but not for plain\n> \t * aggs. (This doesn't count the transition state value!)\n> \t */\n> \tint\t\t\tnumTransInputs;\n> \n> which IMO is violated by having it set to the plain aggregate's value,\n> rather than the combine func.\n> \n> While I agree that fixing numTransInputs is the right way, I'm not\n> convinced the way you did it is the right approach. I'm somewhat\n> inclined to think that it's wrong that ExecInitAgg() calls\n> build_pertrans_for_aggref() with a numArguments that's not actually\n> right? 
Alternatively I think we should just move the numTransInputs\n> computation into the existing branch around DO_AGGSPLIT_COMBINE.\n> \n> It seems pretty clear that this needs to be fixed for v11, it seems too\n> fragile to rely on trans_fcinfo->argnull[2] being zero initialized.\n> \n> I'm less sure about fixing it for 9.6/10. There's no use of\n> numTransInputs for combining back then.\n> \n> David, I assume you didn't adjust numTransInput plainly because it\n> wasn't needed / you didn't notice? Do you have a preference for a fix?\n\nUnless somebody comments I'm later today going to move the numTransInputs\ncomputation into the DO_AGGSPLIT_COMBINE branch in\nbuild_pertrans_for_aggref(), add a small test (using\nenable_partitionwise_aggregate), and backpatch to 11.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Sat, 18 May 2019 12:37:38 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Statistical aggregate functions are not working with PARTIAL\n aggregation"
},
{
"msg_contents": "On Sun, 19 May 2019 at 07:37, Andres Freund <andres@anarazel.de> wrote:\n> David, anyone, any comments?\n\nLooking at this now.\n\n-- \n David Rowley http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n\n",
"msg_date": "Sun, 19 May 2019 16:37:48 +1200",
"msg_from": "David Rowley <david.rowley@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Statistical aggregate functions are not working with PARTIAL\n aggregation"
},
{
"msg_contents": "On Fri, 17 May 2019 at 15:04, Andres Freund <andres@anarazel.de> wrote:\n>\n> On 2019-05-08 13:06:36 +0900, Kyotaro HORIGUCHI wrote:\n> > In a second look, I seems to me that the right thing to do here\n> > is setting numInputs instaed of numArguments to numTransInputs in\n> > combining step.\n>\n> Yea, to me this just seems a consequence of the wrong\n> numTransInputs. Arguably this is a bug going back to 9.6, where\n> combining aggregates where introduced. It's just that numTransInputs\n> isn't used anywhere for combining aggregates, before 11.\n\nIsn't it more due to the lack of any aggregates with > 1 arg having a\ncombine function?\n\n> While I agree that fixing numTransInputs is the right way, I'm not\n> convinced the way you did it is the right approach. I'm somewhat\n> inclined to think that it's wrong that ExecInitAgg() calls\n> build_pertrans_for_aggref() with a numArguments that's not actually\n> right? Alternatively I think we should just move the numTransInputs\n> computation into the existing branch around DO_AGGSPLIT_COMBINE.\n\nYeah, probably we should be passing in the correct arg count for the\ncombinefn to build_pertrans_for_aggref(). However, I see that we also\npass in the inputTypes from the transfn, just we don't use them when\nworking with the combinefn.\n\nYou'll notice that I've just hardcoded the numTransArgs to set it to 1\nwhen we're working with a combinefn. The combinefn always requires 2\nargs of trans type, so this seems pretty valid to me. I think\nKyotaro's patch setting of numInputs is wrong. It just happens to\naccidentally match. I also added a regression test to exercise\nregr_count. I tagged it onto an existing query so as to minimise the\noverhead. 
It seems worth doing since most other aggs have a single\nargument and this one wasn't working because it had two args.\n\nI also noticed that the code seemed to work in af025eed536d, so I\nguess the new expression evaluation code is highlighting the existing\nissue.\n\n> I feel this code has become quite creaky in the last few years.\n\nYou're not kidding!\n\nPatch attached of how I think we should fix it.\n\n-- \n David Rowley http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services",
"msg_date": "Sun, 19 May 2019 20:18:38 +1200",
"msg_from": "David Rowley <david.rowley@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Statistical aggregate functions are not working with PARTIAL\n aggregation"
},
{
"msg_contents": "Hi,\n\nOn 2019-05-19 20:18:38 +1200, David Rowley wrote:\n> On Fri, 17 May 2019 at 15:04, Andres Freund <andres@anarazel.de> wrote:\n> >\n> > On 2019-05-08 13:06:36 +0900, Kyotaro HORIGUCHI wrote:\n> > > In a second look, I seems to me that the right thing to do here\n> > > is setting numInputs instaed of numArguments to numTransInputs in\n> > > combining step.\n> >\n> > Yea, to me this just seems a consequence of the wrong\n> > numTransInputs. Arguably this is a bug going back to 9.6, where\n> > combining aggregates where introduced. It's just that numTransInputs\n> > isn't used anywhere for combining aggregates, before 11.\n> \n> Isn't it more due to the lack of any aggregates with > 1 arg having a\n> combine function?\n\nI'm not sure I follow? regr_count() already was in 9.6? Including a\ncombine function?\n\npostgres[1490][1]=# SELECT version();\n┌──────────────────────────────────────────────────────────────────────────────────────────┐\n│ version │\n├──────────────────────────────────────────────────────────────────────────────────────────┤\n│ PostgreSQL 9.6.13 on x86_64-pc-linux-gnu, compiled by gcc (Debian 8.3.0-7) 8.3.0, 64-bit │\n└──────────────────────────────────────────────────────────────────────────────────────────┘\n(1 row)\n\npostgres[1490][1]=# SELECT aggfnoid::regprocedure FROM pg_aggregate pa JOIN pg_proc pptrans ON (pa.aggtransfn = pptrans.oid) AND pptrans.pronargs > 2 AND aggcombinefn <> 0;\n┌───────────────────────────────────────────────────┐\n│ aggfnoid │\n├───────────────────────────────────────────────────┤\n│ regr_count(double precision,double precision) │\n│ regr_sxx(double precision,double precision) │\n│ regr_syy(double precision,double precision) │\n│ regr_sxy(double precision,double precision) │\n│ regr_avgx(double precision,double precision) │\n│ regr_avgy(double precision,double precision) │\n│ regr_r2(double precision,double precision) │\n│ regr_slope(double precision,double precision) │\n│ regr_intercept(double 
precision,double precision) │\n│ covar_pop(double precision,double precision) │\n│ covar_samp(double precision,double precision) │\n│ corr(double precision,double precision) │\n└───────────────────────────────────────────────────┘\n\n\nBut it's not an active problem in 9.6, because numTransInputs wasn't\nused at all for combine functions: Before c253b722f6 there simply was no\nNULL check for strict trans functions, and after that the check was\nsimply hardcoded for the right offset in fcinfo, as it's done by code\nspecific to aggsplit combine.\n\nIn bf6c614a2f2 that was generalized, so the strictness check was done by\ncommon code doing the strictness checks, based on numTransInputs. But\ndue to the fact that the relevant fcinfo->isnull[2..] was always\nzero-initialized (more or less by accident, by being part of the\nAggStatePerTrans struct, which is palloc0'ed), there was no observable\ndamage, we just checked too many array elements. And then finally in\na9c35cf85ca1f, that broke, because the fcinfo is a) dynamically\nallocated without being zeroed b) exactly the right length.\n\n\n> > While I agree that fixing numTransInputs is the right way, I'm not\n> > convinced the way you did it is the right approach. I'm somewhat\n> > inclined to think that it's wrong that ExecInitAgg() calls\n> > build_pertrans_for_aggref() with a numArguments that's not actually\n> > right? Alternatively I think we should just move the numTransInputs\n> > computation into the existing branch around DO_AGGSPLIT_COMBINE.\n> \n> Yeah, probably we should be passing in the correct arg count for the\n> combinefn to build_pertrans_for_aggref(). However, I see that we also\n> pass in the inputTypes from the transfn, just we don't use them when\n> working with the combinefn.\n\nNot sure what you mean by that \"however\"?\n\n\n> You'll notice that I've just hardcoded the numTransArgs to set it to 1\n> when we're working with a combinefn. 
The combinefn always requires 2\n> args of trans type, so this seems pretty valid to me.\n\n> I think Kyotaro's patch setting of numInputs is wrong.\n\nYea, my proposal was to simply hardcode it to 2 in the\nDO_AGGSPLIT_COMBINE path.\n\n\n> Patch attached of how I think we should fix it.\n\nThanks.\n\n\n\n> diff --git a/src/backend/executor/nodeAgg.c b/src/backend/executor/nodeAgg.c\n> index d01fc4f52e..b061162961 100644\n> --- a/src/backend/executor/nodeAgg.c\n> +++ b/src/backend/executor/nodeAgg.c\n> @@ -2522,8 +2522,9 @@ ExecInitAgg(Agg *node, EState *estate, int eflags)\n> \t\tint\t\t\texisting_aggno;\n> \t\tint\t\t\texisting_transno;\n> \t\tList\t *same_input_transnos;\n> -\t\tOid\t\t\tinputTypes[FUNC_MAX_ARGS];\n> +\t\tOid\t\t\ttransFnInputTypes[FUNC_MAX_ARGS];\n> \t\tint\t\t\tnumArguments;\n> +\t\tint\t\t\tnumTransFnArgs;\n> \t\tint\t\t\tnumDirectArgs;\n> \t\tHeapTuple\taggTuple;\n> \t\tForm_pg_aggregate aggform;\n> @@ -2701,14 +2702,23 @@ ExecInitAgg(Agg *node, EState *estate, int eflags)\n> \t\t * could be different from the agg's declared input types, when the\n> \t\t * agg accepts ANY or a polymorphic type.\n> \t\t */\n> -\t\tnumArguments = get_aggregate_argtypes(aggref, inputTypes);\n> +\t\tnumTransFnArgs = get_aggregate_argtypes(aggref, transFnInputTypes);\n\nNot sure I understand the distinction you're trying to make with the\nvariable renaming. 
The combine function is also a transition function,\nno?\n\n\n> \t\t/* Count the \"direct\" arguments, if any */\n> \t\tnumDirectArgs = list_length(aggref->aggdirectargs);\n> \n> +\t\t/*\n> +\t\t * Combine functions always have a 2 trans state type input params, so\n> +\t\t * this is always set to 1 (we don't count the first trans state).\n> +\t\t */\n\nPerhaps the parenthetical should instead be something like \"to 1 (the\ntrans type is not counted as an arg, just like with non-combine trans\nfunction)\" or similar?\n\n\n> @@ -2781,7 +2791,7 @@ ExecInitAgg(Agg *node, EState *estate, int eflags)\n> \t\t\t\t\t\t\t\t\t aggref, transfn_oid, aggtranstype,\n> \t\t\t\t\t\t\t\t\t serialfn_oid, deserialfn_oid,\n> \t\t\t\t\t\t\t\t\t initValue, initValueIsNull,\n> -\t\t\t\t\t\t\t\t\t inputTypes, numArguments);\n> +\t\t\t\t\t\t\t\t\t transFnInputTypes, numArguments);\n\nThat means we pass in the wrong input types? Seems like it'd be better\nto either pass an empty list, or just create the argument list here.\n\n\nI'm inclined to push a minimal fix now, and then a slightly more evolved\nversion of this after beta1.\n\n\n> diff --git a/src/test/regress/sql/aggregates.sql b/src/test/regress/sql/aggregates.sql\n> index d4fd657188..bd8b9e8b4f 100644\n> --- a/src/test/regress/sql/aggregates.sql\n> +++ b/src/test/regress/sql/aggregates.sql\n> @@ -963,10 +963,11 @@ SET enable_indexonlyscan = off;\n> \n> -- variance(int4) covers numeric_poly_combine\n> -- sum(int8) covers int8_avg_combine\n> +-- regr_cocunt(float8, float8) covers int8inc_float8_float8 and aggregates with > 1 arg\n\ntypo...\n\n> EXPLAIN (COSTS OFF)\n> - SELECT variance(unique1::int4), sum(unique1::int8) FROM tenk1;\n> + SELECT variance(unique1::int4), sum(unique1::int8),regr_count(unique1::float8, unique1::float8) FROM tenk1;\n> \n> -SELECT variance(unique1::int4), sum(unique1::int8) FROM tenk1;\n> +SELECT variance(unique1::int4), sum(unique1::int8),regr_count(unique1::float8, unique1::float8) FROM tenk1;\n> \n> 
ROLLBACK;\n\nDoes this actually cover the bug at issue here? The non-combine case\nwasn't broken?\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Sun, 19 May 2019 11:36:24 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Statistical aggregate functions are not working with PARTIAL\n aggregation"
},
{
"msg_contents": "On Mon, 20 May 2019 at 06:36, Andres Freund <andres@anarazel.de> wrote:\n> > Isn't it more due to the lack of any aggregates with > 1 arg having a\n> > combine function?\n>\n> I'm not sure I follow? regr_count() already was in 9.6? Including a\n> combine function?\n\nOops, that line I meant to delete before sending.\n\n> > Yeah, probably we should be passing in the correct arg count for the\n> > combinefn to build_pertrans_for_aggref(). However, I see that we also\n> > pass in the inputTypes from the transfn, just we don't use them when\n> > working with the combinefn.\n>\n> Not sure what you mean by that \"however\"?\n\nWell, previously those two arguments were always for the function in\npg_aggregate.aggtransfn. I only changed one of them to mean the trans\nfunc that's being used, which detracted slightly from my ambition to\nchange just what numArguments means.\n\n> > You'll notice that I've just hardcoded the numTransArgs to set it to 1\n> > when we're working with a combinefn. 
The combinefn always requires 2\n> > args of trans type, so this seems pretty valid to me.\n>\n> > I think Kyotaro's patch setting of numInputs is wrong.\n>\n> Yea, my proposal was to simply harcode it to 2 in the\n> DO_AGGSPLIT_COMBINE path.\n\nok.\n\n> > diff --git a/src/backend/executor/nodeAgg.c b/src/backend/executor/nodeAgg.c\n> > index d01fc4f52e..b061162961 100644\n> > --- a/src/backend/executor/nodeAgg.c\n> > +++ b/src/backend/executor/nodeAgg.c\n> > @@ -2522,8 +2522,9 @@ ExecInitAgg(Agg *node, EState *estate, int eflags)\n> > int existing_aggno;\n> > int existing_transno;\n> > List *same_input_transnos;\n> > - Oid inputTypes[FUNC_MAX_ARGS];\n> > + Oid transFnInputTypes[FUNC_MAX_ARGS];\n> > int numArguments;\n> > + int numTransFnArgs;\n> > int numDirectArgs;\n> > HeapTuple aggTuple;\n> > Form_pg_aggregate aggform;\n> > @@ -2701,14 +2702,23 @@ ExecInitAgg(Agg *node, EState *estate, int eflags)\n> > * could be different from the agg's declared input types, when the\n> > * agg accepts ANY or a polymorphic type.\n> > */\n> > - numArguments = get_aggregate_argtypes(aggref, inputTypes);\n> > + numTransFnArgs = get_aggregate_argtypes(aggref, transFnInputTypes);\n>\n> Not sure I understand the distinction you're trying to make with the\n> variable renaming. The combine function is also a transition function,\n> no?\n\nI was trying to make it more clear what each variable is for. It's\ntrue that the combine function is used as a transition function in\nthis case, but I'd hoped it would be more easy to understand that the\ninput arguments listed in a variable named transFnInputTypes would be\nfor the function mentioned in pg_aggregate.aggtransfn rather than the\ntransfn we're using. If that's not any more clear then maybe another\nfix is better, or we can leave it... 
I had to make sense of all this\ncode last night and I was just having a go at making it easier to\nfollow for the next person who has to.\n\n> > /* Count the \"direct\" arguments, if any */\n> > numDirectArgs = list_length(aggref->aggdirectargs);\n> >\n> > + /*\n> > + * Combine functions always have a 2 trans state type input params, so\n> > + * this is always set to 1 (we don't count the first trans state).\n> > + */\n>\n> Perhaps the parenthetical should instead be something like \"to 1 (the\n> trans type is not counted as an arg, just like with non-combine trans\n> function)\" or similar?\n\nYeah, that's better.\n\n>\n> > @@ -2781,7 +2791,7 @@ ExecInitAgg(Agg *node, EState *estate, int eflags)\n> > aggref, transfn_oid, aggtranstype,\n> > serialfn_oid, deserialfn_oid,\n> > initValue, initValueIsNull,\n> > - inputTypes, numArguments);\n> > + transFnInputTypes, numArguments);\n>\n> That means we pass in the wrong input types? Seems like it'd be better\n> to either pass an empty list, or just create the argument list here.\n\nWhat do you mean \"here\"? Did you mean to quote this fragment?\n\n@@ -2880,7 +2895,7 @@ build_pertrans_for_aggref(AggStatePerTrans pertrans,\n Oid aggtransfn, Oid aggtranstype,\n Oid aggserialfn, Oid aggdeserialfn,\n Datum initValue, bool initValueIsNull,\n- Oid *inputTypes, int numArguments)\n+ Oid *transFnInputTypes, int numArguments)\n\nI had hoped the rename would make it more clear that these are the\nargs for the function in pg_aggregate.aggtransfn. 
We could pass NULL\ninstead when it's the combine func, but I didn't really see the\nadvantage of it.\n\n> I'm inclined to push a minimal fix now, and then a slightly more evolved\n> version fo this after beta1.\n\nOk\n\n> > diff --git a/src/test/regress/sql/aggregates.sql b/src/test/regress/sql/aggregates.sql\n> > index d4fd657188..bd8b9e8b4f 100644\n> > --- a/src/test/regress/sql/aggregates.sql\n> > +++ b/src/test/regress/sql/aggregates.sql\n> > @@ -963,10 +963,11 @@ SET enable_indexonlyscan = off;\n> >\n> > -- variance(int4) covers numeric_poly_combine\n> > -- sum(int8) covers int8_avg_combine\n> > +-- regr_cocunt(float8, float8) covers int8inc_float8_float8 and aggregates with > 1 arg\n>\n> typo...\n\noops. I spelt coconut wrong. :)\n\n>\n> > EXPLAIN (COSTS OFF)\n> > - SELECT variance(unique1::int4), sum(unique1::int8) FROM tenk1;\n> > + SELECT variance(unique1::int4), sum(unique1::int8),regr_count(unique1::float8, unique1::float8) FROM tenk1;\n> >\n> > -SELECT variance(unique1::int4), sum(unique1::int8) FROM tenk1;\n> > +SELECT variance(unique1::int4), sum(unique1::int8),regr_count(unique1::float8, unique1::float8) FROM tenk1;\n> >\n> > ROLLBACK;\n>\n> Does this actually cover the bug at issue here? The non-combine case\n> wasn't broken?\n\nThe EXPLAIN shows the plan is:\n\n QUERY PLAN\n----------------------------------------------\n Finalize Aggregate\n -> Gather\n Workers Planned: 4\n -> Partial Aggregate\n -> Parallel Seq Scan on tenk1\n(5 rows)\n\nso it is exercising the combine functions.\n\n-- \n David Rowley http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n\n",
"msg_date": "Mon, 20 May 2019 10:36:43 +1200",
"msg_from": "David Rowley <david.rowley@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Statistical aggregate functions are not working with PARTIAL\n aggregation"
},
{
"msg_contents": "Hi,\n\nThanks to all for reporting, helping to identify and finally patch the\nproblem!\n\nOn 2019-05-20 10:36:43 +1200, David Rowley wrote:\n> On Mon, 20 May 2019 at 06:36, Andres Freund <andres@anarazel.de> wrote:\n> > > diff --git a/src/backend/executor/nodeAgg.c b/src/backend/executor/nodeAgg.c\n> > > index d01fc4f52e..b061162961 100644\n> > > --- a/src/backend/executor/nodeAgg.c\n> > > +++ b/src/backend/executor/nodeAgg.c\n> > > @@ -2522,8 +2522,9 @@ ExecInitAgg(Agg *node, EState *estate, int eflags)\n> > > int existing_aggno;\n> > > int existing_transno;\n> > > List *same_input_transnos;\n> > > - Oid inputTypes[FUNC_MAX_ARGS];\n> > > + Oid transFnInputTypes[FUNC_MAX_ARGS];\n> > > int numArguments;\n> > > + int numTransFnArgs;\n> > > int numDirectArgs;\n> > > HeapTuple aggTuple;\n> > > Form_pg_aggregate aggform;\n> > > @@ -2701,14 +2702,23 @@ ExecInitAgg(Agg *node, EState *estate, int eflags)\n> > > * could be different from the agg's declared input types, when the\n> > > * agg accepts ANY or a polymorphic type.\n> > > */\n> > > - numArguments = get_aggregate_argtypes(aggref, inputTypes);\n> > > + numTransFnArgs = get_aggregate_argtypes(aggref, transFnInputTypes);\n> >\n> > Not sure I understand the distinction you're trying to make with the\n> > variable renaming. The combine function is also a transition function,\n> > no?\n> \n> I was trying to make it more clear what each variable is for. It's\n> true that the combine function is used as a transition function in\n> this case, but I'd hoped it would be more easy to understand that the\n> input arguments listed in a variable named transFnInputTypes would be\n> for the function mentioned in pg_aggregate.aggtransfn rather than the\n> transfn we're using. If that's not any more clear then maybe another\n> fix is better, or we can leave it... 
I had to make sense of all this\n> code last night and I was just having a go at making it easier to\n> follow for the next person who has to.\n\nThat's what I guessed, but I'm not sure it really achieves that. How\nabout we have something roughly like:\n\nint numTransFnArgs = -1;\nint numCombineFnArgs = -1;\nOid transFnInputTypes[FUNC_MAX_ARGS];\nOid combineFnInputTypes[2];\n\nif (DO_AGGSPLIT_COMBINE(...)\n numCombineFnArgs = 1;\n combineFnInputTypes = list_make2(aggtranstype, aggtranstype);\nelse\n numTransFnArgs = get_aggregate_argtypes(aggref, transFnInputTypes);\n\n...\n\nif (DO_AGGSPLIT_COMBINE(...))\n build_pertrans_for_aggref(pertrans, aggstate, estate,\n aggref, combinefn_oid, aggtranstype,\n serialfn_oid, deserialfn_oid,\n initValue, initValueIsNull,\n combineFnInputTypes, numCombineFnArgs);\nelse\n build_pertrans_for_aggref(pertrans, aggstate, estate,\n aggref, transfn_oid, aggtranstype,\n serialfn_oid, deserialfn_oid,\n initValue, initValueIsNull,\n transFnInputTypes, numTransFnArgs);\n\nseems like that'd make the code clearer? I wonder if we shouldn't\nstrive to have *no* DO_AGGSPLIT_COMBINE specific logic in\nbuild_pertrans_for_aggref (except perhaps for an error check or two).\n\nIstm we shouldn't even need a separate build_aggregate_combinefn_expr()\nfrom build_aggregate_transfn_expr().\n\n\n> > > @@ -2781,7 +2791,7 @@ ExecInitAgg(Agg *node, EState *estate, int eflags)\n> > > aggref, transfn_oid, aggtranstype,\n> > > serialfn_oid, deserialfn_oid,\n> > > initValue, initValueIsNull,\n> > > - inputTypes, numArguments);\n> > > + transFnInputTypes, numArguments);\n> >\n> > That means we pass in the wrong input types? Seems like it'd be better\n> > to either pass an empty list, or just create the argument list here.\n> \n> What do you mean \"here\"? 
Did you mean to quote this fragment?\n> \n> @@ -2880,7 +2895,7 @@ build_pertrans_for_aggref(AggStatePerTrans pertrans,\n> Oid aggtransfn, Oid aggtranstype,\n> Oid aggserialfn, Oid aggdeserialfn,\n> Datum initValue, bool initValueIsNull,\n> - Oid *inputTypes, int numArguments)\n> + Oid *transFnInputTypes, int numArguments)\n> \n> I had hoped the rename would make it more clear that these are the\n> args for the function in pg_aggregate.aggtransfn. We could pass NULL\n> instead when it's the combine func, but I didn't really see the\n> advantage of it.\n\nThe advantage is that if somebody starts to use the wrong list in\nthe wrong context, we'd be more likely to get an error than something\nthat works in the common cases, but not in the more complicated\nsituations.\n\n\n> > I'm inclined to push a minimal fix now, and then a slightly more evolved\n> > version fo this after beta1.\n> \n> Ok\n\nDone that now.\n\n\n> > > EXPLAIN (COSTS OFF)\n> > > - SELECT variance(unique1::int4), sum(unique1::int8) FROM tenk1;\n> > > + SELECT variance(unique1::int4), sum(unique1::int8),regr_count(unique1::float8, unique1::float8) FROM tenk1;\n> > >\n> > > -SELECT variance(unique1::int4), sum(unique1::int8) FROM tenk1;\n> > > +SELECT variance(unique1::int4), sum(unique1::int8),regr_count(unique1::float8, unique1::float8) FROM tenk1;\n> > >\n> > > ROLLBACK;\n> >\n> > Does this actually cover the bug at issue here? The non-combine case\n> > wasn't broken?\n> \n> The EXPLAIN shows the plan is:\n\nErr, comes from only looking at the diff :(. Missed the previous SETs,\nand the explain wasn't included in the context either...\n\n\nUgh, I just noticed - as you did before - that numInputs is declared at\nthe top-level in build_pertrans_for_aggref, and then *again* in the\n!DO_AGGSPLIT_COMBINE branch. Why, oh, why (yes, I'm aware that that's in\none of my commits :(). 
I've renamed your numTransInputs variable to\nnumTransArgs, as it seems confusing to have different values in\npertrans->numTransInputs and a local numTransInputs variable.\n\nBtw, the zero input case appears to also be affected by this bug: We\nquite reasonably don't emit a strict input check expression step for the\ncombine function when numTransInputs = 0. But the only zero length agg\nis count(*), and while it has strict trans & combine functions, it does\nhave an initval of 0. So I don't think it's reachable with builtin\naggregates, and I can't imagine another zero argument aggregate.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Sun, 19 May 2019 18:20:21 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Statistical aggregate functions are not working with PARTIAL\n aggregation"
},
{
"msg_contents": "On Mon, 20 May 2019 at 13:20, Andres Freund <andres@anarazel.de> wrote:\n> How\n> about we have something roughly like:\n>\n> int numTransFnArgs = -1;\n> int numCombineFnArgs = -1;\n> Oid transFnInputTypes[FUNC_MAX_ARGS];\n> Oid combineFnInputTypes[2];\n>\n> if (DO_AGGSPLIT_COMBINE(...)\n> numCombineFnArgs = 1;\n> combineFnInputTypes = list_make2(aggtranstype, aggtranstype);\n> else\n> numTransFnArgs = get_aggregate_argtypes(aggref, transFnInputTypes);\n>\n> ...\n>\n> if (DO_AGGSPLIT_COMBINE(...))\n> build_pertrans_for_aggref(pertrans, aggstate, estate,\n> aggref, combinefn_oid, aggtranstype,\n> serialfn_oid, deserialfn_oid,\n> initValue, initValueIsNull,\n> combineFnInputTypes, numCombineFnArgs);\n> else\n> build_pertrans_for_aggref(pertrans, aggstate, estate,\n> aggref, transfn_oid, aggtranstype,\n> serialfn_oid, deserialfn_oid,\n> initValue, initValueIsNull,\n> transFnInputTypes, numTransFnArgs);\n>\n> seems like that'd make the code clearer?\n\nI think that might be a good idea... I mean apart from trying to\nassign a List to an array :) We still must call\nget_aggregate_argtypes() in order to determine the final function, so\nthe code can't look exactly like you've written.\n\n> I wonder if we shouldn't\n> strive to have *no* DO_AGGSPLIT_COMBINE specific logic in\n> build_pertrans_for_aggref (except perhaps for an error check or two).\n\nJust so we have a hard copy to review and discuss, I think this would\nlook something like the attached.\n\nWe do miss out on a few very small optimisations, but I don't think\nthey'll be anything we could measure. Namely\nbuild_aggregate_combinefn_expr() called make_agg_arg() once and used\nit twice instead of calling it once for each arg. I don't think\nthat's anything we could measure, especially in a situation where\ntwo-stage aggregation is being used.\n\nI ended up also renaming aggtransfn to transfn_oid in\nbuild_pertrans_for_aggref(). 
Having it called aggtransfn seems a bit\ntoo close to the pg_aggregate.aggtransfn column which is confusing\ngiven that we might pass it the value of the aggcombinefn column.\n\n-- \n David Rowley http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services",
"msg_date": "Mon, 20 May 2019 17:27:10 +1200",
"msg_from": "David Rowley <david.rowley@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Statistical aggregate functions are not working with PARTIAL\n aggregation"
},
{
"msg_contents": "Hello.\n\nI couldn't understand the multiple argument lists with confidence\nso the patch was born from a guess^^; Sorry for the confusion, but\nI'm relieved to know that it was not so easy to understand.\n\nAt Mon, 20 May 2019 17:27:10 +1200, David Rowley <david.rowley@2ndquadrant.com> wrote in <CAKJS1f_1xfn8navZP05U8BszsG=+CNck_99f_+0j2ccBSrBDkQ@mail.gmail.com>\n> On Mon, 20 May 2019 at 13:20, Andres Freund <andres@anarazel.de> wrote:\n> > How\n> > about we have something roughly like:\n> >\n> > int numTransFnArgs = -1;\n> > int numCombineFnArgs = -1;\n> > Oid transFnInputTypes[FUNC_MAX_ARGS];\n> > Oid combineFnInputTypes[2];\n> >\n> > if (DO_AGGSPLIT_COMBINE(...)\n> > numCombineFnArgs = 1;\n> > combineFnInputTypes = list_make2(aggtranstype, aggtranstype);\n> > else\n> > numTransFnArgs = get_aggregate_argtypes(aggref, transFnInputTypes);\n> >\n> > ...\n> >\n> > if (DO_AGGSPLIT_COMBINE(...))\n> > build_pertrans_for_aggref(pertrans, aggstate, estate,\n> > aggref, combinefn_oid, aggtranstype,\n> > serialfn_oid, deserialfn_oid,\n> > initValue, initValueIsNull,\n> > combineFnInputTypes, numCombineFnArgs);\n> > else\n> > build_pertrans_for_aggref(pertrans, aggstate, estate,\n> > aggref, transfn_oid, aggtranstype,\n> > serialfn_oid, deserialfn_oid,\n> > initValue, initValueIsNull,\n> > transFnInputTypes, numTransFnArgs);\n> >\n> > seems like that'd make the code clearer?\n> \n> I think that might be a good idea... I mean apart from trying to\n> assign a List to an array :) We still must call\n> get_aggregate_argtypes() in order to determine the final function, so\n> the code can't look exactly like you've written.\n> \n> > I wonder if we shouldn't\n> > strive to have *no* DO_AGGSPLIT_COMBINE specific logic in\n> > build_pertrans_for_aggref (except perhaps for an error check or two).\n> \n> Just so we have a hard copy to review and discuss, I think this would\n> look something like the attached.\n\nMay I give some comments? 
They might make me look stupid but I\ncan't help asking.\n\n- numArguments = get_aggregate_argtypes(aggref, inputTypes);\n+ numTransFnArgs = get_aggregate_argtypes(aggref, transFnInputTypes);\n\nIf the function retrieves argument types of transform functions,\nit would be better that the function name is\nget_aggregate_transargtypes() and Aggref.aggargtypes has the name\nlike aggtransargtypes.\n\n /* Detect how many arguments to pass to the finalfn */\n if (aggform->aggfinalextra)\n- peragg->numFinalArgs = numArguments + 1;\n+ peragg->numFinalArgs = numTransFnArgs + 1;\n else\n peragg->numFinalArgs = numDirectArgs + 1;\n\nI can understand the aggfinalextra case, but cannot understand\nanother. As Andres said I think the code requires an explanation\nof why the final args is not numTransFnArgs but *numDirectArgs\nplus 1*.\n\n+ /*\n+ * When combining there's only one input, the to-be-combined\n+ * added transition value from below (this node's transition\n+ * value is counted separately).\n+ */\n+ pertrans->numTransInputs = 1;\n\nI believe this works but why the member has not been set\ncorrectly by the creator of the aggsplit?\n\n\n+ /* Detect how many arguments to pass to the transfn */\n\nI want to have a comment there that explains why what ordered-set\nrequires is not numTransFnArgs + (#sort cols?), but\n\"list_length(aggref->args)\", or a comment that explains why they\nare compatible.\n\n> We do miss out on a few very small optimisations, but I don't think\n> they'll be anything we could measure. Namely\n> build_aggregate_combinefn_expr() called make_agg_arg() once and used\n> it twice instead of calling it once for each arg. I don't think\n> that's anything we could measure, especially in a situation where\n> two-stage aggregation is being used.\n> \n> I ended up also renaming aggtransfn to transfn_oid in\n> build_pertrans_for_aggref(). 
Having it called aggtranfn seems a bit\n> too close to the pg_aggregate.aggtransfn column which is confusion\n> given that we might pass it the value of the aggcombinefn column.\n\nIs Form_pg_aggregate->aggtransfn a different thing from\ntransfn_oid? It seems very confusing to me apart from the naming.\n\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n\n",
"msg_date": "Mon, 20 May 2019 16:59:05 +0900 (Tokyo Standard Time)",
"msg_from": "Kyotaro HORIGUCHI <horiguchi.kyotaro@lab.ntt.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: Statistical aggregate functions are not working with PARTIAL\n aggregation"
},
{
"msg_contents": "On Mon, 20 May 2019 at 19:59, Kyotaro HORIGUCHI\n<horiguchi.kyotaro@lab.ntt.co.jp> wrote:\n> - numArguments = get_aggregate_argtypes(aggref, inputTypes);\n> + numTransFnArgs = get_aggregate_argtypes(aggref, transFnInputTypes);\n>\n> If the function retrieves argument types of transform functions,\n> it would be better that the function name is\n> get_aggregate_transargtypes() and Aggref.aggargtypes has the name\n> like aggtransargtypes.\n\nProbably that would be a better name.\n\n> /* Detect how many arguments to pass to the finalfn */\n> if (aggform->aggfinalextra)\n> - peragg->numFinalArgs = numArguments + 1;\n> + peragg->numFinalArgs = numTransFnArgs + 1;\n> else\n> peragg->numFinalArgs = numDirectArgs + 1;\n>\n> I can understand the aggfinalextra case, but cannot understand\n> another. As Andres said I think the code requires an explanation\n> of why the final args is not numTransFnArgs but *numDirectArgs\n> plus 1*.\n\nnumDirectArgs will be 0 for anything apart from order-set aggregate,\nso in this case, numFinalArgs will become 1, since the final function\njust accepts the aggregate state.\nFor ordered-set aggregates like, say percentile_cont the finalfn also\nneeds the argument that was passed into the aggregate. e.g\npercentile_cont(0.5), the final function needs to know about 0.5. In\nthis case 0.5 is the direct arg and the indirect args are what are in\nthe order by clause, x in WITHIN GROUP (ORDER BY x). At least that's\nmy understanding.\n\n> + /*\n> + * When combining there's only one input, the to-be-combined\n> + * added transition value from below (this node's transition\n> + * value is counted separately).\n> + */\n> + pertrans->numTransInputs = 1;\n>\n> I believe this works but why the member has not been set\n> correctly by the creator of the aggsplit?\n\nNot quite sure what you mean here. 
We need to set this based on if\nwe're dealing with a combine function or a trans function.\n\n> + /* Detect how many arguments to pass to the transfn */\n>\n> I want to have a comment there that explains why what ordered-set\n> requires is not numTransFnArgs + (#sort cols?), but\n> \"list_length(aggref->args)\", or a comment that explains why they\n> are compatible to be explained.\n\nNormal aggregate ORDER BY clauses are handled in nodeAgg.c, but\nordered-set aggregate's WITHIN GROUP (ORDER BY ..) args are handled in\nthe aggregate's transition function.\n\n> > I ended up also renaming aggtransfn to transfn_oid in\n> > build_pertrans_for_aggref(). Having it called aggtranfn seems a bit\n> > too close to the pg_aggregate.aggtransfn column which is confusion\n> > given that we might pass it the value of the aggcombinefn column.\n>\n> Is Form_pg_aggregate->aggtransfn a different thing from\n> transfn_oid? It seems very confusing to me apart from the naming.\n\nI don't think this is explained very well in the code, so I understand\nyour confusion. Some of the renaming work I've been trying to do is\nto try to make this more clear.\n\nBasically when we're doing the \"Finalize Aggregate\" stage after having\nperformed a previous \"Partial Aggregate\", we must re-aggregate all the\naggregate states that were made in the Partial Aggregate stage. Some\nof these states might need to be combined together if they both belong\nto the same group. That's done with the function mentioned in\npg_aggregate.aggcombinefn. Whether we're doing a normal \"Aggregate\"\nor a \"Finalize Aggregate\", the actual work to do is not all that\ndifferent, the only real difference is that we're aggregating\npreviously aggregated states rather than normal values. 
Since the rest\nof the work is the same, we run it through the same code in nodeAgg.c,\nonly we use the pg_aggregate.aggcombinefn for \"Finalize Aggregate\" and\nuse pg_aggregate.aggtransfn for nodes shown as \"Aggregate\" and\n\"Partial Aggregate\" in explain.\n\nThat's sort of simplified as really a node shown as \"Partial\nAggregate\" in EXPLAIN is just not calling the aggfinalfn and \"Finalize\nAggregate\" is. The code in nodeAgg.c technically supports\nre-combining previously aggregated states and not finalizing them.\nYou might wonder why you might do that. It was partially a by-product\nof how the code was written, but also I had in mind clustered\nservers aggregating large datasets on remote servers running parallel\naggregates on each server, then each server sending the partially\naggregated states back to the main server to be re-combined and\nfinalized -- 3-stage aggregation. Changes were made after the initial\npartial aggregate commit to partially remove the ability to form paths\nin this shape in the planner code, but that mostly just removed\nsupport in explain.c and changed bool flags in favour of the AggSplit\nenum, which lacks a combination of values to do that. We'd need an\nAGGSPLIT_COMBINE_SKIPFINAL_DESERIAL_SERIAL... or something, to make\nthat work. (I'm surprised not to see more AggSplit values for\npartition-wise aggregate since that shouldn't need to serialize and\ndeserialize the states...)\n\n-- \n David Rowley http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n\n",
"msg_date": "Tue, 21 May 2019 00:25:32 +1200",
"msg_from": "David Rowley <david.rowley@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Statistical aggregate functions are not working with PARTIAL\n aggregation"
},
{
"msg_contents": "On Sun, May 19, 2019 at 2:36 PM Andres Freund <andres@anarazel.de> wrote:\n> Not sure I understand the distinction you're trying to make with the\n> variable renaming. The combine function is also a transition function,\n> no?\n\nNot in my mental model. It's true that a combine function is used in\na similar manner to a transition function, but they are not the same\nthing.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Mon, 20 May 2019 09:23:46 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Statistical aggregate functions are not working with PARTIAL\n aggregation"
},
{
"msg_contents": "Hi,\n\nOn May 20, 2019 6:23:46 AM PDT, Robert Haas <robertmhaas@gmail.com> wrote:\n>On Sun, May 19, 2019 at 2:36 PM Andres Freund <andres@anarazel.de>\n>wrote:\n>> Not sure I understand the distinction you're trying to make with the\n>> variable renaming. The combine function is also a transition\n>function,\n>> no?\n>\n>Not in my mental model. It's true that a combine function is used in\n>a similar manner to a transition function, but they are not the same\n>thing.\n\nWell, the context here is precisely that. We're still calling functions that have trans* in the name, we pass them transfn style named parameters. If you read my suggestion, it essentially is suggesting to go *further* than David's renaming?\n\nAndres\n-- \nSent from my Android device with K-9 Mail. Please excuse my brevity.\n\n\n",
"msg_date": "Mon, 20 May 2019 08:23:51 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Statistical aggregate functions are not working with PARTIAL\n aggregation"
},
{
"msg_contents": "Hi,\n\nOn 2019-05-20 17:27:10 +1200, David Rowley wrote:\n> On Mon, 20 May 2019 at 13:20, Andres Freund <andres@anarazel.de> wrote:\n> > How\n> > about we have something roughly like:\n> >\n> > int numTransFnArgs = -1;\n> > int numCombineFnArgs = -1;\n> > Oid transFnInputTypes[FUNC_MAX_ARGS];\n> > Oid combineFnInputTypes[2];\n> >\n> > if (DO_AGGSPLIT_COMBINE(...)\n> > numCombineFnArgs = 1;\n> > combineFnInputTypes = list_make2(aggtranstype, aggtranstype);\n> > else\n> > numTransFnArgs = get_aggregate_argtypes(aggref, transFnInputTypes);\n> >\n> > ...\n> >\n> > if (DO_AGGSPLIT_COMBINE(...))\n> > build_pertrans_for_aggref(pertrans, aggstate, estate,\n> > aggref, combinefn_oid, aggtranstype,\n> > serialfn_oid, deserialfn_oid,\n> > initValue, initValueIsNull,\n> > combineFnInputTypes, numCombineFnArgs);\n> > else\n> > build_pertrans_for_aggref(pertrans, aggstate, estate,\n> > aggref, transfn_oid, aggtranstype,\n> > serialfn_oid, deserialfn_oid,\n> > initValue, initValueIsNull,\n> > transFnInputTypes, numTransFnArgs);\n> >\n> > seems like that'd make the code clearer?\n> \n> I think that might be a good idea... I mean apart from trying to\n> assign a List to an array :) We still must call\n> get_aggregate_argtypes() in order to determine the final function, so\n> the code can't look exactly like you've written.\n> \n> > I wonder if we shouldn't\n> > strive to have *no* DO_AGGSPLIT_COMBINE specific logic in\n> > build_pertrans_for_aggref (except perhaps for an error check or two).\n> \n> Just so we have a hard copy to review and discuss, I think this would\n> look something like the attached.\n> \n> We do miss out on a few very small optimisations, but I don't think\n> they'll be anything we could measure. Namely\n> build_aggregate_combinefn_expr() called make_agg_arg() once and used\n> it twice instead of calling it once for each arg. 
I don't think\n> that's anything we could measure, especially in a situation where\n> two-stage aggregation is being used.\n> \n> I ended up also renaming aggtransfn to transfn_oid in\n> build_pertrans_for_aggref(). Having it called aggtranfn seems a bit\n> too close to the pg_aggregate.aggtransfn column which is confusion\n> given that we might pass it the value of the aggcombinefn column.\n\nNow that master is open for development, and you have a commit bit, are\nyou planning to go forward with this on your own?\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 24 Jul 2019 11:52:19 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Statistical aggregate functions are not working with PARTIAL\n aggregation"
},
{
"msg_contents": "On Thu, 25 Jul 2019 at 06:52, Andres Freund <andres@anarazel.de> wrote:\n> Now that master is open for development, and you have a commit bit, are\n> you planning to go forward with this on your own?\n\nI plan to, but it's not a high priority at the moment. I'd like to do\nmuch more in nodeAgg.c, TBH. It would be good to remove some code from\nnodeAgg.c and put it in the planner.\n\nI'd like to see:\n\n1) Planner doing the Aggref merging for aggregates with the same transfn etc.\n2) Planner trying to give nodeAgg.c a sorted path to work with on\nDISTINCT / ORDER BY aggs\n3) Planner providing nodeAgg.c with the order that the aggregates\nshould be evaluated in order to minimise sorting for DISTINCT / ORDER\nBY aggs.\n\nI'd take all those up on a separate thread though.\n\nIf you're in a rush to see the cleanup proposed a few months ago then\nplease feel free to take it up. It might be a while before I can get a\nchance to look at it again.\n\n-- \n David Rowley http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n\n",
"msg_date": "Thu, 25 Jul 2019 10:36:26 +1200",
"msg_from": "David Rowley <david.rowley@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Statistical aggregate functions are not working with PARTIAL\n aggregation"
},
{
"msg_contents": "Hi,\n\nOn 2019-07-25 10:36:26 +1200, David Rowley wrote:\n> I'd like to do much more in nodeAgg.c, TBH. It would be good to remove some code from\n> nodeAgg.c and put it in the planner.\n\nIndeed!\n\n\n> I'd like to see:\n> \n> 1) Planner doing the Aggref merging for aggregates with the same\n> transfn etc.\n\nMakes sense.\n\nI assume this would entail associating T_Aggref expressions with the\ncorresponding Agg at an earlier state? The whole business of having to\nprepare expression evaluation, just so ExecInitAgg() can figure out\nwhich aggregates it has to compute always has struck me as\narchitecturally bad.\n\n\n> 2) Planner trying to give nodeAgg.c a sorted path to work with on\n> DISTINCT / ORDER BY aggs\n\nThat'll have to be a best effort thing though, i.e. there'll always be\ncases where we'll have to retain the current logic (or just regress\nperformance really badly)?\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 24 Jul 2019 16:33:33 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Statistical aggregate functions are not working with PARTIAL\n aggregation"
},
{
"msg_contents": "On Thu, 25 Jul 2019 at 11:33, Andres Freund <andres@anarazel.de> wrote:\n>\n> On 2019-07-25 10:36:26 +1200, David Rowley wrote:\n> > 2) Planner trying to give nodeAgg.c a sorted path to work with on\n> > DISTINCT / ORDER BY aggs\n>\n> That'll have to be a best effort thing though, i.e. there'll always be\n> cases where we'll have to retain the current logic (or just regress\n> performance really badly)?\n\nIt's something we already do for windowing functions. We just don't do\nit for aggregates. It's slightly different since windowing functions\njust chain nodes together to evaluate multiple window definitions.\nAggregates can't/don't do that since aggregates... well, aggregate,\n(i.e the input to the 2nd one can't be aggregated by the 1st one) but\nthere's likely not much of a reason why standard_qp_callback couldn't\nchoose some pathkeys for the first AggRef with a ORDER BY / DISTINCT\nclause. nodeAgg.c would still need to know how to change the sort\norder in order to evaluate other Aggrefs in some different order.\n\nI'm not quite sure where the regression would be. nodeAgg.c must\nperform the sort, or if we give the planner some pathkeys, then worst\ncase the planner adds a Sort node. That seems equivalent to me.\nHowever, in the best case, there's a suitable index and no sorting is\nrequired anywhere. Probably then we can add combine function support\nfor the remaining built-in aggregates. There was trouble doing that in\n[1] due to some concerns about messing up results for people who rely\non the order of an aggregate without actually writing an ORDER BY.\n\n[1] https://www.postgresql.org/message-id/CAKJS1f9sx_6GTcvd6TMuZnNtCh0VhBzhX6FZqw17TgVFH-ga_A@mail.gmail.com\n\n\n-- \n David Rowley http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n\n",
"msg_date": "Thu, 25 Jul 2019 11:52:33 +1200",
"msg_from": "David Rowley <david.rowley@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Statistical aggregate functions are not working with PARTIAL\n aggregation"
}
] |
[
{
"msg_contents": "Greetings\n\nI am interested in participating in GSoD 2019 and more specifically I am\ninterested in working on the project of improving the Introductory Tutorial\nfor beginners in PostgreSQL and in databases. I have a practical and\ntheoretical background in PostgreSQL due to previous work. Thus I am not in\nlack of prior knowledge which is needed. Besides, I am about to get my\ndegree as an Electrical and Computer Engineer in a while. On the other hand\nI am a starter in the field of Technical Writing but I am willing to engage\nwith this profession. For this reason I consider GSoD and this project idea\na great opportunity for me. Do you think it is a good idea for me to work on a\nproposal since I am a starter Technical Writer? I am looking forward to\nyour opinion.\n\nThank you in advance. Regards.\nEvangelos Karatarakis\nElectrical and Computer Engineering Student\nTechnical University of Crete\nGreece",
"msg_date": "Fri, 3 May 2019 16:05:05 +0300",
"msg_from": "Evangelos Karatarakis <baggeliskapa24@gmail.com>",
"msg_from_op": true,
"msg_subject": "Google Season of Docs 2019 - Starter"
},
{
"msg_contents": "Greetings,\n\n* Evangelos Karatarakis (baggeliskapa24@gmail.com) wrote:\n> I am interested in participating in GSoD 2019 and more specifically I am\n> interested in working on the project of improving the Introductory Tutorial\n> for beginners in PostgreSQL and in databases. I have a practical and\n> theoretical background in PostgreSQL due to previous work. Thus I am not in\n> lack of prior knowledge which is needed. Besides, I am about to get my\n> degree as an Electrical and Computer Engineer in a while. On the other hand\n> I am a starter in the field of Technical Writing but I am willing to engage\n> with this profession. For this reason I consider GSoD and this project idea\n> a great opportunity for me. Do you think it is a good idea for me to work on a\n> proposal since I am a starter Technical Writer? I am looking forward to\n> your opinion.\n\nThis is covered here, I believe:\n\nhttps://developers.google.com/season-of-docs/docs/\n\nSpecifically:\n\n\"Technical writers are technical writers worldwide who’re accepted to\ntake part in this year’s Season of Docs. Applicants should be able to\ndemonstrate prior technical writing experience by submitting role\ndescriptions and work samples. See the technical writer guide and\nresponsibilities.\"\n\nThis program isn't intended, as I understand the above to say, as a\nstarter for technical writers. I recommend you ask Google directly\nregarding your interest in the program.\n\nThanks,\n\nStephen",
"msg_date": "Fri, 3 May 2019 09:31:54 -0400",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: Google Season of Docs 2019 - Starter"
}
] |
[
{
"msg_contents": "Hello\n\nError reporting in extended statistics is inconsistent -- many messages\nthat are ereport() in mvdistinct.c are elog() in the other modules. I\nthink what happened is that I changed them from elog to ereport when\ncommitting mvdistinct, but Tomas and Simon didn't follow suit when\ncommitting the other two modules. As a result, some messages that\nshould be essentially duplicates only show up once, because the elog()\nones are not marked translatable.\n\nI think this should be cleaned up, while at the same time not giving too\nmuch hassle for translators; for example, this message\n\ndependencies.c: elog(ERROR, \"invalid MVDependencies size %zd (expected at least %zd)\",\n\nshould not only be turned into an ereport(), but also the MVDependencies\npart turned into a %s. (Alternatively, we could decide I was wrong and\nturn them all back into elogs, but I obviously vote against that.)\n\n$ git grep 'elog\\|errmsg' src/backend/statistics\n\ndependencies.c: elog(ERROR, \"cache lookup failed for ordering operator for type %u\",\ndependencies.c: elog(ERROR, \"invalid MVDependencies size %zd (expected at least %zd)\",\ndependencies.c: elog(ERROR, \"invalid dependency magic %d (expected %d)\",\ndependencies.c: elog(ERROR, \"invalid dependency type %d (expected %d)\",\ndependencies.c: errmsg(\"invalid zero-length item array in MVDependencies\")));\ndependencies.c: elog(ERROR, \"invalid dependencies size %zd (expected at least %zd)\",\ndependencies.c: elog(ERROR, \"cache lookup failed for statistics object %u\", mvoid);\ndependencies.c: elog(ERROR,\ndependencies.c: errmsg(\"cannot accept a value of type %s\", \"pg_dependencies\")));\ndependencies.c: errmsg(\"cannot accept a value of type %s\", \"pg_dependencies\")));\nextended_stats.c: errmsg(\"statistics object \\\"%s.%s\\\" could not be computed for relation \\\"%s.%s\\\"\",\nextended_stats.c: elog(ERROR, \"unexpected statistics type requested: %d\", type);\nextended_stats.c: elog(ERROR, \"stxkind 
is not a 1-D char array\");\nextended_stats.c: elog(ERROR, \"cache lookup failed for statistics object %u\", statOid);\nmcv.c: elog(ERROR, \"cache lookup failed for ordering operator for type %u\",\nmcv.c: elog(ERROR, \"cache lookup failed for statistics object %u\", mvoid);\nmcv.c: elog(ERROR,\nmcv.c: elog(ERROR, \"invalid MCV size %zd (expected at least %zu)\",\nmcv.c: elog(ERROR, \"invalid MCV magic %u (expected %u)\",\nmcv.c: elog(ERROR, \"invalid MCV type %u (expected %u)\",\nmcv.c: elog(ERROR, \"invalid zero-length dimension array in MCVList\");\nmcv.c: elog(ERROR, \"invalid length (%d) dimension array in MCVList\",\nmcv.c: elog(ERROR, \"invalid zero-length item array in MCVList\");\nmcv.c: elog(ERROR, \"invalid length (%u) item array in MCVList\",\nmcv.c: elog(ERROR, \"invalid MCV size %zd (expected %zu)\",\nmcv.c: elog(ERROR, \"invalid MCV size %zd (expected %zu)\",\nmcv.c: errmsg(\"function returning record called in context \"\nmcv.c: errmsg(\"cannot accept a value of type %s\", \"pg_mcv_list\")));\nmcv.c: errmsg(\"cannot accept a value of type %s\", \"pg_mcv_list\")));\nmcv.c: elog(ERROR, \"unknown clause type: %d\", clause->type);\nmvdistinct.c: elog(ERROR, \"cache lookup failed for statistics object %u\", mvoid);\nmvdistinct.c: elog(ERROR,\nmvdistinct.c: elog(ERROR, \"invalid MVNDistinct size %zd (expected at least %zd)\",\nmvdistinct.c: errmsg(\"invalid ndistinct magic %08x (expected %08x)\",\nmvdistinct.c: errmsg(\"invalid ndistinct type %d (expected %d)\",\nmvdistinct.c: errmsg(\"invalid zero-length item array in MVNDistinct\")));\nmvdistinct.c: errmsg(\"invalid MVNDistinct size %zd (expected at least %zd)\",\nmvdistinct.c: errmsg(\"cannot accept a value of type %s\", \"pg_ndistinct\")));\nmvdistinct.c: errmsg(\"cannot accept a value of type %s\", \"pg_ndistinct\")));\nmvdistinct.c: elog(ERROR, \"cache lookup failed for ordering operator for type %u\",\n\n-- \nÁlvaro Herrera Developer, https://www.PostgreSQL.org/\n\n\n",
"msg_date": "Fri, 3 May 2019 11:44:04 -0400",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "error messages in extended statistics"
},
{
"msg_contents": "Alvaro Herrera <alvherre@2ndquadrant.com> writes:\n> Error reporting in extended statistics is inconsistent -- many messages\n> that are ereport() in mvdistinct.c are elog() in the other modules.\n> ...\n> I think this should be cleaned up, while at the same time not giving too\n> much hassle for translators; for example, this message\n> dependencies.c: elog(ERROR, \"invalid MVDependencies size %zd (expected at least %zd)\",\n> should not only be turned into an ereport(), but also the MVDependencies\n> part turned into a %s. (Alternatively, we could decide I was wrong and\n> turn them all back into elogs, but I obviously vote against that.)\n\nFWIW, I'd vote the other way: that seems like a clear \"internal error\",\nso making translators deal with it is just make-work. It should be an\nelog. If there's a reasonably plausible way for a user to trigger an\nerror condition, then yes ereport, but if we're reporting situations\nthat couldn't happen without a server bug then elog seems fine.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 03 May 2019 12:21:36 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: error messages in extended statistics"
},
{
"msg_contents": "On Fri, May 03, 2019 at 12:21:36PM -0400, Tom Lane wrote:\n>Alvaro Herrera <alvherre@2ndquadrant.com> writes:\n>> Error reporting in extended statistics is inconsistent -- many messages\n>> that are ereport() in mvdistinct.c are elog() in the other modules.\n>> ...\n>> I think this should be cleaned up, while at the same time not giving too\n>> much hassle for translators; for example, this message\n>> dependencies.c: elog(ERROR, \"invalid MVDependencies size %zd (expected at least %zd)\",\n>> should not only be turned into an ereport(), but also the MVDependencies\n>> part turned into a %s. (Alternatively, we could decide I was wrong and\n>> turn them all back into elogs, but I obviously vote against that.)\n>\n>FWIW, I'd vote the other way: that seems like a clear \"internal error\",\n>so making translators deal with it is just make-work. It should be an\n>elog. If there's a reasonably plausible way for a user to trigger an\n>error condition, then yes ereport, but if we're reporting situations\n>that couldn't happen without a server bug then elog seems fine.\n>\n\nYeah, I agree. Most of (perhaps all) those errors are internal errors,\nand thus should be elog. I'll take care of cleaning this up a bit.\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services \n\n\n",
"msg_date": "Fri, 3 May 2019 21:42:17 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: error messages in extended statistics"
},
{
"msg_contents": "On Fri, May 03, 2019 at 09:42:17PM +0200, Tomas Vondra wrote:\n>On Fri, May 03, 2019 at 12:21:36PM -0400, Tom Lane wrote:\n>>Alvaro Herrera <alvherre@2ndquadrant.com> writes:\n>>>Error reporting in extended statistics is inconsistent -- many messages\n>>>that are ereport() in mvdistinct.c are elog() in the other modules.\n>>>...\n>>>I think this should be cleaned up, while at the same time not giving too\n>>>much hassle for translators; for example, this message\n>>>dependencies.c: elog(ERROR, \"invalid MVDependencies size %zd (expected at least %zd)\",\n>>>should not only be turned into an ereport(), but also the MVDependencies\n>>>part turned into a %s. (Alternatively, we could decide I was wrong and\n>>>turn them all back into elogs, but I obviously vote against that.)\n>>\n>>FWIW, I'd vote the other way: that seems like a clear \"internal error\",\n>>so making translators deal with it is just make-work. It should be an\n>>elog. If there's a reasonably plausible way for a user to trigger an\n>>error condition, then yes ereport, but if we're reporting situations\n>>that couldn't happen without a server bug then elog seems fine.\n>>\n>\n>Yeah, I agree. Most of (perhaps all) those errors are internal errors,\n>and thus should be elog. I'll take care of cleaning this up a bit.\n>\n\nOK, so here is a patch, using elog() for all places except for the\ninput function, where we simply report we don't accept those values.\n\nI agree those are internal errors, usually meaning the statistics object\nwas either corrupted or there's a bug in how it's built/serialized.\nUsers should not be able to trigger those cases (the only thing I can\nthink of is sending a bogus value through send/recv functions, that\nsimply do byteaout/byteain).\n\nNow, what about backpatch? It's a small tweak, but it makes the life a\nbit easier for translators ... 
\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services",
"msg_date": "Sun, 5 May 2019 01:42:33 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: error messages in extended statistics"
},
{
"msg_contents": "On 2019-May-05, Tomas Vondra wrote:\n\n> OK, so here is a patch, using elog() for all places except for the\n> input function, where we simply report we don't accept those values.\n\nHmm, does this actually work? I didn't know that elog() supported\nerrcode()/errmsg()/etc. I thought the macro definition didn't allow for\nthat.\n\nAnyway, since the messages are still passed with errmsg(), they would\nstill end up in the message catalog, so this patch doesn't help my case.\nI would suggest that instead of changing ereport to elog, you should\nchange errmsg() to errmsg_internal(). That prevents the translation\nmarking, and achieves the desired effect. (You can verify by running\n\"make update-po\" in src/backend/ and seeing that the msgid no longer\nappears in postgres.pot).\n\n> Now, what about backpatch? It's a small tweak, but it makes the life a\n> bit easier for translators ...\n\n+1 for backpatching.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Wed, 15 May 2019 12:17:29 -0400",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: error messages in extended statistics"
},
{
"msg_contents": "On Wed, May 15, 2019 at 12:17:29PM -0400, Alvaro Herrera wrote:\n>On 2019-May-05, Tomas Vondra wrote:\n>\n>> OK, so here is a patch, using elog() for all places except for the\n>> input function, where we simply report we don't accept those values.\n>\n>Hmm, does this actually work? I didn't know that elog() supported\n>errcode()/errmsg()/etc. I thought the macro definition didn't allow for\n>that.\n>\n\nD'oh, it probably does not. I might not have tried to compile it before\nsending it to the mailing list, not sure ... :-(\n\n>Anyway, since the messages are still passed with errmsg(), they would\n>still end up in the message catalog, so this patch doesn't help my case.\n>I would suggest that instead of changing ereport to elog, you should\n>change errmsg() to errmsg_internal(). That prevents the translation\n>marking, and achieves the desired effect. (You can verify by running\n>\"make update-po\" in src/backend/ and seeing that the msgid no longer\n>appears in postgres.pot).\n>\n>> Now, what about backpatch? It's a small tweak, but it makes the life a\n>> bit easier for translators ...\n>\n>+1 for backpatching.\n>\n\nOK.\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n\n",
"msg_date": "Wed, 15 May 2019 18:35:47 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: error messages in extended statistics"
},
{
"msg_contents": "On Wed, May 15, 2019 at 06:35:47PM +0200, Tomas Vondra wrote:\n>On Wed, May 15, 2019 at 12:17:29PM -0400, Alvaro Herrera wrote:\n>>On 2019-May-05, Tomas Vondra wrote:\n>>\n>>>OK, so here is a patch, using elog() for all places except for the\n>>>input function, where we simply report we don't accept those values.\n>>\n>>Hmm, does this actually work? I didn't know that elog() supported\n>>errcode()/errmsg()/etc. I thought the macro definition didn't allow for\n>>that.\n>>\n>\n>D'oh, it probably does not. I might not have tried to compile it before\n>sending it to the mailing list, not sure ... :-(\n>\n>>Anyway, since the messages are still passed with errmsg(), they would\n>>still end up in the message catalog, so this patch doesn't help my case.\n>>I would suggest that instead of changing ereport to elog, you should\n>>change errmsg() to errmsg_internal(). That prevents the translation\n>>marking, and achieves the desired effect. (You can verify by running\n>>\"make update-po\" in src/backend/ and seeing that the msgid no longer\n>>appears in postgres.pot).\n>>\n>>>Now, what about backpatch? It's a small tweak, but it makes the life a\n>>>bit easier for translators ...\n>>\n>>+1 for backpatching.\n>>\n\nPushed and backpatched, changing most places to elog().\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n\n",
"msg_date": "Thu, 30 May 2019 17:10:29 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: error messages in extended statistics"
},
{
"msg_contents": "On 2019-May-30, Tomas Vondra wrote:\n\n> Pushed and backpatched, changing most places to elog().\n\nThanks :-)\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Thu, 30 May 2019 12:21:59 -0400",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: error messages in extended statistics"
}
] |
[
{
"msg_contents": "I spent a significant chunk of today burning through roughly 2^31 XIDs\njust to see what would happen. My test setup consisted of\nautovacuum=off plus a trivial prepared transaction plus a lot of this:\n\n+ BeginInternalSubTransaction(\"txid_burn\");\n+ (void) GetCurrentTransactionId();\n+ ReleaseCurrentSubTransaction();\n\nObservations:\n\n1. As soon as the XID of the prepared transaction gets old enough to\ntrigger autovacuum, autovacuum goes nuts. It vacuums everything in\nthe database over and over again, but that does no good, because the\nprepared transaction holds back the XID horizon. There are previous\nreports of this and related problems, such as this one from 2014:\n\nhttp://postgr.es/m/CAMkU=1yE4YyCC00W_GcNoOZ4X2qxF7x5DUAR_kMt-Ta=YPyFPQ@mail.gmail.com\n\nThat thread got hung up on the question of prioritization: if there's\na lot of stuff that needs to be autovacuumed, which stuff should we do\nfirst? But I think that we overlooked a related issue, which is that\nthere's no point in autovacuuming a table in the first place if doing\nso won't help advance relfrozenxid and/or relminmxid. The\nautovacuum launcher will happily compute a force limit that is newer\nthan OldestXmin and decide on that basis to route a worker to a\nparticular database, and that worker will then compute a force limit\nthat is newer than OldestXmin, examine relations in that database and\ndecide to vacuum them, and then the vacuum operation itself will\ndecide on a similar basis that it's going to be an aggressive vacuum.\nBut we can't actually remove any tuples that are newer than\nOldestXmin, so we have no actual hope of accomplishing anything by\nthat aggressive vacuum. 
I am not sure exactly how to fix this,\nbecause the calculation we use to determine the XID that can be used\nto vacuum a specific table is pretty complex; how can the postmaster\nknow whether it's going to be able to make any progress in *any* table\nin some database to which it's not even connected? But it's surely\ncrazy to just keep doing something over and over that can't possibly\nwork.\n\n2. Once you get to the point where you start to emit errors when\nattempting to assign an XID, you can still run plain old VACUUM\nbecause it doesn't consume an XID ... except that if it tries to\ntruncate the relation, then it will take AccessExclusiveLock, which\nhas to be logged, which forces an XID assignment, which makes VACUUM\nfail. So if you burn through XIDs until the system gets to this\npoint, and then you roll back the prepared transaction that caused the\nproblem in the first place, autovacuum sits there trying to vacuum\ntables in a tight loop and fails over and over again as soon as hits a\ntable that it thinks needs to be truncated. This seems really lame,\nand also easily fixed.\n\nAttached is a patch that disables vacuum truncation if xidWarnLimit\nhas been reached. With this patch, in my testing, autovacuum is able\nto recover the system once the prepared transaction has been rolled\nback. Without this patch, not only does that not happen, but if you\nhad a database with enough relations that need truncation, you could\nconceivably cause XID wraparound just from running a database-wide\nVACUUM, the one tool you have available to avoid XID wraparound. I\nthink that this amounts to a back-patchable bug fix.\n\n(One could argue that truncation should be disabled sooner than this,\nlike when we've exceed autovacuum_freeze_max_age, or later than this,\nlike when we hit xidStopLimit, but I think xidWarnLimit is probably\nthe best compromise.)\n\n3. 
The message you get when you hit xidStopLimit seems like bad advice to me:\n\nERROR: database is not accepting commands to avoid wraparound data\nloss in database \"%s\"\nHINT: Stop the postmaster and vacuum that database in single-user mode.\nYou might also need to commit or roll back old prepared transactions,\nor drop stale replication slots.\n\nWhy do we want people to stop the postmaster and vacuum that database\nin single user mode? Why not just run VACUUM in multi-user mode, or\nlet autovacuum take care of the problem? Granted, if VACUUM is going\nto fail in multi-user mode, and if switching to single-user mode is\ngoing to make it succeed, then it's a good suggestion. But it seems\nthat it doesn't fail in multi-user mode, unless it tries to truncate\nsomething, which is a bug we should fix. Telling people to go to\nsingle-user mode where they can continue to assign XIDs even though\nthey have almost no XIDs left seems extremely dangerous, actually.\n\nAlso, I think that old prepared transactions and stale replication\nslots should be emphasized more prominently. Maybe something like:\n\nHINT: Commit or roll back old prepared transactions, drop stale\nreplication slots, or kill long-running sessions.\nEnsure that autovacuum is progressing, or run a manual database-wide VACUUM.\n\nThoughts?\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
"msg_date": "Fri, 3 May 2019 16:26:46 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "improving wraparound behavior"
},
{
"msg_contents": "Hi,\n\nOn 2019-05-03 16:26:46 -0400, Robert Haas wrote:\n> 2. Once you get to the point where you start to emit errors when\n> attempting to assign an XID, you can still run plain old VACUUM\n> because it doesn't consume an XID ... except that if it tries to\n> truncate the relation, then it will take AccessExclusiveLock, which\n> has to be logged, which forces an XID assignment, which makes VACUUM\n> fail. So if you burn through XIDs until the system gets to this\n> point, and then you roll back the prepared transaction that caused the\n> problem in the first place, autovacuum sits there trying to vacuum\n> tables in a tight loop and fails over and over again as soon as hits a\n> table that it thinks needs to be truncated. This seems really lame,\n> and also easily fixed.\n> \n> Attached is a patch that disables vacuum truncation if xidWarnLimit\n> has been reached. With this patch, in my testing, autovacuum is able\n> to recover the system once the prepared transaction has been rolled\n> back. Without this patch, not only does that not happen, but if you\n> had a database with enough relations that need truncation, you could\n> conceivably cause XID wraparound just from running a database-wide\n> VACUUM, the one tool you have available to avoid XID wraparound. I\n> think that this amounts to a back-patchable bug fix.\n> \n> (One could argue that truncation should be disabled sooner than this,\n> like when we've exceed autovacuum_freeze_max_age, or later than this,\n> like when we hit xidStopLimit, but I think xidWarnLimit is probably\n> the best compromise.)\n\nI'd actually say the proper fix would be to instead move the truncation\nto *after* finishing updating relfrozenxid etc. If we truncate, the\nadditional cost of another in-place pg_class update, to update relpages,\nis basically insignificant. 
And the risk of errors, or being cancelled,\nduring truncation is much higher than before (due to the AEL).\n\n\n\n> Also, I think that old prepared transactions and stale replication\n> slots should be emphasized more prominently. Maybe something like:\n> \n> HINT: Commit or roll back old prepared transactions, drop stale\n> replication slots, or kill long-running sessions.\n> Ensure that autovacuum is progressing, or run a manual database-wide VACUUM.\n\nI think it'd be good to instead compute what the actual problem is. It'd\nnot be particularly hard to show some of these in the errdetail:\n\n1) the xid horizon (xid + age) of the problematic database; potentially,\n if connected to that database, additionally compute what the oldest\n xid is (although that's computationally potentially too expensive)\n2) the xid horizon (xid + age) due to prepared transactions, and the\n oldest transaction's name\n3) the xid horizon (xid + age) due to replication slot, and the \"oldest\"\n slot's name\n4) the xid horizon (xid + age) and pid for the connection with the\n oldest snapshot.\n\nI think that'd allow users to much more easily pinpoint what's going on.\n\nIn fact, I think we probably should additionally add a function that can\ndisplay the above. That'd make it much easier to write monitoring\nqueries.\n\n\nIMO we also ought to compute the *actual* relfrozenxid/relminmxid for a\ntable. I.e. the oldest xid actually present. It's pretty common for most\ntables to have effective horizons that are much newer than what\nGetOldestXmin()/vacuum_set_xid_limits() can return. Obviously we can do\nso only when scanning all non-frozen pages. But being able to record\n\"more aggressive\" horizons would often save unnecessary work. And it\nought to not be hard. I think especially for regular non-freeze,\nnon-wraparound vacuums that'll often result in a much newer relfrozenxid\n(as we'll otherwise just have GetOldestXmin() - vacuum_freeze_min_age).\n\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Fri, 3 May 2019 13:47:27 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: improving wraparound behavior"
},
{
"msg_contents": "On Fri, May 3, 2019 at 4:47 PM Andres Freund <andres@anarazel.de> wrote:\n> I'd actually say the proper fix would be to instead move the truncation\n> to *after* finishing updating relfrozenxid etc. If we truncate, the\n> additional cost of another in-place pg_class update, to update relpages,\n> is basically insignificant. And the risk of errors, or being cancelled,\n> during truncation is much higher than before (due to the AEL).\n\nThat would prevent the ERROR from impeding relfrozenxid advancement,\nbut it does not prevent the error itself, nor the XID consumption. If\nautovacuum is hitting that ERROR, it will spew errors in the log but\nsucceed in advancing relfrozenxid anyway. I don't think that's as\nnice as the behavior I proposed, but it's certainly better than the\nstatus quo. If you are hitting that error due to a manual VACUUM,\nunder your proposal, you'll stop the manual VACUUM as soon as you hit\nthe first table where this happens, which is not what you want.\nYou'll also keep consuming XIDs, which is not what you want either,\nespecially if you are in single-user mode because the number of\nremaining XIDs is less than a million.\n\n> > Also, I think that old prepared transactions and stale replication\n> > slots should be emphasized more prominently. Maybe something like:\n> >\n> > HINT: Commit or roll back old prepared transactions, drop stale\n> > replication slots, or kill long-running sessions.\n> > Ensure that autovacuum is progressing, or run a manual database-wide VACUUM.\n>\n> I think it'd be good to instead compute what the actual problem is. 
It'd\n> not be particularly hard to show some of these in the errdetail:\n>\n> 1) the xid horizon (xid + age) of the problematic database; potentially,\n> if connected to that database, additionally compute what the oldest\n> xid is (although that's computationally potentially too expensive)\n> 2) the xid horizon (xid + age) due to prepared transactions, and the\n> oldest transaction's name\n> 3) the xid horizon (xid + age) due to replication slot, and the \"oldest\"\n> slot's name\n> 4) the xid horizon (xid + age) and pid for the connection with the\n> oldest snapshot.\n>\n> I think that'd allow users to much more easily pinpoint what's going on.\n>\n> In fact, I think we probably should additionally add a function that can\n> display the above. That'd make it much easier to write monitoring\n> queries.\n\nI think that the error and hint that you get from\nGetNewTransactionId() has to be something that we can generate very\nquickly, without doing anything that might hang on cluster with lots\nof databases or lots of relations; but if there's useful detail we can\ndisplay there, that's good. With a view, it's more OK if it takes a\nlong time on a big cluster.\n\n> IMO we also ought to compute the *actual* relfrozenxid/relminmxid for a\n> table. I.e. the oldest xid actually present. It's pretty common for most\n> tables to have effective horizons that are much newer than what\n> GetOldestXmin()/vacuum_set_xid_limits() can return. Obviously we can do\n> so only when scanning all non-frozen pages. But being able to record\n> \"more aggressive\" horizons would often save unnecessary work. And it\n> ought to not be hard. I think especially for regular non-freeze,\n> non-wraparound vacuums that'll often result in a much newer relfrozenxid\n> (as we'll otherwise just have GetOldestXmin() - vacuum_freeze_min_age).\n\nSure, that would make sense.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Fri, 3 May 2019 18:42:35 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: improving wraparound behavior"
},
{
"msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> I spent a significant chunk of today burning through roughly 2^31 XIDs\n> just to see what would happen. ...\n\n> 2. Once you get to the point where you start to emit errors when\n> attempting to assign an XID, you can still run plain old VACUUM\n> because it doesn't consume an XID ... except that if it tries to\n> truncate the relation, then it will take AccessExclusiveLock, which\n> has to be logged, which forces an XID assignment, which makes VACUUM\n> fail.\n\nYeah. I tripped over that earlier this week in connection with the\nREINDEX business: taking an AEL only forces XID assignment when\nwal_level is above \"minimal\", so it's easy to come to the wrong\nconclusions depending on your test environment. I suspect that\nprevious testing of wraparound behavior (yes there has been some)\ndidn't see this effect because the default wal_level didn't use to\ncause it to happen. But anyway, it's there now and I agree we'd\nbetter do something about it.\n\nMy brain is too fried from release-note-writing to have any trustworthy\nopinion right now about whether your patch is the best way.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 03 May 2019 18:46:10 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: improving wraparound behavior"
},
{
"msg_contents": "Hi,\n\nOn 2019-05-03 18:42:35 -0400, Robert Haas wrote:\n> On Fri, May 3, 2019 at 4:47 PM Andres Freund <andres@anarazel.de> wrote:\n> > I'd actually say the proper fix would be to instead move the truncation\n> > to *after* finishing updating relfrozenxid etc. If we truncate, the\n> > additional cost of another in-place pg_class update, to update relpages,\n> > is basically insignificant. And the risk of errors, or being cancelled,\n> > during truncation is much higher than before (due to the AEL).\n> \n> That would prevent the ERROR from impeding relfrozenxid advancement,\n> but it does not prevent the error itself, nor the XID consumption. If\n> autovacuum is hitting that ERROR, it will spew errors in the log but\n> succeed in advancing relfrozenxid anyway. I don't think that's as\n> nice as the behavior I proposed, but it's certainly better than the\n> status quo. If you are hitting that error due to a manual VACUUM,\n> under your proposal, you'll stop the manual VACUUM as soon as you hit\n> the first table where this happens, which is not what you want.\n> You'll also keep consuming XIDs, which is not what you want either,\n> especially if you are in single-user mode because the number of\n> remaining XIDs is less that a million.\n\nPart of my opposition to just disabling it when close to a wraparound,\nis that it still allows to get close to wraparound because of truncation\nissues. IMO preventing getting closer to wraparound is more important\nthan making it more \"comfortable\" to be in a wraparound situation.\n\nThe second problem I see is that even somebody close to a wraparound\nmight have an urgent need to free up some space. So I'm a bit wary of\njust disabling it.\n\nWonder if there's a reasonable way that'd allow to do the WAL logging\nfor the truncation without using an xid. One way would be to just get\nrid of the lock on the primary as previously discussed. 
But we could\nalso drive the locking through the WAL records that do the actual\ntruncation - then there'd not be a need for an xid. It's probably not an\nentirely trivial change, but I don't think it'd be too bad?\n\n\n> > > Also, I think that old prepared transactions and stale replication\n> > > slots should be emphasized more prominently. Maybe something like:\n> > >\n> > > HINT: Commit or roll back old prepared transactions, drop stale\n> > > replication slots, or kill long-running sessions.\n> > > Ensure that autovacuum is progressing, or run a manual database-wide VACUUM.\n> >\n> > I think it'd be good to instead compute what the actual problem is. It'd\n> > not be particularly hard to show some of these in the errdetail:\n> >\n> > 1) the xid horizon (xid + age) of the problematic database; potentially,\n> > if connected to that database, additionally compute what the oldest\n> > xid is (although that's computationally potentially too expensive)\n\ns/oldest xid/oldest relfrozenxid/\n\n\n> > 2) the xid horizon (xid + age) due to prepared transactions, and the\n> > oldest transaction's name\n> > 3) the xid horizon (xid + age) due to replication slot, and the \"oldest\"\n> > slot's name\n> > 4) the xid horizon (xid + age) and pid for the connection with the\n> > oldest snapshot.\n> >\n> > I think that'd allow users to much more easily pinpoint what's going on.\n> >\n> > In fact, I think we probably should additionally add a function that can\n> > display the above. That'd make it much easier to write monitoring\n> > queries.\n> \n> I think that the error and hint that you get from\n> GetNewTransactionId() has to be something that we can generate very\n> quickly, without doing anything that might hang on cluster with lots\n> of databases or lots of relations; but if there's useful detail we can\n> display there, that's good. With a view, it's more OK if it takes a\n> long time on a big cluster.\n\nYea, I agree it has to be reasonably fast. 
But all of the above, with\nthe exception of the optional \"oldest table\", should be cheap enough to\ncompute. Sure, a scan through PGXACT isn't cheap, but in comparison to\nan ereport() and an impending shutdown it's peanuts. In contrast to\nscanning a pg_class that could be many many gigabytes.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Fri, 3 May 2019 17:45:16 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: improving wraparound behavior"
},
{
"msg_contents": "On Fri, May 3, 2019 at 8:45 PM Andres Freund <andres@anarazel.de> wrote:\n> Part of my opposition to just disabling it when close to a wraparound,\n> is that it still allows to get close to wraparound because of truncation\n> issues.\n\nSure ... it would definitely be better if vacuum didn't consume XIDs\nwhen it truncates. On the other hand, only a minority of VACUUM\noperations will truncate, so I don't think there's really a big\nproblem in practice here.\n\n> IMO preventing getting closer to wraparound is more important\n> than making it more \"comfortable\" to be in a wraparound situation.\n\nI think that's a false dichotomy. It's impossible to create a\nsituation where no user ever gets into a wraparound situation, unless\nwe're prepared to do things like automatically drop replication slots\nand automatically roll back (or commit?) prepared transactions. So,\nwhile it is good to prevent a user from getting into a wraparound\nsituation where we can, it is ALSO good to make it easy to recover\nfrom those situations as painlessly as possible when they do happen.\n\n> The second problem I see is that even somebody close to a wraparound\n> might have an urgent need to free up some space. So I'm a bit wary of\n> just disabling it.\n\nI would find that ... really surprising. If you have < 11 million\nXIDs left before your data gets eaten by a grue, and you file a bug\nreport complaining that vacuum won't truncate your tables until you\ncatch up on vacuuming a bit, I am prepared to offer you no sympathy at\nall. I mean, I'm not going to say that we couldn't invent more\ncomplicated behavior, at least on master, like making the new VACUUM\n(TRUNCATE) object ternary-valued: default is on when you have more\nthan 11 million XIDs remaining and off otherwise, but you can override\neither value by saying VACUUM (TRUNCATE { ON | OFF }). But is that\nreally a thing? 
People have less than 11 million XIDs left and\nthey're like \"forget anti-wraparound, I want to truncate away some\nempty pages\"?\n\n> Wonder if there's a reasonable way that'd allow to do the WAL logging\n> for the truncation without using an xid. One way would be to just get\n> rid of the lock on the primary as previously discussed. But we could\n> also drive the locking through the WAL records that do the actual\n> truncation - then there'd not be a need for an xid. It's probably not a\n> entirely trivial change, but I don't think it'd be too bad?\n\nBeats me. For me, this is just a bug, not an excuse to redesign\nvacuum truncation. Before Hot Standby, when you got into severe\nwraparound trouble, you could vacuum all your tables without consuming\nany XIDs. Now you can't. That's bad, and I think we should come up\nwith some kind of back-patchable solution to that problem.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Fri, 3 May 2019 21:36:24 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: improving wraparound behavior"
},
{
"msg_contents": "Greetings,\n\n* Robert Haas (robertmhaas@gmail.com) wrote:\n> I am not sure exactly how to fix this,\n> because the calculation we use to determine the XID that can be used\n> to vacuum a specific table is pretty complex; how can the postmaster\n> know whether it's going to be able to make any progress in *any* table\n> in some database to which it's not even connected? But it's surely\n> crazy to just keep doing something over and over that can't possibly\n> work.\n\nI definitely agree that it's foolish to keep doing something that isn't\ngoing to work, and it seems like a pretty large part of the issue here\nis that we don't have enough information to be more intelligent because\nwe aren't connected to the database that needs the work to be done.\n\nNow, presuming we're talking about 'new feature work' here to try and\naddress this, and not something that we think we can back-patch, I had\nanother thought.\n\nI certainly get that having lots of extra processes around can be a\nwaste of resources... but I don't recall a lot of people complaining\nabout the autovacuum launcher process using up lots of resources\nunnecessarily.\n\nPerhaps we should consider having the 'global' autovacuum launcher, when\nit's decided that a database needs work to be done, launch a 'per-DB'\nlauncher which manages launching autovacuum processes for that database?\nIf the launcher is still running then it's because there's still work to\nbe done on that database and the 'global' autovacuum launcher can skip\nit. 
If the 'per-DB' launcher runs out of things to do, and the database\nit's working on is no longer in a danger zone, then it exits.\n\nThere are certainly some other variations on this idea and I don't know\nthat it's really better than keeping more information in shared\nmemory or something else, but it seems like part of the issue is that\nthe thing firing off the processes hasn't got enough info to do so\nintelligently and maybe we could fix that by having per-DB launchers\nthat are actually connected to a DB.\n\n> 2. Once you get to the point where you start to emit errors when\n> attempting to assign an XID, you can still run plain old VACUUM\n> because it doesn't consume an XID ... except that if it tries to\n> truncate the relation, then it will take AccessExclusiveLock, which\n> has to be logged, which forces an XID assignment, which makes VACUUM\n> fail. So if you burn through XIDs until the system gets to this\n> point, and then you roll back the prepared transaction that caused the\n> problem in the first place, autovacuum sits there trying to vacuum\n> tables in a tight loop and fails over and over again as soon as hits a\n> table that it thinks needs to be truncated. This seems really lame,\n> and also easily fixed.\n> \n> Attached is a patch that disables vacuum truncation if xidWarnLimit\n> has been reached. With this patch, in my testing, autovacuum is able\n> to recover the system once the prepared transaction has been rolled\n> back. Without this patch, not only does that not happen, but if you\n> had a database with enough relations that need truncation, you could\n> conceivably cause XID wraparound just from running a database-wide\n> VACUUM, the one tool you have available to avoid XID wraparound. I\n> think that this amounts to a back-patchable bug fix.\n\nSounds reasonable to me but I've not looked at the patch at all.\n\n> 3. 
The message you get when you hit xidStopLimit seems like bad advice to me:\n> \n> ERROR: database is not accepting commands to avoid wraparound data\n> loss in database \"%s\"\n> HINT: Stop the postmaster and vacuum that database in single-user mode.\n> You might also need to commit or roll back old prepared transactions,\n> or drop stale replication slots.\n> \n> Why do we want people to stop the postmaster and vacuum that database\n> in single user mode? Why not just run VACUUM in multi-user mode, or\n> let autovacuum take care of the problem? Granted, if VACUUM is going\n> to fail in multi-user mode, and if switching to single-user mode is\n> going to make it succeed, then it's a good suggestion. But it seems\n> that it doesn't fail in multi-user mode, unless it tries to truncate\n> something, which is a bug we should fix. Telling people to go to\n> single-user mode where they can continue to assign XIDs even though\n> they have almost no XIDs left seems extremely dangerous, actually.\n> \n> Also, I think that old prepared transactions and stale replication\n> slots should be emphasized more prominently. Maybe something like:\n> \n> HINT: Commit or roll back old prepared transactions, drop stale\n> replication slots, or kill long-running sessions.\n> Ensure that autovacuum is progressing, or run a manual database-wide VACUUM.\n\nI agree that a better message would definitely be good and that\nrecommending single-user isn't a terribly useful thing to do. I might\nhave misunderstood it, but it sounded like Andres was proposing a new\nfunction which would basically tell you what's holding back the xid\nhorizon and that sounds fantastic and would be great to include in this\nmessage, if possible.\n\nAs in:\n\nHINT: Run the function pg_what_is_holding_xmin_back() to identify what\nis preventing autovacuum from progressing and address it.\n\nOr some such.\n\nThanks,\n\nStephen",
"msg_date": "Fri, 3 May 2019 22:03:18 -0400",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: improving wraparound behavior"
},
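The message above proposes a `pg_what_is_holding_xmin_back()` function, which does not exist; the name is hypothetical. A rough approximation of what such a report could draw on today, using only existing catalog views (`pg_prepared_xacts`, `pg_replication_slots`, `pg_stat_activity`) and the `age()` function, might look like this sketch:

```sql
-- Hypothetical monitoring query: list the things that can hold back the
-- xmin horizon, oldest first.  Not a built-in; a sketch using real views.
SELECT 'prepared xact' AS holder, gid AS name, age(transaction) AS xid_age
FROM pg_prepared_xacts
UNION ALL
SELECT 'replication slot', slot_name::text, age(xmin)
FROM pg_replication_slots
WHERE xmin IS NOT NULL
UNION ALL
SELECT 'backend', pid::text, age(backend_xmin)
FROM pg_stat_activity
WHERE backend_xmin IS NOT NULL
ORDER BY xid_age DESC
LIMIT 10;
```

Each row's `xid_age` says how far that holder lags behind the current XID; the largest value identifies what is pinning the global horizon, which is the information the proposed HINT would point users toward.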
{
"msg_contents": "Hi,\n\nOn 2019-05-03 21:36:24 -0400, Robert Haas wrote:\n> On Fri, May 3, 2019 at 8:45 PM Andres Freund <andres@anarazel.de> wrote:\n> > Part of my opposition to just disabling it when close to a wraparound,\n> > is that it still allows to get close to wraparound because of truncation\n> > issues.\n> \n> Sure ... it would definitely be better if vacuum didn't consume XIDs\n> when it truncates. On the other hand, only a minority of VACUUM\n> operations will truncate, so I don't think there's really a big\n> problem in practice here.\n\nI've seen a number of out-of-xid shutdowns precisely because of\ntruncations. Initially because autovacuum commits suicide because\nsomebody else wants a conflicting lock, later because there's so much\ndead space that people kill (auto)vacuum to get rid of the exclusive\nlocks.\n\n\n> > IMO preventing getting closer to wraparound is more important\n> > than making it more \"comfortable\" to be in a wraparound situation.\n> \n> I think that's a false dichotomy. It's impossible to create a\n> situation where no user ever gets into a wraparound situation, unless\n> we're prepared to do things like automatically drop replication slots\n> and automatically roll back (or commit?) prepared transactions. So,\n> while it is good to prevent a user from getting into a wraparound\n> situation where we can, it is ALSO good to make it easy to recover\n> from those situations as painlessly as possible when they do happen.\n\nSure, but I've seen a number of real-world cases of xid wraparound\nshutdowns related to truncations, and no real world problem due to\ntruncations assigning an xid.\n\n\n> > The second problem I see is that even somebody close to a wraparound\n> > might have an urgent need to free up some space. So I'm a bit wary of\n> > just disabling it.\n> \n> I would find that ... really surprising. 
If you have < 11 million\n> XIDs left before your data gets eaten by a grue, and you file a bug\n> report complaining that vacuum won't truncate your tables until you\n> catch up on vacuuming a bit, I am prepared to offer you no sympathy at\n> all.\n\nI've seen wraparound issues triggered by auto-vacuum generating so much\nWAL that the system ran out of space, crash-restart, repeat. And being\nunable to reclaim space could make that even harder to tackle.\n\n\n> > Wonder if there's a reasonable way that'd allow to do the WAL logging\n> > for the truncation without using an xid. One way would be to just get\n> > rid of the lock on the primary as previously discussed. But we could\n> > also drive the locking through the WAL records that do the actual\n> > truncation - then there'd not be a need for an xid. It's probably not a\n> > entirely trivial change, but I don't think it'd be too bad?\n> \n> Beats me. For me, this is just a bug, not an excuse to redesign\n> vacuum truncation. Before Hot Standby, when you got into severe\n> wraparound trouble, you could vacuum all your tables without consuming\n> any XIDs. Now you can't. That's bad, and I think we should come up\n> with some kind of back-patchable solution to that problem.\n\nI agree we need to do at least a minimal version that can be\nbackpatched.\n\nI don't think we necessarily need a new WAL record for what I'm\ndescribing above (as XLOG_SMGR_TRUNCATE already carries information\nabout which forks are truncated, we could just have it acquire the\nexclusive lock), and I don't think we'd need a ton of code for eliding\nthe WAL logged lock either. Think the issue with backpatching would be\nthat we can't remove the logged lock, without creating hazards for\nstandbys running older versions of postgres.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Fri, 3 May 2019 19:06:20 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: improving wraparound behavior"
},
{
"msg_contents": "Greetings,\n\n* Andres Freund (andres@anarazel.de) wrote:\n> I don't think we necessarily need a new WAL record for what I'm\n> describing above (as XLOG_SMGR_TRUNCATE already carries information\n> about which forks are truncated, we could just have it acquire the\n> exclusive lock), and I don't think we'd need a ton of code for eliding\n> the WAL logged lock either. Think the issue with backpatching would be\n> that we can't remove the logged lock, without creating hazards for\n> standbys running older versions of postgres.\n\nWhile it's pretty rare, I don't believe this would be the only case of\n\"you need to upgrade your replicas before your primary\" due to changes\nin WAL. Of course, we need to make sure that we actually figure out\nthat the WAL being sent is something that the replica doesn't know how\nto properly handle because it's from a newer primary; we can't simply do\nthe wrong thing in that case.\n\nThanks,\n\nStephen",
"msg_date": "Fri, 3 May 2019 22:11:08 -0400",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: improving wraparound behavior"
},
{
"msg_contents": "Hi,\n\nOn 2019-05-03 22:11:08 -0400, Stephen Frost wrote:\n> * Andres Freund (andres@anarazel.de) wrote:\n> > I don't think we necessarily need a new WAL record for what I'm\n> > describing above (as XLOG_SMGR_TRUNCATE already carries information\n> > about which forks are truncated, we could just have it acquire the\n> > exclusive lock), and I don't think we'd need a ton of code for eliding\n> > the WAL logged lock either. Think the issue with backpatching would be\n> > that we can't remove the logged lock, without creating hazards for\n> > standbys running older versions of postgres.\n> \n> While it's pretty rare, I don't believe this would be the only case of\n> \"you need to upgrade your replicas before your primary\" due to changes\n> in WAL. Of course, we need to make sure that we actually figure out\n> that the WAL being sent is something that the replica doesn't know how\n> to properly handle because it's from a newer primary; we can't simply do\n> the wrong thing in that case.\n\nDon't think this is severe enough to warrant a step like this. I think\nfor the back-patch case we could live with\na) move truncation to after the rest of vacuum\nb) don't truncate if it'd error out anyway, but log an error\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Fri, 3 May 2019 19:13:26 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: improving wraparound behavior"
},
{
"msg_contents": "Hi,\n\nOn 2019-05-03 22:03:18 -0400, Stephen Frost wrote:\n> * Robert Haas (robertmhaas@gmail.com) wrote:\n> > I am not sure exactly how to fix this,\n> > because the calculation we use to determine the XID that can be used\n> > to vacuum a specific table is pretty complex; how can the postmaster\n> > know whether it's going to be able to make any progress in *any* table\n> > in some database to which it's not even connected? But it's surely\n> > crazy to just keep doing something over and over that can't possibly\n> > work.\n>\n> I definitely agree that it's foolish to keep doing something that isn't\n> going to work, and it seems like a pretty large part of the issue here\n> is that we don't have enough information to be more intelligent because\n> we aren't connected to the database that needs the work to be done.\n>\n> Now, presuming we're talking about 'new feature work' here to try and\n> address this, and not something that we think we can back-patch, I had\n> another thought.\n>\n> I certainly get that having lots of extra processes around can be a\n> waste of resources... but I don't recall a lot of people complaining\n> about the autovacuum launcher process using up lots of resources\n> unnecessairly.\n>\n> Perhaps we should consider having the 'global' autovacuum launcher, when\n> it's decided that a database needs work to be done, launch a 'per-DB'\n> launcher which manages launching autovacuum processes for that database?\n> If the launcher is still running then it's because there's still work to\n> be done on that database and the 'global' autovacuum launcher can skip\n> it. 
If the 'per-DB' launcher runs out of things to do, and the database\n> it's working on is no longer in a danger zone, then it exits.\n>\n> There are certainly some other variations on this idea and I don't know\n> that it's really better than keeping more information in shared\n> memory or something else, but it seems like part of the issue is that\n> the thing firing off the processes hasn't got enough info to do so\n> intelligently and maybe we could fix that by having per-DB launchers\n> that are actually connected to a DB.\n\nThis sounds a lot more like a wholesale redesign than some small\nincremental work. Which I think we should do, but we probably ought to\ndo something more minimal before the resources for something like this\nare there. Perfect being the enemy of the good, and all that.\n\n\n> I might have misunderstood it, but it sounded like Andres was\n> proposing a new function which would basically tell you what's holding\n> back the xid horizon and that sounds fantastic and would be great to\n> include in this message, if possible.\n>\n> As in:\n>\n> HINT: Run the function pg_what_is_holding_xmin_back() to identify what\n> is preventing autovacuum from progressing and address it.\n\nI was basically thinking of doing *both*, amending the message, *and*\nhaving a new UDF.\n\nBasically, instead of the current:\n\n char *oldest_datname = get_database_name(oldest_datoid);\n\n /* complain even if that DB has disappeared */\n if (oldest_datname)\n ereport(WARNING,\n (errmsg(\"database \\\"%s\\\" must be vacuumed within %u transactions\",\n oldest_datname,\n xidWrapLimit - xid),\n errhint(\"To avoid a database shutdown, execute a database-wide VACUUM in that database.\\n\"\n \"You might also need to commit or roll back old prepared transactions, or drop stale replication slots.\")));\n else\n ereport(WARNING,\n (errmsg(\"database with OID %u must be vacuumed within %u transactions\",\n oldest_datoid,\n xidWrapLimit - xid),\n errhint(\"To avoid a database 
shutdown, execute a database-wide VACUUM in that database.\\n\"\n \"You might also need to commit or roll back old prepared transactions, or drop stale replication slots.\")));\n }\n\nwhich is dramatically unhelpful, because often vacuuming won't do squat\n(because of old [prepared] transaction, replication connection, or slot).\n\nI'm thinking that we'd do something roughly like (in actual code) for\nGetNewTransactionId():\n\n TransactionId dat_limit = ShmemVariableCache->oldestXid;\n TransactionId slot_limit = Min(replication_slot_xmin, replication_slot_catalog_xmin);\n Transactionid walsender_limit;\n Transactionid prepared_xact_limit;\n Transactionid backend_limit;\n\n ComputeOldestXminFromProcarray(&walsender_limit, &prepared_xact_limit, &backend_limit);\n\n if (IsOldest(dat_limit))\n ereport(elevel,\n errmsg(\"close to xid wraparound, held back by database %s\"),\n errdetail(\"current xid %u, horizon for database %u, shutting down at %u\"),\n errhint(\"...\"));\n else if (IsOldest(slot_limit))\n ereport(elevel, errmsg(\"close to xid wraparound, held back by replication slot %s\"),\n ...);\n\nwhere IsOldest wouldn't actually compare plainly numerically, but would\nactually prefer showing the slot, backend, walsender, prepared_xact, as\nlong as they are pretty close to the dat_limit - as in those cases\nvacuuming wouldn't actually solve the issue, unless the other problems\nare addressed first (as autovacuum won't compute a cutoff horizon that's\nnewer than any of those).\n\nand for the function I was thinking of an SRF that'd return roughly rows\nlike:\n* \"database horizon\", xid, age(xid), database oid, database name\n* \"slot horizon\", xid, age(xid), NULL, slot name\n* \"backend horizon\", xid, age(xid), backend pid, query string?\n* \"walsender horizon\", xid, age(xid), backend pid, connection string?\n* \"anti-wraparound-vacuums-start\", xid, NULL, NULL\n* \"current xid\", xid, NULL, NULL\n* \"xid warn limit\", xid, NULL, NULL\n* \"xid shutdown limit\", 
xid, NULL, NULL\n* \"xid wraparound limit\", xid, NULL, NULL\n\nNot sure if an SRF is really the best approach, but it seems like it'd\nbe the easiest way to show a good overview of the problems. I don't know\nhow many times this'd have prevented support escalations, but it'd be\nmany.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Fri, 3 May 2019 19:30:15 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: improving wraparound behavior"
},
{
"msg_contents": "Greetings,\n\n* Andres Freund (andres@anarazel.de) wrote:\n> On 2019-05-03 22:03:18 -0400, Stephen Frost wrote:\n> > * Robert Haas (robertmhaas@gmail.com) wrote:\n> > > I am not sure exactly how to fix this,\n> > > because the calculation we use to determine the XID that can be used\n> > > to vacuum a specific table is pretty complex; how can the postmaster\n> > > know whether it's going to be able to make any progress in *any* table\n> > > in some database to which it's not even connected? But it's surely\n> > > crazy to just keep doing something over and over that can't possibly\n> > > work.\n> >\n> > I definitely agree that it's foolish to keep doing something that isn't\n> > going to work, and it seems like a pretty large part of the issue here\n> > is that we don't have enough information to be more intelligent because\n> > we aren't connected to the database that needs the work to be done.\n> >\n> > Now, presuming we're talking about 'new feature work' here to try and\n> > address this, and not something that we think we can back-patch, I had\n> > another thought.\n> >\n> > I certainly get that having lots of extra processes around can be a\n> > waste of resources... but I don't recall a lot of people complaining\n> > about the autovacuum launcher process using up lots of resources\n> > unnecessairly.\n> >\n> > Perhaps we should consider having the 'global' autovacuum launcher, when\n> > it's decided that a database needs work to be done, launch a 'per-DB'\n> > launcher which manages launching autovacuum processes for that database?\n> > If the launcher is still running then it's because there's still work to\n> > be done on that database and the 'global' autovacuum launcher can skip\n> > it. 
If the 'per-DB' launcher runs out of things to do, and the database\n> > it's working on is no longer in a danger zone, then it exits.\n> >\n> > There are certainly some other variations on this idea and I don't know\n> > that it's really better than keeping more information in shared\n> > memory or something else, but it seems like part of the issue is that\n> > the thing firing off the processes hasn't got enough info to do so\n> > intelligently and maybe we could fix that by having per-DB launchers\n> > that are actually connected to a DB.\n> \n> This sounds a lot more like a wholesale redesign than some small\n> incremental work. Which I think we should do, but we probably ought to\n> do something more minimal before the resources for something like this\n> are there. Perfect being the enemy of the good, and all that.\n\nI suppose it is a pretty big change in the base autovacuum launcher to\nbe something that's run per database instead and then deal with the\ncoordination between the two... but I can't help but feel like it\nwouldn't be that much *work*. 
I'm not against doing something smaller\nbut was something smaller actually proposed for this specific issue..?\n\n> > I might have misunderstood it, but it sounded like Andres was\n> > proposing a new function which would basically tell you what's holding\n> > back the xid horizon and that sounds fantastic and would be great to\n> > include in this message, if possible.\n> >\n> > As in:\n> >\n> > HINT: Run the function pg_what_is_holding_xmin_back() to identify what\n> > is preventing autovacuum from progressing and address it.\n> \n> I was basically thinking of doing *both*, amending the message, *and*\n> having a new UDF.\n> \n> Basically, instead of the current:\n> \n> char *oldest_datname = get_database_name(oldest_datoid);\n> \n> /* complain even if that DB has disappeared */\n> if (oldest_datname)\n> ereport(WARNING,\n> (errmsg(\"database \\\"%s\\\" must be vacuumed within %u transactions\",\n> oldest_datname,\n> xidWrapLimit - xid),\n> errhint(\"To avoid a database shutdown, execute a database-wide VACUUM in that database.\\n\"\n> \"You might also need to commit or roll back old prepared transactions, or drop stale replication slots.\")));\n> else\n> ereport(WARNING,\n> (errmsg(\"database with OID %u must be vacuumed within %u transactions\",\n> oldest_datoid,\n> xidWrapLimit - xid),\n> errhint(\"To avoid a database shutdown, execute a database-wide VACUUM in that database.\\n\"\n> \"You might also need to commit or roll back old prepared transactions, or drop stale replication slots.\")));\n> }\n> \n> which is dramatically unhelpful, because often vacuuming won't do squat\n> (because of old [prepared] transaction, replication connection, or slot).\n> \n> I'm thinking that we'd do something roughly like (in actual code) for\n> GetNewTransactionId():\n> \n> TransactionId dat_limit = ShmemVariableCache->oldestXid;\n> TransactionId slot_limit = Min(replication_slot_xmin, replication_slot_catalog_xmin);\n> Transactionid walsender_limit;\n> Transactionid 
prepared_xact_limit;\n> Transactionid backend_limit;\n> \n> ComputeOldestXminFromProcarray(&walsender_limit, &prepared_xact_limit, &backend_limit);\n> \n> if (IsOldest(dat_limit))\n> ereport(elevel,\n> errmsg(\"close to xid wraparound, held back by database %s\"),\n> errdetail(\"current xid %u, horizon for database %u, shutting down at %u\"),\n> errhint(\"...\"));\n> else if (IsOldest(slot_limit))\n> ereport(elevel, errmsg(\"close to xid wraparound, held back by replication slot %s\"),\n> ...);\n> \n> where IsOldest wouldn't actually compare plainly numerically, but would\n> actually prefer showing the slot, backend, walsender, prepared_xact, as\n> long as they are pretty close to the dat_limit - as in those cases\n> vacuuming wouldn't actually solve the issue, unless the other problems\n> are addressed first (as autovacuum won't compute a cutoff horizon that's\n> newer than any of those).\n\nWhere the errhint() above includes a recommendation to run the SRF\ndescribed below, I take it?\n\nAlso, should this really be an 'else if', or should it be just a set of\n'if()'s, thereby giving users more info right up-front? 
If there's one\nthing I quite dislike, it's cases where you're playing whack-a-mole over\nand over because the system says \"X is bad!\" and when you fix X, it just\nturns around and says \"Y is bad!\", even though it had all the info it\nneeded to say \"X and Y are bad!\" right up-front.\n\n> and for the function I was thinking of an SRF that'd return roughly rows\n> like:\n> * \"database horizon\", xid, age(xid), database oid, database name\n> * \"slot horizon\", xid, age(xid), NULL, slot name\n> * \"backend horizon\", xid, age(xid), backend pid, query string?\n> * \"walsender horizon\", xid, age(xid), backend pid, connection string?\n> * \"anti-wraparound-vacuums-start\", xid, NULL, NULL\n> * \"current xid\", xid, NULL, NULL\n> * \"xid warn limit\", xid, NULL, NULL\n> * \"xid shutdown limit\", xid, NULL, NULL\n> * \"xid wraparound limit\", xid, NULL, NULL\n> \n> Not sure if an SRF is really the best approach, but it seems like it'd\n> be the easiest way to show a good overview of the problems. I don't know\n> how many times this'd have prevented support escalations, but it'd be\n> many.\n\nYeah, I'm not sure if an SRF makes the most sense here... but I'm also\nnot sure that it's a bad idea either. Once it's written, it'll be a lot\neasier to critique the specific interface. ;)\n\nThanks!\n\nStephen",
"msg_date": "Fri, 3 May 2019 22:41:11 -0400",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: improving wraparound behavior"
},
{
"msg_contents": "Hi,\n\nOn 2019-05-03 22:41:11 -0400, Stephen Frost wrote:\n> I suppose it is a pretty big change in the base autovacuum launcher to\n> be something that's run per database instead and then deal with the\n> coordination between the two... but I can't help but feel like it\n> wouldn't be that much *work*. I'm not against doing something smaller\n> but was something smaller actually proposed for this specific issue..?\n\nI think it'd be fairly significant. And that we should redo it from\nscratch if we go there - because what we have isn't worth using as a\nbasis.\n\n\n> > I'm thinking that we'd do something roughly like (in actual code) for\n> > GetNewTransactionId():\n> > \n> > TransactionId dat_limit = ShmemVariableCache->oldestXid;\n> > TransactionId slot_limit = Min(replication_slot_xmin, replication_slot_catalog_xmin);\n> > Transactionid walsender_limit;\n> > Transactionid prepared_xact_limit;\n> > Transactionid backend_limit;\n> > \n> > ComputeOldestXminFromProcarray(&walsender_limit, &prepared_xact_limit, &backend_limit);\n> > \n> > if (IsOldest(dat_limit))\n> > ereport(elevel,\n> > errmsg(\"close to xid wraparound, held back by database %s\"),\n> > errdetail(\"current xid %u, horizon for database %u, shutting down at %u\"),\n> > errhint(\"...\"));\n> > else if (IsOldest(slot_limit))\n> > ereport(elevel, errmsg(\"close to xid wraparound, held back by replication slot %s\"),\n> > ...);\n> > \n> > where IsOldest wouldn't actually compare plainly numerically, but would\n> > actually prefer showing the slot, backend, walsender, prepared_xact, as\n> > long as they are pretty close to the dat_limit - as in those cases\n> > vacuuming wouldn't actually solve the issue, unless the other problems\n> > are addressed first (as autovacuum won't compute a cutoff horizon that's\n> > newer than any of those).\n> \n> Where the errhint() above includes a recommendation to run the SRF\n> described below, I take it?\n\nNot necessarily. 
I feel conciseness is important too, and this would be\nthe most important thing to tackle.\n\n\n> Also, should this really be an 'else if', or should it be just a set of\n> 'if()'s, thereby giving users more info right up-front?\n\nPossibly? But it'd also make it even harder to read the log / for the system\nto keep up with logging, because we already log *so* much when close to\nwraparound.\n\nIf we didn't order it, it'd be hard for users to figure out which to\naddress first. If we ordered it, people have to look further up in the log to\nfigure out which is the most urgent one (unless we reverse the order,\nwhich is odd too).\n\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Fri, 3 May 2019 19:47:42 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: improving wraparound behavior"
},
{
"msg_contents": "Greetings,\n\n* Andres Freund (andres@anarazel.de) wrote:\n> On 2019-05-03 22:41:11 -0400, Stephen Frost wrote:\n> > I suppose it is a pretty big change in the base autovacuum launcher to\n> > be something that's run per database instead and then deal with the\n> > coordination between the two... but I can't help but feel like it\n> > wouldn't be that much *work*. I'm not against doing something smaller\n> > but was something smaller actually proposed for this specific issue..?\n> \n> I think it'd be fairly significant. And that we should redo it from\n> scratch if we go there - because what we have isn't worth using as a\n> basis.\n\nAlright, what I'm hearing here is that we should probably have a\ndedicated thread for this discussion, if someone has the cycles to spend\non it. I'm not against that.\n\n> > > I'm thinking that we'd do something roughly like (in actual code) for\n> > > GetNewTransactionId():\n> > > \n> > > TransactionId dat_limit = ShmemVariableCache->oldestXid;\n> > > TransactionId slot_limit = Min(replication_slot_xmin, replication_slot_catalog_xmin);\n> > > Transactionid walsender_limit;\n> > > Transactionid prepared_xact_limit;\n> > > Transactionid backend_limit;\n> > > \n> > > ComputeOldestXminFromProcarray(&walsender_limit, &prepared_xact_limit, &backend_limit);\n> > > \n> > > if (IsOldest(dat_limit))\n> > > ereport(elevel,\n> > > errmsg(\"close to xid wraparound, held back by database %s\"),\n> > > errdetail(\"current xid %u, horizon for database %u, shutting down at %u\"),\n> > > errhint(\"...\"));\n> > > else if (IsOldest(slot_limit))\n> > > ereport(elevel, errmsg(\"close to xid wraparound, held back by replication slot %s\"),\n> > > ...);\n> > > \n> > > where IsOldest wouldn't actually compare plainly numerically, but would\n> > > actually prefer showing the slot, backend, walsender, prepared_xact, as\n> > > long as they are pretty close to the dat_limit - as in those cases\n> > > vacuuming wouldn't actually solve the 
issue, unless the other problems\n> > > are addressed first (as autovacuum won't compute a cutoff horizon that's\n> > > newer than any of those).\n> > \n> > Where the errhint() above includes a recommendation to run the SRF\n> > described below, I take it?\n> \n> Not necessarily. I feel conciseness is important too, and this would be\n> the most important thing to tackle.\n\nI'm imagining a relatively rare scenario, just to be clear, where\n\"pretty close to the dat_limit\" would apply to more than just one thing.\n\n> > Also, should this really be an 'else if', or should it be just a set of\n> > 'if()'s, thereby giving users more info right up-front?\n> \n> Possibly? But it'd also make it even harder to read the log / for the system\n> to keep up with logging, because we already log *so* much when close to\n> wraparound.\n\nYes, we definitely log a *lot*, and probably too much since other\ncritical messages might get lost in the noise.\n\n> If we didn't order it, it'd be hard for users to figure out which to\n> address first. If we ordered it, people have to look further up in the log to\n> figure out which is the most urgent one (unless we reverse the order,\n> which is odd too).\n\nThis makes me think we should both order it and combine it into one\nmessage... but that'd then be pretty difficult to deal with,\npotentially, from a translation standpoint and just from a \"wow, that's\na huge log message\", which is kind of the idea behind the SRF - to give\nyou all that info in a more easily digestible manner.\n\nNot sure I've got any great ideas on how to improve on this. I do think\nthat if we know that there's multiple different things that are within a\nsmall number of xids of the oldest xmin then we should notify the user\nabout all of them, either directly in the error messages or by referring\nthem to the SRF, so they have the opportunity to address them all, or\nat least know about them all. 
As mentioned though, it's likely to be a\nquite rare thing to run into, so you'd have to be extra unlucky to even\nhit this case and perhaps the extra code complication just isn't worth\nit.\n\nThanks,\n\nStephen",
"msg_date": "Fri, 3 May 2019 23:08:44 -0400",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: improving wraparound behavior"
},
{
"msg_contents": "Em sex, 3 de mai de 2019 às 17:27, Robert Haas <robertmhaas@gmail.com> escreveu:\n>\n> HINT: Commit or roll back old prepared transactions, drop stale\n> replication slots, or kill long-running sessions.\n> Ensure that autovacuum is progressing, or run a manual database-wide VACUUM.\n>\nFirst of all, +1 for this patch. But after reading this email, I\ncouldn't resist to expose an idea about stale XID horizon. [A cup of\ntea...] I realized that there is no automatic way to recover from old\nprepared transactions, stale replication slots or even long-running\nsessions when we have a wraparound situation. Isn't it the case to add\na parameter that recover from stale XID horizon? I mean if we reach a\ncritical situation (xidStopLimit), free resource that is preventing\nthe XID advance (and hope autovacuum have some time to prevent a\nstop-assigning-xids situation). The situation is analogous to OOM\nkiller that it kills some processes that are starving resources.\n\n\n-- \n Euler Taveira Timbira -\nhttp://www.timbira.com.br/\n PostgreSQL: Consultoria, Desenvolvimento, Suporte 24x7 e Treinamento\n\n\n",
"msg_date": "Sat, 4 May 2019 00:11:32 -0300",
"msg_from": "Euler Taveira <euler@timbira.com.br>",
"msg_from_op": false,
"msg_subject": "Re: improving wraparound behavior"
},
{
"msg_contents": "Hi,\n\nOn 2019-05-03 23:08:44 -0400, Stephen Frost wrote:\n> This makes me think we should both order it and combine it into one\n> message... but that'd then be pretty difficult to deal with,\n> potentially, from a translation standpoint and just from a \"wow, that's\n> a huge log message\", which is kind of the idea behind the SRF- to give\n> you all that info in a more easily digestible manner.\n> \n> Not sure I've got any great ideas on how to improve on this. I do think\n> that if we know that there's multiple different things that are within a\n> small number of xids of the oldest xmin then we should notify the user\n> about all of them, either directly in the error messages or by referring\n> them to the SRF, so they have the opportunity to address them all, or\n> at least know about them all. As mentioned though, it's likely to be a\n> quite rare thing to run into, so you'd have to be extra unlucky to even\n> hit this case and perhaps the extra code complication just isn't worth\n> it.\n\nI think just having an actual reason for the problem would be so much\nbetter than the current status, that I'd tackle the \"oops, just about\neverything is screwed, here's the reasons in order\" case separately (or\njust plain never).\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 6 May 2019 00:06:00 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: improving wraparound behavior"
},
{
"msg_contents": "Greetings,\n\n* Andres Freund (andres@anarazel.de) wrote:\n> On 2019-05-03 23:08:44 -0400, Stephen Frost wrote:\n> > As mentioned though, it's likely to be a\n> > quite rare thing to run into, so you'd have to be extra unlucky to even\n> > hit this case and perhaps the extra code complication just isn't worth\n> > it.\n> \n> I think just having an actual reason for the problem would be so much\n> better than the current status, that I'd tackle the \"oops, just about\n> everything is screwed, here's the reasons in order\" case separately (or\n> just plain never).\n\nI certainly agree with that, but if we really think it's so rare that we\ndon't feel that it's worth worrying about then I'd say we should remove\nthe 'else' in the 'else if', as I initially suggested, since the chances\nof users getting more than one is quite rare and more than two would\nbe.. impressive, which negates the concern you raised earlier about\nthere being a bunch of these messages that make it hard for users to\nreason about what is the most critical. Removing the 'else' would make\nit strictly less code (albeit not much, but it certainly isn't adding to\nthe code) and would be more informative for the user who does end up\nhaving two cases happen at (or nearly) the same time.\n\nThanks,\n\nStephen",
"msg_date": "Mon, 6 May 2019 07:09:55 -0400",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: improving wraparound behavior"
}
] |
[
{
"msg_contents": "See\n\nhttps://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=8b3bce2017b15e05f000c3c5947653a3e2c5a29f\n\nPlease send any corrections by Sunday.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 03 May 2019 18:29:35 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "First-draft release notes for back branches are up"
},
{
"msg_contents": "On Sat, May 4, 2019 at 10:29 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> https://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=8b3bce2017b15e05f000c3c5947653a3e2c5a29f\n>\n> Please send any corrections by Sunday.\n\n+ Tolerate <literal>EINVAL</literal> and <literal>ENOSYS</literal>\n+ error results, where appropriate, for fsync calls (Thomas Munro,\n+ James Sewell)\n\nNit-picking: ENOSYS is for sync_file_range, EINVAL is for fsync.\n\n-- \nThomas Munro\nhttps://enterprisedb.com\n\n\n",
"msg_date": "Sat, 4 May 2019 13:07:01 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: First-draft release notes for back branches are up"
},
{
"msg_contents": "Thomas Munro <thomas.munro@gmail.com> writes:\n> On Sat, May 4, 2019 at 10:29 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> + Tolerate <literal>EINVAL</literal> and <literal>ENOSYS</literal>\n> + error results, where appropriate, for fsync calls (Thomas Munro,\n> + James Sewell)\n\n> Nit-picking: ENOSYS is for sync_file_range, EINVAL is for fsync.\n\nYeah, I didn't really think it was worth distinguishing. If there\nis some more general term that covers both calls, maybe we should\nuse that?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 03 May 2019 21:29:43 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: First-draft release notes for back branches are up"
},
{
"msg_contents": "On Sat, May 4, 2019 at 1:29 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Thomas Munro <thomas.munro@gmail.com> writes:\n> > On Sat, May 4, 2019 at 10:29 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > + Tolerate <literal>EINVAL</literal> and <literal>ENOSYS</literal>\n> > + error results, where appropriate, for fsync calls (Thomas Munro,\n> > + James Sewell)\n>\n> > Nit-picking: ENOSYS is for sync_file_range, EINVAL is for fsync.\n>\n> Yeah, I didn't really think it was worth distinguishing. If there\n> is some more general term that covers both calls, maybe we should\n> use that?\n\nI would just do s/fsync/fsync and sync_file_range/. And I guess also\nwrap them in <function>?\n\n-- \nThomas Munro\nhttps://enterprisedb.com\n\n\n",
"msg_date": "Sat, 4 May 2019 15:56:24 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: First-draft release notes for back branches are up"
},
{
"msg_contents": "On Fri, May 03, 2019 at 06:29:35PM -0400, Tom Lane wrote:\n> Please send any corrections by Sunday.\n\nI have noticed a typo:\n--- a/doc/src/sgml/release-11.sgml\n+++ b/doc/src/sgml/release-11.sgml\n@@ -982,7 +982,7 @@ Branch: REL9_4_STABLE [81f5b3283] 2019-03-04\n09:50:24 +0900\n <para>\n Errors, such as lack of permissions to read the directory, were not\n detected or reported correctly; instead the code silently acted as\n- though the directory were empty.\n+ though the directory was empty.\n </para>\n--\nMichael",
"msg_date": "Sat, 4 May 2019 17:39:10 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: First-draft release notes for back branches are up"
},
{
"msg_contents": ">>>>> \"Michael\" == Michael Paquier <michael@paquier.xyz> writes:\n\n Michael> I have noticed a typo:\n\n Michael> Errors, such as lack of permissions to read the directory, were not\n Michael> detected or reported correctly; instead the code silently acted as\n Michael> - though the directory were empty.\n Michael> + though the directory was empty.\n\n\"as though ... were ...\" is correct English; it's a counterfactual\nclause so the subjunctive is appropriate here.\n\n-- \nAndrew (irc:RhodiumToad)\n\n\n",
"msg_date": "Sat, 04 May 2019 15:54:08 +0100",
"msg_from": "Andrew Gierth <andrew@tao11.riddles.org.uk>",
"msg_from_op": false,
"msg_subject": "Re: First-draft release notes for back branches are up"
},
{
"msg_contents": "Andrew Gierth <andrew@tao11.riddles.org.uk> writes:\n> \"Michael\" == Michael Paquier <michael@paquier.xyz> writes:\n> Michael> I have noticed a typo:\n> Michael> Errors, such as lack of permissions to read the directory, were not\n> Michael> detected or reported correctly; instead the code silently acted as\n> Michael> - though the directory were empty.\n> Michael> + though the directory was empty.\n\n> \"as though ... were ...\" is correct English; it's a counterfactual\n> clause so the subjunctive is appropriate here.\n\nThanks. It's been too long since high school English, so I was having\na hard time remembering the correct grammatical term for this; but\nI'd just automatically written it that way and was pretty sure it\nwas right.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 04 May 2019 11:15:24 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: First-draft release notes for back branches are up"
},
{
"msg_contents": "Thomas Munro <thomas.munro@gmail.com> writes:\n> On Sat, May 4, 2019 at 1:29 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> Yeah, I didn't really think it was worth distinguishing. If there\n>> is some more general term that covers both calls, maybe we should\n>> use that?\n\n> I would just do s/fsync/fsync and sync_file_range/. And I guess also\n> wrap them in <function>?\n\nOK, will do it like that.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 04 May 2019 15:55:05 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: First-draft release notes for back branches are up"
},
{
"msg_contents": "On 5/3/19 6:29 PM, Tom Lane wrote:\n> See\n> \n> https://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=8b3bce2017b15e05f000c3c5947653a3e2c5a29f\n> \n> Please send any corrections by Sunday.\n\nAttached is a draft of the press release to go out. Please let me know\nif there are any inaccuracies / glaring omissions / awkward language etc.\n\nThanks,\n\nJonathan",
"msg_date": "Sat, 4 May 2019 22:34:51 -0400",
"msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>",
"msg_from_op": false,
"msg_subject": "Re: First-draft release notes for back branches are up"
},
{
"msg_contents": "On Sun, May 5, 2019 at 2:35 PM Jonathan S. Katz <jkatz@postgresql.org> wrote:\n> On 5/3/19 6:29 PM, Tom Lane wrote:\n> > See\n> >\n> > https://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=8b3bce2017b15e05f000c3c5947653a3e2c5a29f\n> >\n> > Please send any corrections by Sunday.\n>\n> Attached is a draft of the press release to go out. Please let me know\n> if there are any inaccuracies / glaring omissions / awkward language etc.\n\nHi Jonathan,\n\n> * Relax panics on fsync failures for certain cases where a failure indicated \"operation not supported\"\n\nMaybe \"fsync and sync_file_range\". I suspect the latter actually\naffected more people, but either way I think we should mention both.\n\n> * Fix handling of lc\\_time settings that imply an encoding different from the database's encoding\n\nMaybe this should be marked up as `lc_time` like other\nidentifier/file/whatever names?\n\n> * Several fixes for `conritb/postgres_fdw`, including one for remote partitions where an UPDATE could lead to incorrect results or a crash\n\nTypo: contrib\n\n-- \nThomas Munro\nhttps://enterprisedb.com\n\n\n",
"msg_date": "Sun, 5 May 2019 16:24:41 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: First-draft release notes for back branches are up"
},
{
"msg_contents": "On 5/5/19 12:24 AM, Thomas Munro wrote:\n> On Sun, May 5, 2019 at 2:35 PM Jonathan S. Katz <jkatz@postgresql.org> wrote:\n>> On 5/3/19 6:29 PM, Tom Lane wrote:\n>>> See\n>>>\n>>> https://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=8b3bce2017b15e05f000c3c5947653a3e2c5a29f\n>>>\n>>> Please send any corrections by Sunday.\n>>\n>> Attached is a draft of the press release to go out. Please let me know\n>> if there are any inaccuracies / glaring omissions / awkward language etc.\n> \n> Hi Jonathan,\n> \n>> * Relax panics on fsync failures for certain cases where a failure indicated \"operation not supported\"\n> \n> Maybe \"fsync and sync_file_range\". I suspect the latter actually\n> affected more people, but either way I think we should mention both.\n> \n>> * Fix handling of lc\\_time settings that imply an encoding different from the database's encoding\n> \n> Maybe this should be marked up as `lc_time` like other\n> identifier/file/whatever names?\n> \n>> * Several fixes for `conritb/postgres_fdw`, including one for remote partitions where an UPDATE could lead to incorrect results or a crash\n> \n> Typo: contrib\n\nThanks! I have made all of those changes.\n\nBest,\n\nJonathan",
"msg_date": "Mon, 6 May 2019 13:29:35 -0400",
"msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>",
"msg_from_op": false,
"msg_subject": "Re: First-draft release notes for back branches are up"
}
] |
[
{
"msg_contents": "Hello,\n\nI wrote an extension to add a range_agg function with similar behavior \nto existing *_agg functions, and I'm wondering if folks would like to \nhave it in core? Here is the repo: https://github.com/pjungwir/range_agg\n\nI'm also working on a patch for temporal foreign keys, and having \nrange_agg would make the FK check easier and faster, which is why I'd \nlike to get it added. But also it just seems useful, like array_agg, \njson_agg, etc.\n\nOne question is how to aggregate ranges that would leave gaps and/or \noverlaps. So in my extension there is a one-param version that forbids \ngaps & overlaps, but I let you permit them by passing extra parameters, \nso the signature is:\n\n range_agg(r anyrange, permit_gaps boolean, permit_overlaps boolean)\n\nPerhaps another useful choice would be to return NULL if a gap/overlap \nis found, so that each param would have three choices instead of just \ntwo: accept the inputs, raise an error, return a NULL.\n\nWhat do people think? I plan to work on a patch regardless, so that I \ncan use it for temporal FKs, but I'd appreciate some feedback on the \n\"user interface\".\n\nThanks,\n\n-- \nPaul ~{:-)\npj@illuminatedcomputing.com\n\n\n",
"msg_date": "Fri, 3 May 2019 15:56:41 -0700",
"msg_from": "Paul Jungwirth <pj@illuminatedcomputing.com>",
"msg_from_op": true,
"msg_subject": "range_agg"
},
{
"msg_contents": "On Fri, May 03, 2019 at 03:56:41PM -0700, Paul Jungwirth wrote:\n> Hello,\n> \n> I wrote an extension to add a range_agg function with similar behavior to\n> existing *_agg functions, and I'm wondering if folks would like to have it\n> in core? Here is the repo: https://github.com/pjungwir/range_agg\n> \n> I'm also working on a patch for temporal foreign keys, and having range_agg\n> would make the FK check easier and faster, which is why I'd like to get it\n> added. But also it just seems useful, like array_agg, json_agg, etc.\n> \n> One question is how to aggregate ranges that would leave gaps and/or\n> overlaps.\n\nThis suggests two different ways to extend ranges over aggregation:\none which is a union of (in general) disjoint intervals, two others\nare a union of intervals, each of which has a weight. Please pardon\nthe ASCII art.\n\nThe aggregation of:\n\n[1, 4)\n [2, 5)\n [8, 10)\n\ncould turn into:\n\n{[1,5), [8, 10)} (union without weight)\n\n{{[1,2),1}, {[2,4),2}, {[4,5),1}, {[8,10),1}} (strictly positive weights which don't (in general) cover the space)\n\n{{[1,2),1}, {[2,4),2}, {[4,5),1}, {[5,8),0}, {[8,10),1}} (non-negative weights which guarantee the space is covered)\n\nThere is no principled reason to choose one over the others.\n\n> What do people think? I plan to work on a patch regardless, so that I can\n> use it for temporal FKs, but I'd appreciate some feedback on the \"user\n> interface\".\n\nI think the cases above, or at least the first two of them, should be\navailable. They could be called range_agg, weighted_range_agg, and\ncovering_range_agg.\n\nBest,\nDavid.\n-- \nDavid Fetter <david(at)fetter(dot)org> http://fetter.org/\nPhone: +1 415 235 3778\n\nRemember to vote!\nConsider donating to Postgres: http://www.postgresql.org/about/donate\n\n\n",
"msg_date": "Sat, 4 May 2019 03:41:47 +0200",
"msg_from": "David Fetter <david@fetter.org>",
"msg_from_op": false,
"msg_subject": "Re: range_agg"
},
{
"msg_contents": ">\n> One question is how to aggregate ranges that would leave gaps and/or\n> overlaps. So in my extension there is a one-param version that forbids\n> gaps & overlaps, but I let you permit them by passing extra parameters,\n> so the signature is:\n>\n\nPerhaps a third way would be to allow and preserve the gaps.\n\nA while back I wrote an extension called disjoint_date_range for storing\nsets of dates where it was assumed that most dates would be contiguous. The\nbasic idea was that The core datatype was an array of ranges of dates, and\nwith every modification you'd unnest them all to their discrete elements\nand use a window function to identify \"runs\" of dates and recompose them\ninto a canonical set. It was an efficient way of representing \"Every day\nlast year except for June 2nd and August 4th, when we closed business for\nspecial events.\"\n\nFor arrays of ranges the principle is the same but it'd get a bit more\ntricky, you'd have to order by low bound, use window functions to detect\nadjacency/overlap to identify your runs, and the generate the canonical\nminimum set of ranges in your array.\n\nOne question is how to aggregate ranges that would leave gaps and/or \noverlaps. So in my extension there is a one-param version that forbids \ngaps & overlaps, but I let you permit them by passing extra parameters, \nso the signature is:Perhaps a third way would be to allow and preserve the gaps.A while back I wrote an extension called disjoint_date_range for storing sets of dates where it was assumed that most dates would be contiguous. The basic idea was that The core datatype was an array of ranges of dates, and with every modification you'd unnest them all to their discrete elements and use a window function to identify \"runs\" of dates and recompose them into a canonical set. 
It was an efficient way of representing \"Every day last year except for June 2nd and August 4th, when we closed business for special events.\"For arrays of ranges the principle is the same but it'd get a bit more tricky, you'd have to order by low bound, use window functions to detect adjacency/overlap to identify your runs, and the generate the canonical minimum set of ranges in your array.",
"msg_date": "Sat, 4 May 2019 18:11:34 -0400",
"msg_from": "Corey Huinker <corey.huinker@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: range_agg"
},
{
"msg_contents": "On 5/4/19 3:11 PM, Corey Huinker wrote:\n> One question is how to aggregate ranges that would leave gaps and/or\n> overlaps. So in my extension there is a one-param version that forbids\n> gaps & overlaps, but I let you permit them by passing extra parameters,\n> so the signature is:\n> \n> \n> Perhaps a third way would be to allow and preserve the gaps.\n\nThanks for the feedback! I think this is what I'm doing already \n(returning an array of ranges), but let me know if I'm misunderstanding. \nMy extension has these signatures:\n\n range_agg(anyrange) returning anyrange\n range_agg(anyrange, bool) returning anyarray\n range_agg(anyrange, bool, bool) returning anyarray.\n\nThe first variant raises an error if there are gaps or overlaps and \nalways returns a single range, but the other two return an array of ranges.\n\nI was planning to use the same signatures for my patch to pg, unless \nsomeone thinks they should be different. But I'm starting to wonder if \nthey shouldn't *all* return arrays. I have two concrete use-cases for \nthese functions and they both require the array-returning versions. Is \nit helpful to have a version that always returns a single range? Or \nshould I make them all consistent?\n\nThanks,\n\n-- \nPaul ~{:-)\npj@illuminatedcomputing.com\n\n\n",
"msg_date": "Mon, 6 May 2019 11:17:46 -0700",
"msg_from": "Paul Jungwirth <pj@illuminatedcomputing.com>",
"msg_from_op": true,
"msg_subject": "Re: range_agg"
},
{
"msg_contents": "On 5/3/19 6:41 PM, David Fetter wrote:\n> This suggests two different ways to extend ranges over aggregation:\n> one which is a union of (in general) disjoint intervals, two others\n> are a union of intervals, each of which has a weight.\n> . . .\n> I think the cases above, or at least the first two of them, should be\n> available. They could be called range_agg, weighted_range_agg, and\n> covering_range_agg.\n\nThanks David! I think these two new functions make sense. Before I \nimplement them too I wonder if anyone else has uses for them?\n\nThanks,\n\n-- \nPaul ~{:-)\npj@illuminatedcomputing.com\n\n\n",
"msg_date": "Mon, 6 May 2019 11:19:27 -0700",
"msg_from": "Paul Jungwirth <pj@illuminatedcomputing.com>",
"msg_from_op": true,
"msg_subject": "Re: range_agg"
},
{
"msg_contents": "On Mon, May 06, 2019 at 11:19:27AM -0700, Paul Jungwirth wrote:\n> On 5/3/19 6:41 PM, David Fetter wrote:\n> > This suggests two different ways to extend ranges over aggregation:\n> > one which is a union of (in general) disjoint intervals, two others\n> > are a union of intervals, each of which has a weight.\n> > . . .\n> > I think the cases above, or at least the first two of them, should be\n> > available. They could be called range_agg, weighted_range_agg, and\n> > covering_range_agg.\n> \n> Thanks David! I think these two new functions make sense. Before I implement\n> them too I wonder if anyone else has uses for them?\n\nI suspect that if you build it, the will come, \"they\" being anyone who\nhas to schedule coverage, check usage of a resource over time, etc. Is\nthis something you want help with at some level? Coding, testing,\npromoting...\n\nBest,\nDavid.\n-- \nDavid Fetter <david(at)fetter(dot)org> http://fetter.org/\nPhone: +1 415 235 3778\n\nRemember to vote!\nConsider donating to Postgres: http://www.postgresql.org/about/donate\n\n\n",
"msg_date": "Tue, 7 May 2019 00:07:04 +0200",
"msg_from": "David Fetter <david@fetter.org>",
"msg_from_op": false,
"msg_subject": "Re: range_agg"
},
{
"msg_contents": "> I suspect that if you build it, the will come, \"they\" being anyone who\n> has to schedule coverage, check usage of a resource over time, etc. Is\n> this something you want help with at some level? Coding, testing,\n> promoting...\n\nYou might be right. :-) Most of this is done already, since it was \nlargely copy/paste from my extension plus figuring out how to register \nbuilt-in functions with the .dat files. I need to write some docs and do \nsome cleanup and I'll have a CF entry. And I'll probably go ahead and \nadd your two suggestions too.... Things I'd love help with:\n\n- Getting more opinions about the functions' interface, either from you \nor others, especially:\n - In the extension I have a boolean param to let you accept gaps or \nraise an error, and another for overlaps. But what about \naccepting/raising/returning null? How should the parameters expose that? \nMaybe leave them as bools but accept true/false/null for \npermit/raise/nullify respectively? That seems like a questionable UI, \nbut I'm not sure what would be better. Maybe someone with better taste \ncan weigh in. :-) I tried to find existing built-in functions that gave \na enumeration of options like that but couldn't find an existing example.\n - Also: what do you think of the question I asked in my reply to \nCorey? Is it better to have *all* range_agg functions return an array of \nranges, or it is nicer to have a variant that always returns a single range?\n- Getting it reviewed.\n- Advice about sequencing it with respect to my temporal foreign keys \npatch, where I'm planning to call range_agg to check an FK. E.g. should \nmy FK patch be a diff on top of the range_agg code? I assume they should \nhave separate CF entries though?\n\nOh and here's something specific:\n\n- I gave oids to my new functions starting with 8000, because I thought \nI saw some discussion about that recently, and the final committer will \ncorrect the oids to the current n+1? 
But I can't find that discussion \nanymore, so if that's the wrong approach let me know.\n\nThanks!\n\n-- \nPaul ~{:-)\npj@illuminatedcomputing.com\n\n\n",
"msg_date": "Mon, 6 May 2019 16:21:45 -0700",
"msg_from": "Paul Jungwirth <pj@illuminatedcomputing.com>",
"msg_from_op": true,
"msg_subject": "Re: range_agg"
},
{
"msg_contents": "On Mon, May 6, 2019 at 4:21 PM Paul Jungwirth\n<pj@illuminatedcomputing.com> wrote:\n> I need to write some docs and do\n> some cleanup and I'll have a CF entry.\n\nHere is an initial patch. I'd love to have some feedback! :-)\n\nOne challenge was handling polymorphism, since I want to have this:\n\n anyrange[] range_agg(anyrange, bool, bool)\n\nBut there is no anyrange[]. I asked about this back when I wrote my\nextension too:\n\nhttps://www.postgresql.org/message-id/CA%2BrenyVOjb4xQZGjdCnA54-1nzVSY%2B47-h4nkM-EP5J%3D1z%3Db9w%40mail.gmail.com\n\nLike then, I handled it by overloading the function with separate\nsignatures for each built-in range type:\n\n int4range[] range_agg(int4range, bool, bool);\n int8range[] range_agg(int8range, bool, bool);\n numrange[] range_agg(numrange, bool, bool);\n tsrange[] range_agg(tsrange, bool, bool);\n tstzrange[] range_agg(tstzrange, bool, bool);\n daterange[] range_agg(daterange, bool, bool);\n\n(And users can still define a range_agg for their own custom range\ntypes using similar instructions to those in my extension's README.)\n\nThe problem was the opr_sanity regression test, which rejects\nfunctions that share the same C-function implementation (roughly). I\ntook a few stabs at changing my code to pass that test, e.g. defining\nseparate wrapper functions for everything like this:\n\n Datum\n int4range_agg_transfn(PG_FUNCTION_ARGS) {\n return range_agg_transfn(fcinfo);\n }\n\nBut that felt like it was getting ridiculous (8 range types *\ntransfn+finalfn * 1-bool and 2-bool variants), so in the end I just\nadded my functions to the \"permitted\" output in opr_sanity. Let me\nknow if I should avoid that though.\n\nAlso I would still appreciate some bikeshedding over the functions' UI\nsince I don't feel great about it.\n\nIf the overall approach seems okay, I'm still open to adding David's\nsuggestions for weighted_range_agg and covering_range_agg.\n\nThanks!\nPaul",
"msg_date": "Wed, 8 May 2019 21:54:11 -0700",
"msg_from": "Paul A Jungwirth <pj@illuminatedcomputing.com>",
"msg_from_op": false,
"msg_subject": "Re: range_agg"
},
{
"msg_contents": "On Wed, May 8, 2019 at 9:54 PM Paul A Jungwirth\n<pj@illuminatedcomputing.com> wrote:\n> Here is an initial patch. I'd love to have some feedback! :-)\n\nHere is a v2 rebased off current master. No substantive changes, but\nit does fix one trivial git conflict.\n\nAfter talking with David in Ottawa and hearing a good use-case from\none other person for his proposed weighted_range_agg and\ncovering_range_agg, I think *will* add those to this patch, but if\nanyone wants to offer feedback on my approach so far, I'd appreciate\nthat too.\n\nYours,\nPaul",
"msg_date": "Sun, 16 Jun 2019 15:59:29 -0700",
"msg_from": "Paul A Jungwirth <pj@illuminatedcomputing.com>",
"msg_from_op": false,
"msg_subject": "Re: range_agg"
},
{
"msg_contents": "On Fri, 2019-05-03 at 15:56 -0700, Paul Jungwirth wrote:\n> Hello,\n> \n> I wrote an extension to add a range_agg function with similar\n> behavior \n> to existing *_agg functions, and I'm wondering if folks would like\n> to \n> have it in core? Here is the repo: \n> https://github.com/pjungwir/range_agg\n\nThis seems like a very useful extension, thank you.\n\nFor getting into core though, it should be a more complete set of\nrelated operations. The patch is implicitly introducing the concept of\na \"multirange\" (in this case, an array of ranges), but it's not making\nthe concept whole.\n\nWhat else should return a multirange? This patch implements the union-\nagg of ranges, but we also might want an intersection-agg of ranges\n(that is, the set of ranges that are subranges of every input). Given\nthat there are other options here, the name \"range_agg\" is too generic\nto mean union-agg specifically.\n\nWhat can we do with a multirange? A lot of range operators still make\nsense, like \"contains\" or \"overlaps\"; but \"adjacent\" doesn't quite\nwork. What about new operations like inverting a multirange to get the\ngaps?\n\nDo we want to continue with the array-of-ranges implementation of a\nmultirange, or do we want a first-class multirange concept that might\neliminate the boilerplate around defining all of these operations?\n\nIf we have a more complete set of operators here, the flags for\nhandling overlapping ranges and gaps will be unnecessary.\n\nRegards,\n\tJeff Davis\n\n\n\n\n",
"msg_date": "Mon, 01 Jul 2019 15:38:30 -0700",
"msg_from": "Jeff Davis <pgsql@j-davis.com>",
"msg_from_op": false,
"msg_subject": "Re: range_agg"
},
{
"msg_contents": "I noticed that this patch has a // comment about it segfaulting. Did\nyou ever figure that out? Is the resulting code the one you intend as\nfinal?\n\nDid you make any inroads regarding Jeff Davis' suggestion about\nimplementing \"multiranges\"? I wonder if that's going to end up being a\nPandora's box.\n\nStylistically, the code does not match pgindent's choices very closely.\nI can return a diff to a pgindented version of your v0002 for your\nperusal, if it would be useful for you to learn its style. (A committer\nwould eventually run pgindent himself[1], but it's good to have\nsubmissions be at least close to the final form.) BTW, I suggest \"git\nformat-patch -vN\" to prepare files for submission.\n\n\n[1] No female committers yet ... is this 2019?\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Thu, 4 Jul 2019 14:33:58 -0400",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: range_agg"
},
{
"msg_contents": "Hi\n\nút 2. 7. 2019 v 0:38 odesílatel Jeff Davis <pgsql@j-davis.com> napsal:\n\n> On Fri, 2019-05-03 at 15:56 -0700, Paul Jungwirth wrote:\n> > Hello,\n> >\n> > I wrote an extension to add a range_agg function with similar\n> > behavior\n> > to existing *_agg functions, and I'm wondering if folks would like\n> > to\n> > have it in core? Here is the repo:\n> > https://github.com/pjungwir/range_agg\n>\n> This seems like a very useful extension, thank you.\n>\n> For getting into core though, it should be a more complete set of\n> related operations. The patch is implicitly introducing the concept of\n> a \"multirange\" (in this case, an array of ranges), but it's not making\n> the concept whole.\n>\n> What else should return a multirange? This patch implements the union-\n> agg of ranges, but we also might want an intersection-agg of ranges\n> (that is, the set of ranges that are subranges of every input). Given\n> that there are other options here, the name \"range_agg\" is too generic\n> to mean union-agg specifically.\n>\n> What can we do with a multirange? A lot of range operators still make\n> sense, like \"contains\" or \"overlaps\"; but \"adjacent\" doesn't quite\n> work. What about new operations like inverting a multirange to get the\n> gaps?\n>\n> Do we want to continue with the array-of-ranges implementation of a\n> multirange, or do we want a first-class multirange concept that might\n> eliminate the boilerplate around defining all of these operations?\n>\n> If we have a more complete set of operators here, the flags for\n> handling overlapping ranges and gaps will be unnecessary.\n>\n\nI think so scope of this patch is correct. Introduction of set of ranges\ndata type based on a array or not, should be different topic.\n\nThe question is naming - should be this agg function named \"range_agg\", and\nmulti range agg \"multirange_agg\"? 
Personally, I have not a problem with\nrange_agg, and I have not a problem so it is based on union operation. It\nis true so only result of union can be implemented as range simply. When I\nthough about multi range result, then there are really large set of\npossible algorithms how to do some operations over two, three values. So\npersonally, I am satisfied with scope of simple range_agg functions,\nbecause I see a benefits, and I don't think so this implementation block\nany more complex designs in the future. There is really big questions how\nto implement multi range, and now I think so special data type will be\nbetter than possible unordered arrays.\n\nRegards\n\nPavel\n\n\n\nRegards,\n> Jeff Davis\n>\n>\n>\n>\n>\n",
"msg_date": "Fri, 5 Jul 2019 07:58:21 +0200",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: range_agg"
},
{
"msg_contents": "čt 4. 7. 2019 v 20:34 odesílatel Alvaro Herrera <alvherre@2ndquadrant.com>\nnapsal:\n\n> I noticed that this patch has a // comment about it segfaulting. Did\n> you ever figure that out? Is the resulting code the one you intend as\n> final?\n>\n> Did you make any inroads regarding Jeff Davis' suggestion about\n> implementing \"multiranges\"? I wonder if that's going to end up being a\n> Pandora's box.\n\n\nIntroduction of new datatype can be better, because we can ensure so data\nare correctly ordered\n\n\n> Stylistically, the code does not match pgindent's choices very closely.\n> I can return a diff to a pgindented version of your v0002 for your\n> perusal, if it would be useful for you to learn its style. (A committer\n> would eventually run pgindent himself[1], but it's good to have\n> submissions be at least close to the final form.) BTW, I suggest \"git\n> format-patch -vN\" to prepare files for submission.\n>\n\nThe first issue is unstable regress tests - there is a problem with\nopr_sanity\n\nSELECT p1.oid, p1.proname, p2.oid, p2.proname\nFROM pg_proc AS p1, pg_proc AS p2\nWHERE p1.oid < p2.oid AND\n p1.prosrc = p2.prosrc AND\n p1.prolang = 12 AND p2.prolang = 12 AND\n (p1.prokind != 'a' OR p2.prokind != 'a') AND\n (p1.prolang != p2.prolang OR\n p1.prokind != p2.prokind OR\n p1.prosecdef != p2.prosecdef OR\n p1.proleakproof != p2.proleakproof OR\n p1.proisstrict != p2.proisstrict OR\n p1.proretset != p2.proretset OR\n p1.provolatile != p2.provolatile OR\n p1.pronargs != p2.pronargs)\nORDER BY p1.oid, p2.oid; -- requires explicit ORDER BY now\n\n+ rangeTypeId = get_fn_expr_argtype(fcinfo->flinfo, 1);\n+ if (!type_is_range(rangeTypeId))\n+ {\n+ ereport(ERROR, (errmsg(\"range_agg must be called with a range\")));\n+ }\n\n???\n\n+ r1Str = \"lastRange\";\n+ r2Str = \"currentRange\";\n+ // TODO: Why is this segfaulting?:\n+ // r1Str =\nDatumGetCString(DirectFunctionCall1(range_out,\nRangeTypePGetDatum(lastRange)));\n+ // r2Str 
=\nDatumGetCString(DirectFunctionCall1(range_out,\nRangeTypePGetDatum(currentRange)));\n+ ereport(ERROR, (errmsg(\"range_agg: gap detected between\n%s and %s\", r1Str, r2Str)));\n+ }\n\n???\n\nThe patch doesn't respect Postgres formatting\n\n+# Making 2- and 3-param range_agg polymorphic is difficult\n+# because it would take an anyrange and return an anyrange[],\n+# which doesn't exist.\n+# As a workaround we define separate functions for each built-in range\ntype.\n+# This is what causes the mess in src/test/regress/expected/opr_sanity.out.\n+{ oid => '8003', descr => 'aggregate transition function',\n\nThis is strange. Does it means so range_agg will not work with custom range\ntypes?\n\nI am not sure about implementation. I think so accumulating all ranges,\nsorting and processing on the end can be memory and CPU expensive.\n\nRegards\n\nPavel\n\n\n\n>\n> [1] No female committers yet ... is this 2019?\n>\n> --\n> Álvaro Herrera https://www.2ndQuadrant.com/\n> PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n>\n>\n>\n",
"msg_date": "Fri, 5 Jul 2019 13:30:52 +0200",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: range_agg"
},
{
"msg_contents": "On Fri, 2019-07-05 at 07:58 +0200, Pavel Stehule wrote:\n> The question is naming - should be this agg function named\n> \"range_agg\", and multi range agg \"multirange_agg\"? Personally, I have\n> not a problem with range_agg, and I have not a problem so it is based\n> on union operation. It is true so only result of union can be\n> implemented as range simply. When I though about multi range result,\n> then there are really large set of possible algorithms how to do some\n> operations over two, three values.\n\nHi Pavel,\n\nCan you explain in more detail? Would an intersection-based aggregate\nbe useful? If so, and we implement it in the future, what would we call\nit?\n\nRegards,\n\tJeff Davis\n\n\n\n\n",
"msg_date": "Fri, 05 Jul 2019 09:48:29 -0700",
"msg_from": "Jeff Davis <pgsql@j-davis.com>",
"msg_from_op": false,
"msg_subject": "Re: range_agg"
},
{
"msg_contents": "On Mon, Jul 1, 2019 at 3:38 PM Jeff Davis <pgsql@j-davis.com> wrote:\n>\n> For getting into core though, it should be a more complete set of\n> related operations. The patch is implicitly introducing the concept of\n> a \"multirange\" (in this case, an array of ranges), but it's not making\n> the concept whole.\n>\n> What else should return a multirange? This patch implements the union-\n> agg of ranges, but we also might want an intersection-agg of ranges\n> (that is, the set of ranges that are subranges of every input). Given\n> that there are other options here, the name \"range_agg\" is too generic\n> to mean union-agg specifically.\n\nThanks for the review!\n\nI like the idea of adding a multirange type that works like range\ntypes, although I'm not sure I want to build it. :-)\n\nMy own motivations for the range_agg patch are for temporal databases,\nwhere I have two use-cases: checking temporal foreign keys [1] and\ndoing a Snodgrass \"coalesce\" operation [2]. The FKs use-case is more\nimportant. For coalesce I just immediately UNNEST the array, so a\nmultirange would sort of \"get in the way\". It's no big deal if I can\ncast a multirange to an array, although for something that would run\non every INSERT/UPDATE/DELETE I'd like to understand what the cast\nwould cost us in performance. But coalesce isn't part of SQL:2011 and\nshould be optional behavior or just something in an extension. The FKs\nuse case matters to me a lot more, and I think a multirange would be\nfine for that. Also a multirange would solve the generics problem\nPavel asked about. (I'll say more about that in a reply to him.)\n\nI'm not convinced that an intersection aggregate function for\nmultiranges would be used by anyone, but I don't mind building one.\nEvery other *_agg function has an \"additive\" sense, not a\n\"subtractive\" sense. 
json{,b}_object_agg are the closest since you\n*could* imagine intersection semantics for those, but they are unions.\nSo in terms of *naming* I think using \"range_agg\" for union semantics\nis natural and would fit expectations. (I'm not the first to name this\nfunction range_agg btw: [3]).\n\nBut there is clearly more than one worthwhile way to aggregate ranges:\n\n- David suggested weighted_range_agg and covering_range_agg. At PGCon\n2019 someone else said he has had to build something that was\nessentially weighted_range_agg. I can see it used for\nscheduling/capacity/max-utilization problems.\n- You suggested an intersection range_agg.\n- At [4] there is a missing_ranges function that only gives the *gaps*\nbetween the input ranges.\n\nNonetheless I still think I would call the union behavior simply\nrange_agg, and then use weighted_range_agg, covering_range_agg,\nintersection_range_agg, and missing_range_agg for the rest (assuming\nwe built them all). I'm not going to insist on any of that, but it's\nwhat feels most user-friendly to me.\n\n> What can we do with a multirange? A lot of range operators still make\n> sense, like \"contains\" or \"overlaps\"; but \"adjacent\" doesn't quite\n> work. What about new operations like inverting a multirange to get the\n> gaps?\n\nI can think of a lot of cool operators for `multirange \\bigoplus\nmultirange` or `multirange \\bigoplus range` (commuting of course). And\nI've certainly wanted `range + range` and `range - range` in the past,\nwhich would both return a multirange.\n\n> If we have a more complete set of operators here, the flags for\n> handling overlapping ranges and gaps will be unnecessary.\n\nBoth of my use cases should permit overlaps & gaps (range_agg(r, true,\ntrue)), so I'm actually pretty okay with dropping the flags entirely\nand just giving a one-param function that behavior. Or defining a\nstrict_range_agg that offers more control. 
But also I don't understand\nhow richer *operators* make the flags for the *aggregate* unnecessary.\n\nSo I really like the idea of multiranges, but I'm reluctant to take it\non myself, especially since this patch is just a utility function for\nmy other temporal patches. But I don't want my rush to leave a blemish\nin our standard library either. But I think what really persuades me\nto add multiranges is making a range_agg that more easily supports\nuser-defined range types. So how about I start on it and see how it\ngoes? I expect I can follow the existing code for range types pretty\nclosely, so maybe it won't be too hard.\n\nAnother option would be to rename my function range_array_agg (or\nsomething) so that we are leaving space for a multirange function in\nthe future. I don't love this idea myself but it could be a Plan B.\nWhat do you think of that?\n\nRegards,\nPaul\n\n[1] With range_agg I can make the temporal FK check use similar SQL to\nthe non-temporal FK check:\n\n    SELECT 1\n    FROM [ONLY] <pktable> x\n    WHERE pkatt1 = $1 [AND ...]\n    FOR KEY SHARE OF x\n\nvs\n\n    SELECT 1\n    FROM (\n        SELECT range_agg(r, true, true) AS r\n        FROM (\n            SELECT pkperiodatt AS r\n            FROM [ONLY] pktable x\n            WHERE pkatt1 = $1 [AND ...]\n            FOR KEY SHARE OF x\n        ) x1\n    ) x2\n    WHERE $n <@ x2.r[1]\n\n(The temporal version could be simplified a bit if FOR KEY SHARE ever\nsupports aggregate functions.)\n\n[2] page 159ff in https://www2.cs.arizona.edu/~rts/tdbbook.pdf\n(section 6.5.1, starts on page 183 of the PDF)\n\n[3] https://git.proteus-tech.com/open-source/django-postgres/blob/fa91cf9b43ce942e84b1a9be22f445f3515ca360/postgres/sql/range_agg.sql\n\n[4] https://schinckel.net/2014/11/18/aggregating-ranges-in-postgres/\n\n\n",
"msg_date": "Fri, 5 Jul 2019 09:58:02 -0700",
"msg_from": "Paul A Jungwirth <pj@illuminatedcomputing.com>",
"msg_from_op": false,
"msg_subject": "Re: range_agg"
},
{
"msg_contents": "On Thu, Jul 4, 2019 at 11:34 AM Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\n>\n> I noticed that this patch has a // comment about it segfaulting. Did\n> you ever figure that out? Is the resulting code the one you intend as\n> final?\n\nThanks for the review! I haven't revisited it but I'll see if I can\ntrack it down. I consider this a WIP patch, not something final. (I\ndon't think Postgres likes C++-style comments, so anything that is //\nmarks something I consider needs more work.)\n\n> Stylistically, the code does not match pgindent's choices very closely.\n> I can return a diff to a pgindented version of your v0002 for your\n> perusal, if it would be useful for you to learn its style.\n\nSorry about that, and thank you for making it easier for me to learn\nhow to do it the right way. :-)\n\nPaul\n\n\n",
"msg_date": "Fri, 5 Jul 2019 10:00:15 -0700",
"msg_from": "Paul A Jungwirth <pj@illuminatedcomputing.com>",
"msg_from_op": false,
"msg_subject": "Re: range_agg"
},
{
"msg_contents": "On Fri, Jul 5, 2019 at 4:31 AM Pavel Stehule <pavel.stehule@gmail.com> wrote:\n> The first issue is unstable regress tests - there is a problem with opr_sanity\n\nI would prefer to avoid needing to add anything to opr_sanity really.\nA multirange would let me achieve that I think. But otherwise I'll add\nthe ordering. Thanks!\n\n> + rangeTypeId = get_fn_expr_argtype(fcinfo->flinfo, 1);\n> + if (!type_is_range(rangeTypeId))\n> + {\n> + ereport(ERROR, (errmsg(\"range_agg must be called with a range\")));\n> + }\n\nI think this was left over from my attempts to support user-defined\nranges. I believe I can remove it now.\n\n> +# Making 2- and 3-param range_agg polymorphic is difficult\n> +# because it would take an anyrange and return an anyrange[],\n> +# which doesn't exist.\n> +# As a workaround we define separate functions for each built-in range type.\n> +# This is what causes the mess in src/test/regress/expected/opr_sanity.out.\n> +{ oid => '8003', descr => 'aggregate transition function',\n>\n> This is strange. Does it means so range_agg will not work with custom range types?\n\nYou would have to define your own range_agg with your own custom type\nin the signature. I gave instructions about that in my range_agg\nextension (https://github.com/pjungwir/range_agg), but it's not as\neasy with range_agg in core because I don't think you can define a\ncustom function that is backed by a built-in C function. At least I\ncouldn't get that to work.\n\nThe biggest argument for a multirange to me is that we could have\nanymultirange, with similar rules as anyarray. That way I could have\n`range_agg(r anyrange) RETURNS anymultirange`. [1] is a conversation\nwhere I asked about doing this before multiranges were suggested. Also\nI'm aware of your own recent work on polymorphic types at [2] but I\nhaven't read it closely enough to see if it would help me here. Do you\nthink it applies?\n\n> I am not sure about implementation. 
I think so accumulating all ranges, sorting and processing on the end can be memory and CPU expensive.\n\nI did some research and couldn't find any algorithm that was better\nthan O(n log n), but if you're aware of any I'd like to know about it.\nAssuming we can't improve on the complexity bounds, I think a sort +\niteration is desirable because it keeps things simple and benefits\nfrom past & future work on the sorting code. I care more about\noptimizing time here than RAM, especially since we'll use this in FK\nchecks, where the inputs will rarely be more than a few and probably\nnever in the millions.\n\nWe especially want to avoid an O(n^2) algorithm, which would be easy\nto stumble into if we naively accumulated the result as we go. For\nexample if we had multiranges you could imagine an implementation that\njust did `result + r` for all inputs r. But that would have the same\nn^2 pattern as looping over strcat.\n\nWe could use a tree to keep things sorted and accumulate as we go, but\nthe implementation would be more complicated and I think slower.\n\nThanks for the review!\n\nPaul\n\n[1] https://www.postgresql.org/message-id/CA%2BrenyVOjb4xQZGjdCnA54-1nzVSY%2B47-h4nkM-EP5J%3D1z%3Db9w%40mail.gmail.com\n\n[2] https://www.postgresql.org/message-id/CAFj8pRDna7VqNi8gR+Tt2Ktmz0cq5G93guc3Sbn_NVPLdXAkqA@mail.gmail.com\n\n\n",
"msg_date": "Fri, 5 Jul 2019 10:22:14 -0700",
"msg_from": "Paul A Jungwirth <pj@illuminatedcomputing.com>",
"msg_from_op": false,
"msg_subject": "Re: range_agg"
},
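Paul's sort-then-iterate approach can be sketched outside Postgres. The following is an illustrative model only, not the patch's C code: it uses plain Python over half-open [lo, hi) tuples, and the function name and flag semantics (patterned on the thread's three-parameter `range_agg(r, true, true)`) are assumptions of mine.

```python
def range_union_agg(ranges, permit_overlaps=True, permit_gaps=True):
    """Aggregate [lo, hi) pairs into a minimal sorted list of disjoint ranges.

    Sort once (O(n log n)), then sweep left to right, extending the current
    run or starting a new one. The single merge pass avoids the O(n^2) cost
    of repeatedly unioning each input into an accumulated result.
    """
    if not ranges:
        return []
    ordered = sorted(ranges)            # by (lower, upper)
    merged = [list(ordered[0])]
    for lo, hi in ordered[1:]:
        cur = merged[-1]
        if lo < cur[1]:                 # overlaps the current run
            if not permit_overlaps:
                raise ValueError("range_union_agg: overlap detected")
            cur[1] = max(cur[1], hi)
        elif lo == cur[1]:              # adjacent: always merge
            cur[1] = max(cur[1], hi)
        else:                           # gap before the next range
            if not permit_gaps:
                raise ValueError("range_union_agg: gap detected")
            merged.append([lo, hi])
    return [tuple(r) for r in merged]
```

The sort dominates at O(n log n); the sweep itself is one O(n) pass, which is why this shape avoids the `result + r` accumulation pattern Paul compares to looping over strcat.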
{
"msg_contents": "On Mon, Jul 1, 2019 at 3:38 PM Jeff Davis <pgsql@j-davis.com> wrote:\n>\n> The patch is implicitly introducing the concept of\n> a \"multirange\" (in this case, an array of ranges),\n\nI meant to say before: this patch always returns a sorted array, and I\nthink a multirange should always act as if sorted when we stringify it\nor cast it to an array. If you disagree let me know. :-)\n\nYou could imagine that when returning arrays we rely on the caller to\ndo the sorting (range_agg(r ORDER BY r)) and otherwise give wrong\nresults. But hopefully everyone agrees that would not be nice. :-) So\neven the array-returning version should always return a sorted array I\nthink. (I'm not sure anything else is really coherent or at least easy\nto describe.)\n\nPaul\n\n\n",
"msg_date": "Fri, 5 Jul 2019 10:38:26 -0700",
"msg_from": "Paul A Jungwirth <pj@illuminatedcomputing.com>",
"msg_from_op": false,
"msg_subject": "Re: range_agg"
},
{
"msg_contents": "On Fri, Jul 05, 2019 at 09:58:02AM -0700, Paul A Jungwirth wrote:\n> On Mon, Jul 1, 2019 at 3:38 PM Jeff Davis <pgsql@j-davis.com> wrote:\n> >\n> > For getting into core though, it should be a more complete set of\n> > related operations. The patch is implicitly introducing the concept of\n> > a \"multirange\" (in this case, an array of ranges), but it's not making\n> > the concept whole.\n> >\n> > What else should return a multirange? This patch implements the union-\n> > agg of ranges, but we also might want an intersection-agg of ranges\n> > (that is, the set of ranges that are subranges of every input). Given\n> > that there are other options here, the name \"range_agg\" is too generic\n> > to mean union-agg specifically.\n> \n> Thanks for the review!\n> \n> I like the idea of adding a multirange type that works like range\n> types, although I'm not sure I want to build it. :-)\n> \n> My own motivations for the range_agg patch are for temporal databases,\n> where I have two use-cases: checking temporal foreign keys [1] and\n> doing a Snodgrass \"coalesce\" operation [2]. The FKs use-case is more\n> important. For coalesce I just immediately UNNEST the array, so a\n> multirange would sort of \"get in the way\". It's no big deal if I can\n> cast a multirange to an array, although for something that would run\n> on every INSERT/UPDATE/DELETE I'd like to understand what the cast\n> would cost us in performance. But coalesce isn't part of SQL:2011 and\n> should be optional behavior or just something in an extension. The FKs\n> use case matters to me a lot more, and I think a multirange would be\n> fine for that. Also a multirange would solve the generics problem\n> Pavel asked about. 
(I'll say more about that in a reply to him.)\n> \n> I'm not convinced that an intersection aggregate function for\n> multiranges would be used by anyone, but I don't mind building one.\n> Every other *_agg function has an \"additive\" sense, not a\n> \"subtractive\" sense. json{,b}_object_agg are the closest since you\n> *could* imagine intersection semantics for those, but they are unions.\n> So in terms of *naming* I think using \"range_agg\" for union semantics\n> is natural and would fit expectations. (I'm not the first to name this\n> function range_agg btw: [3]).\n> \n> But there is clearly more than one worthwhile way to aggregate ranges:\n> \n> - David suggested weighted_range_agg and covering_range_agg. At PGCon\n\nIf I understand the cases correctly, the combination of covering_range\nand multi_range types covers all cases. To recap, covering_range_agg\nassigns a weight, possibly 0, to each non-overlapping sub-range. A\ncast from covering_range to multi_range would meld adjacent ranges\nwith non-zero weights into single ranges in O(N) (N sub-ranges) time\nand some teensy amount of memory for tracking progress of adjacent\nranges of non-zero weight. Gaps would just be multi_range consisting\nof the sub-ranges of the covering_range with weight 0, and wouldn't\nrequire any tracking.\n\nWhat have I missed?\n\nBest,\nDavid.\n-- \nDavid Fetter <david(at)fetter(dot)org> http://fetter.org/\nPhone: +1 415 235 3778\n\nRemember to vote!\nConsider donating to Postgres: http://www.postgresql.org/about/donate\n\n\n",
"msg_date": "Fri, 5 Jul 2019 19:45:32 +0200",
"msg_from": "David Fetter <david@fetter.org>",
"msg_from_op": false,
"msg_subject": "Re: range_agg"
},
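David's covering/weighted description can be modeled with a simple endpoint sweep. A hedged sketch over half-open [lo, hi) pairs (names are mine; the counting loop is written O(n^2) for clarity, where a real implementation would sweep sorted endpoints with a running counter):

```python
def covering_range_agg(ranges):
    """Split the covered span at every endpoint and assign each elementary
    sub-range a weight: the number of input ranges covering it.
    Zero-weight sub-ranges are the gaps."""
    if not ranges:
        return []
    points = sorted({p for r in ranges for p in r})
    out = []
    for lo, hi in zip(points, points[1:]):
        weight = sum(1 for a, b in ranges if a <= lo and hi <= b)
        out.append((lo, hi, weight))
    return out

def gaps(covering):
    """The multirange of uncovered sub-ranges, per David's weight-0 idea."""
    return [(lo, hi) for lo, hi, w in covering if w == 0]
```

As David notes, melding adjacent non-zero-weight sub-ranges of this output back into a disjoint multirange is a single O(N) pass over the N sub-ranges.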
{
"msg_contents": "On Fri, Jul 5, 2019 at 10:45 AM David Fetter <david@fetter.org> wrote:\n> If I understand the cases correctly, the combination of covering_range\n> and multi_range types covers all cases. To recap, covering_range_agg\n> assigns a weight, possibly 0, to each non-overlapping sub-range. A\n> cast from covering_range to multi_range would meld adjacent ranges\n> with non-zero weights into single ranges in O(N) (N sub-ranges) time\n> and some teensy amount of memory for tracking progress of adjacent\n> ranges of non-zero weight. Gaps would just be multi_range consisting\n> of the sub-ranges of the covering_range with weight 0, and wouldn't\n> require any tracking.\n\nI take it that a multirange contains of *disjoint* ranges, so instead\nof {[1,2), [2,3), [6,7)} you'd have {[1,3), [6,7)}. Jeff does that\nmatch your expectation?\n\nI just realized that since weighted_range_agg and covering_range_agg\nreturn tuples of (anyrange, integer) (maybe other numeric types too?),\ntheir elements are *not ranges*, so they couldn't return a multirange.\nThey would have to return an array of those tuples.\n\nI agree that if you had a pre-sorted list of weighted ranges (with or\nwithout zero-weight elements), you could convert it to a multirange in\nO(n) very easily.\n\nPaul\n\n\n",
"msg_date": "Fri, 5 Jul 2019 10:57:53 -0700",
"msg_from": "Paul A Jungwirth <pj@illuminatedcomputing.com>",
"msg_from_op": false,
"msg_subject": "Re: range_agg"
},
{
"msg_contents": "On Fri, Jul 5, 2019 at 10:57 AM Paul A Jungwirth\n<pj@illuminatedcomputing.com> wrote:\n> I take it that a multirange contains of *disjoint* ranges,\n\n*consists* of. :-)\n\n\n",
"msg_date": "Fri, 5 Jul 2019 10:59:26 -0700",
"msg_from": "Paul A Jungwirth <pj@illuminatedcomputing.com>",
"msg_from_op": false,
"msg_subject": "Re: range_agg"
},
{
"msg_contents": "Hi\n\nOn Fri, Jul 5, 2019 at 18:48, Jeff Davis <pgsql@j-davis.com> wrote:\n\n> On Fri, 2019-07-05 at 07:58 +0200, Pavel Stehule wrote:\n> > The question is naming - should be this agg function named\n> > \"range_agg\", and multi range agg \"multirange_agg\"? Personally, I have\n> > not a problem with range_agg, and I have not a problem so it is based\n> > on union operation. It is true so only result of union can be\n> > implemented as range simply. When I though about multi range result,\n> > then there are really large set of possible algorithms how to do some\n> > operations over two, three values.\n>\n> Hi Pavel,\n>\n> Can you explain in more detail? Would an intersection-based aggregate\n> be useful? If so, and we implement it in the future, what would we call\n> it?\n>\n\nIntersection can be interesting - you can use it for planning: \"is there an\nintersection of free time ranges?\"\n\nAbout naming - what about range_union_agg() and range_intersect_agg()?\n\nRegards\n\nPavel\n\n\n>\n> Regards,\n> Jeff Davis\n>\n>\n",
"msg_date": "Sat, 6 Jul 2019 08:09:40 +0200",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: range_agg"
},
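For intersection over plain ranges, the fold is simpler than union: keep the greatest lower bound and least upper bound seen. A sketch of the `range_intersect_agg()` semantics Pavel names (half-open [lo, hi) pairs; an empty intersection is modeled as None here, standing in for an empty range):

```python
def range_intersect_agg(ranges):
    """The span covered by *every* input range, or None if there is none.

    For Pavel's free-time example: intersecting everyone's availability
    windows yields the slot that works for all of them.
    """
    if not ranges:
        return None
    lo = max(r[0] for r in ranges)
    hi = min(r[1] for r in ranges)
    return (lo, hi) if lo < hi else None
```

Unlike the union aggregate, this needs no sort: it is a single O(n) pass and the result is always a single range (or empty), so it would not require a multirange result type.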
{
"msg_contents": "On Fri, Jul 5, 2019 at 19:22, Paul A Jungwirth <pj@illuminatedcomputing.com> wrote:\n\n> On Fri, Jul 5, 2019 at 4:31 AM Pavel Stehule <pavel.stehule@gmail.com>\n> wrote:\n> > The first issue is unstable regress tests - there is a problem with\n> opr_sanity\n>\n> I would prefer to avoid needing to add anything to opr_sanity really.\n> A multirange would let me achieve that I think. But otherwise I'll add\n> the ordering. Thanks!\n>\n> > + rangeTypeId = get_fn_expr_argtype(fcinfo->flinfo, 1);\n> > + if (!type_is_range(rangeTypeId))\n> > + {\n> > + ereport(ERROR, (errmsg(\"range_agg must be called with a\n> range\")));\n> > + }\n>\n> I think this was left over from my attempts to support user-defined\n> ranges. I believe I can remove it now.\n>\n> > +# Making 2- and 3-param range_agg polymorphic is difficult\n> > +# because it would take an anyrange and return an anyrange[],\n> > +# which doesn't exist.\n> > +# As a workaround we define separate functions for each built-in range\n> type.\n> > +# This is what causes the mess in\n> src/test/regress/expected/opr_sanity.out.\n> > +{ oid => '8003', descr => 'aggregate transition function',\n> >\n> > This is strange. Does it means so range_agg will not work with custom\n> range types?\n>\n> You would have to define your own range_agg with your own custom type\n> in the signature. I gave instructions about that in my range_agg\n> extension (https://github.com/pjungwir/range_agg), but it's not as\n> easy with range_agg in core because I don't think you can define a\n> custom function that is backed by a built-in C function. At least I\n> couldn't get that to work.\n>\n\nI understand that anybody can implement their own function, but this is a\nfundamental problem with the implementation.\n\nIn this case I would prefer to start by implementing an anyrangearray type\nfirst. It is not hard work.\n\n\n>\n> The biggest argument for a multirange to me is that we could have\n> anymultirange, with similar rules as anyarray. That way I could have\n> `range_agg(r anyrange) RETURNS anymultirange`. [1] is a conversation\n> where I asked about doing this before multiranges were suggested. Also\n> I'm aware of your own recent work on polymorphic types at [2] but I\n> haven't read it closely enough to see if it would help me here. Do you\n> think it applies?\n>\n\nI don't see any difference between anymultirange and anyrangearray - it is\njust a name.\n\nANSI SQL has a special kind for this purpose - \"set\". That is maybe better,\nbecause PostgreSQL's arrays evoke an idea of ordering, but it is hard to\ndefine a generic ordering for ranges.\n\nFunctionality is important - if you can provide some special functionality\nthat cannot be implemented as an \"array of ranges\", then the new type is\nnecessary. If all the functionality can be covered by an array of ranges,\nthen there is no need for a new type.\n\nI am not talking about polymorphic types - it probably needs one -\n\"anymultirange\" or \"anyrangearray\".\n\nIf some operation can be done smarter with a new type, then I am OK with a\nnew type; if not, then we should use an array of ranges.\n\n\n> > I am not sure about implementation. I think so accumulating all ranges,\n> sorting and processing on the end can be memory and CPU expensive.\n>\n> I did some research and couldn't find any algorithm that was better\n> than O(n log n), but if you're aware of any I'd like to know about it.\n> Assuming we can't improve on the complexity bounds, I think a sort +\n> iteration is desirable because it keeps things simple and benefits\n> from past & future work on the sorting code. I care more about\n> optimizing time here than RAM, especially since we'll use this in FK\n> checks, where the inputs will rarely be more than a few and probably\n> never in the millions.\n>\n> We especially want to avoid an O(n^2) algorithm, which would be easy\n> to stumble into if we naively accumulated the result as we go. For\n> example if we had multiranges you could imagine an implementation that\n> just did `result + r` for all inputs r. But that would have the same\n> n^2 pattern as looping over strcat.\n>\n> We could use a tree to keep things sorted and accumulate as we go, but\n> the implementation would be more complicated and I think slower.\n>\n\nI don't think so - working with large arrays is slow, due to frequent cache\nmisses. I understand that it depends on the data, but in this area I can\nimagine significant memory reductions from processing the input as it\narrives.\n\nThe array builder doesn't respect work_mem, and I think any different\napproach is safer.\n\nRegards\n\nPavel\n\n\n\n>\n> Thanks for the review!\n>\n> Paul\n>\n> [1]\n> https://www.postgresql.org/message-id/CA%2BrenyVOjb4xQZGjdCnA54-1nzVSY%2B47-h4nkM-EP5J%3D1z%3Db9w%40mail.gmail.com\n>\n> [2]\n> https://www.postgresql.org/message-id/CAFj8pRDna7VqNi8gR+Tt2Ktmz0cq5G93guc3Sbn_NVPLdXAkqA@mail.gmail.com\n>\n",
"msg_date": "Sat, 6 Jul 2019 08:33:55 +0200",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: range_agg"
},
{
"msg_contents": "On Fri, 2019-07-05 at 09:58 -0700, Paul A Jungwirth wrote:\n> user-defined range types. So how about I start on it and see how it\n> goes? I expect I can follow the existing code for range types pretty\n> closely, so maybe it won't be too hard.\n\nThat would be great to at least take a look. If it starts to look like\na bad idea, then we can re-evaluate and see if it's better to just\nreturn arrays.\n\nThe \"weighted\" version of the aggregate might be interesting... what\nwould it return exactly? An array of (range, weight) pairs, or an array\nof ranges and an array of weights, or a multirange and an array of\nweights?\n\n> Another option would be to rename my function range_array_agg (or\n> something) so that we are leaving space for a multirange function in\n> the future. I don't love this idea myself but it would could a Plan\n> B.\n> What do you think of that?\n\nNot excited about that either.\n\nRegards,\n\tJeff Davis\n\n\n\n\n",
"msg_date": "Sat, 06 Jul 2019 12:13:08 -0700",
"msg_from": "Jeff Davis <pgsql@j-davis.com>",
"msg_from_op": false,
"msg_subject": "Re: range_agg"
},
{
"msg_contents": "On Fri, 2019-07-05 at 10:57 -0700, Paul A Jungwirth wrote:\n> I take it that a multirange contains of *disjoint* ranges, so instead\n> of {[1,2), [2,3), [6,7)} you'd have {[1,3), [6,7)}. Jeff does that\n> match your expectation?\n\nYes.\n\n> I just realized that since weighted_range_agg and covering_range_agg\n> return tuples of (anyrange, integer) (maybe other numeric types\n> too?),\n> their elements are *not ranges*, so they couldn't return a\n> multirange.\n> They would have to return an array of those tuples.\n\nI think you are right. I was originally thinking a multirange and an\narray of weights would work, but the multirange would coalesce adjacent\nranges because it would have no way to know they have different\nweights.\n\nRegards,\n\tJeff Davis\n\n\n\n\n",
"msg_date": "Sat, 06 Jul 2019 12:26:06 -0700",
"msg_from": "Jeff Davis <pgsql@j-davis.com>",
"msg_from_op": false,
"msg_subject": "Re: range_agg"
},
{
"msg_contents": "On Sat, Jul 6, 2019 at 12:13 PM Jeff Davis <pgsql@j-davis.com> wrote:\n>\n> On Fri, 2019-07-05 at 09:58 -0700, Paul A Jungwirth wrote:\n> > user-defined range types. So how about I start on it and see how it\n> > goes? I expect I can follow the existing code for range types pretty\n> > closely, so maybe it won't be too hard.\n>\n> That would be great to at least take a look. If it starts to look like\n> a bad idea, then we can re-evaluate and see if it's better to just\n> return arrays.\n\nI made some progress over the weekend. I don't have a patch yet but I\nthought I'd ask for opinions on the approach I'm taking:\n\n- A multirange type is an extra thing you get when you define a range\n(just like how you get a tstzrange[]). Therefore....\n- I don't need separate commands to add/drop multirange types. You get\none when you define a range type, and if you drop a range type it gets\ndropped automatically.\n- I'm adding a new typtype for multiranges. ('m' in pg_type).\n- I'm just adding a mltrngtypeid column to pg_range. I don't think I\nneed a new pg_multirange table.\n- You can have a multirange[].\n- Multirange in/out work just like arrays, e.g. '{\"[1,3)\", \"[5,6)\"}'\n- I'll add an anymultirange pseudotype. When it's the return type of a\nfunction that has an \"anyrange\" parameter, it will use the same base\nelement type. (So basically anymultirange : anyrange :: anyarray ::\nanyelement.)\n- You can cast from a multirange to an array. The individual ranges\nare always sorted in the result array.\n- You can cast from an array to a multirange but it will error if\nthere are overlaps (or not?). The array's ranges don't have to be\nsorted but they will be after a \"round trip\".\n- Interesting functions:\n - multirange_length\n - range_agg (range_union_agg if you like)\n - range_intersection_agg\n- You can subscript a multirange like you do an array (? 
This could be\na function instead.)\n- operators:\n - union (++) and intersection (*):\n - We already have + for range union but it raises an error if\nthere is a gap, so ++ is the same but with no errors.\n - r ++ r = mr (commutative, associative)\n - mr ++ r = mr\n - r ++ mr = mr\n - r * r = r (unchanged)\n - mr * r = r\n - r * mr = r\n - mr - r = mr\n - r - mr = mr\n - mr - mr = mr\n - comparison\n - mr = mr\n - mr @> x\n - mr @> r\n - mr @> mr\n - x <@ mr\n - r <@ mr\n - mr <@ mr\n - mr << mr (strictly left of)\n - mr >> mr (strictly right of)\n - mr &< mr (does not extend to the right of)\n - mr &> mr (does not extend to the left of)\n - inverse operator?:\n - the inverse of {\"[1,2)\"} would be {\"[null, 1)\", \"[2, null)\"}.\n - not sure we want this or what the symbol should be. I don't like\n-mr as an inverse because then mr - mr != mr ++ -mr.\n\nAnything in there you think should be different?\n\nThanks,\nPaul\n\n\n",
"msg_date": "Mon, 8 Jul 2019 09:46:44 -0700",
"msg_from": "Paul A Jungwirth <pj@illuminatedcomputing.com>",
"msg_from_op": false,
"msg_subject": "Re: range_agg"
},
{
"msg_contents": "Hi\n\npo 8. 7. 2019 v 18:47 odesílatel Paul A Jungwirth <\npj@illuminatedcomputing.com> napsal:\n\n> On Sat, Jul 6, 2019 at 12:13 PM Jeff Davis <pgsql@j-davis.com> wrote:\n> >\n> > On Fri, 2019-07-05 at 09:58 -0700, Paul A Jungwirth wrote:\n> > > user-defined range types. So how about I start on it and see how it\n> > > goes? I expect I can follow the existing code for range types pretty\n> > > closely, so maybe it won't be too hard.\n> >\n> > That would be great to at least take a look. If it starts to look like\n> > a bad idea, then we can re-evaluate and see if it's better to just\n> > return arrays.\n>\n> I made some progress over the weekend. I don't have a patch yet but I\n> thought I'd ask for opinions on the approach I'm taking:\n>\n> - A multirange type is an extra thing you get when you define a range\n> (just like how you get a tstzrange[]). Therefore....\n>\n\nI am not against a multirange type, but I miss a explanation why you\nintroduce new kind of types and don't use just array of ranges.\n\nIntroduction of new kind of types is not like introduction new type.\n\nRegards\n\nPavel\n\n\n- I don't need separate commands to add/drop multirange types. You get\n> one when you define a range type, and if you drop a range type it gets\n> dropped automatically.\n> - I'm adding a new typtype for multiranges. ('m' in pg_type).\n> - I'm just adding a mltrngtypeid column to pg_range. I don't think I\n> need a new pg_multirange table.\n> - You can have a multirange[].\n> - Multirange in/out work just like arrays, e.g. '{\"[1,3)\", \"[5,6)\"}'\n> - I'll add an anymultirange pseudotype. When it's the return type of a\n> function that has an \"anyrange\" parameter, it will use the same base\n> element type. (So basically anymultirange : anyrange :: anyarray ::\n> anyelement.)\n> - You can cast from a multirange to an array. 
The individual ranges\n> are always sorted in the result array.\n> - You can cast from an array to a multirange but it will error if\n> there are overlaps (or not?). The array's ranges don't have to be\n> sorted but they will be after a \"round trip\".\n> - Interesting functions:\n> - multirange_length\n> - range_agg (range_union_agg if you like)\n> - range_intersection_agg\n> - You can subscript a multirange like you do an array (? This could be\n> a function instead.)\n> - operators:\n> - union (++) and intersection (*):\n> - We already have + for range union but it raises an error if\n> there is a gap, so ++ is the same but with no errors.\n> - r ++ r = mr (commutative, associative)\n> - mr ++ r = mr\n> - r ++ mr = mr\n> - r * r = r (unchanged)\n> - mr * r = r\n> - r * mr = r\n> - mr - r = mr\n> - r - mr = mr\n> - mr - mr = mr\n> - comparison\n> - mr = mr\n> - mr @> x\n> - mr @> r\n> - mr @> mr\n> - x <@ mr\n> - r <@ mr\n> - mr <@ mr\n> - mr << mr (strictly left of)\n> - mr >> mr (strictly right of)\n> - mr &< mr (does not extend to the right of)\n> - mr &> mr (does not extend to the left of)\n> - inverse operator?:\n> - the inverse of {\"[1,2)\"} would be {\"[null, 1)\", \"[2, null)\"}.\n> - not sure we want this or what the symbol should be. I don't like\n> -mr as an inverse because then mr - mr != mr ++ -mr.\n>\n> Anything in there you think should be different?\n>\n> Thanks,\n> Paul\n>\n>\n>\n",
"msg_date": "Tue, 9 Jul 2019 07:08:49 +0200",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: range_agg"
},
{
"msg_contents": "On Mon, Jul 08, 2019 at 09:46:44AM -0700, Paul A Jungwirth wrote:\n> On Sat, Jul 6, 2019 at 12:13 PM Jeff Davis <pgsql@j-davis.com> wrote:\n> >\n> > On Fri, 2019-07-05 at 09:58 -0700, Paul A Jungwirth wrote:\n> > > user-defined range types. So how about I start on it and see how it\n> > > goes? I expect I can follow the existing code for range types pretty\n> > > closely, so maybe it won't be too hard.\n> >\n> > That would be great to at least take a look. If it starts to look like\n> > a bad idea, then we can re-evaluate and see if it's better to just\n> > return arrays.\n> \n> I made some progress over the weekend. I don't have a patch yet but I\n> thought I'd ask for opinions on the approach I'm taking:\n> \n> - A multirange type is an extra thing you get when you define a range\n> (just like how you get a tstzrange[]). Therefore....\n> - I don't need separate commands to add/drop multirange types. You get\n> one when you define a range type, and if you drop a range type it gets\n> dropped automatically.\n\nYay for fewer manual steps!\n\n> - I'm adding a new typtype for multiranges. ('m' in pg_type).\n> - I'm just adding a mltrngtypeid column to pg_range. I don't think I\n> need a new pg_multirange table.\n\nMakes sense, as they'd no longer be separate concepts.\n\n> - You can have a multirange[].\n\nI can see how that would fall out of this, but I'm a little puzzled as\nto what people might use it for. Aggregates, maybe?\n\n> - Multirange in/out work just like arrays, e.g. '{\"[1,3)\", \"[5,6)\"}'\n> - I'll add an anymultirange pseudotype. When it's the return type of a\n> function that has an \"anyrange\" parameter, it will use the same base\n> element type. (So basically anymultirange : anyrange :: anyarray ::\n> anyelement.)\n\nNeat!\n\n> - You can cast from a multirange to an array. The individual ranges\n> are always sorted in the result array.\n\nIs this so people can pick individual ranges out of the multirange,\nor...? 
Speaking of casts, it's possible that a multirange is also a\nrange. Would it make sense to have a cast from multirange to range?\n\n> - You can cast from an array to a multirange but it will error if\n> there are overlaps (or not?).\n\nAn alternative would be to canonicalize into non-overlapping ranges.\nThere's some precedent for this in casts to JSONB. Maybe a function\nthat isn't a cast should handle such things.\n\n> The array's ranges don't have to be sorted but they will be after a\n> \"round trip\".\n\nMakes sense.\n\n> - Interesting functions:\n> - multirange_length\n\nIs that the sum of the lengths of the ranges? Are we guaranteeing a\nmeasure in addition to ordering on ranges now?\n\n> - range_agg (range_union_agg if you like)\n> - range_intersection_agg\n> - You can subscript a multirange like you do an array (? This could be\n> a function instead.)\n\nHow would this play with the generic subscripting patch in flight?\n\n> - operators:\n> - union (++) and intersection (*):\n> - We already have + for range union but it raises an error if\n> there is a gap, so ++ is the same but with no errors.\n> - r ++ r = mr (commutative, associative)\n> - mr ++ r = mr\n> - r ++ mr = mr\n> - r * r = r (unchanged)\n> - mr * r = r\n> - r * mr = r\n\nShouldn't the two above both yield multirange ? For example, if I\nunderstand correctly,\n\n{\"[1,3)\",\"[5,7)\"} * [2,6) should be {\"[2,3)\",\"[5,6)\"}\n\n> - mr - r = mr\n> - r - mr = mr\n> - mr - mr = mr\n> - comparison\n> - mr = mr\n> - mr @> x\n\nx is in the domain of the (multi)range?\n\n> - mr @> r\n> - mr @> mr\n> - x <@ mr\n> - r <@ mr\n> - mr <@ mr\n> - mr << mr (strictly left of)\n> - mr >> mr (strictly right of)\n> - mr &< mr (does not extend to the right of)\n> - mr &> mr (does not extend to the left of)\n> - inverse operator?:\n> - the inverse of {\"[1,2)\"} would be {\"[null, 1)\", \"[2, null)\"}.\n\nIs that the same as [\"(∞, ∞)\"] - {\"[1,2)\"}? 
I seem to recall that the\nusual convention (at least in math) is to use intervals that are\ngenerally represented as open on the infinity side, but that might not\nfit how we do things.\n\n> - not sure we want this or what the symbol should be. I don't like\n> -mr as an inverse because then mr - mr != mr ++ -mr.\n\n!mr , perhaps?\n\nBest,\nDavid.\n-- \nDavid Fetter <david(at)fetter(dot)org> http://fetter.org/\nPhone: +1 415 235 3778\n\nRemember to vote!\nConsider donating to Postgres: http://www.postgresql.org/about/donate\n\n\n",
"msg_date": "Tue, 9 Jul 2019 17:51:12 +0200",
"msg_from": "David Fetter <david@fetter.org>",
"msg_from_op": false,
"msg_subject": "Re: range_agg"
},
{
"msg_contents": "On Mon, Jul 8, 2019 at 10:09 PM Pavel Stehule <pavel.stehule@gmail.com> wrote:\n> po 8. 7. 2019 v 18:47 odesílatel Paul A Jungwirth <pj@illuminatedcomputing.com> napsal:\n> I am not against a multirange type, but I miss a explanation why you introduce new kind of types and don't use just array of ranges.\n\nHi Pavel, I'm sorry, and thanks for your feedback! I had a response to\nyour earlier email but it was stuck in my Drafts folder.\n\nI do think a multirange would have enough new functionality to be\nworth doing. I was pretty reluctant to take it on at first but the\nidea is growing on me, and it does seem to offer a more sensible\ninterface. A lot of the value would come from range and multirange\noperators, which we can't do with just arrays (I think). Having a\nrange get merged correctly when you add it would be very helpful. Also\nit would be nice to have a data type that enforces a valid structure,\nsince not all range arrays are valid multiranges. My reply yesterday\nto Jeff expands on this with all the functions/operators/etc we could\noffer.\n\nYour other email also asked:\n> I don't think - working with large arrays is slow, due often cache miss.\n>I understand so it depends on data, but in this area, I can imagine significant memory reduction based on running processing.\n>\n> array builder doesn't respect work_mem, and I think so any different way is safer.\n\nI'm still thinking about this one. I tried to work out how I'd\nimplement a tree-based sorted list of ranges so that I can quickly\ninsert/remove ranges. It is very complicated, and I started to feel\nlike I was just re-implementing GiST but in memory. I did find the\ninterval_set class from Boost's boost_icl library which could offer\nsome guidance. But for now I want to press forward with a\nsort-then-iterate implementation and consider a different\nimplementation later. If you have any guidance I would appreciate it!\nI especially want something that is O(n log n) to insert n ranges.\nOther suggestions here are very welcome! :-)\n\nRegards,\nPaul\n\n\n",
"msg_date": "Tue, 9 Jul 2019 09:31:07 -0700",
"msg_from": "Paul A Jungwirth <pj@illuminatedcomputing.com>",
"msg_from_op": false,
"msg_subject": "Re: range_agg"
},
{
"msg_contents": "On Tue, Jul 9, 2019 at 8:51 AM David Fetter <david@fetter.org> wrote:\n> > - A multirange type is an extra thing you get when you define a range\n> > (just like how you get a tstzrange[]). Therefore....\n> > - I don't need separate commands to add/drop multirange types. You get\n> > one when you define a range type, and if you drop a range type it gets\n> > dropped automatically.\n>\n> Yay for fewer manual steps!\n\nThanks for taking a look and sharing your thoughts!\n\n> > - You can have a multirange[].\n>\n> I can see how that would fall out of this, but I'm a little puzzled as\n> to what people might use it for. Aggregates, maybe?\n\nI don't know either, but I thought it was standard to define a T[] for\nevery T. Anyway it doesn't seem difficult.\n\n> > - You can cast from a multirange to an array. The individual ranges\n> > are always sorted in the result array.\n>\n> Is this so people can pick individual ranges out of the multirange,\n> or...?\n\nYes. I want this for foreign keys actually, where I construct a\nmultirange and ask for just its first range.\n\n> Speaking of casts, it's possible that a multirange is also a\n> range. Would it make sense to have a cast from multirange to range?\n\nHmm, that seems strange to me. You don't cast from an array to one of\nits elements. If we have subscripting, why use casting to get the\nfirst element?\n\n> > - You can cast from an array to a multirange but it will error if\n> > there are overlaps (or not?).\n>\n> An alternative would be to canonicalize into non-overlapping ranges.\n> There's some precedent for this in casts to JSONB. Maybe a function\n> that isn't a cast should handle such things.\n\nI agree it'd be nice to have both.\n\n> > - Interesting functions:\n> > - multirange_length\n>\n> Is that the sum of the lengths of the ranges? 
Are we guaranteeing a\n> measure in addition to ordering on ranges now?\n\nJust the number of disjoint ranges in the multirange.\n\n> > - You can subscript a multirange like you do an array (? This could be\n> > a function instead.)\n>\n> How would this play with the generic subscripting patch in flight?\n\nI'm not aware of that patch but I guess I better check it out. :-)\n\n> > - operators:\n> > - mr * r = r\n> > - r * mr = r\n>\n> Shouldn't the two above both yield multirange ? For example, if I\n> understand correctly,\n\nYou're right! Thanks for the correction.\n\n> > - comparison\n> > - mr = mr\n> > - mr @> x\n>\n> x is in the domain of the (multi)range?\n\nYes. It's the scalar base type the range type is based on. I had in\nmind the math/ML convention of `x` for scalar and `X` for\nvector/matrix.\n\n> > - inverse operator?:\n> > - the inverse of {\"[1,2)\"} would be {\"[null, 1)\", \"[2, null)\"}.\n>\n> Is that the same as [\"(∞, ∞)\"] - {\"[1,2)\"}?\n\nYes.\n\n> I seem to recall that the\n> usual convention (at least in math) is to use intervals that are\n> generally represented as open on the infinity side, but that might not\n> fit how we do things.\n\nI think it does, unless I'm misunderstanding?\n\n> > - not sure we want this or what the symbol should be. I don't like\n> > -mr as an inverse because then mr - mr != mr ++ -mr.\n>\n> !mr , perhaps?\n\nI like that suggestion. Honestly I'm not sure we even want an inverse,\nbut it's so important theoretically we should at least consider\nwhether it is appropriate here. Or maybe \"inverse\" is the wrong word\nfor this, or there is a different meaning it should have.\n\nThanks,\nPaul\n\n\n",
"msg_date": "Tue, 9 Jul 2019 09:40:59 -0700",
"msg_from": "Paul A Jungwirth <pj@illuminatedcomputing.com>",
"msg_from_op": false,
"msg_subject": "Re: range_agg"
},
{
"msg_contents": "On Tue, 2019-07-09 at 07:08 +0200, Pavel Stehule wrote:\n> \n> I am not against a multirange type, but I miss a explanation why you\n> introduce new kind of types and don't use just array of ranges.\n> \n> Introduction of new kind of types is not like introduction new type.\n\nThe biggest benefit, in my opinion, is that it means you can define\nfunctions/operators that take an \"anyrange\" and return an\n\"anymultirange\". That way you don't have to define different functions\nfor int4 ranges, date ranges, etc.\n\nIt starts to get even more complex when you want to add opclasses, etc.\n\nRanges and arrays are effectively generic types that need a type\nparameter to become a concrete type. Ideally, we'd have first-class\nsupport for generic types, but I think that's a different topic ;-)\n\nRegards,\n\tJeff Davis\n\n\n\n\n",
"msg_date": "Tue, 09 Jul 2019 11:24:56 -0700",
"msg_from": "Jeff Davis <pgsql@j-davis.com>",
"msg_from_op": false,
"msg_subject": "Re: range_agg"
},
{
"msg_contents": "On 2019-Jul-08, Paul A Jungwirth wrote:\n\n> - You can subscript a multirange like you do an array (? This could be\n> a function instead.)\n\nNote that we already have a patch in the pipe to make subscripting an\nextensible operation, which would fit pretty well here, I think.\n\nAlso, I suppose you would need unnest(multirange) to yield the set of\nranges.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Tue, 9 Jul 2019 15:01:18 -0400",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: range_agg"
},
{
"msg_contents": "On Mon, 2019-07-08 at 09:46 -0700, Paul A Jungwirth wrote:\n> - A multirange type is an extra thing you get when you define a range\n> (just like how you get a tstzrange[]). Therefore....\n\nAgreed.\n\n> - I'm adding a new typtype for multiranges. ('m' in pg_type).\n\nSounds reasonable.\n\n> - I'm just adding a mltrngtypeid column to pg_range. I don't think I\n> need a new pg_multirange table.\n> - You can have a multirange[].\n> - Multirange in/out work just like arrays, e.g. '{\"[1,3)\", \"[5,6)\"}'\n\nIt would be cool to have a better text representation. We could go\nsimple like:\n\n '[1,3) [5,6)'\n\nOr maybe someone has another idea how to represent a multirange to be\nmore visually descriptive?\n\n> - I'll add an anymultirange pseudotype. When it's the return type of\n> a\n> function that has an \"anyrange\" parameter, it will use the same base\n> element type. (So basically anymultirange : anyrange :: anyarray ::\n> anyelement.)\n\nI like it.\n\n> - range_agg (range_union_agg if you like)\n> - range_intersection_agg\n\nI'm fine with those names.\n\n> - You can subscript a multirange like you do an array (? This could\n> be\n> a function instead.)\n\nI wouldn't try to hard to make them subscriptable. I'm not opposed to\nit, but it's easy enough to cast to an array and then subscript.\n\n> - operators:\n> - union (++) and intersection (*):\n> - We already have + for range union but it raises an error if\n> there is a gap, so ++ is the same but with no errors.\n> - r ++ r = mr (commutative, associative)\n> - mr ++ r = mr\n> - r ++ mr = mr\n\nI like it.\n\n> - inverse operator?:\n> - the inverse of {\"[1,2)\"} would be {\"[null, 1)\", \"[2, null)\"}.\n> - not sure we want this or what the symbol should be. 
I don't\n> like\n> -mr as an inverse because then mr - mr != mr ++ -mr.\n\nI think \"complement\" might be a better name than \"inverse\".\n\nm1 - m2 = m1 * complement(m2)\n\nWhat about \"~\"?\n\n\n\nThere will be some changes to parse_coerce.c, just like in range types.\nI took a brief look here and it looks pretty reasonable; hopefully\nthere aren't any hidden surprises.\n\nRegards,\n\tJeff Davis\n\n\n\n\n",
"msg_date": "Tue, 09 Jul 2019 12:02:38 -0700",
"msg_from": "Jeff Davis <pgsql@j-davis.com>",
"msg_from_op": false,
"msg_subject": "Re: range_agg"
},
{
"msg_contents": "On 7/9/19 12:01 PM, Alvaro Herrera wrote:\n> On 2019-Jul-08, Paul A Jungwirth wrote:\n> \n>> - You can subscript a multirange like you do an array (? This could be\n>> a function instead.)\n> \n> Note that we already have a patch in the pipe to make subscripting an\n> extensible operation, which would fit pretty well here, I think.\n\nI'll take a look at that!\n\n> Also, I suppose you would need unnest(multirange) to yield the set of\n> ranges.\n\nI think that would be really nice, although it isn't critical I think if \nyou can do something like UNNEST(multirange::tstzrange[]).\n\n-- \nPaul ~{:-)\npj@illuminatedcomputing.com\n\n\n",
"msg_date": "Tue, 9 Jul 2019 12:05:16 -0700",
"msg_from": "Paul Jungwirth <pj@illuminatedcomputing.com>",
"msg_from_op": true,
"msg_subject": "Re: range_agg"
},
{
"msg_contents": "út 9. 7. 2019 v 20:25 odesílatel Jeff Davis <pgsql@j-davis.com> napsal:\n\n> On Tue, 2019-07-09 at 07:08 +0200, Pavel Stehule wrote:\n> >\n> > I am not against a multirange type, but I miss a explanation why you\n> > introduce new kind of types and don't use just array of ranges.\n> >\n> > Introduction of new kind of types is not like introduction new type.\n>\n> The biggest benefit, in my opinion, is that it means you can define\n> functions/operators that take an \"anyrange\" and return an\n> \"anymultirange\". That way you don't have to define different functions\n> for int4 ranges, date ranges, etc.\n>\n>\nI am not sure how strong is this argument.\n\nI think so introduction of anyrangearray polymorphic type and enhancing\nsome type deduction can do same work.\n\nIt starts to get even more complex when you want to add opclasses, etc.\n>\n> Ranges and arrays are effectively generic types that need a type\n> parameter to become a concrete type. Ideally, we'd have first-class\n> support for generic types, but I think that's a different topic ;-)\n\n\nI afraid so with generic multiragetype there lot of array infrastructure\nwill be duplicated\n\nRegards\n\nPavel\n\n\n> Regards,\n> Jeff Davis\n>\n>\n>\n",
"msg_date": "Tue, 9 Jul 2019 21:10:51 +0200",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: range_agg"
},
{
"msg_contents": "út 9. 7. 2019 v 21:10 odesílatel Pavel Stehule <pavel.stehule@gmail.com>\nnapsal:\n\n>\n>\n> út 9. 7. 2019 v 20:25 odesílatel Jeff Davis <pgsql@j-davis.com> napsal:\n>\n>> On Tue, 2019-07-09 at 07:08 +0200, Pavel Stehule wrote:\n>> >\n>> > I am not against a multirange type, but I miss a explanation why you\n>> > introduce new kind of types and don't use just array of ranges.\n>> >\n>> > Introduction of new kind of types is not like introduction new type.\n>>\n>> The biggest benefit, in my opinion, is that it means you can define\n>> functions/operators that take an \"anyrange\" and return an\n>> \"anymultirange\". That way you don't have to define different functions\n>> for int4 ranges, date ranges, etc.\n>>\n>>\n> I am not sure how strong is this argument.\n>\n> I think so introduction of anyrangearray polymorphic type and enhancing\n> some type deduction can do same work.\n>\n> It starts to get even more complex when you want to add opclasses, etc.\n>>\n>> Ranges and arrays are effectively generic types that need a type\n>> parameter to become a concrete type. Ideally, we'd have first-class\n>> support for generic types, but I think that's a different topic ;-)\n>\n>\n> I afraid so with generic multiragetype there lot of array infrastructure\n> will be duplicated\n>\n\non second hand - it is true so classic array concat is not optimal for set\nof ranges, so some functionality should be redefined every time.\n\nI don't know what is possible, but for me - multiranges is special kind\n(subset) of arrays and can be implement as subset of arrays. I remember\nother possible kind of arrays - \"sets\" without duplicates. It is similar\ncase, I think.\n\nMaybe introduction of multirages as new generic type is bad direction, and\ncan be better and more enhanceable in future to introduce some like special\nkinds of arrays. 
So for example - unnest can be used directly for arrays\nand multiranges too - because there will be common base.\n\nRegards\n\nPavel\n\n\n\n> Regards\n>\n> Pavel\n>\n>\n>> Regards,\n>> Jeff Davis\n>>\n>>\n>>\n",
"msg_date": "Tue, 9 Jul 2019 21:23:48 +0200",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: range_agg"
},
{
"msg_contents": "On Tue, Jul 9, 2019 at 12:02 PM Jeff Davis <pgsql@j-davis.com> wrote:\n> > - Multirange in/out work just like arrays, e.g. '{\"[1,3)\", \"[5,6)\"}'\n>\n> It would be cool to have a better text representation. We could go\n> simple like:\n>\n> '[1,3) [5,6)'\n\nWill that work with all ranges, even user-defined ones? With a\ntstzrange[] there is a lot of quoting:\n\n=# select array[tstzrange(now(), now() + interval '1 hour')];\n array\n---------------------------------------------------------------------------\n {\"[\\\"2019-07-09 12:40:20.794054-07\\\",\\\"2019-07-09 13:40:20.794054-07\\\")\"}\n\nI'm more inclined to follow the array syntax both because it will be\nfamiliar & consistent to users (no need to remember any differences)\nand because it's already built so we can use it and know it won't have\ngotchas.\n\n> I think \"complement\" might be a better name than \"inverse\".\n>\n> m1 - m2 = m1 * complement(m2)\n>\n> What about \"~\"?\n\nThat seems like the right term and a good symbol.\n\nPaul\n\n\n",
"msg_date": "Tue, 9 Jul 2019 21:18:44 -0700",
"msg_from": "Paul A Jungwirth <pj@illuminatedcomputing.com>",
"msg_from_op": false,
"msg_subject": "Re: range_agg"
},
{
"msg_contents": "On Tue, Jul 9, 2019 at 12:24 PM Pavel Stehule <pavel.stehule@gmail.com> wrote:\n> út 9. 7. 2019 v 21:10 odesílatel Pavel Stehule <pavel.stehule@gmail.com> napsal:\n>> I afraid so with generic multiragetype there lot of array infrastructure will be duplicated\n>\n> on second hand - it is true so classic array concat is not optimal for set of ranges, so some functionality should be redefined every time.\n>\n> I don't know what is possible, but for me - multiranges is special kind (subset) of arrays and can be implement as subset of arrays. I remember other possible kind of arrays - \"sets\" without duplicates. It is similar case, I think.\n>\n> Maybe introduction of multirages as new generic type is bad direction, and can be better and more enhanceable in future to introduce some like special kinds of arrays. So for example - unnest can be used directly for arrays and multiranges too - because there will be common base.\n\nWell I'm afraid of that too a bit, although I also agree it may be an\nopportunity to share some common behavior and implementation. For\nexample in the discussion about string syntax, I think keeping it the\nsame as arrays is nicer for people and lets us share more between the\ntwo types.\n\nThat said I don't think a multirange is a subtype of arrays (speaking\nas a traditional object-oriented subtype), just something that shares\na lot of the same behavior. I'm inclined to maximize the overlap where\nfeasible though, e.g. string syntax, UNNEST, indexing, function naming\n(`range_length`), etc. Something like Rust traits (or Java interfaces)\nseems a closer mental model, not that we have to formalize that\nsomehow, particularly up front.\n\nYours,\nPaul\n\n\n",
"msg_date": "Tue, 9 Jul 2019 21:26:33 -0700",
"msg_from": "Paul A Jungwirth <pj@illuminatedcomputing.com>",
"msg_from_op": false,
"msg_subject": "Re: range_agg"
},
{
"msg_contents": "st 10. 7. 2019 v 6:26 odesílatel Paul A Jungwirth <\npj@illuminatedcomputing.com> napsal:\n\n> On Tue, Jul 9, 2019 at 12:24 PM Pavel Stehule <pavel.stehule@gmail.com>\n> wrote:\n> > út 9. 7. 2019 v 21:10 odesílatel Pavel Stehule <pavel.stehule@gmail.com>\n> napsal:\n> >> I afraid so with generic multiragetype there lot of array\n> infrastructure will be duplicated\n> >\n> > on second hand - it is true so classic array concat is not optimal for\n> set of ranges, so some functionality should be redefined every time.\n> >\n> > I don't know what is possible, but for me - multiranges is special kind\n> (subset) of arrays and can be implement as subset of arrays. I remember\n> other possible kind of arrays - \"sets\" without duplicates. It is similar\n> case, I think.\n> >\n> > Maybe introduction of multirages as new generic type is bad direction,\n> and can be better and more enhanceable in future to introduce some like\n> special kinds of arrays. So for example - unnest can be used directly for\n> arrays and multiranges too - because there will be common base.\n>\n> Well I'm afraid of that too a bit, although I also agree it may be an\n> opportunity to share some common behavior and implementation. For\n> example in the discussion about string syntax, I think keeping it the\n> same as arrays is nicer for people and lets us share more between the\n> two types.\n>\n> That said I don't think a multirange is a subtype of arrays (speaking\n> as a traditional object-oriented subtype), just something that shares\n> a lot of the same behavior. I'm inclined to maximize the overlap where\n> feasible though, e.g. string syntax, UNNEST, indexing, function naming\n> (`range_length`), etc. 
Something like Rust traits (or Java interfaces)\n> seems a closer mental model, not that we have to formalize that\n> somehow, particularly up front.\n>\n\nA introduction of new generic type can have some other impacts - there can\nbe necessary special support for PL languages.\n\nI understand so it is hard to decide - because we miss some more generic\nbase \"sets\".\n\nProbably we cannot to think more about it now, and we have to wait to some\npatches. Later we can see how much code is duplicated and if it is a\nproblem or not.\n\nRegards\n\nPavel\n\n\n> Yours,\n> Paul\n>",
"msg_date": "Wed, 10 Jul 2019 06:50:23 +0200",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: range_agg"
},
{
"msg_contents": "On Tue, Jul 09, 2019 at 09:40:59AM -0700, Paul A Jungwirth wrote:\n> On Tue, Jul 9, 2019 at 8:51 AM David Fetter <david@fetter.org> wrote:\n> > > - A multirange type is an extra thing you get when you define a range\n> > > (just like how you get a tstzrange[]). Therefore....\n> > > - I don't need separate commands to add/drop multirange types. You get\n> > > one when you define a range type, and if you drop a range type it gets\n> > > dropped automatically.\n> >\n> > Yay for fewer manual steps!\n> \n> Thanks for taking a look and sharing your thoughts!\n> \n> > > - You can have a multirange[].\n> >\n> > I can see how that would fall out of this, but I'm a little puzzled as\n> > to what people might use it for. Aggregates, maybe?\n> \n> I don't know either, but I thought it was standard to define a T[] for\n> every T. Anyway it doesn't seem difficult.\n> \n> > > - You can cast from a multirange to an array. The individual ranges\n> > > are always sorted in the result array.\n> >\n> > Is this so people can pick individual ranges out of the multirange,\n> > or...?\n> \n> Yes. I want this for foreign keys actually, where I construct a\n> multirange and ask for just its first range.\n\nI'm sure I'll understand this better once I get my head around\ntemporal foreign keys.\n\n> > Speaking of casts, it's possible that a multirange is also a\n> > range. Would it make sense to have a cast from multirange to range?\n> \n> Hmm, that seems strange to me. You don't cast from an array to one of\n> its elements. If we have subscripting, why use casting to get the\n> first element?\n\nExcellent point.\n\n> > > - Interesting functions:\n> > > - multirange_length\n> >\n> > Is that the sum of the lengths of the ranges? Are we guaranteeing a\n> > measure in addition to ordering on ranges now?\n> \n> Just the number of disjoint ranges in the multirange.\n\nThanks for clarifying.\n\n> > > - You can subscript a multirange like you do an array (? 
This could be\n> > > a function instead.)\n> >\n> > How would this play with the generic subscripting patch in flight?\n> \n> I'm not aware of that patch but I guess I better check it out. :-)\n\nLooks like I'm the second to mention it. Worth a review?\n\n> > > - inverse operator?:\n> > > - the inverse of {\"[1,2)\"} would be {\"[null, 1)\", \"[2, null)\"}.\n> >\n> > Is that the same as [\"(∞, ∞)\"] - {\"[1,2)\"}?\n> \n> Yes.\n> \n> > I seem to recall that the usual convention (at least in math) is\n> > to use intervals that are generally represented as open on the\n> > infinity side, but that might not fit how we do things.\n> \n> I think it does, unless I'm misunderstanding?\n\nOh, I was just wondering about the square bracket on the left side of\n[null, 1). It's not super important.\n\n> > > - not sure we want this or what the symbol should be. I don't like\n> > > -mr as an inverse because then mr - mr != mr ++ -mr.\n> >\n> > !mr , perhaps?\n> \n> I like that suggestion. Honestly I'm not sure we even want an inverse,\n> but it's so important theoretically we should at least consider\n> whether it is appropriate here. Or maybe \"inverse\" is the wrong word\n> for this, or there is a different meaning it should have.\n\nJeff's suggestion of ~ for complement is better.\n\nBest,\nDavid.\n-- \nDavid Fetter <david(at)fetter(dot)org> http://fetter.org/\nPhone: +1 415 235 3778\n\nRemember to vote!\nConsider donating to Postgres: http://www.postgresql.org/about/donate\n\n\n",
"msg_date": "Wed, 10 Jul 2019 08:24:00 +0200",
"msg_from": "David Fetter <david@fetter.org>",
"msg_from_op": false,
"msg_subject": "Re: range_agg"
},
{
"msg_contents": "On 7/9/19 11:24 PM, David Fetter wrote:\n>>> I seem to recall that the usual convention (at least in math) is\n>>> to use intervals that are generally represented as open on the\n>>> infinity side, but that might not fit how we do things.\n>>\n>> I think it does, unless I'm misunderstanding?\n> \n> Oh, I was just wondering about the square bracket on the left side of\n> [null, 1). It's not super important.\n\nAh, I understand now. Just a typo on my part. Thanks for catching it, \nand sorry for the confusion!\n\n>>> !mr , perhaps?\n>>\n>> I like that suggestion. Honestly I'm not sure we even want an inverse,\n>> but it's so important theoretically we should at least consider\n>> whether it is appropriate here. Or maybe \"inverse\" is the wrong word\n>> for this, or there is a different meaning it should have.\n> \n> Jeff's suggestion of ~ for complement is better.\n\nOkay, thanks. I like it better too.\n\nYours,\n\n-- \nPaul ~{:-)\npj@illuminatedcomputing.com\n\n\n",
"msg_date": "Wed, 10 Jul 2019 07:55:11 -0700",
"msg_from": "Paul Jungwirth <pj@illuminatedcomputing.com>",
"msg_from_op": true,
"msg_subject": "Re: range_agg"
},
{
"msg_contents": "Hi Paul,\n\nJust checking if you've had a chance to make progress on this.\n\nThanks,\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Tue, 23 Jul 2019 18:32:50 -0400",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: range_agg"
},
{
"msg_contents": "On Tue, Jul 23, 2019 at 3:32 PM Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\n> Just checking if you've had a chance to make progress on this.\n\nNot a lot. :-) But I should have more time for it the next few weeks\nthan I did the last few. I do have some code for creating concrete\nmultirange types (used when you create a concrete range type) and\nfilling in a TypeCacheEntry based on the range type oid---which I know\nis all very modest progress. I've been working on a multirange_in\nfunction and mostly just learning about Postgres varlena and TOASTed\nobjects by reading the code for range_in & array_in.\n\nHere is something from my multirangetypes.h:\n\n/*\n * Multiranges are varlena objects, so must meet the varlena convention that\n * the first int32 of the object contains the total object size in bytes.\n * Be sure to use VARSIZE() and SET_VARSIZE() to access it, though!\n */\ntypedef struct\n{\n int32 vl_len_; /* varlena header (do not touch\ndirectly!) */\n Oid multirangetypid; /* multirange type's own OID */\n /*\n * Following the OID are the range objects themselves.\n * Note that ranges are varlena too,\n * depending on whether they have lower/upper bounds\n * and because even their base types can be varlena.\n * So we can't really index into this list.\n */\n} MultirangeType;\n\nI'm working on parsing a multirange much like we parse an array,\nalthough it's a lot simpler because it's a single dimension and there\nare no nulls.\n\nI know that's not much to go on, but let me know if any of it worries you. :-)\n\nPaul\n\n\n",
"msg_date": "Tue, 23 Jul 2019 22:13:07 -0700",
"msg_from": "Paul A Jungwirth <pj@illuminatedcomputing.com>",
"msg_from_op": false,
"msg_subject": "Re: range_agg"
},
{
"msg_contents": "On Wed, Jul 24, 2019 at 5:13 PM Paul A Jungwirth\n<pj@illuminatedcomputing.com> wrote:\n> On Tue, Jul 23, 2019 at 3:32 PM Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\n> > Just checking if you've had a chance to make progress on this.\n>\n> Not a lot. :-) But I should have more time for it the next few weeks\n> than I did the last few. ...\n\nHi Paul,\n\nI didn't follow this thread, but as the CF is coming to a close, I'm\ninterpreting the above to mean that this is being worked on and there\nis a good chance of a new patch in time for September. Therefore I\nhave moved this entry to that 'fest.\n\nThanks,\n\n-- \nThomas Munro\nhttps://enterprisedb.com\n\n\n",
"msg_date": "Thu, 1 Aug 2019 20:34:19 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: range_agg"
},
{
"msg_contents": "On Mon, Jul 8, 2019 at 9:46 AM Paul A Jungwirth\n<pj@illuminatedcomputing.com> wrote:\n> - A multirange type is an extra thing you get when you define a range\n> (just like how you get a tstzrange[]). Therefore....\n\nI've been able to make a little more progress on multiranges the last\nfew days, but it reminded me of an open question I've had for awhile:\ntypmods! I see places in the range code that gesture toward supporting\ntypmods, but none of the existing range types permit them. For\nexample:\n\npostgres=# select '5'::numeric(4,2);\n numeric\n---------\n 5.00\n(1 row)\n\npostgres=# select '[1,4)'::numrange(4,2);\nERROR: type modifier is not allowed for type \"numrange\"\nLINE 1: select '[1,4)'::numrange(4,2);\n\nSo I'm wondering how seriously I should take this for multiranges? I\nguess if a range type did support typmods, it would just delegate to\nthe underlying element type for their meaning, and so a multirange\nshould delegate it too? Is there any historical discussion around\ntypemods on range types?\n\nThanks!\nPaul\n\n\n",
"msg_date": "Sat, 17 Aug 2019 10:47:08 -0700",
"msg_from": "Paul A Jungwirth <pj@illuminatedcomputing.com>",
"msg_from_op": false,
"msg_subject": "Re: range_agg"
},
{
"msg_contents": "On Sat, 2019-08-17 at 10:47 -0700, Paul A Jungwirth wrote:\n> So I'm wondering how seriously I should take this for multiranges? I\n> guess if a range type did support typmods, it would just delegate to\n> the underlying element type for their meaning, and so a multirange\n> should delegate it too? Is there any historical discussion around\n> typemods on range types?\n\nI did find a few references:\n\n\nhttps://www.postgresql.org/message-id/1288029716.8645.4.camel%40jdavis-ux.asterdata.local\n\nhttps://www.postgresql.org/message-id/20110111191334.GB11603%40fetter.org\nhttps://www.postgresql.org/message-id/1296974485.27157.136.camel@jdavis\n\nI'd be interested in ways that we can use a typmod-like concept to\nimprove the type system. Unfortunately, typmod is just not\nsophisticated enough to do very much because it's lost through function\ncalls. Improving that would be a separate and challenging project.\n\nSo, I wouldn't spend a lot of time on typmod for multiranges.\n\nRegards,\n\tJeff Davis\n\n\n\n\n",
"msg_date": "Tue, 20 Aug 2019 22:33:47 -0700",
"msg_from": "Jeff Davis <pgsql@j-davis.com>",
"msg_from_op": false,
"msg_subject": "Re: range_agg"
},
{
"msg_contents": "On Tue, Aug 20, 2019 at 10:33 PM Jeff Davis <pgsql@j-davis.com> wrote:\n> > Is there any historical discussion around\n> > typemods on range types?\n>\n> I did find a few references:\n\nThanks for looking those up! It's very interesting to see some of the\noriginal discussion around range types.\n\nBtw this is true of so much, isn't it?:\n\n> It's more a property of the\n> column than the type.\n\nSometimes I think about having a maybe<T> type instead of null/not\nnull. SQL nulls are already very \"monadic\" I think but nullability\ndoesn't travel. I feel like someone ought to write a paper about that,\nbut I don't know of any. This is tantalizingly close (and a fun read)\nbut really about something else:\nhttps://www.researchgate.net/publication/266657590_Incomplete_data_what_went_wrong_and_how_to_fix_it\nSince you're getting into Rust maybe you can update the wiki page\nmentioned in those threads about refactoring the type system. :-)\nAnyway sorry for the digression. . . .\n\n> So, I wouldn't spend a lot of time on typmod for multiranges.\n\nOkay, thanks! There is plenty else to do. I think I'm already\nsupporting it as much as range types do.\n\nBtw I have working multirange_{send,recv,in,out} now, and I\nautomatically create a multirange type and its array type when someone\ncreates a new range type. I have a decent start on passing tests and\nno compiler warnings. I also have a start on anymultirange and\nanyrangearray. (I think I need the latter to support a range-like\nconstructor function, so you can say `int4multirange(int4range(1,4),\nint4range(8,10))`.) I want to get the any* types done and improve the\ntest coverage, and then I'll probably be ready to share a patch.\n\nHere are a couple other questions:\n\n- Does anyone have advice for the typanalyze function? I feel pretty\nout of my depth there (although I haven't looked into typanalyze stuff\nvery deeply yet). 
I can probably get some inspiration from\nrange_typanalyze and array_typanalyze, but those are both quite\ndetailed (their statistics functions that is).\n\n- What should a multirange do if you give it an empty range? I'm\nthinking it should just ignore it, but then `'{}'::int4multirange =\n'{empty}'::int4multirange`. Does that seem okay? (It does to me\nactually, if we think of `empty` as the additive identity. Similarly\nmr + empty = mr.\n\n- What should a multirange do if you give it a null, like\n`int4multirange(int4range(1,4), null)`. I'm thinking it should be\nnull, just like mr + null = null. Right?\n\nThanks!\nPaul\n\n\n",
"msg_date": "Wed, 21 Aug 2019 21:54:53 -0700",
"msg_from": "Paul A Jungwirth <pj@illuminatedcomputing.com>",
"msg_from_op": false,
"msg_subject": "Re: range_agg"
},
{
"msg_contents": "On Wed, 2019-08-21 at 21:54 -0700, Paul A Jungwirth wrote:\n> Sometimes I think about having a maybe<T> type instead of null/not\n> null. SQL nulls are already very \"monadic\" I think but nullability\n> doesn't travel.\n\nYeah, that would be a great direction, but there is some additional\ncomplexity that we'd need to deal with that a \"normal\" compiler does\nnot:\n\n * handling both existing global types (like int) as well as on-the-\nfly types like Maybe<Either<Int,Bool>>\n * types need to do more things, like have serialized representations,\ninterface with indexing strategies, and present the optimizer with\nchoices that may influence which indexes can be used or not\n * at some point needs to work with normal SQL types and NULL\n * there are a lot of times we care not just whether a type is\nsortable, but we actually care about the way it's sorted (e.g.\nlocalization). typeclasses+newtype would probably be unacceptable for\ntrying to match SQL behavior here.\n\nI'm all in favor of pursuing this, but it's not going to bear fruit\nvery soon.\n\n> Btw I have working multirange_{send,recv,in,out} now, and I\n> automatically create a multirange type and its array type when\n> someone\n> creates a new range type. I have a decent start on passing tests and\n> no compiler warnings. I also have a start on anymultirange and\n> anyrangearray. (I think I need the latter to support a range-like\n> constructor function, so you can say `int4multirange(int4range(1,4),\n> int4range(8,10))`.) I want to get the any* types done and improve the\n> test coverage, and then I'll probably be ready to share a patch.\n\nAwesome!\n\n> Here are a couple other questions:\n> \n> - Does anyone have advice for the typanalyze function? I feel pretty\n> out of my depth there (although I haven't looked into typanalyze\n> stuff\n> very deeply yet). 
I can probably get some inspiration from\n> range_typanalyze and array_typanalyze, but those are both quite\n> detailed (their statistics functions that is).\n\nI think Alexander Korotkov did a lot of the heavy lifting here, perhaps\nhe has a comment? I'd keep it simple for now if you can, and we can try\nto improve it later.\n\n> - What should a multirange do if you give it an empty range? I'm\n> thinking it should just ignore it, but then `'{}'::int4multirange =\n> '{empty}'::int4multirange`. Does that seem okay? (It does to me\n> actually, if we think of `empty` as the additive identity. Similarly\n> mr + empty = mr.\n\nI agree. Multiranges are more than just an array of ranges, so they\ncoalesce into some canonical form.\n\n> - What should a multirange do if you give it a null, like\n> `int4multirange(int4range(1,4), null)`. I'm thinking it should be\n> null, just like mr + null = null. Right?\n\nYes. NULL is for the overall multirange datum (that is, a multirange\ncolumn can be NULL), but I don't think individual parts of a datatype\nmake much sense as NULL. So, I agree that mr + null = null. (Note that\narrays and records can have NULL parts, but I don't see a reason we\nshould follow those examples for multiranges.)\n\nRegards,\n\tJeff Davis\n\n\n\n\n",
"msg_date": "Mon, 26 Aug 2019 11:34:27 -0700",
"msg_from": "Jeff Davis <pgsql@j-davis.com>",
"msg_from_op": false,
"msg_subject": "Re: range_agg"
},
{
"msg_contents": "> > Btw I have working multirange_{send,recv,in,out} now. . . .\n\nJust about all the other operators are done too, but I wonder what\nsymbols people like for union and minus? Range uses + for union. I\nhave working code and tests that adds this:\n\nr + mr = mr\nmr + r = mr\nmr + mr = mr\n\nBut I would like to use a different symbol instead, like ++, so I can\nhave all four:\n\nr ++ r = mr\nr ++ mr = mr\nmr ++ r = mr\nmr ++ mr = mr\n\n(The existing r + r operator throws an error if the inputs have a gap.)\n\nThe trouble is that ++ isn't allowed. (Neither is --.) From\nhttps://www.postgresql.org/docs/11/sql-createoperator.html :\n\n> A multicharacter operator name cannot end in + or -, unless the name also contains at least one of these characters:\n> ~ ! @ # % ^ & | ` ?\n\nSo are there any other suggestions? I'm open to arguments that I\nshould just use +, but I think having a way to add two simple ranges\nand get a multirange would be nice too, so my preference is to find a\nnew operator. It should work with minus and intersection (* for\nranges) too. Some proposals:\n\n+* and -* and ** (* as in regex \"zero or many\" reminds me of how a\nmultirange holds zero or many ranges. ** is confusing though because\nit's like exponentiation.)\n\n@+ and @- and @* (I dunno why but I kind of like it. We already have @> and <@.)\n\n<+> and <-> and <*> (I was hoping for (+) etc for the math connection\nto circled operators, but this is close. Maybe this would be stronger\nif multirange_{in,out} used <> delims instead of {}, although I also\nlike how {} is consistent with arrays.)\n\nAnyone else have any thoughts?\n\nThanks,\nPaul\n\n\n",
"msg_date": "Sun, 1 Sep 2019 06:26:11 -0700",
"msg_from": "Paul A Jungwirth <pj@illuminatedcomputing.com>",
"msg_from_op": false,
"msg_subject": "Re: range_agg"
},
{
"msg_contents": "On Sun, 2019-09-01 at 06:26 -0700, Paul A Jungwirth wrote:\n> @+ and @- and @* (I dunno why but I kind of like it. We already have\n> @> and <@.)\n\nI think I like this proposal best; it reminds me of perl. Though some\nmight say that's an argument against it.\n\nRegards,\n\tJeff Davis\n\n\n\n\n",
"msg_date": "Thu, 05 Sep 2019 10:15:43 -0700",
"msg_from": "Jeff Davis <pgsql@j-davis.com>",
"msg_from_op": false,
"msg_subject": "Re: range_agg"
},
{
"msg_contents": "On Thu, Sep 5, 2019 at 10:15 AM Jeff Davis <pgsql@j-davis.com> wrote:\n>\n> On Sun, 2019-09-01 at 06:26 -0700, Paul A Jungwirth wrote:\n> > @+ and @- and @* (I dunno why but I kind of like it. We already have\n> > @> and <@.)\n>\n> I think I like this proposal best; it reminds me of perl. Though some\n> might say that's an argument against it.\n\nThanks Jeff, it's my favorite too. :-) Strangely it feels the hardest\nto justify. Right now I have + and - and * implemented but I'll change\nthem to @+ and @- and @* so that I can support `range R range =\nmultirange` too.\n\nBtw is there any reason to send a \"preview\" patch with my current\nprogress, since we're starting a new commit fest? Here is what I have\nleft to do:\n\n- Change those three operators.\n- Write range_intersect_agg. (range_agg is done but needs some tests\nbefore I commit it.)\n- Write documentation.\n- Add multiranges to resolve_generic_type, and figure out how to test\nthat (see the other thread about two latent range-related bugs there).\n- Rebase on current master. (It should be just a few weeks behind right now.)\n- Run pgindent to make sure I'm conforming to whitespace/style guidelines.\n- Split it up into a few separate patch files.\n\nRight now I'm planning to do all that before sending a patch. I'm\nhappy to send something something in-progress too, but I don't want to\nwaste any reviewers' time. If folks want an early peak though let me\nknow. (You can also find my messy progress at\nhttps://github.com/pjungwir/postgresql/tree/multirange)\n\nAlso here are some other items that won't be in my next patch, but\nshould probably be done (maybe by someone else but I'm happy to figure\nit out too) before this is really committed:\n\n- typanalyze\n- selectivity\n- gist support\n- spgist support\n\nIf anyone would like to help with those, let me know. :-)\n\nYours,\nPaul\n\n\n",
"msg_date": "Thu, 5 Sep 2019 10:45:55 -0700",
"msg_from": "Paul A Jungwirth <pj@illuminatedcomputing.com>",
"msg_from_op": false,
"msg_subject": "Re: range_agg"
},
{
"msg_contents": "On Thu, 2019-09-05 at 10:45 -0700, Paul A Jungwirth wrote:\n> Right now I'm planning to do all that before sending a patch. I'm\n> happy to send something something in-progress too, but I don't want\n> to\n> waste any reviewers' time. If folks want an early peak though let me\n> know. (You can also find my messy progress at\n> https://github.com/pjungwir/postgresql/tree/multirange)\n\nSounds good. The rule I use is: \"will the feedback I get be helpful, or\njust tell me about obvious problems I already know about\".\n\nRegards,\n\tJeff Davis\n\n\n\n\n",
"msg_date": "Thu, 05 Sep 2019 11:52:37 -0700",
"msg_from": "Jeff Davis <pgsql@j-davis.com>",
"msg_from_op": false,
"msg_subject": "Re: range_agg"
},
{
"msg_contents": "On Thu, Sep 5, 2019 at 11:52 AM Jeff Davis <pgsql@j-davis.com> wrote:\n>\n> On Thu, 2019-09-05 at 10:45 -0700, Paul A Jungwirth wrote:\n> > Right now I'm planning to do all that before sending a patch. I'm\n> > happy to send something something in-progress too, but I don't want\n> > to\n> > waste any reviewers' time. If folks want an early peak though let me\n> > know. (You can also find my messy progress at\n> > https://github.com/pjungwir/postgresql/tree/multirange)\n>\n> Sounds good. The rule I use is: \"will the feedback I get be helpful, or\n> just tell me about obvious problems I already know about\".\n\nHere are some patches to add multiranges. I tried to split things up a\nbit but most things landed in parts 1 & 2.\n\nThings I haven't done (but would be interested in doing or getting help with):\n\n- gist opclass\n- spgist opclass\n- typanalyze\n- selectivity\n- anyrangearray\n- anymultirangearray?\n- UNNEST for multirange and/or a way to convert it to an array\n- indexing/subscripting (see patch for standardized subscripting)",
"msg_date": "Sat, 21 Sep 2019 21:50:55 -0700",
"msg_from": "Paul A Jungwirth <pj@illuminatedcomputing.com>",
"msg_from_op": false,
"msg_subject": "Re: range_agg"
},
{
"msg_contents": "Hello Paul, I've started to review this patch. Here's a few minor\nthings I ran across -- mostly compiler warnings (is my compiler too\nancient?). You don't have to agree with every fix -- feel free to use\ndifferent fixes if you have them. Also, feel free to squash them onto\nwhatever commit you like (I think they all belong onto 0001 except the\nlast which seems to be for 0002).\n\nDid you not push your latest version to your github repo? I pulled from\nthere and branch 'multirange' does not seem to match what you posted.\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services",
"msg_date": "Thu, 26 Sep 2019 18:13:49 -0300",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: range_agg"
},
{
"msg_contents": "On Thu, Sep 26, 2019 at 2:13 PM Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\n>\n> Hello Paul, I've started to review this patch. Here's a few minor\n> things I ran across -- mostly compiler warnings (is my compiler too\n> ancient?). You don't have to agree with every fix -- feel free to use\n> different fixes if you have them. Also, feel free to squash them onto\n> whatever commit you like (I think they all belong onto 0001 except the\n> last which seems to be for 0002).\n\nHi Alvaro, sorry, I missed your note from September. I really\nappreciate your review and will take a look at your suggested changes!\nI just opened this thread to post a rebased set patches (especially\nbecause of the `const` additions to range functions). Maybe it's not\nthat helpful since they don't include your changes yet but here they\nare anyway. I'll post some more with your changes shortly.\n\n> Did you not push your latest version to your github repo? I pulled from\n> there and branch 'multirange' does not seem to match what you posted.\n\nHmm, I'll take a look. Before I made the v3 files I switched to\nmultirange-patch so I could squash things and use git to generate one\npatch file per commit. So `multirange` isn't rebased as currently as\n`multirange-patch`. If you don't mind a force-push I can update\n`multirange` to be the same.\n\nYours,\nPaul",
"msg_date": "Wed, 6 Nov 2019 15:02:35 -0800",
"msg_from": "Paul A Jungwirth <pj@illuminatedcomputing.com>",
"msg_from_op": false,
"msg_subject": "Re: range_agg"
},
{
"msg_contents": "On Wed, Nov 6, 2019 at 3:02 PM Paul A Jungwirth\n<pj@illuminatedcomputing.com> wrote:\n> On Thu, Sep 26, 2019 at 2:13 PM Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\n> > Hello Paul, I've started to review this patch. Here's a few minor\n> > things I ran across -- mostly compiler warnings (is my compiler too\n> > ancient?).\n> I just opened this thread to post a rebased set patches (especially\n> because of the `const` additions to range functions). Maybe it's not\n> that helpful since they don't include your changes yet but here they\n> are anyway. I'll post some more with your changes shortly.\n\nHere is another batch of patches incorporating your improvements. It\nseems like almost all the warnings were about moving variable\ndeclarations above any other statements. For some reason I don't get\nwarnings about that on my end (compiling on OS X):\n\nplatter:postgres paul$ gcc --version\nConfigured with:\n--prefix=/Applications/Xcode.app/Contents/Developer/usr\n--with-gxx-include-dir=/usr/include/c++/4.2.1\nApple clang version 11.0.0 (clang-1100.0.33.12)\nTarget: x86_64-apple-darwin18.6.0\nThread model: posix\nInstalledDir: /Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin\n\nFor configure I'm saying this:\n\n./configure CFLAGS=-ggdb 5-Og -g3 -fno-omit-frame-pointer\n--enable-tap-tests --enable-cassert --enable-debug\n--prefix=/Users/paul/local\n\nAny suggestions to get better warnings? On my other patch I got\nfeedback about the very same kind. I could just compile on Linux but\nit's nice to work on this away from my desk on the laptop. Maybe\ninstalling a real gcc is the way to go.\n\nThanks,\nPaul",
"msg_date": "Wed, 6 Nov 2019 18:35:28 -0800",
"msg_from": "Paul A Jungwirth <pj@illuminatedcomputing.com>",
"msg_from_op": false,
"msg_subject": "Re: range_agg"
},
{
"msg_contents": "Hi\n\nčt 7. 11. 2019 v 3:36 odesílatel Paul A Jungwirth <\npj@illuminatedcomputing.com> napsal:\n\n> On Wed, Nov 6, 2019 at 3:02 PM Paul A Jungwirth\n> <pj@illuminatedcomputing.com> wrote:\n> > On Thu, Sep 26, 2019 at 2:13 PM Alvaro Herrera <alvherre@2ndquadrant.com>\n> wrote:\n> > > Hello Paul, I've started to review this patch. Here's a few minor\n> > > things I ran across -- mostly compiler warnings (is my compiler too\n> > > ancient?).\n> > I just opened this thread to post a rebased set patches (especially\n> > because of the `const` additions to range functions). Maybe it's not\n> > that helpful since they don't include your changes yet but here they\n> > are anyway. I'll post some more with your changes shortly.\n>\n> Here is another batch of patches incorporating your improvements. It\n> seems like almost all the warnings were about moving variable\n> declarations above any other statements. For some reason I don't get\n> warnings about that on my end (compiling on OS X):\n>\n> platter:postgres paul$ gcc --version\n> Configured with:\n> --prefix=/Applications/Xcode.app/Contents/Developer/usr\n> --with-gxx-include-dir=/usr/include/c++/4.2.1\n> Apple clang version 11.0.0 (clang-1100.0.33.12)\n> Target: x86_64-apple-darwin18.6.0\n> Thread model: posix\n> InstalledDir:\n> /Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin\n>\n> For configure I'm saying this:\n>\n> ./configure CFLAGS=-ggdb 5-Og -g3 -fno-omit-frame-pointer\n> --enable-tap-tests --enable-cassert --enable-debug\n> --prefix=/Users/paul/local\n>\n> Any suggestions to get better warnings? On my other patch I got\n> feedback about the very same kind. I could just compile on Linux but\n> it's nice to work on this away from my desk on the laptop. Maybe\n> installing a real gcc is the way to go.\n>\n\nI tested last patches. I found some issues\n\n1. you should not to try patch catversion.\n\n2. 
there is warning\n\nparse_coerce.c: In function ‘enforce_generic_type_consistency’:\nparse_coerce.c:1975:11: warning: ‘range_typelem’ may be used uninitialized\nin this function [-Wmaybe-uninitialized]\n 1975 | else if (range_typelem != elem_typeid)\n\n3. there are problems with pg_upgrade. Regress tests fails\n\ncommand:\n\"/home/pavel/src/postgresql.master/tmp_install/usr/local/pgsql/bin/pg_restore\"\n--host /home/pavel/src/postgresql.master/src/b\npg_restore: connecting to database for restore\npg_restore: creating DATABASE \"regression\"\npg_restore: connecting to new database \"regression\"\npg_restore: connecting to database \"regression\" as user \"pavel\"\npg_restore: creating DATABASE PROPERTIES \"regression\"\npg_restore: connecting to new database \"regression\"\npg_restore: connecting to database \"regression\" as user \"pavel\"\npg_restore: creating pg_largeobject \"pg_largeobject\"\npg_restore: creating SCHEMA \"fkpart3\"\npg_restore: creating SCHEMA \"fkpart4\"\npg_restore: creating SCHEMA \"fkpart5\"\npg_restore: creating SCHEMA \"fkpart6\"\npg_restore: creating SCHEMA \"mvtest_mvschema\"\npg_restore: creating SCHEMA \"regress_indexing\"\npg_restore: creating SCHEMA \"regress_rls_schema\"\npg_restore: creating SCHEMA \"regress_schema_2\"\npg_restore: creating SCHEMA \"testxmlschema\"\npg_restore: creating TRANSFORM \"TRANSFORM FOR integer LANGUAGE \"sql\"\"\npg_restore: creating TYPE \"public.aggtype\"\npg_restore: creating TYPE \"public.arrayrange\"\npg_restore: while PROCESSING TOC:\npg_restore: from TOC entry 1653; 1247 17044 TYPE arrayrange pavel\npg_restore: error: could not execute query: ERROR: pg_type array OID value\nnot set when in binary upgrade mode\nCommand was:.\n-- For binary upgrade, must preserve pg_type oid\nSELECT\npg_catalog.binary_upgrade_set_next_pg_type_oid('17044'::pg_catalog.oid);\n\n\n-- For binary upgrade, must preserve pg_type array 
oid\nSELECT\npg_catalog.binary_upgrade_set_next_array_pg_type_oid('17045'::pg_catalog.oid);\n\nCREATE TYPE \"public\".\"arrayrange\" AS RANGE (\n subtype = integer[]\n);\n\n4. there is a problem with doc\n\n echo \"<!ENTITY version \\\"13devel\\\">\"; \\\n echo \"<!ENTITY majorversion \\\"13\\\">\"; \\\n} > version.sgml\n'/usr/bin/perl' ./mk_feature_tables.pl YES\n../../../src/backend/catalog/sql_feature_packages.txt\n../../../src/backend/catalog/sql_features.txt > features-supported.sgml\n'/usr/bin/perl' ./mk_feature_tables.pl NO\n../../../src/backend/catalog/sql_feature_packages.txt\n../../../src/backend/catalog/sql_features.txt > features-unsupported.sgml\n'/usr/bin/perl' ./generate-errcodes-table.pl\n../../../src/backend/utils/errcodes.txt > errcodes-table.sgml\n'/usr/bin/perl' ./generate-keywords-table.pl . > keywords-table.sgml\n/usr/bin/xmllint --path . --noout --valid postgres.sgml\nextend.sgml:281: parser error : Opening and ending tag mismatch: para line\n270 and type\n type of the ranges in an </type>anymultirange</type>.\n ^\nextend.sgml:281: parser error : Opening and ending tag mismatch: sect2 line\n270 and type\n type of the ranges in an </type>anymultirange</type>.\n ^\nextend.sgml:282: parser error : Opening and ending tag mismatch: sect1 line\n270 and para\n </para>\n ^\nextend.sgml:324: parser error : Opening and ending tag mismatch: chapter\nline 270 and sect2\n </sect2>\n ^\n\nI am not sure how much is correct to use <literallayout class=\"monospaced\">\nin doc. It is used for ranges, and multiranges, but no in other places\n\nAll other looks well\n\nPavel\n\n\n\n\n>\n> Thanks,\n> Paul\n>\n",
"msg_date": "Tue, 19 Nov 2019 10:16:47 +0100",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: range_agg"
},
{
"msg_contents": "On Tue, Nov 19, 2019 at 1:17 AM Pavel Stehule <pavel.stehule@gmail.com> wrote:\n> Hi\n> I tested last patches. I found some issues\n\nThank you for the review!\n\n> 1. you should not to try patch catversion.\n\nI've seen discussion on pgsql-hackers going both ways, but I'll leave\nit out of future patches. :-)\n\n> 2. there is warning\n>\n> parse_coerce.c: In function ‘enforce_generic_type_consistency’:\n> parse_coerce.c:1975:11: warning: ‘range_typelem’ may be used uninitialized in this function [-Wmaybe-uninitialized]\n> 1975 | else if (range_typelem != elem_typeid)\n\nFixed locally, will include in my next patch.\n\n> 3. there are problems with pg_upgrade. Regress tests fails\n> . . .\n> pg_restore: while PROCESSING TOC:\n> pg_restore: from TOC entry 1653; 1247 17044 TYPE arrayrange pavel\n> pg_restore: error: could not execute query: ERROR: pg_type array OID value not set when in binary upgrade mode\n\nI see what's going on here. (Sorry if this verbose explanation is\nobvious; it's as much for me as for anyone.) With pg_upgrade the\nvalues of pg_type.oid and pg_type.typarray must be the same before &\nafter. For built-in types there's no problem, because those are fixed\nby pg_type.dat. But for user-defined types we have to take extra steps\nto make sure they don't change. CREATE TYPE always uses two oids: one\nfor the type and one for the type's array type. But now when you\ncreate a range type we use *four*: the range type, the array of that\nrange, the multirange type, and the array of that multirange.\nCurrently when you run pg_dump in \"binary mode\" (i.e. as part of\npg_upgrade) it includes calls to special functions to set the next oid\nto use for pg_type.oid and pg_type.typarray. Then CREATE TYPE also has\nspecial \"binary mode\" code to check those variables and use those oids\n(e.g. AssignTypeArrayOid). After using them once it sets them back to\nInvalidOid so it doesn't keep using them. 
So I guess I need to add\ncode to pg_dump so that it also outputs calls to two new special\nfunctions that similarly set the oid to use for the next multirange\nand multirange[]. For v12->v13 it will chose high-enough oids like we\ndo already for arrays of domains. (For other upgrades it will use the\nexisting value.) And then I can change the CREATE TYPE code to check\nthose pre-set values when obtaining the next oid. Does that sound like\nthe right approach here?\n\n> 4. there is a problem with doc\n>\n> extend.sgml:281: parser error : Opening and ending tag mismatch: para line 270 and type\n> type of the ranges in an </type>anymultirange</type>.\n\nHmm, yikes, I'll fix that!\n\n> I am not sure how much is correct to use <literallayout class=\"monospaced\"> in doc. It is used for ranges, and multiranges, but no in other places\n\nI could use some advice here. Many operators seem best presented in\ngroups of four, where only their parameter types change, for example:\n\nint8range < int8range\nint8range < int8multirange\nint8multirange < int8range\nint8multirange < int8multirange\n\nAll I really want is to show those separated by line breaks. I\ncouldn't find any other examples of that happening inside a table cell\nthough (i.e. inside <row><entry></entry></row>). What is the best way\nto do that?\n\nThanks,\nPaul\n\n\n",
"msg_date": "Tue, 19 Nov 2019 21:49:06 -0800",
"msg_from": "Paul A Jungwirth <pj@illuminatedcomputing.com>",
"msg_from_op": false,
"msg_subject": "Re: range_agg"
},
{
"msg_contents": "On Tue, Nov 19, 2019 at 9:49 PM Paul A Jungwirth\n<pj@illuminatedcomputing.com> wrote:\n>\n> On Tue, Nov 19, 2019 at 1:17 AM Pavel Stehule <pavel.stehule@gmail.com> wrote:\n> > Hi\n> > I tested last patches. I found some issues\n>\n> Thank you for the review!\n\nHere is an updated patch series fixing the problems from that last\nreview. I would still like some direction about the doc formatting:\n\n> > I am not sure how much is correct to use <literallayout class=\"monospaced\"> in doc. It is used for ranges, and multiranges, but no in other places\n>\n> I could use some advice here. Many operators seem best presented in\n> groups of four, where only their parameter types change, for example:\n>\n> int8range < int8range\n> int8range < int8multirange\n> int8multirange < int8range\n> int8multirange < int8multirange\n>\n> All I really want is to show those separated by line breaks. I\n> couldn't find any other examples of that happening inside a table cell\n> though (i.e. inside <row><entry></entry></row>). What is the best way\n> to do that?\n\nThanks,\nPaul",
"msg_date": "Wed, 20 Nov 2019 11:32:10 -0800",
"msg_from": "Paul A Jungwirth <pj@illuminatedcomputing.com>",
"msg_from_op": false,
"msg_subject": "Re: range_agg"
},
{
"msg_contents": "st 20. 11. 2019 v 20:32 odesílatel Paul A Jungwirth <\npj@illuminatedcomputing.com> napsal:\n\n> On Tue, Nov 19, 2019 at 9:49 PM Paul A Jungwirth\n> <pj@illuminatedcomputing.com> wrote:\n> >\n> > On Tue, Nov 19, 2019 at 1:17 AM Pavel Stehule <pavel.stehule@gmail.com>\n> wrote:\n> > > Hi\n> > > I tested last patches. I found some issues\n> >\n> > Thank you for the review!\n>\n> Here is an updated patch series fixing the problems from that last\n> review. I would still like some direction about the doc formatting:\n>\n>\nyes, these bugs are fixed\n\nthere are not compilation's issues\ntests passed\ndoc is ok\n\nI have notes\n\n1. the chapter should be renamed to \"Range Functions and Operators\" to\n\"Range and Multirange Functions and Operators\"\n\nBut now the doc is not well readable - there is not clean, what functions\nare for range type, what for multirange and what for both\n\n2. I don't like introduction \"safe\" operators - now the basic operators are\ndoubled, and nobody without documentation will use @* operators.\n\nIt is not intuitive. I think is better to map this functionality to basic\noperators +- * and implement it just for pairs (Multirange, Multirange) and\n(Multirange, Range) if it is possible\n\nIt's same relation line Numeric X integer. There should not be introduced\nnew operators. If somebody need it for ranges, then he can use cast to\nmultirange, and can continue.\n\nThe \"safe\" operators can be implement on user space - but should not be\ndefault solution.\n\n3. There are not prepared casts -\n\npostgres=# select int8range(10,15)::int8multirange;\nERROR: cannot cast type int8range to int8multirange\nLINE 1: select int8range(10,15)::int8multirange;\n ^\nThere should be some a) fully generic solution, or b) possibility to build\nimplicit cast when any multirange type is created.\n\nRegards\n\nPavel\n\n\n\n\n> > > I am not sure how much is correct to use <literallayout\n> class=\"monospaced\"> in doc. 
It is used for ranges, and multiranges, but no\n> in other places\n> >\n> > I could use some advice here. Many operators seem best presented in\n> > groups of four, where only their parameter types change, for example:\n> >\n> > int8range < int8range\n> > int8range < int8multirange\n> > int8multirange < int8range\n> > int8multirange < int8multirange\n> >\n> > All I really want is to show those separated by line breaks. I\n> > couldn't find any other examples of that happening inside a table cell\n> > though (i.e. inside <row><entry></entry></row>). What is the best way\n> > to do that?\n>\n\nPersonally I think it should be cleaned. Mainly if there is not visible\ndifferences. But range related doc it uses, so it is consistent with it.\nAnd then this is not big issue.\n\n\n\n> Thanks,\n> Paul\n>\n",
"msg_date": "Thu, 21 Nov 2019 10:06:01 +0100",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: range_agg"
},
{
"msg_contents": "On 11/21/19 1:06 AM, Pavel Stehule wrote:\n> 2. I don't like introduction \"safe\" operators - now the basic operators \n> are doubled, and nobody without documentation will use @* operators.\n> \n> It is not intuitive. I think is better to map this functionality to \n> basic operators +- * and implement it just for pairs (Multirange, \n> Multirange) and (Multirange, Range) if it is possible\n> \n> It's same relation line Numeric X integer. There should not be \n> introduced new operators. If somebody need it for ranges, then he can \n> use cast to multirange, and can continue.\n > [snip]\n> 3. There are not prepared casts -\n> \n> postgres=# select int8range(10,15)::int8multirange;\n> ERROR: cannot cast type int8range to int8multirange\n> LINE 1: select int8range(10,15)::int8multirange;\n> ^\n> There should be some a) fully generic solution, or b) possibility to \n> build implicit cast when any multirange type is created.\n\nOkay, I like the idea of just having `range + range` and `multirange + \nmultirange`, then letting you cast between ranges and multiranges. The \nanalogy to int/numeric seems strong. I guess if you cast a multirange \nwith more than one element to a range it will raise an error. That will \nlet me clean up the docs a lot too.\n\nThanks!\n\n-- \nPaul ~{:-)\npj@illuminatedcomputing.com\n\n\n",
"msg_date": "Thu, 21 Nov 2019 12:15:28 -0800",
"msg_from": "Paul Jungwirth <pj@illuminatedcomputing.com>",
"msg_from_op": true,
"msg_subject": "Re: range_agg"
},
{
"msg_contents": "čt 21. 11. 2019 v 21:15 odesílatel Paul Jungwirth <\npj@illuminatedcomputing.com> napsal:\n\n> On 11/21/19 1:06 AM, Pavel Stehule wrote:\n> > 2. I don't like introduction \"safe\" operators - now the basic operators\n> > are doubled, and nobody without documentation will use @* operators.\n> >\n> > It is not intuitive. I think is better to map this functionality to\n> > basic operators +- * and implement it just for pairs (Multirange,\n> > Multirange) and (Multirange, Range) if it is possible\n> >\n> > It's same relation line Numeric X integer. There should not be\n> > introduced new operators. If somebody need it for ranges, then he can\n> > use cast to multirange, and can continue.\n> > [snip]\n> > 3. There are not prepared casts -\n> >\n> > postgres=# select int8range(10,15)::int8multirange;\n> > ERROR: cannot cast type int8range to int8multirange\n> > LINE 1: select int8range(10,15)::int8multirange;\n> > ^\n> > There should be some a) fully generic solution, or b) possibility to\n> > build implicit cast when any multirange type is created.\n>\n> Okay, I like the idea of just having `range + range` and `multirange +\n> multirange`, then letting you cast between ranges and multiranges. The\n> analogy to int/numeric seems strong. I guess if you cast a multirange\n> with more than one element to a range it will raise an error. That will\n> let me clean up the docs a lot too.\n>\n\nI though about it, and I think so cast from multirange to range is useless,\nminimally it should be explicit.\n\nOn second hand - from range to multirange should be implicit.\n\nThe original patch did\n\n1. MR @x MR = MR\n2. R @x R = MR\n3. MR @x R = MR\n\nI think so @1 & @3 has sense, but without introduction of special operator.\n@2 is bad and can be solved by cast one or second operand.\n\nPavel\n\n\n> Thanks!\n>\n> --\n> Paul ~{:-)\n> pj@illuminatedcomputing.com\n>\n",
"msg_date": "Fri, 22 Nov 2019 06:21:04 +0100",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: range_agg"
},
{
"msg_contents": "On Thu, Nov 21, 2019 at 9:21 PM Pavel Stehule <pavel.stehule@gmail.com> wrote:\n> I though about it, and I think so cast from multirange to range is useless, minimally it should be explicit.\n\nI agree: definitely not implicit. If I think of a good reason for it\nI'll add it, but otherwise I'll leave it out.\n\n> On second hand - from range to multirange should be implicit.\n\nOkay.\n\n> The original patch did\n>\n> 1. MR @x MR = MR\n> 2. R @x R = MR\n> 3. MR @x R = MR\n>\n> I think so @1 & @3 has sense, but without introduction of special operator. @2 is bad and can be solved by cast one or second operand.\n\nYes. I like how #2 follows the int/numeric analogy: if you want a\nnumeric result from `int / int` you can say `int::numeric / int`.\n\nSo my understanding is that conventionally cast functions are named\nafter the destination type, e.g. int8multirange(int8range) would be\nthe function to cast an int8range to an int8multirange. And\nint8range(int8multirange) would go the other way (if we do that). We\nalready use these names for the \"constructor\" functions, but I think\nthat is actually okay. For the multirange->range cast, the parameter\ntype & number are different, so there is no real conflict. For the\nrange->multirange cast, the parameter type is the same, and the\nconstructor function is variadic---but I think that's fine, because\nthe semantics are the same: build a multirange whose only element is\nthe given range:\n\nregression=# select int8multirange(int8range(1,2));\n int8multirange\n----------------\n {[1,2)}\n(1 row)\n\nEven the NULL handling is already what we want:\n\nregression=# select int8multirange(null);\n int8multirange\n----------------\n NULL\n(1 row)\n\nSo I think it's fine, but I'm curious whether you see any problems\nthere? (I guess if there is a problem it's no big deal to name the\nfunction something else....)\n\nThanks,\nPaul\n\n\n",
"msg_date": "Fri, 22 Nov 2019 08:25:41 -0800",
"msg_from": "Paul A Jungwirth <pj@illuminatedcomputing.com>",
"msg_from_op": false,
"msg_subject": "Re: range_agg"
},
{
"msg_contents": "pá 22. 11. 2019 v 17:25 odesílatel Paul A Jungwirth <\npj@illuminatedcomputing.com> napsal:\n\n> On Thu, Nov 21, 2019 at 9:21 PM Pavel Stehule <pavel.stehule@gmail.com>\n> wrote:\n> > I though about it, and I think so cast from multirange to range is\n> useless, minimally it should be explicit.\n>\n> I agree: definitely not implicit. If I think of a good reason for it\n> I'll add it, but otherwise I'll leave it out.\n>\n> > On second hand - from range to multirange should be implicit.\n>\n> Okay.\n>\n> > The original patch did\n> >\n> > 1. MR @x MR = MR\n> > 2. R @x R = MR\n> > 3. MR @x R = MR\n> >\n> > I think so @1 & @3 has sense, but without introduction of special\n> operator. @2 is bad and can be solved by cast one or second operand.\n>\n> Yes. I like how #2 follows the int/numeric analogy: if you want a\n> numeric result from `int / int` you can say `int::numeric / int`.\n>\n> So my understanding is that conventionally cast functions are named\n> after the destination type, e.g. int8multirange(int8range) would be\n> the function to cast an int8range to an int8multirange. And\n> int8range(int8multirange) would go the other way (if we do that). We\n> already use these names for the \"constructor\" functions, but I think\n> that is actually okay. For the multirange->range cast, the parameter\n> type & number are different, so there is no real conflict. For the\n> range->multirange cast, the parameter type is the same, and the\n> constructor function is variadic---but I think that's fine, because\n> the semantics are the same: build a multirange whose only element is\n> the given range:\n>\n> regression=# select int8multirange(int8range(1,2));\n> int8multirange\n> ----------------\n> {[1,2)}\n> (1 row)\n>\n> Even the NULL handling is already what we want:\n>\n> regression=# select int8multirange(null);\n> int8multirange\n> ----------------\n> NULL\n> (1 row)\n>\n> So I think it's fine, but I'm curious whether you see any problems\n> there? 
(I guess if there is a problem it's no big deal to name the\n> function something else....)\n>\n\nIt looks well now. I am not sure about benefit of cast from MR to R if MR\nhas more than one values. But it can be there for completeness.\n\nI think in this moment is not important to implement all functionality -\nfor start is good to implement basic functionality that can be good. It can\nbe enhanced step by step in next versions.\n\nPavel\n\n>\n> Thanks,\n> Paul\n>\n",
"msg_date": "Fri, 22 Nov 2019 17:57:12 +0100",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: range_agg"
},
{
"msg_contents": "Hi Paul\n\nI'm starting to look at this again. Here's a few proposed changes to\nthe current code as I read along.\n\nI noticed that 0001 does not compile on its own. It works as soon as I\nadd 0002. What this is telling me is that the current patch split is\nnot serving any goals; I think it's okay to merge them all into a single\ncommit. If you want to split it in smaller pieces, it needs to be stuff\nthat can be committed separately.\n\nOne possible candidate for that is the new makeUniqueTypeName function\nyou propose. I added this comment to explain what it does:\n\n /*\n- * makeUniqueTypeName: Prepend underscores as needed until we make a name that\n- * doesn't collide with anything. Tries the original typeName if requested.\n+ * makeUniqueTypeName\n+ * Generate a unique name for a prospective new type\n+ *\n+ * Given a typeName of length namelen, produce a new name into dest (an output\n+ * buffer allocated by caller, which must of length NAMEDATALEN) by prepending\n+ * underscores, until a non-conflicting name results.\n+ *\n+ * If tryOriginalName, first try with zero underscores.\n *\n * Returns the number of underscores added.\n */\n\nThis seems a little too strange; why not have the function allocate its\noutput buffer instead, and return it? In order to support the case of\nit failing to find an appropriate name, have it return NULL, for caller\nto throw the \"could not form ... name\" error.\n\nThe attached 0001 simplifies makeMultirangeConstructors; I think it was\ntoo baroque just because of it trying to imitate makeRangeConstructors;\nit seems easier to just repeat the ProcedureCreate call than trying to\nbe clever. (makeRangeConstructors's comment is lying about the number\nof constructors it creates after commit df73584431e7, BTW. 
But note\nthat the cruft in it to support doing it twice is not as much as in the\nnew code).\n\nThe other patches should be self-explanatory (I already submitted 0002\npreviously.)\n\nI'll keep going over the rest of it. Thanks!\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services",
"msg_date": "Fri, 29 Nov 2019 23:10:26 -0300",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: range_agg"
},
{
"msg_contents": "I took the liberty of rebasing this series on top of recent branch\nmaster. The first four are mostly Paul's originals, except for conflict\nfixes; the rest are changes I'm proposing as I go along figuring out the\nwhole thing. (I would post just my proposed changes, if it weren't for\nthe rebasing; apologies for the messiness.)\n\nI am not convinced that adding TYPTYPE_MULTIRANGE is really necessary.\nWhy can't we just treat those types as TYPTYPE_RANGE and distinguish\nthem using TYPCATEGORY_MULTIRANGE? That's what we do for arrays. I'll\ntry to do that next.\n\nI think the algorithm for coming up with the multirange name is\nsuboptimal. It works fine with the name is short enough that we can add\na few extra letters, but otherwise the result look pretty silly. I\nthink we can still improve on that. I propose to make\nmakeUniqueTypeName accept a suffix, and truncate the letters that appear\n*before* the suffix rather than truncating after it's been appended.\n\nThere's a number of ereport() calls that should become elog(); and a\nbunch of others that should probably acquire errcode() and be\nreformatted per our style.\n\n\nRegarding Pavel's documentation markup issue,\n\n> I am not sure how much is correct to use <literallayout class=\"monospaced\">\n> in doc. It is used for ranges, and multiranges, but no in other places\n\nI looked at the generated PDF and the table looks pretty bad; the words\nin those entries overlap the words in the cell to their right. But that\nalso happens with entries that do not use <literallayout class=\"x\">!\nSee [1] for an example of the existing docs being badly formatted. 
The\ndocbook documentation [2] seems to suggest that what Paul used is the\nappropriate way to do this.\n\nMaybe a way is to make each entry have more than one row -- so the\nexample would appear below the other three fields in its own row, and\nwould be able to use the whole width of the table.\n\n[1] https://twitter.com/alvherre/status/1205563468595781633\n[2] https://tdg.docbook.org/tdg/5.1/literallayout.html \n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services",
"msg_date": "Fri, 20 Dec 2019 14:43:21 -0300",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: range_agg"
},
{
"msg_contents": "pá 20. 12. 2019 v 18:43 odesílatel Alvaro Herrera <alvherre@2ndquadrant.com>\nnapsal:\n\n> I took the liberty of rebasing this series on top of recent branch\n> master. The first four are mostly Paul's originals, except for conflict\n> fixes; the rest are changes I'm proposing as I go along figuring out the\n> whole thing. (I would post just my proposed changes, if it weren't for\n> the rebasing; apologies for the messiness.)\n>\n> I am not convinced that adding TYPTYPE_MULTIRANGE is really necessary.\n> Why can't we just treat those types as TYPTYPE_RANGE and distinguish\n> them using TYPCATEGORY_MULTIRANGE? That's what we do for arrays. I'll\n> try to do that next.\n>\n> I think the algorithm for coming up with the multirange name is\n> suboptimal. It works fine with the name is short enough that we can add\n> a few extra letters, but otherwise the result look pretty silly. I\n> think we can still improve on that. I propose to make\n> makeUniqueTypeName accept a suffix, and truncate the letters that appear\n> *before* the suffix rather than truncating after it's been appended.\n>\n> There's a number of ereport() calls that should become elog(); and a\n> bunch of others that should probably acquire errcode() and be\n> reformatted per our style.\n>\n>\n> Regarding Pavel's documentation markup issue,\n>\n> > I am not sure how much is correct to use <literallayout\n> class=\"monospaced\">\n> > in doc. It is used for ranges, and multiranges, but no in other places\n>\n> I looked at the generated PDF and the table looks pretty bad; the words\n> in those entries overlap the words in the cell to their right. But that\n> also happens with entries that do not use <literallayout class=\"x\">!\n> See [1] for an example of the existing docs being badly formatted. 
The\n> docbook documentation [2] seems to suggest that what Paul used is the\n> appropriate way to do this.\n>\n> Maybe a way is to make each entry have more than one row -- so the\n> example would appear below the other three fields in its own row, and\n> would be able to use the whole width of the table.\n>\n\nI had a talk with Paul about possible simplification of designed operators.\nLast message from Paul was - he is working on new version.\n\nRegards\n\nPavel\n\n\n\n> [1] https://twitter.com/alvherre/status/1205563468595781633\n> [2] https://tdg.docbook.org/tdg/5.1/literallayout.html\n>\n> --\n> Álvaro Herrera https://www.2ndQuadrant.com/\n> PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n>",
"msg_date": "Fri, 20 Dec 2019 19:19:07 +0100",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: range_agg"
},
{
"msg_contents": "On 2019-Dec-20, Alvaro Herrera wrote:\n\n> I am not convinced that adding TYPTYPE_MULTIRANGE is really necessary.\n> Why can't we just treat those types as TYPTYPE_RANGE and distinguish\n> them using TYPCATEGORY_MULTIRANGE? That's what we do for arrays. I'll\n> try to do that next.\n\nI think this can be simplified if we make the multirange's\npg_type.typelem carry the base range's OID (the link in the other\ndirection already appears as pg_range.mltrngtypid, though I'd recommend\nrenaming that to pg_range.rngmultitypid to maintain the \"rng\" prefix\nconvention). Then we can distinguish a multirange from a plain range\neasily, both of which have typtype as TYPTYPE_RANGE, because typelem !=\n0 in a multi. That knowledge can be encapsulated easily in\ntype_is_multirange and pg_dump's getTypes.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Fri, 20 Dec 2019 16:13:06 -0300",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: range_agg"
},
{
"msg_contents": "On Fri, Dec 20, 2019 at 10:19 AM Pavel Stehule <pavel.stehule@gmail.com> wrote:\n> I had a talk with Paul about possible simplification of designed operators. Last message from Paul was - he is working on new version.\n\nThanks Alvaro & Pavel for helping move this forward. I've added the\ncasts but they aren't used automatically for things like\n`range_union(r, mr)` or `mr + r`, even though they are implicit.\nThat's because the casts are for concrete types (e.g. int4range ->\nint4multirange) but the functions & operators are for polymorphic\ntypes (anymultirange + anymultirange). So I'd like to get some\nfeedback about the best way to proceed.\n\nIs it permitted to add casts with polymorphic inputs & outputs? Is\nthat something that we would actually want to do? I'd probably need\nboth the polymorphic and concrete casts so that you could still say\n`int4range(1,2)::int4multirange`.\n\nShould I change the coerce code to look for casts among concrete types\nwhen the function has polymorphic types? I'm pretty scared to do\nsomething like that though, both because of the complexity and lest I\ncause unintended effects.\n\nShould I just give up on implicit casts and require you to specify\none? That makes it a little more annoying to mix range & multirange\ntypes, but personally I'm okay with that. This is my preferred\napproach.\n\nI have some time over the holidays to work on the other changes Alvaro\nhas suggested.\n\nThanks,\nPaul\n\n\n",
"msg_date": "Fri, 20 Dec 2019 11:20:28 -0800",
"msg_from": "Paul A Jungwirth <pj@illuminatedcomputing.com>",
"msg_from_op": false,
"msg_subject": "Re: range_agg"
},
{
"msg_contents": "On 2019-Dec-20, Paul A Jungwirth wrote:\n\n> Is it permitted to add casts with polymorphic inputs & outputs? Is\n> that something that we would actually want to do? I'd probably need\n> both the polymorphic and concrete casts so that you could still say\n> `int4range(1,2)::int4multirange`.\n\nI'm embarrassed to admit that I don't grok the type system well enough\n(yet) to answer this question.\n\n> Should I change the coerce code to look for casts among concrete types\n> when the function has polymorphic types? I'm pretty scared to do\n> something like that though, both because of the complexity and lest I\n> cause unintended effects.\n\nYeah, I suggest to stay away from that. I think this multirange thing\nis groundbreaking enough that we don't need to cause additional\npotential breakage.\n\n> Should I just give up on implicit casts and require you to specify\n> one? That makes it a little more annoying to mix range & multirange\n> types, but personally I'm okay with that. This is my preferred\n> approach.\n\n+1\n\n> I have some time over the holidays to work on the other changes Alvaro\n> has suggested.\n\nI hope not to have made things worse by posting a rebase. Anyway,\nthat's the reason I posted my other changes separately.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Fri, 20 Dec 2019 16:29:01 -0300",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: range_agg"
},
{
"msg_contents": "Alvaro Herrera <alvherre@2ndquadrant.com> writes:\n> On 2019-Dec-20, Paul A Jungwirth wrote:\n>> Is it permitted to add casts with polymorphic inputs & outputs? Is\n>> that something that we would actually want to do? I'd probably need\n>> both the polymorphic and concrete casts so that you could still say\n>> `int4range(1,2)::int4multirange`.\n\n> I'm embarrased to admit that I don't grok the type system well enough\n> (yet) to answer this question.\n\nI would say no; if you want behavior like that you'd have to add code for\nit into the coercion machinery, much like the casts around, say, types\nrecord and record[] versus named composites and arrays of same. Expecting\nthe generic cast machinery to do the right thing would be foolhardy.\n\nIn any case, even if it did do the right thing, you'd still need\nsome additional polymorphic type to express the behavior you wanted,\nno? So it's not clear there'd be any net savings of effort.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 20 Dec 2019 16:15:24 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: range_agg"
},
{
"msg_contents": "On Fri, Dec 20, 2019 at 11:29 AM Alvaro Herrera\n<alvherre@2ndquadrant.com> wrote:\n> > Should I just give up on implicit casts and require you to specify\n> > one? That makes it a little more annoying to mix range & multirange\n> > types, but personally I'm okay with that. This is my preferred\n> > approach.\n>\n> +1\n\nHere is a patch adding the casts, rebasing on the latest master, and\nincorporating Alvaro's changes. Per his earlier suggestion I combined\nit all into one patch file, which also makes it easier to apply\nrebases & updates.\n\nMy work on adding casts also removes the @+ / @- / @* operators and\nadds + / - / * operators where both parameters are multiranges. I\nretained other operators with mixed range/multirange parameters, both\nbecause there are already range operators with mixed range/scalar\nparameters (e.g. <@), and because it seemed like the objection to @+ /\n@- / @* was not mixed parameters per se, but rather their\nunguessability. Since the other operators are the same as the existing\nrange operators, they don't share that problem.\n\nThis still leaves the question of how best to format the docs for\nthese operators. I like being able to combine all the <@ variations\n(e.g.) into one table row, but if that is too ugly I could give them\nseparate rows instead. Giving them all their own row consumes a lot of\nvertical space though, and to me that makes the docs more tedious to\nread & browse, so it's harder to grasp all the available range-related\noperations at a glance.\n\nI'm skeptical of changing pg_type.typtype from 'm' to 'r'. A\nmultirange isn't a range, so why should we give it the same type? Also\nwon't this break any queries that are using that column to find range\ntypes? What is the motivation to use the same typtype for both ranges\nand multiranges? (There is plenty I don't understand here, e.g. 
why we\nhave both typtype and typcategory, so maybe there is a good reason I'm\nmissing.)\n\nI experimented with setting pg_type.typelem to the multirange's range\ntype, but it seemed to break a lot of things, and reading the code I\nsaw some places that treat a non-zero typelem as synonymous with being\nan array. So I'm reluctant to make this change also, especially when\nit is just as easy to query pg_range to get a multirange's range type.\nAlso range types themselves don't set typelem to their base type, and\nit seems like we'd want to treat ranges and multiranges the same way\nhere.\n\nAlvaro also suggested renaming pg_range.mltrngtypid to\npg_range.rngmultitypid, so it shares the same \"rng\" prefix as the\nother columns in this table. Having a different prefix does stand out.\nI've included that change in this patch too.\n\nYours,\nPaul",
"msg_date": "Fri, 3 Jan 2020 21:29:33 -0800",
"msg_from": "Paul A Jungwirth <pj@illuminatedcomputing.com>",
"msg_from_op": false,
"msg_subject": "Re: range_agg"
},
{
"msg_contents": "Hi\n\nso 4. 1. 2020 v 6:29 odesílatel Paul A Jungwirth <\npj@illuminatedcomputing.com> napsal:\n\n> On Fri, Dec 20, 2019 at 11:29 AM Alvaro Herrera\n> <alvherre@2ndquadrant.com> wrote:\n> > > Should I just give up on implicit casts and require you to specify\n> > > one? That makes it a little more annoying to mix range & multirange\n> > > types, but personally I'm okay with that. This is my preferred\n> > > approach.\n> >\n> > +1\n>\n> Here is a patch adding the casts, rebasing on the latest master, and\n> incorporating Alvaro's changes. Per his earlier suggestion I combined\n> it all into one patch file, which also makes it easier to apply\n> rebases & updates.\n>\n\nThis patch was applied cleanly and all tests passed\n\n\n\n>\n> My work on adding casts also removes the @+ / @- / @* operators and\n> adds + / - / * operators where both parameters are multiranges. I\n> retained other operators with mixed range/multirange parameters, both\n> because there are already range operators with mixed range/scalar\n> parameters (e.g. <@), and because it seemed like the objection to @+ /\n> @- / @* was not mixed parameters per se, but rather their\n> unguessability. Since the other operators are the same as the existing\n> range operators, they don't share that problem.\n>\n\nlooks well\n\n\n>\n> This still leaves the question of how best to format the docs for\n> these operators. I like being able to combine all the <@ variations\n> (e.g.) into one table row, but if that is too ugly I could give them\n> separate rows instead. Giving them all their own row consumes a lot of\n> vertical space though, and to me that makes the docs more tedious to\n> read & browse, so it's harder to grasp all the available range-related\n> operations at a glance.\n>\n\nI have similar opinion - maybe is better do documentation for range and\nmultirange separately. Sometimes there are still removed operators @+\n\n\n> I'm skeptical of changing pg_type.typtype from 'm' to 'r'. 
A\n> multirange isn't a range, so why should we give it the same type? Also\n> won't this break any queries that are using that column to find range\n> types? What is the motivation to use the same typtype for both ranges\n> and multiranges? (There is plenty I don't understand here, e.g. why we\n> have both typtype and typcategory, so maybe there is a good reason I'm\n> missing.)\n>\n\nIf you can share TYPTYPE_RANGE in code for multiranges, then it should be\n'r'. If not, then it needs own value.\n\n\n> I experimented with setting pg_type.typelem to the multirange's range\n> type, but it seemed to break a lot of things, and reading the code I\n> saw some places that treat a non-zero typelem as synonymous with being\n> an array. So I'm reluctant to make this change also, especially when\n> it is just as easy to query pg_range to get a multirange's range type.\n>\n\nok, it is unhappy, but it is true. This note should be somewhere in code,\nplease\n\n\n> Also range types themselves don't set typelem to their base type, and\n> it seems like we'd want to treat ranges and multiranges the same way\n> here.\n>\n> Alvaro also suggested renaming pg_range.mltrngtypid to\n> pg_range.rngmultitypid, so it shares the same \"rng\" prefix as the\n> other columns in this table. Having a different prefix does stand out.\n> I've included that change in this patch too.\n>\n\nPersonally I have not any comments to implemented functionality.\n\n>\n> Yours,\n> Paul\n>",
"msg_date": "Fri, 10 Jan 2020 10:38:21 +0100",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: range_agg"
},
{
"msg_contents": "On Fri, Jan 10, 2020 at 1:38 AM Pavel Stehule <pavel.stehule@gmail.com> wrote:\n>> This still leaves the question of how best to format the docs for\n>> these operators. I like being able to combine all the <@ variations\n>> (e.g.) into one table row, but if that is too ugly I could give them\n>> separate rows instead. Giving them all their own row consumes a lot of\n>> vertical space though, and to me that makes the docs more tedious to\n>> read & browse, so it's harder to grasp all the available range-related\n>> operations at a glance.\n>\n>\n> I have similar opinion - maybe is better do documentation for range and multirange separately. Sometimes there are still removed operators @+\n\nI like keeping the range/multirange operators together since they are\nso similar for both types, but if others disagree I'd be grateful for\nmore feedback.\n\nYou're right that I left in a few references to the old @+ style\noperators in the examples; I've fixed those.\n\n> If you can share TYPTYPE_RANGE in code for multiranges, then it should be 'r'. If not, then it needs own value.\n\nOkay. I think a new 'm' value is warranted because they are not interchangeable.\n\n>> I experimented with setting pg_type.typelem to the multirange's range\n>> type, but it seemed to break a lot of things, and reading the code I\n>> saw some places that treat a non-zero typelem as synonymous with being\n>> an array. So I'm reluctant to make this change also, especially when\n>> it is just as easy to query pg_range to get a multirange's range type.\n>\n>\n> ok, it is unhappy, but it is true. This note should be somewhere in code, please\n\nI've added a comment about this. I put it at the top of DefineRange\nbut let me know if that's the wrong place.\n\nThe attached file is also rebased on currrent master.\n\nThanks!\nPaul",
"msg_date": "Fri, 17 Jan 2020 12:07:54 -0800",
"msg_from": "Paul A Jungwirth <pj@illuminatedcomputing.com>",
"msg_from_op": false,
"msg_subject": "Re: range_agg"
},
{
"msg_contents": "pá 17. 1. 2020 v 21:08 odesílatel Paul A Jungwirth <\npj@illuminatedcomputing.com> napsal:\n\n> On Fri, Jan 10, 2020 at 1:38 AM Pavel Stehule <pavel.stehule@gmail.com>\n> wrote:\n> >> This still leaves the question of how best to format the docs for\n> >> these operators. I like being able to combine all the <@ variations\n> >> (e.g.) into one table row, but if that is too ugly I could give them\n> >> separate rows instead. Giving them all their own row consumes a lot of\n> >> vertical space though, and to me that makes the docs more tedious to\n> >> read & browse, so it's harder to grasp all the available range-related\n> >> operations at a glance.\n> >\n> >\n> > I have similar opinion - maybe is better do documentation for range and\n> multirange separately. Sometimes there are still removed operators @+\n>\n> I like keeping the range/multirange operators together since they are\n> so similar for both types, but if others disagree I'd be grateful for\n> more feedback.\n>\n\nok\n\n>\n> You're right that I left in a few references to the old @+ style\n> operators in the examples; I've fixed those.\n>\n> > If you can share TYPTYPE_RANGE in code for multiranges, then it should\n> be 'r'. If not, then it needs own value.\n>\n> Okay. I think a new 'm' value is warranted because they are not\n> interchangeable.\n>\n> >> I experimented with setting pg_type.typelem to the multirange's range\n> >> type, but it seemed to break a lot of things, and reading the code I\n> >> saw some places that treat a non-zero typelem as synonymous with being\n> >> an array. So I'm reluctant to make this change also, especially when\n> >> it is just as easy to query pg_range to get a multirange's range type.\n> >\n> >\n> > ok, it is unhappy, but it is true. This note should be somewhere in\n> code, please\n>\n> I've added a comment about this. 
I put it at the top of DefineRange\n> but let me know if that's the wrong place.\n>\n> The attached file is also rebased on currrent master.\n>\n\nCan be nice to have a polymorphic function\n\nmultirange(anymultirange, anyrange) returns anymultirange. This functions\nshould to do multirange from $2 to type $1\n\nIt can enhance to using polymorphic types and simplify casting.\n\nUsage\n\nCREATE OR REPLACE FUNCTION diff(anymultirange, anyrange)\nRETURNS anymultirange AS $$\n  SELECT $1 - multirange($1, $2)\n$$ LANGUAGE sql;\n\nwhen I tried to write this function in plpgsql I got\n\ncreate or replace function multirange(anymultirange, anyrange) returns\nanymultirange as $$\nbegin\n  execute format('select $2::%I', pg_typeof($1)) into $1;\n  return $1;\nend;\n$$ language plpgsql immutable strict;\n\nERROR:  unrecognized typtype: 109\nCONTEXT:  compilation of PL/pgSQL function \"multirange\" near line 1\n\nSo probably some support in PL is missing\n\nBut all others looks very well\n\nRegards\n\nPavel\n\n\n\n\n>\n> Thanks!\n> Paul\n>",
"msg_date": "Sat, 18 Jan 2020 16:19:24 +0100",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: range_agg"
},
{
"msg_contents": "On Sat, Jan 18, 2020 at 7:20 AM Pavel Stehule <pavel.stehule@gmail.com> wrote:\n> Can be nice to have a polymorphic function\n>\n> multirange(anymultirange, anyrange) returns anymultirange. This functions should to do multirange from $2 to type $1\n>\n> It can enhance to using polymorphic types and simplify casting.\n\nThanks for taking another look! I actually have that already but it is\nnamed anymultirange:\n\nregression=# select anymultirange(int4range(1,2));\n anymultirange\n---------------\n {[1,2)}\n(1 row)\n\nWill that work for you?\n\nI think I only wrote that to satisfy some requirement of having an\nanymultirange type, but I agree it could be useful. (I even used it in\nthe regress tests.) Maybe it's worth documenting too?\n\n> when I tried to write this function in plpgsql I got\n>\n> create or replace function multirange(anymultirange, anyrange) returns anymultirange as $$\n> begin\n> execute format('select $2::%I', pg_typeof($1)) into $1;\n> return $1;\n> end;\n> $$ language plpgsql immutable strict;\n>\n> ERROR: unrecognized typtype: 109\n> CONTEXT: compilation of PL/pgSQL function \"multirange\" near line 1\n\nHmm, I'll add a test for that and see if I can find the problem.\n\nThanks!\nPaul\n\n\n",
"msg_date": "Sat, 18 Jan 2020 08:07:08 -0800",
"msg_from": "Paul A Jungwirth <pj@illuminatedcomputing.com>",
"msg_from_op": false,
"msg_subject": "Re: range_agg"
},
{
    "msg_contents": "so 18. 1. 2020 v 17:07 odesílatel Paul A Jungwirth <\npj@illuminatedcomputing.com> napsal:\n\n> On Sat, Jan 18, 2020 at 7:20 AM Pavel Stehule <pavel.stehule@gmail.com>\n> wrote:\n> > Can be nice to have a polymorphic function\n> >\n> > multirange(anymultirange, anyrange) returns anymultirange. This\n> functions should to do multirange from $2 to type $1\n> >\n> > It can enhance to using polymorphic types and simplify casting.\n>\n> Thanks for taking another look! I actually have that already but it is\n> named anymultirange:\n>\n> regression=# select anymultirange(int4range(1,2));\n> anymultirange\n> ---------------\n> {[1,2)}\n> (1 row)\n>\n> Will that work for you?\n>\n\nIt's better than I though\n\n\n> I think I only wrote that to satisfy some requirement of having an\n> anymultirange type, but I agree it could be useful. (I even used it in\n> the regress tests.) Maybe it's worth documenting too?\n>\n\nyes\n\n\n> > when I tried to write this function in plpgsql I got\n> >\n> > create or replace function multirange(anymultirange, anyrange) returns\n> anymultirange as $$\n> > begin\n> > execute format('select $2::%I', pg_typeof($1)) into $1;\n> > return $1;\n> > end;\n> > $$ language plpgsql immutable strict;\n> >\n> > ERROR: unrecognized typtype: 109\n> > CONTEXT: compilation of PL/pgSQL function \"multirange\" near line 1\n>\n> Hmm, I'll add a test for that and see if I can find the problem.\n>\n\nok\n\n\n> Thanks!\n> Paul\n>",
"msg_date": "Sat, 18 Jan 2020 17:35:54 +0100",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: range_agg"
},
{
    "msg_contents": "so 18. 1. 2020 v 17:35 odesílatel Pavel Stehule <pavel.stehule@gmail.com>\nnapsal:\n\n>\n>\n> so 18. 1. 2020 v 17:07 odesílatel Paul A Jungwirth <\n> pj@illuminatedcomputing.com> napsal:\n>\n>> On Sat, Jan 18, 2020 at 7:20 AM Pavel Stehule <pavel.stehule@gmail.com>\n>> wrote:\n>> > Can be nice to have a polymorphic function\n>> >\n>> > multirange(anymultirange, anyrange) returns anymultirange. This\n>> functions should to do multirange from $2 to type $1\n>> >\n>> > It can enhance to using polymorphic types and simplify casting.\n>>\n>> Thanks for taking another look! I actually have that already but it is\n>> named anymultirange:\n>>\n>> regression=# select anymultirange(int4range(1,2));\n>> anymultirange\n>> ---------------\n>> {[1,2)}\n>> (1 row)\n>>\n>> Will that work for you?\n>>\n>\n> It's better than I though\n>\n>\n>> I think I only wrote that to satisfy some requirement of having an\n>> anymultirange type, but I agree it could be useful. (I even used it in\n>> the regress tests.) Maybe it's worth documenting too?\n>>\n>\nNow, I think so name \"anymultirange\" is not good. Maybe better name is just\n\"multirange\"\n\n\n\n> yes\n>\n>\n>> > when I tried to write this function in plpgsql I got\n>> >\n>> > create or replace function multirange(anymultirange, anyrange) returns\n>> anymultirange as $$\n>> > begin\n>> > execute format('select $2::%I', pg_typeof($1)) into $1;\n>> > return $1;\n>> > end;\n>> > $$ language plpgsql immutable strict;\n>> >\n>> > ERROR: unrecognized typtype: 109\n>> > CONTEXT: compilation of PL/pgSQL function \"multirange\" near line 1\n>>\n>> Hmm, I'll add a test for that and see if I can find the problem.\n>>\n>\n> ok\n>\n>\n>> Thanks!\n>> Paul\n>>\n>",
"msg_date": "Sun, 19 Jan 2020 09:09:56 +0100",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: range_agg"
},
{
"msg_contents": "On Sun, Jan 19, 2020 at 12:10 AM Pavel Stehule <pavel.stehule@gmail.com> wrote:\n> Now, I think so name \"anymultirange\" is not good. Maybe better name is just \"multirange\"\n\nAre you sure? This function exists to be a cast to an anymultirange,\nand I thought the convention was to name cast functions after their\ndestination type. I can change it, but in my opinion anymultirange\nfollows the Postgres conventions better. But just let me know and I'll\ndo \"multirange\" instead!\n\nYours,\nPaul\n\n\n",
"msg_date": "Sun, 19 Jan 2020 15:34:25 -0800",
"msg_from": "Paul A Jungwirth <pj@illuminatedcomputing.com>",
"msg_from_op": false,
"msg_subject": "Re: range_agg"
},
{
"msg_contents": "Paul A Jungwirth <pj@illuminatedcomputing.com> writes:\n> On Sun, Jan 19, 2020 at 12:10 AM Pavel Stehule <pavel.stehule@gmail.com> wrote:\n>> Now, I think so name \"anymultirange\" is not good. Maybe better name is just \"multirange\"\n\n> Are you sure? This function exists to be a cast to an anymultirange,\n> and I thought the convention was to name cast functions after their\n> destination type.\n\nTrue for casts involving concrete types, mainly because we'd like\nthe identity \"value::typename == typename(value)\" to hold without\ntoo much worry about whether the latter is a plain function call\nor a special case. Not sure whether it makes as much sense for\npolymorphics, since casting to a polymorphic type is pretty silly:\nwe do seem to allow you to do that, but it's a no-op.\n\nI'm a little troubled by the notion that what you're talking about\nhere is not a no-op (if it were, you wouldn't need a function).\nThat seems like there's something fundamentally not quite right\neither with the design or with how you're thinking about it.\n\nAs a comparison point, we sometimes describe subscripting as\nbeing a polymorphic operation like\n\n\tsubscript(anyarray, integer) returns anyelement\n\nIt would be completely unhelpful to call that anyelement().\nI feel like you might be making a similar mistake here.\n\nAlternatively, consider this: a cast from some concrete multirange type\nto anymultirange is a no-op, while any other sort of cast probably ought\nto be casting to some particular concrete multirange type. That would\nline up with the existing operations for plain ranges.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 19 Jan 2020 19:38:13 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: range_agg"
},
{
    "msg_contents": "po 20. 1. 2020 v 1:38 odesílatel Tom Lane <tgl@sss.pgh.pa.us> napsal:\n\n> Paul A Jungwirth <pj@illuminatedcomputing.com> writes:\n> > On Sun, Jan 19, 2020 at 12:10 AM Pavel Stehule <pavel.stehule@gmail.com>\n> wrote:\n> >> Now, I think so name \"anymultirange\" is not good. Maybe better name is\n> just \"multirange\"\n>\n> > Are you sure? This function exists to be a cast to an anymultirange,\n> > and I thought the convention was to name cast functions after their\n> > destination type.\n>\n> True for casts involving concrete types, mainly because we'd like\n> the identity \"value::typename == typename(value)\" to hold without\n> too much worry about whether the latter is a plain function call\n> or a special case. Not sure whether it makes as much sense for\n> polymorphics, since casting to a polymorphic type is pretty silly:\n> we do seem to allow you to do that, but it's a no-op.\n>\n> I'm a little troubled by the notion that what you're talking about\n> here is not a no-op (if it were, you wouldn't need a function).\n> That seems like there's something fundamentally not quite right\n> either with the design or with how you're thinking about it.\n>\n\nI thinking about completeness of operations\n\nI can to write\n\nCREATE OR REPLACE FUNCTION fx(anyarray, anyelement)\nRETURNS anyarray AS $$\nSELECT $1 || ARRAY[$2]\n$$ LANGUAGE sql;\n\nI need to some functionality for moving a value to different category (it\nis more generic than casting to specific type (that can hold category)\n\nCREATE OR REPLACE FUNCTION fx(anymultirange, anyrange)\nRETURNS anyrage AS $$\nSELECT $1 + multirange($1)\n$$ LANGUAGE sql;\n\nis just a analogy.\n\nRegards\n\nPavel\n\n\n> As a comparison point, we sometimes describe subscripting as\n> being a polymorphic operation like\n>\n> subscript(anyarray, integer) returns anyelement\n>\n> It would be completely unhelpful to call that anyelement().\n> I feel like you might be making a similar mistake here.\n>\n> Alternatively, consider this: a cast from some concrete multirange type\n> to anymultirange is a no-op, while any other sort of cast probably ought\n> to be casting to some particular concrete multirange type. That would\n> line up with the existing operations for plain ranges.\n>\n\n\n\n\n\n> regards, tom lane\n>",
"msg_date": "Mon, 20 Jan 2020 05:49:38 +0100",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: range_agg"
},
{
"msg_contents": "On Sun, Jan 19, 2020 at 4:38 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> True for casts involving concrete types, mainly because we'd like\n> the identity \"value::typename == typename(value)\" to hold without\n> too much worry about whether the latter is a plain function call\n> or a special case. Not sure whether it makes as much sense for\n> polymorphics, since casting to a polymorphic type is pretty silly:\n> we do seem to allow you to do that, but it's a no-op.\n>\n> ...\n>\n> Alternatively, consider this: a cast from some concrete multirange type\n> to anymultirange is a no-op, while any other sort of cast probably ought\n> to be casting to some particular concrete multirange type. That would\n> line up with the existing operations for plain ranges.\n\nI agree you wouldn't actually cast by saying x::anymultirange, and the\ncasts we define are already concrete, so instead you'd say\nx::int4multirange. But I think having a polymorphic function to\nconvert from an anyrange to an anymultirange is useful so you can\nwrite generic functions. I can see how calling it \"anymultirange\" may\nbe preferring the implementor perspective over the user perspective\nthough, and how simply \"multirange\" would be more empathetic. I don't\nmind taking that approach.\n\nYours,\nPaul\n\n\n",
"msg_date": "Sun, 19 Jan 2020 21:57:57 -0800",
"msg_from": "Paul A Jungwirth <pj@illuminatedcomputing.com>",
"msg_from_op": false,
"msg_subject": "Re: range_agg"
},
{
"msg_contents": "On Sun, Jan 19, 2020 at 9:57 PM Paul A Jungwirth\n<pj@illuminatedcomputing.com> wrote:\n> On Sun, Jan 19, 2020 at 4:38 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > True for casts involving concrete types, mainly because we'd like\n> > the identity \"value::typename == typename(value)\" to hold without\n> > too much worry about whether the latter is a plain function call\n> > or a special case. Not sure whether it makes as much sense for\n> > polymorphics, since casting to a polymorphic type is pretty silly:\n> > we do seem to allow you to do that, but it's a no-op.\n> >\n> > ...\n> >\n> > Alternatively, consider this: a cast from some concrete multirange type\n> > to anymultirange is a no-op, while any other sort of cast probably ought\n> > to be casting to some particular concrete multirange type. That would\n> > line up with the existing operations for plain ranges.\n>\n> I agree you wouldn't actually cast by saying x::anymultirange, and the\n> casts we define are already concrete, so instead you'd say\n> x::int4multirange. But I think having a polymorphic function to\n> convert from an anyrange to an anymultirange is useful so you can\n> write generic functions. I can see how calling it \"anymultirange\" may\n> be preferring the implementor perspective over the user perspective\n> though, and how simply \"multirange\" would be more empathetic. I don't\n> mind taking that approach.\n\nHere is a patch with anymultirange(anyrange) renamed to\nmultirange(anyrange). I also rebased on the latest master, added\ndocumentation about the multirange(anyrange) function, and slightly\nadjusted the formatting of the range functions table.\n\nThanks,\nPaul",
"msg_date": "Tue, 21 Jan 2020 15:54:52 -0800",
"msg_from": "Paul A Jungwirth <pj@illuminatedcomputing.com>",
"msg_from_op": false,
"msg_subject": "Re: range_agg"
},
{
    "msg_contents": "Hi\n\nst 22. 1. 2020 v 0:55 odesílatel Paul A Jungwirth <\npj@illuminatedcomputing.com> napsal:\n\n> On Sun, Jan 19, 2020 at 9:57 PM Paul A Jungwirth\n> <pj@illuminatedcomputing.com> wrote:\n> > On Sun, Jan 19, 2020 at 4:38 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > > True for casts involving concrete types, mainly because we'd like\n> > > the identity \"value::typename == typename(value)\" to hold without\n> > > too much worry about whether the latter is a plain function call\n> > > or a special case. Not sure whether it makes as much sense for\n> > > polymorphics, since casting to a polymorphic type is pretty silly:\n> > > we do seem to allow you to do that, but it's a no-op.\n> > >\n> > > ...\n> > >\n> > > Alternatively, consider this: a cast from some concrete multirange type\n> > > to anymultirange is a no-op, while any other sort of cast probably\n> ought\n> > > to be casting to some particular concrete multirange type. That would\n> > > line up with the existing operations for plain ranges.\n> >\n> > I agree you wouldn't actually cast by saying x::anymultirange, and the\n> > casts we define are already concrete, so instead you'd say\n> > x::int4multirange. But I think having a polymorphic function to\n> > convert from an anyrange to an anymultirange is useful so you can\n> > write generic functions. I can see how calling it \"anymultirange\" may\n> > be preferring the implementor perspective over the user perspective\n> > though, and how simply \"multirange\" would be more empathetic. I don't\n> > mind taking that approach.\n>\n> Here is a patch with anymultirange(anyrange) renamed to\n> multirange(anyrange). I also rebased on the latest master, added\n> documentation about the multirange(anyrange) function, and slightly\n> adjusted the formatting of the range functions table.\n>\n\nI think so this patch is ready for commiter.\n\nAll tests passed, the doc is good enough (the chapter name \"Range functions\nand Operators\" should be renamed to \"Range/multirange functions and\nOperators\"\nThe code formatting and comments looks well\n\n\nThank you for your work\n\nRegards\n\nPavel\n\n\n> Thanks,\n> Paul\n>",
"msg_date": "Fri, 24 Jan 2020 08:08:56 +0100",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: range_agg"
},
{
    "msg_contents": "I started to review this again. I'm down to figuring out whether the\ntypecache changes make sense; in doing so I realized that the syscaches\nweren't perfectly defined (I think leftovers from when there was a\npg_multirange catalog, earlier in development), so I fixed that.\n\n0001 is mostly Paul's v10 patch, rebased to current master; no\nconflicts, I had to make a couple of small other adjustments to catch up\nwith current times.\n\nThe other patches are fairly obvious; changes in 0004 are described in\nits commit message.\n\nI'll continue to try to think through the typecache aspects of this\npatch.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services",
"msg_date": "Tue, 3 Mar 2020 23:28:49 -0300",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: range_agg"
},
{
    "msg_contents": "I came across an interesting thing, namely multirange_canonicalize()'s\nuse of qsort_arg with a callback of range_compare(). range_compare()\ncalls range_deserialize() (non-trivial parsing) for each input range;\nmultirange_canonicalize() later does a few extra deserialize calls of\nits own. Call me a premature optimization guy if you will, but I think\nit makes sense to have a different struct (let's call it\n\"InMemoryRange\") which stores the parsed representation of each range;\nthen we can deserialize all ranges up front, and use that as many times\nas needed, without having to deserialize each range every time.\n\nWhile I'm at this, why not name the new file simply multiranges.c\ninstead of multirangetypes.c?\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services",
"msg_date": "Wed, 4 Mar 2020 18:33:21 -0300",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: range_agg"
},
{
"msg_contents": "Thanks for looking at this again!\n\nOn 3/4/20 1:33 PM, Alvaro Herrera wrote:\n> I came across an interesting thing, namely multirange_canonicalize()'s\n> use of qsort_arg with a callback of range_compare(). range_compare()\n> calls range_deserialize() (non-trivial parsing) for each input range;\n> multirange_canonicalize() later does a few extra deserialize calls of\n> its own. Call me a premature optimization guy if you will, but I think\n> it makes sense to have a different struct (let's call it\n> \"InMemoryRange\") which stores the parsed representation of each range;\n> then we can deserialize all ranges up front, and use that as many times\n> as needed, without having to deserialize each range every time.\n\nI don't know, this sounds like a drastic change. I agree that \nmultirange_deserialize and range_deserialize do a lot of copying (not \nreally any parsing though, and they both assume their inputs are already \nde-TOASTED). But they are used very extensively, so if you wanted to \nremove them you'd have to rewrite a lot.\n\nI interpreted the intention of range_deserialize to be a way to keep the \nrange struct fairly \"private\" and give a standard interface to \nextracting its attributes. Its motive seems akin to deconstruct_array. \nSo I wrote multirange_deserialize to follow that principle. Both \nfunctions also handle memory alignment issues for you. With \nmultirange_deserialize, there isn't actually much structure (just the \nlist of ranges), so perhaps you could more easily omit it and give \ncallers direct access into the multirange contents. That still seems \nrisky though, and less well encapsulated.\n\nMy preference would be to see if these functions are really a \nperformance problem first, and only redo the in-memory structures if \nthey are. Also that seems like something you could do as a separate \nproject. (I wouldn't mind working on it myself, although I'd prefer to \ndo actual temporal database features first.) 
There are no \nbackwards-compatibility concerns to changing the in-memory structure, \nright? (Even if there are, it's too late to avoid them for ranges.)\n\n> While I'm at this, why not name the new file simply multiranges.c\n> instead of multirangetypes.c?\n\nAs someone who doesn't do a lot of Postgres hacking, I tried to follow \nthe approach in rangetypes.c as closely as I could, especially for \nnaming things. So I named the file multirangetypes.c because there was \nalready rangetypes.c. But also I can see how the \"types\" emphasizes that \nranges and multiranges are not concrete types themselves, but more like \nabstract data types or generics (like arrays).\n\nYours,\n\n-- \nPaul ~{:-)\npj@illuminatedcomputing.com\n\n\n",
"msg_date": "Wed, 4 Mar 2020 14:26:50 -0800",
"msg_from": "Paul Jungwirth <pj@illuminatedcomputing.com>",
"msg_from_op": true,
"msg_subject": "Re: range_agg"
},
{
"msg_contents": "Alvaro Herrera <alvherre@2ndquadrant.com> writes:\n> [ v11 patches ]\n\nThe cfbot isn't too happy with this; it's getting differently-ordered\nresults than you apparently did for the list of owned objects in\ndependency.out's DROP OWNED BY test. Not sure why that should be ---\nit seems like af6550d34 should have ensured that there's only one\npossible ordering.\n\nHowever, what I'm on about right at the moment is that I don't think\nthere should be any delta in that test at all. As far as I can see,\nthe design idea here is that multiranges will be automatically created\nover range types, and the user doesn't need to do that. To my mind,\nthat means that they're an implementation detail and should not show up as\nseparately-owned objects, any more than an autogenerated array type does.\nSo somewhere there's a missing bit of code, or more than one missing bit,\nto make multiranges act as derived types, the way arrays are.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 07 Mar 2020 15:19:59 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: range_agg"
},
{
"msg_contents": "I wrote:\n> However, what I'm on about right at the moment is that I don't think\n> there should be any delta in that test at all. As far as I can see,\n> the design idea here is that multiranges will be automatically created\n> over range types, and the user doesn't need to do that. To my mind,\n> that means that they're an implementation detail and should not show up as\n> separately-owned objects, any more than an autogenerated array type does.\n\nActually ... have you given any thought to just deciding that ranges and\nmultiranges are the same type? That is, any range can now potentially\ncontain multiple segments? That would eliminate a whole lot of the\ntedious infrastructure hacking involved in this patch, and let you focus\non the actually-useful functionality.\n\nIt's possible that this is a bad idea. It bears a lot of similarity,\nI guess, to the way that Postgres doesn't consider arrays of different\ndimensionality to be distinct types. That has some advantages but it\nsurely also has downsides. I think on the whole the advantages win,\nand I feel like that might also be the case here.\n\nThe gating requirement for this would be to make sure that a plain\nrange and a multirange can be told apart by contents. The first idea that\ncomes to mind is to repurpose the allegedly-unused RANGE_xB_NULL bits in\nthe flag byte at the end of the datum. If one of them is set, then it's a\nmultirange, and we use a different interpretation of the bytes between the\ntype OID and the flag byte.\n\nAssuming that that's ok, it seems like we could consider the traditional\nrange functions like lower() and upper() to report on the first or last\nrange bound in a multirange --- essentially, they ignore any \"holes\"\nthat exist inside the range. And the new functions for multiranges\nact much like array slicing, in that they give you back pieces of a range\nthat aren't actually of a distinct type.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 07 Mar 2020 16:06:32 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: range_agg"
},
{
"msg_contents": "I wrote:\n> Actually ... have you given any thought to just deciding that ranges and\n> multiranges are the same type? That is, any range can now potentially\n> contain multiple segments? That would eliminate a whole lot of the\n> tedious infrastructure hacking involved in this patch, and let you focus\n> on the actually-useful functionality.\n\nAlso, this would allow us to remove at least one ugly misfeature:\n\nregression=# select '[1,2]'::int4range + '[3,10)'::int4range;\n ?column? \n----------\n [1,10)\n(1 row)\n\nregression=# select '[1,2]'::int4range + '[4,10)'::int4range;\nERROR: result of range union would not be contiguous\n\nIf the result of range_union can be a multirange as easily as not,\nwe would no longer have to throw an error here.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 07 Mar 2020 16:20:58 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: range_agg"
},
{
    "msg_contents": "so 7. 3. 2020 v 22:20 odesílatel Tom Lane <tgl@sss.pgh.pa.us> napsal:\n\n> I wrote:\n> > Actually ... have you given any thought to just deciding that ranges and\n> > multiranges are the same type? That is, any range can now potentially\n> > contain multiple segments? That would eliminate a whole lot of the\n> > tedious infrastructure hacking involved in this patch, and let you focus\n> > on the actually-useful functionality.\n>\n> Also, this would allow us to remove at least one ugly misfeature:\n>\n> regression=# select '[1,2]'::int4range + '[3,10)'::int4range;\n> ?column?\n> ----------\n> [1,10)\n> (1 row)\n>\n> regression=# select '[1,2]'::int4range + '[4,10)'::int4range;\n> ERROR: result of range union would not be contiguous\n>\n> If the result of range_union can be a multirange as easily as not,\n> we would no longer have to throw an error here.\n>\n\nI think this behave is correct. Sometimes you should to get only one range\n- and this check is a protection against not continuous range.\n\nif you expect multirange, then do\n\nselect '[1,2]'::int4range::multirange + '[4,10)'::int4range;\n\nRegards\n\nPavel\n\n\n>\n> regards, tom lane\n>",
"msg_date": "Sat, 7 Mar 2020 22:26:58 +0100",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: range_agg"
},
{
"msg_contents": "On Sat, Mar 07, 2020 at 04:06:32PM -0500, Tom Lane wrote:\n> I wrote:\n> > However, what I'm on about right at the moment is that I don't think\n> > there should be any delta in that test at all. As far as I can see,\n> > the design idea here is that multiranges will be automatically created\n> > over range types, and the user doesn't need to do that. To my mind,\n> > that means that they're an implementation detail and should not show up as\n> > separately-owned objects, any more than an autogenerated array type does.\n> \n> Actually ... have you given any thought to just deciding that ranges and\n> multiranges are the same type? That is, any range can now potentially\n> contain multiple segments? That would eliminate a whole lot of the\n> tedious infrastructure hacking involved in this patch, and let you focus\n> on the actually-useful functionality.\n\nIf we're changing range types rather than constructing a new\nmulti-range layer atop them, I think it would be helpful to have some\nway to figure out quickly whether this new range type was contiguous.\nOne way to do that would be to include a \"range cardinality\" in the\ndata structure which be the number of left ends in it.\n\nOne of the things I'd pictured doing with multiranges was along the\nlines of a \"full coverage\" constraint like \"During a shift, there can\nbe no interval that's not covered,\" which would correspond to a \"range\ncardinality\" of 1.\n\nI confess I'm getting a little twitchy about the idea of eliding the\ncases of \"one\" and \"many\", though.\n\n> Assuming that that's ok, it seems like we could consider the traditional\n> range functions like lower() and upper() to report on the first or last\n> range bound in a multirange --- essentially, they ignore any \"holes\"\n> that exist inside the range. 
And the new functions for multiranges\n> act much like array slicing, in that they give you back pieces of a range\n> that aren't actually of a distinct type.\n\nSo new functions along the lines of lowers(), uppers(), opennesses(),\netc.? I guess this could be extended as needs emerge.\n\nThere's another use case not yet covered here that could make this\neven more complex, we should probably plan for it: multi-ranges with\nweights.\n\nFor example,\n\nSELECT weighted_range_union(r)\nFROM (VALUES('[0,1)'::float8range), ('[0,3)'), '('[2,5)')) AS t(r)\n\nwould yield something along the lines of:\n\n(([0,1),1), ([1,3),2), ([3,5),1))\n\nand wedging that into the range type seems messy. Each range would\nthen have a cardinality, and each range within would have a weight,\nall of which would be an increasingly heavy burden on the common case\nwhere there's just a single range.\n\nEnhancing a separate multirange type to have weights seems like a\ncleaner path forward.\n\nGiven that, I'm -1 on mushing multi-ranges into a special case of\nranges, or /vice versa/.\n\nBest,\nDavid.\n-- \nDavid Fetter <david(at)fetter(dot)org> http://fetter.org/\nPhone: +1 415 235 3778\n\nRemember to vote!\nConsider donating to Postgres: http://www.postgresql.org/about/donate\n\n\n",
"msg_date": "Sat, 7 Mar 2020 23:13:13 +0100",
"msg_from": "David Fetter <david@fetter.org>",
"msg_from_op": false,
"msg_subject": "Re: range_agg"
},
{
"msg_contents": "David Fetter <david@fetter.org> writes:\n> There's another use case not yet covered here that could make this\n> even more complex, we should probably plan for it: multi-ranges with\n> weights.\n\nI'm inclined to reject that as completely out of scope. The core\nargument for unifying multiranges with ranges, if you ask me, is\nto make the data type closed under union. Weights are from some\nother universe.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 07 Mar 2020 18:45:44 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: range_agg"
},
{
"msg_contents": "On Sat, Mar 07, 2020 at 06:45:44PM -0500, Tom Lane wrote:\n> David Fetter <david@fetter.org> writes:\n> > There's another use case not yet covered here that could make this\n> > even more complex, we should probably plan for it: multi-ranges\n> > with weights.\n> \n> I'm inclined to reject that as completely out of scope. The core\n> argument for unifying multiranges with ranges, if you ask me, is to\n> make the data type closed under union. Weights are from some other\n> universe.\n\nI don't think they are. SQL databases are super useful because they do\nbags in addition to sets, so set union isn't the only, or maybe even\nthe most important, operation over which ranges ought to be closed.\n\nBest,\nDavid.\n-- \nDavid Fetter <david(at)fetter(dot)org> http://fetter.org/\nPhone: +1 415 235 3778\n\nRemember to vote!\nConsider donating to Postgres: http://www.postgresql.org/about/donate\n\n\n",
"msg_date": "Sun, 8 Mar 2020 03:45:05 +0100",
"msg_from": "David Fetter <david@fetter.org>",
"msg_from_op": false,
"msg_subject": "Re: range_agg"
},
{
"msg_contents": "On Sat, 7 Mar 2020 at 16:27, Pavel Stehule <pavel.stehule@gmail.com> wrote:\n\n>\n> so 7. 3. 2020 v 22:20 odesílatel Tom Lane <tgl@sss.pgh.pa.us> napsal:\n>\n>> I wrote:\n>> > Actually ... have you given any thought to just deciding that ranges and\n>> > multiranges are the same type? That is, any range can now potentially\n>> > contain multiple segments? That would eliminate a whole lot of the\n>> > tedious infrastructure hacking involved in this patch, and let you focus\n>> > on the actually-useful functionality.\n>>\n>>\n\n> I think this behave is correct. Sometimes you should to get only one range\n> - and this check is a protection against not continuous range.\n>\n> if you expect multirange, then do\n>\n> select '[1,2]'::int4range::multirange + '[4,10)'::int4range;\n>\n\nDefinitely agreed that range and multirange (or whatever it's called)\nshould be different. In the work I do I have a number of uses for ranges,\nbut not (yet) for multiranges. I want to be able to declare a column as\nrange and be sure that it is just a single range, and then call lower() and\nupper() on it and be sure to get just one value in each case; and if I\naccidentally try to take the union of ranges where the union isn’t another\nrange, I want to get an error rather than calculate some weird (in my\ncontext) multirange.\n\nOn a related note, I was thinking about this and I don’t think I like\nrange_agg as a name at all. I know we have array_agg and string_agg but\nsurely shouldn’t this be called union_agg, and shouldn’t there also be an\nintersect_agg? I mean, taking the union isn’t the only possible aggregate\non ranges or multiranges.\n\nOn Sat, 7 Mar 2020 at 16:27, Pavel Stehule <pavel.stehule@gmail.com> wrote:so 7. 3. 2020 v 22:20 odesílatel Tom Lane <tgl@sss.pgh.pa.us> napsal:I wrote:\n> Actually ... have you given any thought to just deciding that ranges and\n> multiranges are the same type? That is, any range can now potentially\n> contain multiple segments? 
That would eliminate a whole lot of the\n> tedious infrastructure hacking involved in this patch, and let you focus\n> on the actually-useful functionality. I think this behave is correct. Sometimes you should to get only one range - and this check is a protection against not continuous range.if you expect multirange, then doselect '[1,2]'::int4range::multirange + '[4,10)'::int4range;Definitely agreed that range and multirange (or whatever it's called) should be different. In the work I do I have a number of uses for ranges, but not (yet) for multiranges. I want to be able to declare a column as range and be sure that it is just a single range, and then call lower() and upper() on it and be sure to get just one value in each case; and if I accidentally try to take the union of ranges where the union isn’t another range, I want to get an error rather than calculate some weird (in my context) multirange.On a related note, I was thinking about this and I don’t think I like range_agg as a name at all. I know we have array_agg and string_agg but surely shouldn’t this be called union_agg, and shouldn’t there also be an intersect_agg? I mean, taking the union isn’t the only possible aggregate on ranges or multiranges.",
"msg_date": "Sun, 8 Mar 2020 00:27:08 -0500",
"msg_from": "Isaac Morland <isaac.morland@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: range_agg"
},
{
"msg_contents": "Isaac Morland <isaac.morland@gmail.com> writes:\n>> so 7. 3. 2020 v 22:20 odesílatel Tom Lane <tgl@sss.pgh.pa.us> napsal:\n>>> Actually ... have you given any thought to just deciding that ranges and\n>>> multiranges are the same type? That is, any range can now potentially\n>>> contain multiple segments?\n\n> Definitely agreed that range and multirange (or whatever it's called)\n> should be different. In the work I do I have a number of uses for ranges,\n> but not (yet) for multiranges. I want to be able to declare a column as\n> range and be sure that it is just a single range, and then call lower() and\n> upper() on it and be sure to get just one value in each case; and if I\n> accidentally try to take the union of ranges where the union isn’t another\n> range, I want to get an error rather than calculate some weird (in my\n> context) multirange.\n\nI do not find that argument convincing at all. Surely you could put\nthat constraint on your column using \"CHECK (numranges(VALUE) <= 1)\"\nor some such notation.\n\nAlso, you're attacking a straw man with respect to lower() and upper();\nI did not suggest changing them to return arrays, but rather interpreting\nthem as returning the lowest or highest endpoint, which I think would be\ntransparent in most cases. (There would obviously need to be some other\nfunctions that could dissect a multirange more completely.)\n\nThe real problem with the proposal as it stands, I think, is exactly\nthat range union has failure conditions and you have to use some other\noperator if you want to get a successful result always. 
That's an\nenormously ugly kluge, and if we'd done it right the first time nobody\nwould have objected.\n\nBottom line is that I don't think that we should add a pile of new moving\nparts to the type system just because people are afraid of change;\narguably, that's *more* change (and more risk of bugs), not less.\nUnifying the types would, for example, get rid of the pesky question\nof what promoting a range to multirange should look like exactly,\nbecause it'd be a no-op.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 08 Mar 2020 12:56:02 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: range_agg"
},
{
"msg_contents": "On Fri, Dec 20, 2019 at 10:43 AM Alvaro Herrera <alvherre@2ndquadrant.com>\nwrote:\n\n> I took the liberty of rebasing this series on top of recent branch\n> master.\n>\n\nIn the tests there is:\n\n+select '{[a,a],[b,b]}'::textmultirange;\n+ textmultirange\n+----------------\n+ {[a,a],[b,b]}\n+(1 row)\n+\n+-- without canonicalization, we can't join these:\n+select '{[a,a], [b,b]}'::textmultirange;\n+ textmultirange\n+----------------\n+ {[a,a],[b,b]}\n+(1 row)\n+\n\nAside from the comment they are identical so I'm confused as to why both\ntests exist - though I suspect it has to do with the fact that the expected\nresult would be {[a,b]} since text is discrete.\n\nAlso, the current patch set seems a bit undecided on whether it wants to be\ntruly a multi-range or a range that can report non-contiguous components.\nSpecifically,\n\n+select '{[a,d), [b,f]}'::textmultirange;\n+ textmultirange\n+----------------\n+ {[a,f]}\n+(1 row)\n\nThere is a an argument that a multi-range should output {[a,d),[b,f]}. IMO\nits arguable that a multi-range container should not try and reduce the\nnumber of contained ranges at all. 
If that is indeed a desire, which seems\nlike it is, that feature alone goes a long way to support wanting to just\nmerge the desired functionality into the existing range type, where the\nfinal output has the minimum number of contiguous ranges possible, rather\nthan having a separate multirange type.\n\nDavid J.\n\nOn Fri, Dec 20, 2019 at 10:43 AM Alvaro Herrera <alvherre@2ndquadrant.com> wrote:I took the liberty of rebasing this series on top of recent branch\nmaster.In the tests there is:+select '{[a,a],[b,b]}'::textmultirange;+ textmultirange +----------------+ {[a,a],[b,b]}+(1 row)++-- without canonicalization, we can't join these:+select '{[a,a], [b,b]}'::textmultirange;+ textmultirange +----------------+ {[a,a],[b,b]}+(1 row)+Aside from the comment they are identical so I'm confused as to why both tests exist - though I suspect it has to do with the fact that the expected result would be {[a,b]} since text is discrete.Also, the current patch set seems a bit undecided on whether it wants to be truly a multi-range or a range that can report non-contiguous components. Specifically,+select '{[a,d), [b,f]}'::textmultirange;+ textmultirange +----------------+ {[a,f]}+(1 row)There is a an argument that a multi-range should output {[a,d),[b,f]}. IMO its arguable that a multi-range container should not try and reduce the number of contained ranges at all. If that is indeed a desire, which seems like it is, that feature alone goes a long way to support wanting to just merge the desired functionality into the existing range type, where the final output has the minimum number of contiguous ranges possible, rather than having a separate multirange type.David J.",
"msg_date": "Sun, 8 Mar 2020 10:28:24 -0700",
"msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: range_agg"
},
{
"msg_contents": "On Sat, Mar 7, 2020 at 4:06 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> It's possible that this is a bad idea. It bears a lot of similarity,\n> I guess, to the way that Postgres doesn't consider arrays of different\n> dimensionality to be distinct types. That has some advantages but it\n> surely also has downsides. I think on the whole the advantages win,\n> and I feel like that might also be the case here.\n\nPersonally, I'm pretty unhappy with the fact that the array system\nconflates arrays with different numbers of dimensions. Like, you end\nup having to write array_upper(X, 1) instead of just array_upper(X),\nand then you're still left wondering whether whatever you wrote is\ngoing to blow up if somebody sneaks a multidimensional array in there,\nor for that matter, an array with a non-standard lower bound. There's\nlots of little things like that, where the decision to decorate the\narray type with these extra frammishes makes it harder to use for\neverybody even though most people don't use (or even want) those\nfeatures.\n\nSo count me as +1 for keeping range and multirange separate.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Mon, 9 Mar 2020 13:53:18 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: range_agg"
},
{
"msg_contents": "I wonder what's the point of multirange arrays. Is there a reason we\ncreate those?\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Mon, 9 Mar 2020 16:59:03 -0300",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: range_agg"
},
{
"msg_contents": "Alvaro Herrera <alvherre@2ndquadrant.com> writes:\n> I wonder what's the point of multirange arrays. Is there a reason we\n> create those?\n\nThat's what we thought about arrays of composites to start with,\ntoo.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 09 Mar 2020 17:37:34 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: range_agg"
},
{
"msg_contents": "On Sat, 2020-03-07 at 16:06 -0500, Tom Lane wrote:\n> Actually ... have you given any thought to just deciding that ranges\n> and\n> multiranges are the same type? \n\nIt has come up in a number of conversations, but I'm not sure if it was\ndiscussed on this list.\n\n> I think on the whole the advantages win,\n> and I feel like that might also be the case here.\n\nSome things to think about:\n\n1. Ranges are common -- at least implicitly -- in a lot of\napplications/systems. It's pretty easy to represent extrernal data as\nranges in postgres, and also to represent postgres ranges in external\nsystems. But I can see multiranges causing friction around a lot of\ncommon tasks, like displaying in a UI. If you only expect ranges, you\ncan add a CHECK constraint, so this is annoying but not necessarily a\ndeal-breaker.\n\n2. There are existing client libraries[1] that support range types and\ntransform them to types within the host language. Obviously, those\nwould need to be updated to expect multiple ranges.\n\n3. It seems like we would want some kind of base \"range\" type. When you\ntry to break a multirange down into constituent ranges, what type would\nthose pieces be? (Aside: how do you get the constituent ranges?)\n\nI'm thinking more about casting to see if there's a possible compromise\nthere.\n\nRegards,\n\tJeff Davis\n\n[1] \nhttps://sfackler.github.io/rust-postgres-range/doc/v0.8.2/postgres_range/\n\n\n\n",
"msg_date": "Mon, 09 Mar 2020 18:34:04 -0700",
"msg_from": "Jeff Davis <pgsql@j-davis.com>",
"msg_from_op": false,
"msg_subject": "Re: range_agg"
},
{
"msg_contents": "On Mon, Mar 09, 2020 at 06:34:04PM -0700, Jeff Davis wrote:\n> On Sat, 2020-03-07 at 16:06 -0500, Tom Lane wrote:\n> > Actually ... have you given any thought to just deciding that ranges\n> > and\n> > multiranges are the same type? \n> \n> It has come up in a number of conversations, but I'm not sure if it was\n> discussed on this list.\n> \n> > I think on the whole the advantages win,\n> > and I feel like that might also be the case here.\n> \n> Some things to think about:\n> \n> 1. Ranges are common -- at least implicitly -- in a lot of\n> applications/systems. It's pretty easy to represent extrernal data as\n> ranges in postgres, and also to represent postgres ranges in external\n> systems. But I can see multiranges causing friction around a lot of\n> common tasks, like displaying in a UI. If you only expect ranges, you\n> can add a CHECK constraint, so this is annoying but not necessarily a\n> deal-breaker.\n\nIt could become well and truly burdensome in a UI or an API. The\ndifference between one, as ranges are now, and many, as multi-ranges\nwould be if we shoehorn them into the range type, are pretty annoying\nto deal with.\n\n> 2. There are existing client libraries[1] that support range types and\n> transform them to types within the host language. Obviously, those\n> would need to be updated to expect multiple ranges.\n\nThe type systems that would support such types might get unhappy with\nus if we started messing with some of the properties like\ncontiguousness.\n\n> 3. It seems like we would want some kind of base \"range\" type. When you\n> try to break a multirange down into constituent ranges, what type would\n> those pieces be? (Aside: how do you get the constituent ranges?)\n> \n> I'm thinking more about casting to see if there's a possible compromise\n> there.\n\nI think the right compromise is to recognize that the closure of a set\n(ranges) over an operation (set union) may well be a different set\n(multi-ranges). 
Other operations have already been proposed, complete\nwith concrete use cases that could really make PostgreSQL stand out.\n\nThat we don't have an obvious choice of \"most correct\" operation over\nwhich to close ranges makes it even bigger a potential foot-gun\nwhen we choose one arbitrarily and declare it to be the canonical one.\n\nBest,\nDavid.\n-- \nDavid Fetter <david(at)fetter(dot)org> http://fetter.org/\nPhone: +1 415 235 3778\n\nRemember to vote!\nConsider donating to Postgres: http://www.postgresql.org/about/donate\n\n\n",
"msg_date": "Tue, 10 Mar 2020 17:08:04 +0100",
"msg_from": "David Fetter <david@fetter.org>",
"msg_from_op": false,
"msg_subject": "Re: range_agg"
},
{
"msg_contents": "Thanks everyone for offering some thoughts on this!\n\nTom Lane <tgl@sss.pgh.pa.us> wrote:\n> have you given any thought to just deciding that ranges and\n> multiranges are the same type?\n\nI can see how it might be nice to have just one type to think about.\nStill I think keeping them separate makes sense. Other folks have\nbrought up several reasons already. Just to chime in:\n\nTom Lane <tgl@sss.pgh.pa.us> wrote:\n> Isaac Morland <isaac.morland@gmail.com> writes:\n> > Definitely agreed that range and multirange (or whatever it's called)\n> > should be different. In the work I do I have a number of uses for ranges,\n> > but not (yet) for multiranges. I want to be able to declare a column as\n> > range and be sure that it is just a single range, and then call lower() and\n> > upper() on it and be sure to get just one value in each case; and if I\n> > accidentally try to take the union of ranges where the union isn’t another\n> > range, I want to get an error rather than calculate some weird (in my\n> > context) multirange.\n>\n> I do not find that argument convincing at all. Surely you could put\n> that constraint on your column using \"CHECK (numranges(VALUE) <= 1)\"\n> or some such notation.\n\nA check constraint works for columns, but there are other contexts\nwhere you'd like to restrict things to just a contiguous range, e.g.\nuser-defined functions and intermediate results in queries. Basic\nranges seem a lot simpler to think about, so I can appreciate how\nletting any range be a multirange adds a heavy cognitive burden. 
I\nthink a lot of people will share Isaac's opinion here.\n\nTom Lane <tgl@sss.pgh.pa.us> wrote:\n> Also, this would allow us to remove at least one ugly misfeature:\n>\n> regression=# select '[1,2]'::int4range + '[3,10)'::int4range;\n> ?column?\n> ----------\n> [1,10)\n> (1 row)\n>\n> regression=# select '[1,2]'::int4range + '[4,10)'::int4range;\n> ERROR: result of range union would not be contiguous\n\nBecause of backwards compatibility we can't really change +/-/* not to\nraise (right?), so if we joined ranges and multiranges we'd need to\nadd operators with a different name. I was calling those @+/@-/@*\nbefore, but that was considered too unintuitive and undiscoverable.\nHaving two types lets us use the nicer operator names.\n\nTom Lane <tgl@sss.pgh.pa.us> wrote:\n> it seems like we could consider the traditional\n> range functions like lower() and upper() to report on the first or last\n> range bound in a multirange\n\nI tried to keep functions/operators similar, so already lower(mr) =\nlower(r) and upper(mr) = upper(r).\nI think *conceptually* it's good to make ranges & multiranges as\ninterchangable as possible, but that doesn't mean they have to be the\nsame type.\n\nAdding multiranges-as-ranges also raises questions about their string\nformat. If a multirange is {[1,2), [4,5)} would you only print the\ncurly braces when there is more than one element?\n\nI don't *think* allowing non-contiguous ranges would break how we use\nthem in GiST indexes or exclusion constraints, but maybe someone can\nthink of some problem I can't. It's one place to be wary anyway. At\nthe very least it would make those things slower I expect.\n\nOn a few other issues people have raised recently:\n\nAlvaro Herrera <alvherre@2ndquadrant.com> writes:\n> I wonder what's the point of multirange arrays. Is there a reason we\n> create those?\n\nWe have arrays of everything else, so why not have them for\nmultiranges? 
We don't have to identify specific use cases here,\nalthough I can see how you'd want to call array_agg/UNNEST on some\nmultiranges, e.g. (Actually I really want to add an UNNEST that\n*takes* a multirange, but that could be a follow-on commit.) If\nnothing else I think omitting arrays of multiranges would be a strange\nirregularity in the type system.\n\nDavid G. Johnston <david.g.johnston@gmail.com> wrote:\n> In the tests there is:\n>\n> +select '{[a,a],[b,b]}'::textmultirange;\n> + textmultirange\n> +----------------\n> + {[a,a],[b,b]}\n> +(1 row)\n> +\n> +-- without canonicalization, we can't join these:\n> +select '{[a,a], [b,b]}'::textmultirange;\n> + textmultirange\n> +----------------\n> + {[a,a],[b,b]}\n> +(1 row)\n> +\n>\n> Aside from the comment they are identical so I'm confused as to why both tests exist - though I suspect it has to do with the fact that the expected result would be {[a,b]} since text is discrete.\n\nThose tests are for basic string parsing (multirange_in), so one is\ntesting {A,B} and the other {A, B} (with a space after the comma).\n(There are some tests right above those that also have blank spaces,\nbut they only output a single element in the multirange result.)\n\nDavid G. Johnston <david.g.johnston@gmail.com> wrote:\n> Also, the current patch set seems a bit undecided on whether it wants to be truly a multi-range or a range that can report non-contiguous components. Specifically,\n>\n> +select '{[a,d), [b,f]}'::textmultirange;\n> + textmultirange\n> +----------------\n> + {[a,f]}\n> +(1 row)\n\nWithout a canonicalization function, we can't know that [a,a] touches\n[b,b], but we *can* know that [a,d) touches [b,f). Or even:\n\nregression=# select '{[a,b), [b,b]}'::textmultirange;\n textmultirange\n ----------------\n {[a,b]}\n (1 row)\n\nSo I'm joining ranges whenever we know they touch. 
I think this is\nconsistent with existing range operators, e.g.:\n\nregression=# select '[a,a]'::textrange -|- '[b,b]';\n ?column?\n----------\n f\n\nregression=# select '[a,b)'::textrange -|- '[b,b]';\n ?column?\n----------\n t\n\nDavid G. Johnston <david.g.johnston@gmail.com> wrote:\n> There is a an argument that a multi-range should output {[a,d),[b,f]}. IMO its arguable that a multi-range container should not try and reduce the number of contained ranges at all.\n\nAutomatically combining touching ranges seems very desirable to me,\nand one of the motivations to building a multirange type instead of\njust using an array of ranges. Mathematically {[1,2), [2,3)} is\nequivalent to {[1,3)}, and merging the touching elements keeps things\neasier to read/understand and faster. Ranges themselves have a\ncanonical form too. Not canonicalizing raises a lot of questions, like\nwhen are two \"equivalent\" ranges equal? And when you compose\noperators/functions, do you keep all the internally-introduced splits,\nor somehow preserve only splits that were present in your top-level\ninputs? If you really want a list of possibly-touching ranges, I would\nuse an array for that. Why even have a range_agg (instead of just\narray_agg'ing the ranges) if you're not going to merge the inputs?\n\nIsaac Morland <isaac.morland@gmail.com> writes:\n> On a related note, I was thinking about this and I don’t think I like\n> range_agg as a name at all. I know we have array_agg and string_agg but surely\n> shouldn’t this be called union_agg, and shouldn’t there also be an\n> intersect_agg? I mean, taking the union isn’t the only possible aggregate on\n> ranges or multiranges.\n\nThe patch does include a range_intersect_agg already. Since there are\nso many set-like things in SQL, I don't think the unqualified\nunion_agg/intersect_agg are appropriate to give to ranges alone. And\nthe existing ${type}_agg functions are all \"additive\": json_agg,\njson_object_agg, array_agg, string_agg, xmlagg. 
So I think range_agg\nis the least-surprising name for this behavior. I'm not even the first\nperson to call it that, as you can see from [1].\n\nDavid Fetter <david@fetter.org> writes:\n> One way to do that would be to include a \"range cardinality\" in the\n> data structure which be the number of left ends in it.\n\nI agree that is probably useful enough to add to this patch. I'll work on it.\n\nDavid Fetter <david@fetter.org> writes:\n> There's another use case not yet covered here that could make this\n> even more complex, we should probably plan for it: multi-ranges with\n> weights.\n\nSeveral people have asked me about this. I think it would need to be a\nseparate type though, e.g. weighted_multirange. Personally I wouldn't\nmind working on it eventually, but I don't think it needs to be part\nof this initial patch. Possibly it could even be an extension. In lieu\nof a real type you also have an array-of-ranges, which is what I\noriginally proposed range_agg to return.\n\nFinally, I think I mentioned this a long time ago, but I'm still not\nsure if this patch needs work around these things:\n\n- gist opclass\n- spgist opclass\n- typanalyze\n- selectivity\n\nI'd love for a real Postgres expert to tell me \"No, we can add that\nlater\" or \"Yes, you have to add that now.\" Even better if they can\noffer some help, because I'm not sure I understand those areas well\nenough to do it myself.\n\nThanks all,\nPaul\n\n[1] https://git.proteus-tech.com/open-source/django-postgres/blob/fa91cf9b43ce942e84b1a9be22f445f3515ca360/postgres/sql/range_agg.sql\n\n\n",
"msg_date": "Wed, 11 Mar 2020 16:04:40 -0700",
"msg_from": "Paul A Jungwirth <pj@illuminatedcomputing.com>",
"msg_from_op": false,
"msg_subject": "Re: range_agg"
},
{
"msg_contents": "\nHello Paul, thanks for the thorough response to all these points.\n\nRegarding the merge of multiranges with ranges, I had also thought of\nthat at some point and was leaning towards doing that, but after the\nlatest responses I think the arguments against it are sensible; and now\nthere's a clear majority for keeping them separate.\n\nI'll be posting an updated version of the patch later today.\n\nI was a bit scared bit this part:\n\nOn 2020-Mar-11, Paul A Jungwirth wrote:\n\n> Finally, I think I mentioned this a long time ago, but I'm still not\n> sure if this patch needs work around these things:\n> \n> - gist opclass\n> - spgist opclass\n> - typanalyze\n> - selectivity\n> \n> I'd love for a real Postgres expert to tell me \"No, we can add that\n> later\" or \"Yes, you have to add that now.\"\n\nWhile I think that the gist and spgist opclass are in the \"very nice to\nhave but still optional\" category, the other two items seem mandatory\n(but I'm not 100% certain about that, TBH). I'm not sure we have time\nto get those ready during this commitfest.\n\n\n... thinking about gist+spgist, I think they could be written\nidentically to those for ranges, using the lowest (first) lower bound\nand the higher (last) upper bound.\n\n... thinking about selectivity, I think the way to write that is to\nfirst compute the selectivity for the range across the first lower bound\nand the last upper bound, and then subtract that for the \"negative\"\nspace between the contained ranges.\n\nI have no immediate thoughts about typanalyze. I suppose it should be\nsomehow based on the implementation for ranges ... maybe a first-cut is\nto construct fake ranges covering the whole multirange (as above) and\njust use the ranges implementation (compute_range_stats).\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Thu, 12 Mar 2020 09:38:16 -0300",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: range_agg"
},
{
"msg_contents": "On Thu, Mar 12, 2020 at 5:38 AM Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\n> ... thinking about gist+spgist, I think they could be written\n> identically to those for ranges, using the lowest (first) lower bound\n> and the higher (last) upper bound.\n>\n> ... thinking about selectivity, I think the way to write that is to\n> first compute the selectivity for the range across the first lower bound\n> and the last upper bound, and then subtract that for the \"negative\"\n> space between the contained ranges.\n>\n> I have no immediate thoughts about typanalyze. I suppose it should be\n> somehow based on the implementation for ranges ... maybe a first-cut is\n> to construct fake ranges covering the whole multirange (as above) and\n> just use the ranges implementation (compute_range_stats).\n\nThanks, this is pretty much what I was thinking too, but I'm really\nglad to have someone who knows better confirm it. I can get started on\nthese right away, and I'll let folks know if I need any help. When I\nlooked at this last fall there was a lot I didn't understand. More or\nless using the existing ranges implementation should be a big help\nthough.\n\nPaul\n\n\n",
"msg_date": "Fri, 13 Mar 2020 09:32:20 -0700",
"msg_from": "Paul A Jungwirth <pj@illuminatedcomputing.com>",
"msg_from_op": false,
"msg_subject": "Re: range_agg"
},
{
"msg_contents": "On Wed, Mar 11, 2020 at 4:39 PM Paul A Jungwirth\n<pj@illuminatedcomputing.com> wrote:\n>\n> On Sat, Mar 7, 2020 at 12:20 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > Alvaro Herrera <alvherre@2ndquadrant.com> writes:\n> > > [ v11 patches ]\n> > The cfbot isn't too happy with this; it's getting differently-ordered\n> > results than you apparently did for the list of owned objects in\n> > dependency.out's DROP OWNED BY test. Not sure why that should be ---\n> > it seems like af6550d34 should have ensured that there's only one\n> > possible ordering.\n>\n> Oh, my last email left out the most important part. :-) Is this\n> failure online somewhere so I can take a look at it and fix it?\n\nLooks like I sent this just to Tom before. This is something I need to\nfix, right?\n\nRegards,\nPaul\n\n\n",
"msg_date": "Fri, 13 Mar 2020 09:33:56 -0700",
"msg_from": "Paul A Jungwirth <pj@illuminatedcomputing.com>",
"msg_from_op": false,
"msg_subject": "Re: range_agg"
},
{
"msg_contents": "Paul A Jungwirth <pj@illuminatedcomputing.com> writes:\n> On Wed, Mar 11, 2020 at 4:39 PM Paul A Jungwirth\n> <pj@illuminatedcomputing.com> wrote:\n>> Oh, my last email left out the most important part. :-) Is this\n>> failure online somewhere so I can take a look at it and fix it?\n\nLook for your patch(es) at\n\nhttp://commitfest.cputube.org\n\nRight now it's not even applying, presumably because Alvaro already\npushed some pieces, so you need to rebase. But when it was applying,\none or both of the test builds was failing.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 13 Mar 2020 13:06:29 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: range_agg"
},
{
"msg_contents": "On 2020-Mar-13, Tom Lane wrote:\n\n> Right now it's not even applying, presumably because Alvaro already\n> pushed some pieces, so you need to rebase. But when it was applying,\n> one or both of the test builds was failing.\n\nHere's the rebased version.\n\nI just realized I didn't include the API change I proposed in\nhttps://postgr.es/m/20200306200343.GA625@alvherre.pgsql ...\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services",
"msg_date": "Fri, 13 Mar 2020 18:39:06 -0300",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: range_agg"
},
{
"msg_contents": "On Fri, Mar 13, 2020 at 2:39 PM Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\n> Here's the rebased version.\n>\n> I just realized I didn't include the API change I proposed in\n> https://postgr.es/m/20200306200343.GA625@alvherre.pgsql ...\n\nThanks for your help with this Alvaro!\n\nI was just adding your changes to my own branch and I noticed your\nv12-0001 has different parameter names here:\n\ndiff --git a/src/backend/utils/adt/multirangetypes.c\nb/src/backend/utils/adt/multirangetypes.c\nindex f9dd0378cc..0c9afd5448 100644\n--- a/src/backend/utils/adt/multirangetypes.c\n+++ b/src/backend/utils/adt/multirangetypes.c\n@@ -376,11 +375,11 @@ multirange_typanalyze(PG_FUNCTION_ARGS)\n * pointer to a type cache entry.\n */\n static MultirangeIOData *\n-get_multirange_io_data(FunctionCallInfo fcinfo, Oid mltrngtypid,\nIOFuncSelector func)\n+get_multirange_io_data(FunctionCallInfo fcinfo, Oid rngtypid,\nIOFuncSelector func)\n {\n MultirangeIOData *cache = (MultirangeIOData *) fcinfo->flinfo->fn_extra;\n\n- if (cache == NULL || cache->typcache->type_id != mltrngtypid)\n+ if (cache == NULL || cache->typcache->type_id != rngtypid)\n {\n int16 typlen;\n bool typbyval;\n@@ -389,9 +388,9 @@ get_multirange_io_data(FunctionCallInfo fcinfo,\nOid mltrngtypid, IOFuncSelector\n\n cache = (MultirangeIOData *)\nMemoryContextAlloc(fcinfo->flinfo->fn_mcxt,\n\nsizeof(MultirangeIOData));\n- cache->typcache = lookup_type_cache(mltrngtypid,\nTYPECACHE_MULTIRANGE_INFO);\n+ cache->typcache = lookup_type_cache(rngtypid,\nTYPECACHE_MULTIRANGE_INFO);\n if (cache->typcache->rngtype == NULL)\n- elog(ERROR, \"type %u is not a multirange type\", mltrngtypid);\n+ elog(ERROR, \"type %u is not a multirange type\", rngtypid);\n\n /* get_type_io_data does more than we need, but is convenient */\n get_type_io_data(cache->typcache->rngtype->type_id,\n\nI'm pretty sure mltrngtypid is the correct name here. Right? Let me\nknow if I'm missing something. :-)\n\nYours,\nPaul\n\n\n",
"msg_date": "Sat, 14 Mar 2020 10:34:44 -0700",
"msg_from": "Paul A Jungwirth <pj@illuminatedcomputing.com>",
"msg_from_op": false,
"msg_subject": "Re: range_agg"
},
{
"msg_contents": "On Fri, Mar 13, 2020 at 10:06 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Paul A Jungwirth <pj@illuminatedcomputing.com> writes:\n> > On Wed, Mar 11, 2020 at 4:39 PM Paul A Jungwirth\n> > <pj@illuminatedcomputing.com> wrote:\n> >> Oh, my last email left out the most important part. :-) Is this\n> >> failure online somewhere so I can take a look at it and fix it?\n>\n> Look for your patch(es) at\n>\n> http://commitfest.cputube.org\n>\n> Right now it's not even applying, presumably because Alvaro already\n> pushed some pieces, so you need to rebase. But when it was applying,\n> one or both of the test builds was failing.\n\nHere are all Alvaro's changes rolled into one patch, along with the\nget_multirange_io_data parameter renamed to mltrngtypid, and this\nsmall fix to the dependency regression test:\ndiff --git a/src/test/regress/expected/dependency.out\nb/src/test/regress/expected/dependency.out\nindex 778699a961..8232795148 100644\n--- a/src/test/regress/expected/dependency.out\n+++ b/src/test/regress/expected/dependency.out\n@@ -140,8 +140,8 @@ owner of sequence deptest_a_seq\n owner of table deptest\n owner of function deptest_func()\n owner of type deptest_enum\n-owner of type deptest_range\n owner of type deptest_multirange\n+owner of type deptest_range\n owner of table deptest2\n owner of sequence ss1\n owner of type deptest_t\n\nI think that should fix the cfbot failure.\n\nYours,\nPaul",
"msg_date": "Sat, 14 Mar 2020 11:13:54 -0700",
"msg_from": "Paul A Jungwirth <pj@illuminatedcomputing.com>",
"msg_from_op": false,
"msg_subject": "Re: range_agg"
},
{
"msg_contents": "On Sat, Mar 14, 2020 at 11:13 AM Paul A Jungwirth\n<pj@illuminatedcomputing.com> wrote:\n> I think that should fix the cfbot failure.\n\nI saw this patch was failing to apply again. There was some\nrefactoring to how polymorphic types are determined. I added my\nchanges for anymultirange to that new approach, and things should be\npassing again.\n\nYours,\nPaul",
"msg_date": "Mon, 16 Mar 2020 21:52:12 -0700",
"msg_from": "Paul A Jungwirth <pj@illuminatedcomputing.com>",
"msg_from_op": false,
"msg_subject": "Re: range_agg"
},
{
"msg_contents": "On 2020-Mar-16, Paul A Jungwirth wrote:\n\n> On Sat, Mar 14, 2020 at 11:13 AM Paul A Jungwirth\n> <pj@illuminatedcomputing.com> wrote:\n> > I think that should fix the cfbot failure.\n> \n> I saw this patch was failing to apply again. There was some\n> refactoring to how polymorphic types are determined. I added my\n> changes for anymultirange to that new approach, and things should be\n> passing again.\n\nThere's been another flurry of commits in the polymorphic types area.\nCan you please rebase again?\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Thu, 19 Mar 2020 17:41:56 -0300",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: range_agg"
},
{
"msg_contents": "On Thu, Mar 19, 2020 at 1:42 PM Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\n>\n> On 2020-Mar-16, Paul A Jungwirth wrote:\n>\n> > On Sat, Mar 14, 2020 at 11:13 AM Paul A Jungwirth\n> > <pj@illuminatedcomputing.com> wrote:\n> > > I think that should fix the cfbot failure.\n> >\n> > I saw this patch was failing to apply again. There was some\n> > refactoring to how polymorphic types are determined. I added my\n> > changes for anymultirange to that new approach, and things should be\n> > passing again.\n>\n> There's been another flurry of commits in the polymorphic types area.\n> Can you please rebase again?\n\nI noticed that too. :-) I'm about halfway through a rebase right now.\nI can probably finish it up tonight.\n\nPaul\n\n\n",
"msg_date": "Thu, 19 Mar 2020 13:43:48 -0700",
"msg_from": "Paul A Jungwirth <pj@illuminatedcomputing.com>",
"msg_from_op": false,
"msg_subject": "Re: range_agg"
},
{
"msg_contents": "On Thu, Mar 19, 2020 at 1:43 PM Paul A Jungwirth\n<pj@illuminatedcomputing.com> wrote:\n> On Thu, Mar 19, 2020 at 1:42 PM Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\n> > There's been another flurry of commits in the polymorphic types area.\n> > Can you please rebase again?\n>\n> I noticed that too. :-) I'm about halfway through a rebase right now.\n> I can probably finish it up tonight.\n\nHere is that patch. I should probably add an anycompatiblemultirange\ntype now too? I'll get started on that tomorrow.\n\nRegards,\nPaul",
"msg_date": "Thu, 19 Mar 2020 22:24:53 -0700",
"msg_from": "Paul A Jungwirth <pj@illuminatedcomputing.com>",
"msg_from_op": false,
"msg_subject": "Re: range_agg"
},
{
"msg_contents": "On 2020-Mar-14, Paul A Jungwirth wrote:\n\n> On Fri, Mar 13, 2020 at 2:39 PM Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\n> > Here's the rebased version.\n> >\n> > I just realized I didn't include the API change I proposed in\n> > https://postgr.es/m/20200306200343.GA625@alvherre.pgsql ...\n> \n> Thanks for your help with this Alvaro!\n> \n> I was just adding your changes to my own branch and I noticed your\n> v12-0001 has different parameter names here:\n> \n> static MultirangeIOData *\n> -get_multirange_io_data(FunctionCallInfo fcinfo, Oid mltrngtypid,\n> IOFuncSelector func)\n> +get_multirange_io_data(FunctionCallInfo fcinfo, Oid rngtypid,\n> IOFuncSelector func)\n\n> I'm pretty sure mltrngtypid is the correct name here. Right? Let me\n> know if I'm missing something. :-)\n\nHeh. The intention here was to abbreviate to \"typid\", but if you want\nto keep the longer name, it's OK too. I don't think that name is\nparticularly critical, since it should be obvious that it must be a\nmultirange type.\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Fri, 20 Mar 2020 11:48:12 -0300",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: range_agg"
},
{
"msg_contents": "On 2020-Mar-19, Paul A Jungwirth wrote:\n\n> On Thu, Mar 19, 2020 at 1:43 PM Paul A Jungwirth\n> <pj@illuminatedcomputing.com> wrote:\n> > On Thu, Mar 19, 2020 at 1:42 PM Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\n> > > There's been another flurry of commits in the polymorphic types area.\n> > > Can you please rebase again?\n> >\n> > I noticed that too. :-) I'm about halfway through a rebase right now.\n> > I can probably finish it up tonight.\n> \n> Here is that patch. I should probably add an anycompatiblemultirange\n> type now too? I'll get started on that tomorrow.\n\nThanks for the new version. Here's a few minor adjustments while I\ncontinue to read through it.\n\nThinking about the on-disk representation, can we do better than putting\nthe contained ranges in long-varlena format, including padding; also we\ninclude the type OID with each element. Sounds wasteful. A more\ncompact representation might be to allow short varlenas and doing away\nwith the alignment padding, put the the type OID just once. This is\nimportant because we cannot change it later.\n\nI'm also wondering if multirange_in() is the right strategy. Would it\nbe sensible to give each range to range_parse or range_parse_bounde, so\nthat it determines where each range starts and ends? Then that function\ndoesn't have to worry about each quote and escape, duplicating range\nparsing code. (This will probably require changing signature of the\nrangetypes.c function, and exporting it; for example have\nrange_parse_bound allow bound_str to be NULL and in that case don't mess\nwith the StringInfo and just return the end position of the parsed\nbound.)\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services",
"msg_date": "Mon, 23 Mar 2020 20:32:27 -0300",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: range_agg"
},
{
"msg_contents": "Thanks Alvaro!\n\nOn Mon, Mar 23, 2020 at 4:33 PM Alvaro Herrera \n<alvherre@2ndquadrant.com> wrote:\n >\n > Thinking about the on-disk representation, can we do better than putting\n > the contained ranges in long-varlena format, including padding; also we\n > include the type OID with each element. �Sounds wasteful. �A more\n > compact representation might be to allow short varlenas and doing away\n > with the alignment padding, put the the type OID just once. �This is\n > important because we cannot change it later.\n\nCan you give me some guidance on this? I don't know how to make the \non-disk format different from the in-memory format. (And for the \nin-memory format, I think it's important to have actual RangeTypes \ninside the multirange.) Is there something in the documentation, or a \nREADME in the repo, or even another type I can follow?\n\n > I'm also wondering if multirange_in() is the right strategy. �Would \nit> be sensible to give each range to range_parse or range_parse_bounde, so\n > that it determines where each range starts and ends? �Then that function\n > doesn't have to worry about each quote and escape, duplicating range\n > parsing code. �(This will probably require changing signature of the\n > rangetypes.c function, and exporting it; for example have\n > range_parse_bound allow bound_str to be NULL and in that case don't mess\n > with the StringInfo and just return the end position of the parsed\n > bound.)\n\nYeah, I really wanted to do it that way originally too. As you say it \nwould require passing back more information from the range-parsing code. \nI can take a stab at making the necessary changes. I'm a bit more \nconfident now than I was then in changing the range code we have already.\n\nRegards,\n\n-- \nPaul ~{:-)\npj@illuminatedcomputing.com\n\n\n",
"msg_date": "Mon, 23 Mar 2020 21:23:31 -0700",
"msg_from": "Paul Jungwirth <pj@illuminatedcomputing.com>",
"msg_from_op": true,
"msg_subject": "Re: range_agg"
},
{
"msg_contents": "On 2020-Mar-23, Paul Jungwirth wrote:\n\n> On Mon, Mar 23, 2020 at 4:33 PM Alvaro Herrera <alvherre@2ndquadrant.com>\n> wrote:\n> >\n> > Thinking about the on-disk representation, can we do better than putting\n> > the contained ranges in long-varlena format, including padding; also we\n> > include the type OID with each element. �Sounds wasteful. �A more\n> > compact representation might be to allow short varlenas and doing away\n> > with the alignment padding, put the the type OID just once. �This is\n> > important because we cannot change it later.\n> \n> Can you give me some guidance on this? I don't know how to make the on-disk\n> format different from the in-memory format. (And for the in-memory format, I\n> think it's important to have actual RangeTypes inside the multirange.) Is\n> there something in the documentation, or a README in the repo, or even\n> another type I can follow?\n\nSorry I didn't reply earlier, but I didn't know the answer then and I\nstill don't know the answer now.\n\nAnyway, I rebased this to verify that the code hasn't broken, and it\nhasn't -- the tests still pass. There was a minor conflict in\npg_operator.dat which I fixed.\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services",
"msg_date": "Sat, 4 Apr 2020 20:10:18 -0300",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: range_agg"
},
{
"msg_contents": "v17 is a rebase fixing a minor parse_coerce.c edit; v16 lasted little\n:-( I chose to change the wording of the conflicting comment in\nenforce_generic_type_consistency():\n\n * 3) Similarly, if return type is ANYRANGE or ANYMULTIRANGE, and any\n *\t argument is ANYRANGE or ANYMULTIRANGE, use that argument's\n *\t actual type, range type or multirange type as the function's return\n *\t type.\n\nThis wording is less precise, in that it doesn't say exactly which of\nthe three types is the actual result for each of the possible four cases\n(r->r, r->m, m->m, m->r) but I think it should be straightforward.\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Sun, 5 Apr 2020 18:13:13 -0400",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: range_agg"
},
{
"msg_contents": "On 2020-Apr-05, Alvaro Herrera wrote:\n\n> v17 is a rebase fixing a minor parse_coerce.c edit; v16 lasted little\n> :-( I chose to change the wording of the conflicting comment in\n> enforce_generic_type_consistency():\n\nHm, attached.\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services",
"msg_date": "Sun, 5 Apr 2020 18:14:06 -0400",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: range_agg"
},
{
"msg_contents": "On Sat, Apr 4, 2020 at 4:10 PM Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\n> Sorry I didn't reply earlier, but I didn't know the answer then and I\n> still don't know the answer now.\n\nOkay, thanks Alvaro! I'll see if I can figure it out myself. I assume\nit is actually possible, right? I've seen references to on-disk format\nvs in-memory format before, but I've never encountered anything in the\ncode supporting a difference.\n\n> Anyway, I rebased this to verify that the code hasn't broken, and it\n> hasn't -- the tests still pass. There was a minor conflict in\n> pg_operator.dat which I fixed.\n\nThanks, and thanks for your v17 also. Here is a patch building on that\nand adding support for anycompatiblemultirange.\n\nRegards,\nPaul",
"msg_date": "Fri, 10 Apr 2020 20:44:30 -0700",
"msg_from": "Paul A Jungwirth <pj@illuminatedcomputing.com>",
"msg_from_op": false,
"msg_subject": "Re: range_agg"
},
{
"msg_contents": "On Fri, Apr 10, 2020 at 8:44 PM Paul A Jungwirth\n<pj@illuminatedcomputing.com> wrote:\n> Thanks, and thanks for your v17 also. Here is a patch building on that\n> and adding support for anycompatiblemultirange.\n\nHere is a v19 that just moved the multirange tests to a new parallel\ngroup to avoid a max-20-tests error. Sorry about that!\n\nBtw I'm working on typanalyze + selectivity, and it seems like the\ntest suite doesn't run those things? At least I can't seem to get it\nto call the existing range typanalyze functions. Those would run from\nthe vacuum process, not an ordinary backend, right? Is there a\nseparate test suite for that I'm overlooking? I'm sure I can figure it\nout, but since I'm uploading a new patch file I thought I'd ask. . . .\n\nThanks,\nPaul",
"msg_date": "Sat, 11 Apr 2020 09:36:37 -0700",
"msg_from": "Paul A Jungwirth <pj@illuminatedcomputing.com>",
"msg_from_op": false,
"msg_subject": "Re: range_agg"
},
{
"msg_contents": "On Sat, Apr 11, 2020 at 9:36 AM Paul A Jungwirth\n<pj@illuminatedcomputing.com> wrote:\n> Btw I'm working on typanalyze + selectivity, and it seems like the\n> test suite doesn't run those things?\n\nNevermind, I just had to add `analyze numrange_test` to\nsrc/test/regress/sql/rangetypes.sql. :-) Do you want a separate patch\nfor that? Or maybe it should go in sql/vacuum.sql?\n\nRegards,\nPaul\n\n\n",
"msg_date": "Sat, 11 Apr 2020 09:42:44 -0700",
"msg_from": "Paul A Jungwirth <pj@illuminatedcomputing.com>",
"msg_from_op": false,
"msg_subject": "Re: range_agg"
},
{
"msg_contents": "On 2020-Apr-11, Paul A Jungwirth wrote:\n\n> On Sat, Apr 11, 2020 at 9:36 AM Paul A Jungwirth\n> <pj@illuminatedcomputing.com> wrote:\n> > Btw I'm working on typanalyze + selectivity, and it seems like the\n> > test suite doesn't run those things?\n> \n> Nevermind, I just had to add `analyze numrange_test` to\n> src/test/regress/sql/rangetypes.sql. :-) Do you want a separate patch\n> for that? Or maybe it should go in sql/vacuum.sql?\n\nDunno, it seems fine in rangetypes.sql.\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Sat, 11 Apr 2020 12:56:47 -0400",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: range_agg"
},
{
"msg_contents": "On Sat, Apr 11, 2020 at 09:36:37AM -0700, Paul A Jungwirth wrote:\n> On Fri, Apr 10, 2020 at 8:44 PM Paul A Jungwirth <pj@illuminatedcomputing.com> wrote:\n> > Thanks, and thanks for your v17 also. Here is a patch building on that\n> > and adding support for anycompatiblemultirange.\n> \n> Here is a v19 that just moved the multirange tests to a new parallel\n> group to avoid a max-20-tests error. Sorry about that!\n\nThis needs to be rebased ; set cfbot to \"waiting\".\n\n-- \nJustin\n\n\n",
"msg_date": "Sun, 5 Jul 2020 12:20:51 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: range_agg"
},
{
"msg_contents": "On Sun, Jul 5, 2020 at 10:20 AM Justin Pryzby <pryzby@telsasoft.com> wrote:\n>\n> This needs to be rebased ; set cfbot to \"waiting\".\n\nHere is a patch that is rebased onto current master. It also includes\nthe analyze/selectivity additions.\n\nI haven't made much progress storing on-disk multiranges without the\nrange type oids. Peter Geoghegan suggested I look at how we handle\narrays in heap_deform_tuple, but I don't see anything there to help me\n(probably I misunderstood him though). Just knowing that arrays are\nsomething we do this for is enough to hunt for clues, but if anyone\ncan point me more directly to code that will help me do it for\nmultiranges, I'd be appreciative.\n\nYours,\nPaul",
"msg_date": "Sun, 5 Jul 2020 12:11:15 -0700",
"msg_from": "Paul A Jungwirth <pj@illuminatedcomputing.com>",
"msg_from_op": false,
"msg_subject": "Re: range_agg"
},
{
"msg_contents": "On Sun, Jul 5, 2020 at 12:11 PM Paul A Jungwirth\n<pj@illuminatedcomputing.com> wrote:\n>\n> Just knowing that arrays are\n> something we do this for is enough to hunt for clues, but if anyone\n> can point me more directly to code that will help me do it for\n> multiranges, I'd be appreciative.\n\nIt looks like expandeddatum.h is where I should be looking. . . .\n\nPaul\n\n\n",
"msg_date": "Sun, 5 Jul 2020 12:25:49 -0700",
"msg_from": "Paul A Jungwirth <pj@illuminatedcomputing.com>",
"msg_from_op": false,
"msg_subject": "Re: range_agg"
},
{
"msg_contents": "On Sun, Jul 5, 2020 at 12:11 PM Paul A Jungwirth\n<pj@illuminatedcomputing.com> wrote:\n> I haven't made much progress storing on-disk multiranges without the\n> range type oids.\n\nHere is a patch using the TOAST EXTENDED infrastructure to store\nmultiranges on disk with a new \"short\" range type struct that omits\nthe range type oids, but then loads ordinary range structs for its\nin-memory operations. One nice thing that fell out from that work is\nthat I can build an ordinary RangeType ** list at the same time.\n(Since RangeTypes are varlena and their bounds may be varlena, you\nalready needed to get a list like that for nearly any operation.)\n\nThis is rebased on the current master, including some changes to doc\ntables and pg_upgrade handling of type oids.\n\nYours,\nPaul",
"msg_date": "Sun, 16 Aug 2020 12:55:21 -0700",
"msg_from": "Paul A Jungwirth <pj@illuminatedcomputing.com>",
"msg_from_op": false,
"msg_subject": "Re: range_agg"
},
{
"msg_contents": "On Sun, Aug 16, 2020 at 12:55 PM Paul A Jungwirth\n<pj@illuminatedcomputing.com> wrote:\n> This is rebased on the current master, including some changes to doc\n> tables and pg_upgrade handling of type oids.\n\nHere is a rebased version of this patch, including a bunch of cleanup\nfrom Alvaro. (Thanks Alvaro!)\n\nPaul",
"msg_date": "Wed, 23 Sep 2020 17:04:57 -0700",
"msg_from": "Paul A Jungwirth <pj@illuminatedcomputing.com>",
"msg_from_op": false,
"msg_subject": "Re: range_agg"
},
{
"msg_contents": "čt 24. 9. 2020 v 2:05 odesílatel Paul A Jungwirth <\npj@illuminatedcomputing.com> napsal:\n\n> On Sun, Aug 16, 2020 at 12:55 PM Paul A Jungwirth\n> <pj@illuminatedcomputing.com> wrote:\n> > This is rebased on the current master, including some changes to doc\n> > tables and pg_upgrade handling of type oids.\n>\n> Here is a rebased version of this patch, including a bunch of cleanup\n> from Alvaro. (Thanks Alvaro!)\n>\n\nI tested this patch and It looks well, I have not any objections\n\n1. there are not new warnings\n2. make check-world passed\n3. build doc without problems\n4. doc is enough, regress tests too\n5. there was not objection against this feature in discussion, and I think\nit is interesting and useful feature - good additional to arrays\n\nRegards\n\nPavel\n\n\n\n\n> Paul\n>\n\nčt 24. 9. 2020 v 2:05 odesílatel Paul A Jungwirth <pj@illuminatedcomputing.com> napsal:On Sun, Aug 16, 2020 at 12:55 PM Paul A Jungwirth\n<pj@illuminatedcomputing.com> wrote:\n> This is rebased on the current master, including some changes to doc\n> tables and pg_upgrade handling of type oids.\n\nHere is a rebased version of this patch, including a bunch of cleanup\nfrom Alvaro. (Thanks Alvaro!)I tested this patch and It looks well, I have not any objections1. there are not new warnings2. make check-world passed3. build doc without problems4. doc is enough, regress tests too5. there was not objection against this feature in discussion, and I think it is interesting and useful feature - good additional to arrays RegardsPavel\n\nPaul",
"msg_date": "Fri, 9 Oct 2020 08:51:36 +0200",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: range_agg"
},
{
"msg_contents": "Hi!\n\nOn Thu, Sep 24, 2020 at 3:05 AM Paul A Jungwirth\n<pj@illuminatedcomputing.com> wrote:\n> On Sun, Aug 16, 2020 at 12:55 PM Paul A Jungwirth\n> <pj@illuminatedcomputing.com> wrote:\n> > This is rebased on the current master, including some changes to doc\n> > tables and pg_upgrade handling of type oids.\n>\n> Here is a rebased version of this patch, including a bunch of cleanup\n> from Alvaro. (Thanks Alvaro!)\n\nI'd like to review this patch. Could you please rebase it once again? Thanks.\n\n------\nRegards,\nAlexander Korotkov\n\n\n",
"msg_date": "Fri, 27 Nov 2020 11:35:37 +0300",
"msg_from": "Alexander Korotkov <aekorotkov@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: range_agg"
},
{
"msg_contents": "On Fri, Nov 27, 2020 at 12:35 AM Alexander Korotkov\n<aekorotkov@gmail.com> wrote:\n> I'd like to review this patch. Could you please rebase it once again? Thanks.\n\nThanks! Here is a rebased version. It also includes one more cleanup\ncommit from Alvaro since the last one.\n\nYours,\nPaul",
"msg_date": "Sun, 29 Nov 2020 09:11:05 -0800",
"msg_from": "Paul A Jungwirth <pj@illuminatedcomputing.com>",
"msg_from_op": false,
"msg_subject": "Re: range_agg"
},
{
"msg_contents": "On Sun, Nov 29, 2020 at 8:11 PM Paul A Jungwirth\n<pj@illuminatedcomputing.com> wrote:\n> On Fri, Nov 27, 2020 at 12:35 AM Alexander Korotkov\n> <aekorotkov@gmail.com> wrote:\n> > I'd like to review this patch. Could you please rebase it once again? Thanks.\n>\n> Thanks! Here is a rebased version. It also includes one more cleanup\n> commit from Alvaro since the last one.\n\nThank you. Could you please, update doc/src/sgml/catalogs.sgml,\nbecause pg_type and pg_range catalogs are updated.\n\n------\nRegards,\nAlexander Korotkov\n\n\n",
"msg_date": "Sun, 29 Nov 2020 22:43:14 +0300",
"msg_from": "Alexander Korotkov <aekorotkov@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: range_agg"
},
{
"msg_contents": "On Sun, Nov 29, 2020 at 11:43 AM Alexander Korotkov\n<aekorotkov@gmail.com> wrote:\n> Thank you. Could you please, update doc/src/sgml/catalogs.sgml,\n> because pg_type and pg_range catalogs are updated.\n\nAttached! :-)\n\nPaul",
"msg_date": "Sun, 29 Nov 2020 12:53:40 -0800",
"msg_from": "Paul A Jungwirth <pj@illuminatedcomputing.com>",
"msg_from_op": false,
"msg_subject": "Re: range_agg"
},
{
"msg_contents": "On Sun, Nov 29, 2020 at 11:53 PM Paul A Jungwirth\n<pj@illuminatedcomputing.com> wrote:\n>\n> On Sun, Nov 29, 2020 at 11:43 AM Alexander Korotkov\n> <aekorotkov@gmail.com> wrote:\n> > Thank you. Could you please, update doc/src/sgml/catalogs.sgml,\n> > because pg_type and pg_range catalogs are updated.\n>\n> Attached! :-)\n\nYou're quick, thank you. Please, also take a look at cfbot failure\nhttps://travis-ci.org/github/postgresql-cfbot/postgresql/builds/746623942\nI've tried to reproduce it, but didn't manage yet.\n\n------\nRegards,\nAlexander Korotkov\n\n\n",
"msg_date": "Mon, 30 Nov 2020 22:35:06 +0300",
"msg_from": "Alexander Korotkov <aekorotkov@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: range_agg"
},
{
"msg_contents": "On Mon, Nov 30, 2020 at 10:35 PM Alexander Korotkov\n<aekorotkov@gmail.com> wrote:\n> On Sun, Nov 29, 2020 at 11:53 PM Paul A Jungwirth\n> <pj@illuminatedcomputing.com> wrote:\n> >\n> > On Sun, Nov 29, 2020 at 11:43 AM Alexander Korotkov\n> > <aekorotkov@gmail.com> wrote:\n> > > Thank you. Could you please, update doc/src/sgml/catalogs.sgml,\n> > > because pg_type and pg_range catalogs are updated.\n> >\n> > Attached! :-)\n>\n> You're quick, thank you. Please, also take a look at cfbot failure\n> https://travis-ci.org/github/postgresql-cfbot/postgresql/builds/746623942\n> I've tried to reproduce it, but didn't manage yet.\n\nGot it. type_sanity test fails on any platform, you just need to\nrepeat \"make check\" till it fails.\n\nThe failed query checked consistency of range types, but it didn't\ntake into account ranges of domains and ranges of records, which are\nexercised by multirangetypes test running in parallel. We could teach\nthis query about such kinds of ranges, but I think that would be\noverkill, because we're not going to introduce such builtin ranges\nyet. So, I'm going to just move multirangetypes test into another\ngroup of parallel tests.\n\n------\nRegards,\nAlexander Korotkov\n\n\n",
"msg_date": "Mon, 30 Nov 2020 23:39:24 +0300",
"msg_from": "Alexander Korotkov <aekorotkov@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: range_agg"
},
{
"msg_contents": "On Mon, Nov 30, 2020 at 11:39 PM Alexander Korotkov\n<aekorotkov@gmail.com> wrote:\n> On Mon, Nov 30, 2020 at 10:35 PM Alexander Korotkov\n> <aekorotkov@gmail.com> wrote:\n> > On Sun, Nov 29, 2020 at 11:53 PM Paul A Jungwirth\n> > <pj@illuminatedcomputing.com> wrote:\n> > >\n> > > On Sun, Nov 29, 2020 at 11:43 AM Alexander Korotkov\n> > > <aekorotkov@gmail.com> wrote:\n> > > > Thank you. Could you please, update doc/src/sgml/catalogs.sgml,\n> > > > because pg_type and pg_range catalogs are updated.\n> > >\n> > > Attached! :-)\n> >\n> > You're quick, thank you. Please, also take a look at cfbot failure\n> > https://travis-ci.org/github/postgresql-cfbot/postgresql/builds/746623942\n> > I've tried to reproduce it, but didn't manage yet.\n>\n> Got it. type_sanity test fails on any platform, you just need to\n> repeat \"make check\" till it fails.\n>\n> The failed query checked consistency of range types, but it didn't\n> take into account ranges of domains and ranges of records, which are\n> exercised by multirangetypes test running in parallel. We could teach\n> this query about such kinds of ranges, but I think that would be\n> overkill, because we're not going to introduce such builtin ranges\n> yet. So, I'm going to just move multirangetypes test into another\n> group of parallel tests.\n\nI also found a problem in multirange types naming logic. Consider the\nfollowing example.\n\ncreate type a_multirange AS (x float, y float);\ncreate type a as range(subtype=text, collation=\"C\");\ncreate table tbl (x __a_multirange);\ndrop type a_multirange;\n\nIf you dump this database, the dump couldn't be restored. The\nmultirange type is named __a_multirange, because the type named\na_multirange already exists. However, it might appear that\na_multirange type is already deleted. When the dump is restored, a\nmultirange type is named a_multirange, and the corresponding table\nfails to be created. 
The same thing doesn't happen with arrays,\nbecause arrays are not referenced in dumps by their internal names.\n\nI think we probably should add an option to specify multirange type\nnames while creating a range type. Then dump can contain exact type\nnames used in the database, and restore wouldn't have a names\ncollision.\n\nAnother thing that worries me is the multirange serialization format.\n\ntypedef struct\n{\n int32 vl_len_; /* varlena header */\n char flags; /* range flags */\n char _padding; /* Bounds must be aligned */\n /* Following the header are zero to two bound values. */\n} ShortRangeType;\n\nComment says this structure doesn't contain a varlena header, while\nstructure obviously has it.\n\nIn general, I wonder if we can make the binary format of multiranges\nmore efficient. It seems that every function involving multiranges\nfrom multirange_deserialize(). I think we can make functions like\nmultirange_contains_elem() much more efficient. Multirange is\nbasically an array of ranges. So we can pack it as follows.\n1. Typeid and rangecount\n2. Tightly packed array of flags (1-byte for each range)\n3. Array of indexes of boundaries (4-byte for each range). Or even\nbetter we can combine offsets and lengths to be compression-friendly\nlike jsonb JEntry's do.\n4. Boundary values\nUsing this format, we can implement multirange_contains_elem(),\nmultirange_contains_range() without deserialization and using binary\nsearch. That would be much more efficient. What do you think?\n\n------\nRegards,\nAlexander Korotkov\n\n\n",
"msg_date": "Tue, 8 Dec 2020 02:45:57 +0300",
"msg_from": "Alexander Korotkov <aekorotkov@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: range_agg"
},
{
"msg_contents": "On 2020-Dec-08, Alexander Korotkov wrote:\n\n> I also found a problem in multirange types naming logic. Consider the\n> following example.\n> \n> create type a_multirange AS (x float, y float);\n> create type a as range(subtype=text, collation=\"C\");\n> create table tbl (x __a_multirange);\n> drop type a_multirange;\n> \n> If you dump this database, the dump couldn't be restored. The\n> multirange type is named __a_multirange, because the type named\n> a_multirange already exists. However, it might appear that\n> a_multirange type is already deleted. When the dump is restored, a\n> multirange type is named a_multirange, and the corresponding table\n> fails to be created. The same thing doesn't happen with arrays,\n> because arrays are not referenced in dumps by their internal names.\n> \n> I think we probably should add an option to specify multirange type\n> names while creating a range type. Then dump can contain exact type\n> names used in the database, and restore wouldn't have a names\n> collision.\n\nHmm, good point. I agree that a dump must preserve the name, since once\ncreated it is user-visible. I had not noticed this problem, but it's\nobvious in retrospect.\n\n> In general, I wonder if we can make the binary format of multiranges\n> more efficient. It seems that every function involving multiranges\n> from multirange_deserialize(). I think we can make functions like\n> multirange_contains_elem() much more efficient. Multirange is\n> basically an array of ranges. So we can pack it as follows.\n> 1. Typeid and rangecount\n> 2. Tightly packed array of flags (1-byte for each range)\n> 3. Array of indexes of boundaries (4-byte for each range). Or even\n> better we can combine offsets and lengths to be compression-friendly\n> like jsonb JEntry's do.\n> 4. Boundary values\n> Using this format, we can implement multirange_contains_elem(),\n> multirange_contains_range() without deserialization and using binary\n> search. 
That would be much more efficient. What do you think?\n\nI also agree. I spent some time staring at the I/O code a couple of\nmonths back but was unable to focus on it for long enough. I don't know\nJEntry's format, but I do remember that the storage format for JSONB was\nwidely discussed back then; it seems wise to apply similar logic or at\nleast similar reasoning.\n\n\n",
"msg_date": "Mon, 7 Dec 2020 21:00:10 -0300",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: range_agg"
},
{
"msg_contents": "On Tue, Dec 8, 2020 at 3:00 AM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n> On 2020-Dec-08, Alexander Korotkov wrote:\n>\n> > I also found a problem in multirange types naming logic. Consider the\n> > following example.\n> >\n> > create type a_multirange AS (x float, y float);\n> > create type a as range(subtype=text, collation=\"C\");\n> > create table tbl (x __a_multirange);\n> > drop type a_multirange;\n> >\n> > If you dump this database, the dump couldn't be restored. The\n> > multirange type is named __a_multirange, because the type named\n> > a_multirange already exists. However, it might appear that\n> > a_multirange type is already deleted. When the dump is restored, a\n> > multirange type is named a_multirange, and the corresponding table\n> > fails to be created. The same thing doesn't happen with arrays,\n> > because arrays are not referenced in dumps by their internal names.\n> >\n> > I think we probably should add an option to specify multirange type\n> > names while creating a range type. Then dump can contain exact type\n> > names used in the database, and restore wouldn't have a names\n> > collision.\n>\n> Hmm, good point. I agree that a dump must preserve the name, since once\n> created it is user-visible. I had not noticed this problem, but it's\n> obvious in retrospect.\n>\n> > In general, I wonder if we can make the binary format of multiranges\n> > more efficient. It seems that every function involving multiranges\n> > from multirange_deserialize(). I think we can make functions like\n> > multirange_contains_elem() much more efficient. Multirange is\n> > basically an array of ranges. So we can pack it as follows.\n> > 1. Typeid and rangecount\n> > 2. Tightly packed array of flags (1-byte for each range)\n> > 3. Array of indexes of boundaries (4-byte for each range). Or even\n> > better we can combine offsets and lengths to be compression-friendly\n> > like jsonb JEntry's do.\n> > 4. 
Boundary values\n> > Using this format, we can implement multirange_contains_elem(),\n> > multirange_contains_range() without deserialization and using binary\n> > search. That would be much more efficient. What do you think?\n>\n> I also agree. I spent some time staring at the I/O code a couple of\n> months back but was unable to focus on it for long enough. I don't know\n> JEntry's format, but I do remember that the storage format for JSONB was\n> widely discussed back then; it seems wise to apply similar logic or at\n> least similar reasoning.\n\nThank you for your feedback!\n\nI'd like to publish my revision of the patch. So Paul could start\nfrom it. The changes I made are minor\n1. Add missing types to typedefs.list\n2. Run pg_indent run over the changed files and some other formatting changes\n3. Reorder the regression tests to evade the error spotted by\ncommitfest.cputube.org\n\nI'm switching this patch to WOA.\n\n------\nRegards,\nAlexander Korotkov",
"msg_date": "Tue, 8 Dec 2020 03:20:10 +0300",
"msg_from": "Alexander Korotkov <aekorotkov@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: range_agg"
},
{
"msg_contents": "On Tue, Dec 8, 2020 at 3:20 AM Alexander Korotkov <aekorotkov@gmail.com> wrote:\n> I'd like to publish my revision of the patch. So Paul could start\n> from it. The changes I made are minor\n> 1. Add missing types to typedefs.list\n> 2. Run pg_indent run over the changed files and some other formatting changes\n> 3. Reorder the regression tests to evade the error spotted by\n> commitfest.cputube.org\n>\n> I'm switching this patch to WOA.\n\nI decided to work on this patch myself. The next revision is attached.\n\nThe changes are as follows.\n\n1. CREATE TYPE ... AS RANGE command now accepts new argument\nmultirange_type_name. If multirange_type_name isn't specified, then\nmultirange type name is selected automatically. pg_dump always\nspecifies multirange_type_name (if dumping at least pg14). Thanks to\nthat dumps are always restorable.\n2. Multiranges now have a new binary format. After the MultirangeType\nstruct, an array of offsets comes, then an array of flags and finally\nbounds themselves. Offsets points to the bounds of particular range\nwithin multirange. Thanks to that particular range could be accessed\nby number without deserialization of the whole multirange. Offsets\nare stored in compression-friendly format similar to jsonb (actually\nonly every 4th of those \"offsets\" is really offsets, others are\nlengths).\n3. Most of simple functions working with multirages now don't\ndeserialize the whole multirange. Instead they fetch bounds of\nparticular ranges, and that doesn't even require any additional memory\nallocation.\n4. I've removed ExpandedObject support from the patch. I don't see\nmuch point in it assuming all the functions are returning serialized\nmultirage anyway. We can add ExpandedObject support in future if\nneeded.\n5. multirange_contains_element(), multirange_contains_range(),\nmultirange_overlaps_range() now use binary search. 
Thanks to binary\nformat, which doesn't require full deserialization, these functions\nnow work with O(log N) complexity.\n\nComments and documentation still need revision according to these\nchanges. I'm going to continue with this.\n\n------\nRegards,\nAlexander Korotkov",
"msg_date": "Wed, 16 Dec 2020 02:21:47 +0300",
"msg_from": "Alexander Korotkov <aekorotkov@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: range_agg"
},
{
"msg_contents": "On Wed, Dec 16, 2020 at 2:21 AM Alexander Korotkov <aekorotkov@gmail.com> wrote:\n> I decided to work on this patch myself. The next revision is attached.\n>\n> The changes are as follows.\n>\n> 1. CREATE TYPE ... AS RANGE command now accepts new argument\n> multirange_type_name. If multirange_type_name isn't specified, then\n> multirange type name is selected automatically. pg_dump always\n> specifies multirange_type_name (if dumping at least pg14). Thanks to\n> that dumps are always restorable.\n> 2. Multiranges now have a new binary format. After the MultirangeType\n> struct, an array of offsets comes, then an array of flags and finally\n> bounds themselves. Offsets points to the bounds of particular range\n> within multirange. Thanks to that particular range could be accessed\n> by number without deserialization of the whole multirange. Offsets\n> are stored in compression-friendly format similar to jsonb (actually\n> only every 4th of those \"offsets\" is really offsets, others are\n> lengths).\n> 3. Most of simple functions working with multirages now don't\n> deserialize the whole multirange. Instead they fetch bounds of\n> particular ranges, and that doesn't even require any additional memory\n> allocation.\n> 4. I've removed ExpandedObject support from the patch. I don't see\n> much point in it assuming all the functions are returning serialized\n> multirage anyway. We can add ExpandedObject support in future if\n> needed.\n> 5. multirange_contains_element(), multirange_contains_range(),\n> multirange_overlaps_range() now use binary search. Thanks to binary\n> format, which doesn't require full deserialization, these functions\n> now work with O(log N) complexity.\n>\n> Comments and documentation still need revision according to these\n> changes. I'm going to continue with this.\n\nThe next 27th revision is attached. 
It contains minor documentation\nand code changes, in particular it should address\ncommitfest.cputube.org complaints.\n\n------\nRegards,\nAlexander Korotkov",
"msg_date": "Wed, 16 Dec 2020 07:14:41 +0300",
"msg_from": "Alexander Korotkov <aekorotkov@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: range_agg"
},
{
"msg_contents": "On Wed, Dec 16, 2020 at 7:14 AM Alexander Korotkov <aekorotkov@gmail.com> wrote:\n>\n> On Wed, Dec 16, 2020 at 2:21 AM Alexander Korotkov <aekorotkov@gmail.com> wrote:\n> > I decided to work on this patch myself. The next revision is attached.\n> >\n> > The changes are as follows.\n> >\n> > 1. CREATE TYPE ... AS RANGE command now accepts new argument\n> > multirange_type_name. If multirange_type_name isn't specified, then\n> > multirange type name is selected automatically. pg_dump always\n> > specifies multirange_type_name (if dumping at least pg14). Thanks to\n> > that dumps are always restorable.\n> > 2. Multiranges now have a new binary format. After the MultirangeType\n> > struct, an array of offsets comes, then an array of flags and finally\n> > bounds themselves. Offsets points to the bounds of particular range\n> > within multirange. Thanks to that particular range could be accessed\n> > by number without deserialization of the whole multirange. Offsets\n> > are stored in compression-friendly format similar to jsonb (actually\n> > only every 4th of those \"offsets\" is really offsets, others are\n> > lengths).\n> > 3. Most of simple functions working with multirages now don't\n> > deserialize the whole multirange. Instead they fetch bounds of\n> > particular ranges, and that doesn't even require any additional memory\n> > allocation.\n> > 4. I've removed ExpandedObject support from the patch. I don't see\n> > much point in it assuming all the functions are returning serialized\n> > multirage anyway. We can add ExpandedObject support in future if\n> > needed.\n> > 5. multirange_contains_element(), multirange_contains_range(),\n> > multirange_overlaps_range() now use binary search. Thanks to binary\n> > format, which doesn't require full deserialization, these functions\n> > now work with O(log N) complexity.\n> >\n> > Comments and documentation still need revision according to these\n> > changes. 
I'm going to continue with this.\n>\n> The next 27th revision is attached. It contains minor documentation\n> and code changes, in particular it should address\n> commitfest.cputube.org complaints.\n\nThe next 28th revision is attached. It comes with minor code\nimprovements, comments and commit message.\n\nAlso, given now we have a manual multirange type naming mechanism,\nI've removed logic for prepending automatically generated names with\nunderscores to evade collision. Instead, user is advised to name\nmultirange manually (as discussed in [1]).\n\nI think this patch is very close to committable. I'm going to spend\nsome more time further polishing it and commit (if I don't find a\nmajor issue or face objections).\n\nLinks\n1. https://www.postgresql.org/message-id/CALNJ-vSUpQ_Y%3DjXvTxt1VYFztaBSsWVXeF1y6gTYQ4bOiWDLgQ%40mail.gmail.com\n\n------\nRegards,\nAlexander Korotkov",
"msg_date": "Thu, 17 Dec 2020 22:10:56 +0300",
"msg_from": "Alexander Korotkov <aekorotkov@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: range_agg"
},
{
"msg_contents": "On Thu, Dec 17, 2020 at 10:10 PM Alexander Korotkov\n<aekorotkov@gmail.com> wrote:\n>\n> I think this patch is very close to committable. I'm going to spend\n> some more time further polishing it and commit (if I don't find a\n> major issue or face objections).\n\nThe main patch is committed. I've prepared a set of improvements.\n0001 Fixes bug in bsearch comparison functions\n0002 Implements missing @> (range,multirange) operator and its commutator\n0003 Does refactors signatures of *_internal() multirange functions\n0004 Adds cross-type (range, multirange) operators handling to\nexisting range GiST opclass\n0005 Adds support for GiST multirange indexing by approximation of\nmultirange as the union range with no gaps\n\nThe patchset is quite trivial. I'm going to push it if there are no objections.\n\nThe SP-GiST handling is more tricky and requires substantial work.\n\n------\nRegards,\nAlexander Korotkov",
"msg_date": "Sun, 27 Dec 2020 12:50:07 +0300",
"msg_from": "Alexander Korotkov <aekorotkov@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: range_agg"
},
{
"msg_contents": "Hi,\n\nThis is not an ideal way to index multirages, but something we can easily\nhave.\n\ntypo: multiranges\n\nCheers\n\nOn Sun, Dec 27, 2020 at 1:50 AM Alexander Korotkov <aekorotkov@gmail.com>\nwrote:\n\n> On Thu, Dec 17, 2020 at 10:10 PM Alexander Korotkov\n> <aekorotkov@gmail.com> wrote:\n> >\n> > I think this patch is very close to committable. I'm going to spend\n> > some more time further polishing it and commit (if I don't find a\n> > major issue or face objections).\n>\n> The main patch is committed. I've prepared a set of improvements.\n> 0001 Fixes bug in bsearch comparison functions\n> 0002 Implements missing @> (range,multirange) operator and its commutator\n> 0003 Does refactors signatures of *_internal() multirange functions\n> 0004 Adds cross-type (range, multirange) operators handling to\n> existing range GiST opclass\n> 0005 Adds support for GiST multirange indexing by approximation of\n> multirange as the union range with no gaps\n>\n> The patchset is quite trivial. I'm going to push it if there are no\n> objections.\n>\n> The SP-GiST handling is more tricky and requires substantial work.\n>\n> ------\n> Regards,\n> Alexander Korotkov\n>\n\nHi,This is not an ideal way to index multirages, but something we can easily have.typo: multirangesCheersOn Sun, Dec 27, 2020 at 1:50 AM Alexander Korotkov <aekorotkov@gmail.com> wrote:On Thu, Dec 17, 2020 at 10:10 PM Alexander Korotkov\n<aekorotkov@gmail.com> wrote:\n>\n> I think this patch is very close to committable. I'm going to spend\n> some more time further polishing it and commit (if I don't find a\n> major issue or face objections).\n\nThe main patch is committed. 
I've prepared a set of improvements.\n0001 Fixes bug in bsearch comparison functions\n0002 Implements missing @> (range,multirange) operator and its commutator\n0003 Does refactors signatures of *_internal() multirange functions\n0004 Adds cross-type (range, multirange) operators handling to\nexisting range GiST opclass\n0005 Adds support for GiST multirange indexing by approximation of\nmultirange as the union range with no gaps\n\nThe patchset is quite trivial. I'm going to push it if there are no objections.\n\nThe SP-GiST handling is more tricky and requires substantial work.\n\n------\nRegards,\nAlexander Korotkov",
"msg_date": "Sun, 27 Dec 2020 09:53:13 -0800",
"msg_from": "Zhihong Yu <zyu@yugabyte.com>",
"msg_from_op": false,
"msg_subject": "Re: range_agg"
},
{
"msg_contents": "On Sun, Dec 27, 2020 at 09:53:13AM -0800, Zhihong Yu wrote:\n> Hi,\n> \n> This is not an ideal way to index multirages, but something we can\n> easily have.\n\nWhat sort of indexing improvements do you have in mind?\n\nBest,\nDavid.\n-- \nDavid Fetter <david(at)fetter(dot)org> http://fetter.org/\nPhone: +1 415 235 3778\n\nRemember to vote!\nConsider donating to Postgres: http://www.postgresql.org/about/donate\n\n\n",
"msg_date": "Sun, 27 Dec 2020 19:07:51 +0100",
"msg_from": "David Fetter <david@fetter.org>",
"msg_from_op": false,
"msg_subject": "Re: range_agg"
},
{
"msg_contents": "On Sun, Dec 27, 2020 at 8:52 PM Zhihong Yu <zyu@yugabyte.com> wrote:\n> This is not an ideal way to index multirages, but something we can easily have.\n>\n> typo: multiranges\n\nThanks for catching. I will revise the commit message before committing.\n\n------\nRegards,\nAlexander Korotkov\n\n\n",
"msg_date": "Sun, 27 Dec 2020 22:31:27 +0300",
"msg_from": "Alexander Korotkov <aekorotkov@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: range_agg"
},
{
"msg_contents": "On Sun, Dec 27, 2020 at 9:07 PM David Fetter <david@fetter.org> wrote:\n> On Sun, Dec 27, 2020 at 09:53:13AM -0800, Zhihong Yu wrote:\n> > This is not an ideal way to index multirages, but something we can\n> > easily have.\n>\n> What sort of indexing improvements do you have in mind?\n\nApproximation of multirange as a range can cause false positives.\nIt's good if gaps are small, but what if they aren't.\n\nIdeally, we should split multirange to the ranges and index them\nseparately. So, we would need a GIN-like index. The problem is that\nthe GIN entry tree is a B-tree, which is not very useful for searching\nfor ranges. If we could replace the GIN entry tree with GiST or\nSP-GiST, that should be good. We could index multirage parts\nseparately and big gaps wouldn't be a problem. Similar work was\nalready prototyped (it was prototyped under the name \"vodka\", but I'm\nnot a big fan of this name). FWIW, such a new access method would\nneed a lot of work to bring it to commit. I don't think it would be\nreasonable, before multiranges get popular.\n\nRegarding the GiST opclass, it seems the best we can do in GiST.\n\n------\nRegards,\nAlexander Korotkov\n\n\n",
"msg_date": "Sun, 27 Dec 2020 22:38:38 +0300",
"msg_from": "Alexander Korotkov <aekorotkov@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: range_agg"
}
] |
[
{
"msg_contents": "Hi,\n\nI'm starting this thread mostly to keep track of patches developed in\nresponse to issue [1] reported on pgsql-performance. The symptoms are\nvery simple - query performing a hash join ends up using much more\nmemory than expected (pretty much ignoring work_mem), and possibly\nending up with OOM.\n\nThe root cause is that hash join treats batches as pretty much free, but\nthat's not really true - we do allocate two BufFile structs per batch,\nand each BufFile is ~8kB as it includes PGAlignedBuffer.\n\nThis is not ideal even if we happen to estimate everything correctly,\nbecause for example with work_mem=4MB and nbatch=1024, it means we'll\nuse about 16MB (2*8kB*1024) for the BufFile structures alone, plus the\nwork_mem for hash table itself.\n\nBut it can easily explode when we under-estimate the hash side. In the\npgsql-performance message, the hash side (with the patches applied,\nallowing the query to complete) it looks like this:\n\n Hash (cost=2823846.37..2823846.37 rows=34619 width=930)\n (actual time=252946.367..252946.367 rows=113478127 loops=1)\n\nSo it's 3277x under-estimated. It starts with 16 batches, and ends up\nadding more and more batches until it fails with 524288 of them (it gets\nto that many batches because some of the values are very common and we\ndon't disable the growth earlier).\n\nThe OOM is not very surprising, because with 524288 batches it'd need\nabout 8GB of memory, and the system only has 8GB RAM installed.\n\nThe two attached patches both account for the BufFile memory, but then\nuse very different strategies when the work_mem limit is reached.\n\nThe first patch realizes it's impossible to keep adding batches without\nbreaking the work_mem limit, because at some point the BufFile will need\nmore memory than that. 
But it does not make sense to stop adding batches\nentirely, because then the hash table could grow indefinitely.\n\nSo the patch abandons the idea of enforcing work_mem in this situation,\nand instead attempts to minimize memory usage over time - it increases\nthe spaceAllowed in a way that ensures doubling the number of batches\nactually reduces memory usage in the long run.\n\nThe second patch tries to enforce work_mem more strictly. That would be\nimpossible if we were to keep all the BufFile structs in memory, so\ninstead it slices the batches into chunks that fit into work_mem, and\nthen uses a single \"overflow\" file for slices currently not in memory.\nThese extra slices can't be counted into work_mem, but we should need\njust very few of them. For example with work_mem=4MB the slice is 128\nbatches, so we need 128x less overflow files (compared to per-batch).\n\n\nNeither of those patches tweaks ExecChooseHashTableSize() to consider\nmemory needed for BufFiles while deciding how many batches will be\nneeded. That's something that probably needs to happen, but it would not\nhelp with the underestimate issue.\n\nI'm not entirely sure which of those approaches is the right one. The\nfirst one is clearly just a \"damage control\" for cases where the hash\nside turned out to be much larger than we expected. With good estimates\nwe probably would not have picked a hash join for those (that is, we\nshould have realized we can't keep work_mem and prohibit hash join).\n\nThe second patch however makes hash join viable for some of those cases,\nand it seems to work pretty well (there are some numbers in the message\nposted to pgsql-performance thread). 
So I kinda like this second one.\n\nIt's all just PoC quality, at this point, far from committable state.\n\n\n[1] https://www.postgresql.org/message-id/flat/bc138e9f-c89e-9147-5395-61d51a757b3b%40gusw.net\n\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services",
"msg_date": "Sat, 4 May 2019 02:34:14 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "accounting for memory used for BufFile during hash joins"
},
{
"msg_contents": "On Fri, May 3, 2019 at 5:34 PM Tomas Vondra <tomas.vondra@2ndquadrant.com>\nwrote:\n\n>\n> The second patch tries to enforce work_mem more strictly. That would be\n> impossible if we were to keep all the BufFile structs in memory, so\n> instead it slices the batches into chunks that fit into work_mem, and\n> then uses a single \"overflow\" file for slices currently not in memory.\n> These extra slices can't be counted into work_mem, but we should need\n> just very few of them. For example with work_mem=4MB the slice is 128\n> batches, so we need 128x less overflow files (compared to per-batch).\n>\n> I want to see if I understand the implications of the per-slice-overflow\npatch\nfor execution of hashjoin:\nFor each bucket in the hashtable, when attempting to double the number of\nbatches, if the memory that the BufFile structs will occupy once this is\ndone\nwill exceed the work_mem, split each batch into slices that fit into memory.\nThis means that, for each probe-side tuple hashing to that bucket, you have\nto\nload every slice of each batch separately into memory to ensure correct\nresults.\nIs this right?\n\n\n>\n> I'm not entirely sure which of those approaches is the right one. The\n> first one is clearly just a \"damage control\" for cases where the hash\n> side turned out to be much larger than we expected. With good estimates\n> we probably would not have picked a hash join for those (that is, we\n> should have realized we can't keep work_mem and prohibit hash join).\n>\n> The second patch however makes hash join viable for some of those cases,\n> and it seems to work pretty well (there are some numbers in the message\n> posted to pgsql-performance thread). So I kinda like this second one.\n>\n> So, my initial reaction after taking a look at the patches is that I\nprefer the\nfirst approach--increasing the resize threshhold. 
The second patch, the\nper-slice-overflow patch, adds a major new mechanic to hashjoin in order to\naddress what is, based on my understanding, an edge case.\n\n-- \nMelanie Plageman",
"msg_date": "Mon, 6 May 2019 14:58:09 -0700",
"msg_from": "Melanie Plageman <melanieplageman@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: accounting for memory used for BufFile during hash joins"
},
{
"msg_contents": "On Tue, May 7, 2019 at 9:58 AM Melanie Plageman\n<melanieplageman@gmail.com> wrote:\n> On Fri, May 3, 2019 at 5:34 PM Tomas Vondra <tomas.vondra@2ndquadrant.com> wrote:\n>> The second patch tries to enforce work_mem more strictly. That would be\n>> impossible if we were to keep all the BufFile structs in memory, so\n>> instead it slices the batches into chunks that fit into work_mem, and\n>> then uses a single \"overflow\" file for slices currently not in memory.\n>> These extra slices can't be counted into work_mem, but we should need\n>> just very few of them. For example with work_mem=4MB the slice is 128\n>> batches, so we need 128x less overflow files (compared to per-batch).\n>>\n> I want to see if I understand the implications of the per-slice-overflow patch\n> for execution of hashjoin:\n> For each bucket in the hashtable, when attempting to double the number of\n> batches, if the memory that the BufFile structs will occupy once this is done\n> will exceed the work_mem, split each batch into slices that fit into memory.\n> This means that, for each probe-side tuple hashing to that bucket, you have to\n> load every slice of each batch separately into memory to ensure correct results.\n> Is this right?\n\nSeems expensive for large numbers of slices -- you need to join the\nouter batch against each inner slice. But I wonder how we'd deal with\nouter joins, as Tom Lane asked in another thread:\n\nhttps://www.postgresql.org/message-id/12185.1488932980%40sss.pgh.pa.us\n\n>> I'm not entirely sure which of those approaches is the right one. The\n>> first one is clearly just a \"damage control\" for cases where the hash\n>> side turned out to be much larger than we expected. 
With good estimates\n>> we probably would not have picked a hash join for those (that is, we\n>> should have realized we can't keep work_mem and prohibit hash join).\n>>\n>> The second patch however makes hash join viable for some of those cases,\n>> and it seems to work pretty well (there are some numbers in the message\n>> posted to pgsql-performance thread). So I kinda like this second one.\n>>\n> So, my initial reaction after taking a look at the patches is that I prefer the\n> first approach--increasing the resize threshhold. The second patch, the\n> per-slice-overflow patch, adds a major new mechanic to hashjoin in order to\n> address what is, based on my understanding, an edge case.\n\nPersonally I'd like to make work_mem more reliable, even if it takes a\nmajor new mechanism.\n\nStepping back a bit, I think there is something fishy about the way we\ndetect extreme skew. Is that a factor in this case? Right now we\nwait until we have a batch that gets split into child batches\ncontaining exactly 0% and 100% of the tuples before we give up.\nPreviously I had thought of that as merely a waste of time, but\nclearly it's also a waste of unmetered memory. Oops.\n\nI think our extreme skew detector should go off sooner, because\notherwise if you have N nicely distributed unique keys and also M\nduplicates of one bad egg key that'll never fit in memory, we keep\nrepartitioning until none of the N keys fall into the batch containing\nthe key for the M duplicates before we give up! You can use\nballs-into-bins maths to figure out the number, but I think that means\nwe expect to keep splitting until we have N * some_constant batches,\nand that's just silly and liable to create massive numbers of\npartitions proportional to N, even though we're trying to solve a\nproblem with M. In another thread I suggested we should stop when\n(say) 95% of the tuples go to one child batch. 
I'm not sure how you\npick the number.\n\nOf course that doesn't solve the problem that we don't have a better\nplan for dealing with the M duplicates -- it just avoids a needless\nbatch explosion triggered by bad maths. I think we need something\nlike Tomas's #2, or a way to switch to sort-merge, or some other\nscheme. I'm not sure how to compare the slice idea, which involves\nprocessing outer tuples * inner slices with the sort-merge idea, which\ninvolves sorting the inner and outer batch, plus the entirely new\nconcept of switching to another node at execution time.\n\nI also wondered about reducing the buffer size of the BufFiles, but\nthat doesn't seem to be fixing the real problem.\n\n-- \nThomas Munro\nhttps://enterprisedb.com\n\n\n",
"msg_date": "Tue, 7 May 2019 13:48:40 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: accounting for memory used for BufFile during hash joins"
},
{
"msg_contents": "On Tue, May 07, 2019 at 01:48:40PM +1200, Thomas Munro wrote:\n>On Tue, May 7, 2019 at 9:58 AM Melanie Plageman\n><melanieplageman@gmail.com> wrote:\n>> On Fri, May 3, 2019 at 5:34 PM Tomas Vondra <tomas.vondra@2ndquadrant.com> wrote:\n>>> The second patch tries to enforce work_mem more strictly. That would be\n>>> impossible if we were to keep all the BufFile structs in memory, so\n>>> instead it slices the batches into chunks that fit into work_mem, and\n>>> then uses a single \"overflow\" file for slices currently not in memory.\n>>> These extra slices can't be counted into work_mem, but we should need\n>>> just very few of them. For example with work_mem=4MB the slice is 128\n>>> batches, so we need 128x less overflow files (compared to per-batch).\n>>>\n>> I want to see if I understand the implications of the per-slice-overflow patch\n>> for execution of hashjoin:\n>> For each bucket in the hashtable, when attempting to double the number of\n>> batches, if the memory that the BufFile structs will occupy once this is done\n>> will exceed the work_mem, split each batch into slices that fit into memory.\n>> This means that, for each probe-side tuple hashing to that bucket, you have to\n>> load every slice of each batch separately into memory to ensure correct results.\n>> Is this right?\n>\n\n>Seems expensive for large numbers of slices -- you need to join the\n>outer batch against each inner slice.\n\nNope, that's not how it works. It's the array of batches that gets\nsliced, not the batches themselves.\n\nIt does slightly increase the amount of data we need to shuffle between\nthe temp files, because we can't write the data directly to batches in\n\"future\" slices. 
But that amplification is capped to ~2.2x (compared to\nthe ~1.4x in master) - I've shared some measurements in [1].\n\n[1] https://www.postgresql.org/message-id/20190428141901.5dsbge2ka3rxmpk6%40development\n\n>But I wonder how we'd deal with outer joins, as Tom Lane asked in\n>another thread:\n>\n>https://www.postgresql.org/message-id/12185.1488932980%40sss.pgh.pa.us\n>\n\nThat seems unrelated - we slice the array of batches, to keep memory\nneeded for BufFile under control. The hash table remains intact, so\nthere's no issue with outer joins.\n\n>>> I'm not entirely sure which of those approaches is the right one. The\n>>> first one is clearly just a \"damage control\" for cases where the hash\n>>> side turned out to be much larger than we expected. With good estimates\n>>> we probably would not have picked a hash join for those (that is, we\n>>> should have realized we can't keep work_mem and prohibit hash join).\n>>>\n>>> The second patch however makes hash join viable for some of those cases,\n>>> and it seems to work pretty well (there are some numbers in the message\n>>> posted to pgsql-performance thread). So I kinda like this second one.\n>>>\n>> So, my initial reaction after taking a look at the patches is that I prefer the\n>> first approach--increasing the resize threshhold. The second patch, the\n>> per-slice-overflow patch, adds a major new mechanic to hashjoin in order to\n>> address what is, based on my understanding, an edge case.\n>\n>Personally I'd like to make work_mem more reliable, even if it takes a\n>major new mechanism.\n>\n\nYeah, I share that attitude.\n\n>Stepping back a bit, I think there is something fishy about the way we\n>detect extreme skew. Is that a factor in this case? Right now we\n>wait until we have a batch that gets split into child batches\n>containing exactly 0% and 100% of the tuples before we give up.\n>Previously I had thought of that as merely a waste of time, but\n>clearly it's also a waste of unmetered memory. 
Oops.\n>\n\nYes, that was a factor in the reported query - the data set contained\nsignificant number of duplicate values (~10%) but it took a while to\ndisable growth because there always happened to be a couple rows with a\ndifferent value.\n\n>I think our extreme skew detector should go off sooner, because\n>otherwise if you have N nicely distributed unique keys and also M\n>duplicates of one bad egg key that'll never fit in memory, we keep\n>repartitioning until none of the N keys fall into the batch containing\n>the key for the M duplicates before we give up! You can use\n>balls-into-bins maths to figure out the number, but I think that means\n>we expect to keep splitting until we have N * some_constant batches,\n>and that's just silly and liable to create massive numbers of\n>partitions proportional to N, even though we're trying to solve a\n>problem with M. In another thread I suggested we should stop when\n>(say) 95% of the tuples go to one child batch. I'm not sure how you\n>pick the number.\n>\n\nI agree we should relax the 0%/100% split condition, and disable the\ngrowth sooner. But I think we should also re-evaluate that decision\nafter a while - the data set may be correlated in some way, in which\ncase we may disable the growth prematurely. It may not reduce memory\nusage now, but it may help in the future.\n\nIt's already an issue, but it would be even more likely if we disabled\ngrowth e.g. with just 5%/95% splits.\n\nFWIW I believe this is mostly orthogonal issue to what's discussed in\nthis thread.\n\n>Of course that doesn't solve the problem that we don't have a better\n>plan for dealing with the M duplicates -- it just avoids a needless\n>batch explosions triggered by bad maths. I think we need something\n>like Tomas's #2, or a way to switch to sort-merge, or some other\n>scheme. 
I'm not sure how to compare the slice idea, which involves\n>processing outer tuples * inner slices with the sort-merge idea, which\n>involves sorting the inner and outer batch, plus the entirely new\n>concept of switching to another node at execution time.\n>\n\nDo we actually check how many duplicates are there during planning? I\nwonder if we could penalize (or even disable) hashjoins when there are\ntoo many duplicates to fit into work_mem. Of course, that's going to be\ntricky with filtering, and so on.\n\nSwitching to some other algorithm during execution moves the goal posts\nto the next galaxy, I'm afraid.\n\n>I also wondered about reducing the buffer size of the BufFiles, but\n>that doesn't seem to be fixing the real problem.\n>\n\nYeah. It might help a bit, but it's very limited - even if you reduce\nthe buffer to say 1kB, it's just a factor of 8. And I'm not sure what\nwould be the impact on performance. \n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services \n\n\n",
"msg_date": "Tue, 7 May 2019 05:15:11 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: accounting for memory used for BufFile during hash joins"
},
{
"msg_contents": "Tomas Vondra <tomas.vondra@2ndquadrant.com> writes:\n> Do we actually check how many duplicates are there during planning?\n\nCertainly that's part of the planner's cost estimates ... but it's\nonly as good as the planner's statistical knowledge.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 06 May 2019 23:18:28 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: accounting for memory used for BufFile during hash joins"
},
{
"msg_contents": "On Tue, May 7, 2019 at 3:15 PM Tomas Vondra\n<tomas.vondra@2ndquadrant.com> wrote:\n> On Tue, May 07, 2019 at 01:48:40PM +1200, Thomas Munro wrote:\n> >Seems expensive for large numbers of slices -- you need to join the\n> >outer batch against each inner slice.\n>\n> Nope, that's not how it works. It's the array of batches that gets\n> sliced, not the batches themselves.\n\nSorry, I read only the description and not the code, and got confused\nabout that. So, I see three separate but related problems:\n\nA. Broken escape valve: sometimes we generate a huge number of\nbatches while trying to split up many duplicates, because of the\npresence of other more uniformly distributed keys. We could fix that\nwith (say) a 95% rule.\nB. Lack of good alternative execution strategy when the escape valve\nis triggered. A batch cannot be split effectively, but cannot fit in\nwork_mem, so for now we decide to ignore work_mem.\nC. Unmetered explosion of batches and thus BufFiles, probably usually\ncaused by problem A, but theoretically also due to a real need for\npartitions.\n\n> >But I wonder how we'd deal with outer joins, as Tom Lane asked in\n> >another thread:\n> >\n> >https://www.postgresql.org/message-id/12185.1488932980%40sss.pgh.pa.us\n>\n> That seems unrelated - we slice the array of batches, to keep memory\n> needed for BufFile under control. The hash table remains intact, so\n> there's no issue with outer joins.\n\nRight, sorry, my confusion. I thought you were describing\nhttps://en.wikipedia.org/wiki/Block_nested_loop. (I actually think we\ncan make that work for left outer joins without too much fuss by\nwriting out a stream of match bits to a new temporary file. Googling,\nI see that MySQL originally didn't support BNL for outer joins and\nthen added some match flag propagation thing recently.)\n\n> I agree we should relax the 0%/100% split condition, and disable the\n> growth sooner. 
But I think we should also re-evaluate that decision\n> after a while - the data set may be correlated in some way, in which\n> case we may disable the growth prematurely. It may not reduce memory\n> usage now, but it may help in the future.\n>\n> It's already an issue, but it would be even more likely if we disabled\n> growth e.g. with just 5%/95% splits.\n>\n> FWIW I believe this is mostly orthogonal issue to what's discussed in\n> this thread.\n\nBut isn't problem A the root cause of problem C, in most cases? There\nmust also be \"genuine\" cases of problem C that would occur even if we\nfix that, of course: someone has small work_mem, and data that can be\neffectively partitioned to fit it, but it just takes a huge number of\npartitions to do it. So that we don't behave badly in those cases, I\nagree with you 100%: we should fix the memory accounting to count\nBufFile overheads as you are proposing, and then I guess ideally\nswitch to our alternative strategy (BNL or sort-merge or ...) when we\nsee that BufFiles are wasting too much work_mem and it's time to try\nsomething else. It seems you don't actually have one of those cases\nhere, though?\n\nI think we should fix problem A. Then handle problem C by accounting\nfor BufFiles, and figure out a way to switch to our alternative\nstrategy (currently: ignore work_mem), when we think that creating\nmore BufFiles will be futile (not sure exactly what the rule there\nshould be). And then work on fixing B properly with a good strategy.\nHere's a straw-man idea: we could adopt BNL, and then entirely remove\nour repartitioning code. If the planner's number of partitions turns\nout to be not enough, we'll just handle it using BNL loops.\n\n> Switching to some other algorithm during execution moves the goal posts\n> to the next galaxy, I'm afraid.\n\nThe main problem I'm aware of with sort-merge join is: not all that is\nhashable is sortable. 
So BNL is actually the only solution I'm aware\nof for problem B that doesn't involve changing a fundamental thing\nabout PostgreSQL's data type requirements.\n\n-- \nThomas Munro\nhttps://enterprisedb.com\n\n\n",
"msg_date": "Tue, 7 May 2019 16:28:36 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: accounting for memory used for BufFile during hash joins"
},
{
"msg_contents": "On Mon, May 06, 2019 at 11:18:28PM -0400, Tom Lane wrote:\n>Tomas Vondra <tomas.vondra@2ndquadrant.com> writes:\n>> Do we actually check how many duplicates are there during planning?\n>\n>Certainly that's part of the planner's cost estimates ... but it's\n>only as good as the planner's statistical knowledge.\n>\n\nI'm looking at the code, and the only place where I see code dealing with\nMCVs (probably the best place for info about duplicate values) is\nestimate_hash_bucketsize in final_cost_hashjoin. That's not quite what I\nhad in mind - I was thinking more about something along the lines of \"See the\nlargest group of duplicate values, disable hash join if it can't fit into\nwork_mem at all.\"\n\nOf course, if the input estimates are off, that may not work too well. It\nwould certainly not help the query failing with OOM, because that was a\ncase of severe underestimate.\n\nOr did you mean some other piece of code that I have missed?\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n\n",
"msg_date": "Tue, 7 May 2019 15:17:42 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: accounting for memory used for BufFile during hash joins"
},
{
"msg_contents": "On Tue, May 07, 2019 at 04:28:36PM +1200, Thomas Munro wrote:\n>On Tue, May 7, 2019 at 3:15 PM Tomas Vondra\n><tomas.vondra@2ndquadrant.com> wrote:\n>> On Tue, May 07, 2019 at 01:48:40PM +1200, Thomas Munro wrote:\n>> >Seems expensive for large numbers of slices -- you need to join the\n>> >outer batch against each inner slice.\n>>\n>> Nope, that's not how it works. It's the array of batches that gets\n>> sliced, not the batches themselves.\n>\n>Sorry, I read only the description and not the code, and got confused\n>about that. So, I see three separate but related problems:\n>\n>A. Broken escape valve: sometimes we generate a huge number of\n>batches while trying to split up many duplicates, because of the\n>presence of other more uniformly distributed keys. We could fix that\n>with (say) a 95% rule.\n>B. Lack of good alternative execution strategy when the escape valve\n>is triggered. A batch cannot be split effectively, but cannot fit in\n>work_mem, so for now we decide to ignore work_mem.\n>C. Unmetered explosion of batches and thus BufFiles, probably usually\n>caused by problem A, but theoretically also due to a real need for\n>partitions.\n>\n\nRight. I don't think a single solution addressing all those issues exists.\nIt's more likely we need multiple improvements.\n\n>> >But I wonder how we'd deal with outer joins, as Tom Lane asked in\n>> >another thread:\n>> >\n>> >https://www.postgresql.org/message-id/12185.1488932980%40sss.pgh.pa.us\n>>\n>> That seems unrelated - we slice the array of batches, to keep memory\n>> needed for BufFile under control. The hash table remains intact, so\n>> there's no issue with outer joins.\n>\n>Right, sorry, my confusion. I thought you were describing\n>https://en.wikipedia.org/wiki/Block_nested_loop. (I actually think we\n>can make that work for left outer joins without too much fuss by\n>writing out a stream of match bits to a new temporary file. 
Googling,\n>I see that MySQL originally didn't support BNL for outer joins and\n>then added some match flag propagation thing recently.)\n>\n\nPossibly, I'm not against implementing that, although I don't have a very\ngood idea what the benefits of BNL joins are (performance-wise). In any\ncase, I think it's entirely unrelated to hash joins.\n\n>> I agree we should relax the 0%/100% split condition, and disable the\n>> growth sooner. But I think we should also re-evaluate that decision\n>> after a while - the data set may be correlated in some way, in which\n>> case we may disable the growth prematurely. It may not reduce memory\n>> usage now, but it may help in the future.\n>>\n>> It's already an issue, but it would be even more likely if we disabled\n>> growth e.g. with just 5%/95% splits.\n>>\n>> FWIW I believe this is mostly orthogonal issue to what's discussed in\n>> this thread.\n>\n>But isn't problem A the root cause of problem C, in most cases? There\n>must also be \"genuine\" cases of problem C that would occur even if we\n>fix that, of course: someone has small work_mem, and data that can be\n>effectively partitioned to fit it, but it just takes a huge number of\n>partitions to do it. So that we don't behave badly in those cases, I\n>agree with you 100%: we should fix the memory accounting to count\n>BufFile overheads as you are proposing, and then I guess ideally\n>switch to our alternative strategy (BNL or sort-merge or ...) when we\n>see that BufFiles are wasting to much work_mem and its time to try\n>something else. It seems you don't actually have one of those cases\n>here, though?\n>\n\nMaybe. Or maybe not. I don't have enough data to make such judgements\nabout the causes in general. We have one query from pgsql-performance.\nThere might be more, but IMO that's probably a biased data set.\n\nBut even that reported query actually is not the case that A causes C.\nThe outer side of the hash join was significantly underestimated (34619\nvs. 
113478127) due to highly-correlated conditions.\n\nAnd in that case it's trivial to cause nbatch explosion even with perfect\ndata sets with no duplicates (so no escape valve failure).\n\n\n>I think we should fix problem A. Then handle problem C by accounting\n>for BufFiles, and figure out a way to switch to our alternative\n>strategy (currently: ignore work_mem), when we think that creating\n>more BufFiles will be futile (not sure exactly what the rule there\n>should be). And then work on fixing B properly with a good strategy.\n>Here's a straw-man idea: we could adopt BNL, and then entirely remove\n>our repartitioning code. If the planner's number of partitions turns\n>out to be not enough, we'll just handle it using BNL loops.\n>\n\nYeah, something like that.\n\nI think we can fix A by relaxing the escape valve condition, and then\nrechecking it once in a while. So we fill work_mem, realize it didn't\nactually reduce the batch size significantly and disable nbatch growth.\nBut at the same time we increase the threshold to 2x work_mem, and after\nreaching it we \"consider\" a nbatch increase. That is, we walk the batch\nand see how many tuples would move if we increased nbatch (that should be\nfairly cheap) - if it helps, great, enable growth and split the batch. If\nnot, double the threshold again. Rinse and repeat.\n\nFor C, I think we can use either of the two approaches I proposed. I like\nthe second option better, as it actually enforces work_mem. The first\noption kinda helped with A too, although in a different way, and I think the\nsolution I outlined in the previous paragraph will work better.\n\nNo opinion regarding the switch to BNL, at the moment.\n\n>> Switching to some other algorithm during execution moves the goal posts\n>> to the next galaxy, I'm afraid.\n>\n>The main problem I'm aware of with sort-merge join is: not all that is\n>hashable is sortable. 
So BNL is actually the only solution I'm aware\n>of for problem B that doesn't involve changing a fundamental thing\n>about PostgreSQL's data type requirements.\n>\n\nSure, each of those algorithms has limitations. But I think that's mostly\nirrelevant to the main issue - switching between algorithms mid-execution.\nAt that point some of the tuples might have been already sent to the\nother nodes, and I have no idea how to \"resume\" the tuple stream short of\nbuffering everything locally until the join completes. And that would be\nrather terrible, I guess.\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n\n",
"msg_date": "Tue, 7 May 2019 15:59:12 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: accounting for memory used for BufFile during hash joins"
},
{
"msg_contents": "Tomas Vondra <tomas.vondra@2ndquadrant.com> writes:\n> On Mon, May 06, 2019 at 11:18:28PM -0400, Tom Lane wrote:\n>> Tomas Vondra <tomas.vondra@2ndquadrant.com> writes:\n>>> Do we actually check how many duplicates are there during planning?\n\n>> Certainly that's part of the planner's cost estimates ... but it's\n>> only as good as the planner's statistical knowledge.\n\n> I'm looking at the code, and the only place where I see code dealing with\n> MCVs (probably the best place for info about duplicate values) is\n> estimate_hash_bucketsize in final_cost_hashjoin.\n\nWhat I'm thinking of is this bit in final_cost_hashjoin:\n\n /*\n * If the bucket holding the inner MCV would exceed work_mem, we don't\n * want to hash unless there is really no other alternative, so apply\n * disable_cost. (The executor normally copes with excessive memory usage\n * by splitting batches, but obviously it cannot separate equal values\n * that way, so it will be unable to drive the batch size below work_mem\n * when this is true.)\n */\n if (relation_byte_size(clamp_row_est(inner_path_rows * innermcvfreq),\n inner_path->pathtarget->width) >\n (work_mem * 1024L))\n startup_cost += disable_cost;\n\nIt's certainly likely that that logic needs improvement in view of this\ndiscussion --- I was just pushing back on the claim that we weren't\nconsidering the issue at all.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 07 May 2019 10:42:36 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: accounting for memory used for BufFile during hash joins"
},
{
"msg_contents": "On Tue, May 07, 2019 at 10:42:36AM -0400, Tom Lane wrote:\n>Tomas Vondra <tomas.vondra@2ndquadrant.com> writes:\n>> On Mon, May 06, 2019 at 11:18:28PM -0400, Tom Lane wrote:\n>>> Tomas Vondra <tomas.vondra@2ndquadrant.com> writes:\n>>>> Do we actually check how many duplicates are there during planning?\n>\n>>> Certainly that's part of the planner's cost estimates ... but it's\n>>> only as good as the planner's statistical knowledge.\n>\n>> I'm looking at the code, and the only place where I see code dealing with\n>> MCVs (probably the best place for info about duplicate values) is\n>> estimate_hash_bucketsize in final_cost_hashjoin.\n>\n>What I'm thinking of is this bit in final_cost_hashjoin:\n>\n> /*\n> * If the bucket holding the inner MCV would exceed work_mem, we don't\n> * want to hash unless there is really no other alternative, so apply\n> * disable_cost. (The executor normally copes with excessive memory usage\n> * by splitting batches, but obviously it cannot separate equal values\n> * that way, so it will be unable to drive the batch size below work_mem\n> * when this is true.)\n> */\n> if (relation_byte_size(clamp_row_est(inner_path_rows * innermcvfreq),\n> inner_path->pathtarget->width) >\n> (work_mem * 1024L))\n> startup_cost += disable_cost;\n>\n>It's certainly likely that that logic needs improvement in view of this\n>discussion --- I was just pushing back on the claim that we weren't\n>considering the issue at all.\n>\n\nAh, this code is new in 11, and I was looking at code from 10 for some\nreason. I don't think we can do much better than this, except perhaps\nfalling back to (1/ndistinct) when there's no MCV available.\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n\n",
"msg_date": "Tue, 7 May 2019 17:09:31 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: accounting for memory used for BufFile during hash joins"
},
{
"msg_contents": "On Mon, May 6, 2019 at 8:15 PM Tomas Vondra <tomas.vondra@2ndquadrant.com>\nwrote:\n\n> Nope, that's not how it works. It's the array of batches that gets\n> sliced, not the batches themselves.\n>\n> It does slightly increase the amount of data we need to shuffle between\n> the temp files, because we can't write the data directly to batches in\n> \"future\" slices. But that amplification is capped to ~2.2x (compared to\n> the ~1.4x in master) - I've shared some measurements in [1].\n>\n> [1]\n> https://www.postgresql.org/message-id/20190428141901.5dsbge2ka3rxmpk6%40development\n>\n>\nCool, I misunderstood. I looked at the code again today, and, at the email\nthread where you measured \"amplification\".\n\nIn terms of how many times you write each tuple, is it accurate to say that\na\ntuple can now be spilled three times (in the worst case) whereas, before, it\ncould be spilled only twice?\n\n1 - when building the inner side hashtable, tuple is spilled to a \"slice\"\nfile\n2 - (assuming the number of batches was increased) during execution, when a\ntuple belonging to a later slice's spill file is found, it is re-spilled to\nthat\nslice's spill file\n3 - during execution, when reading from its slice file, it is re-spilled\n(again)\nto its batch's spill file\n\nIs it correct that the max number of BufFile structs you will have is equal\nto\nthe number of slices + number of batches in a slice\nbecause that is the max number of open BufFiles you would have at a time?\n\nBy the way, applying v4 patch on master, in an assert build, I am tripping\nsome\nasserts -- starting with\nAssert(!file->readOnly);\nin BufFileWrite\n\nOne thing I was a little confused by was the nbatch_inmemory member of the\nhashtable. The comment in ExecChooseHashTableSize says that it is\ndetermining\nthe number of batches we can fit in memory. 
I thought that the problem was\nthe\namount of space taken up by the BufFile data structure itself--which is\nrelated\nto the number of open BufFiles you need at a time. This comment in\nExecChooseHashTableSize makes it sound like you are talking about fitting\nmore\nthan one batch of tuples into memory at a time. I was under the impression\nthat\nyou could only fit one batch of tuples in memory at a time.\n\nSo, I was stepping through the code with work_mem set to the lower bound,\nand in\nExecHashIncreaseNumBatches, I got confused.\nhashtable->nbatch_inmemory was 2 for me, thus, nbatch_tmp was 2\nso, I didn't meet this condition\nif (nbatch_tmp > hashtable->nbatch_inmemory)\nsince I just set nbatch_tmp using hashtable->nbatch_inmemory\nSo, I didn't increase the number of slices, which is what I was expecting.\nWhat happens when hashtable->nbatch_inmemory is equal to nbatch_tmp?\n\n-- \nMelanie Plageman\n\n",
"msg_date": "Tue, 7 May 2019 17:30:27 -0700",
"msg_from": "Melanie Plageman <melanieplageman@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: accounting for memory used for BufFile during hash joins"
},
{
"msg_contents": "On Tue, May 7, 2019 at 6:59 AM Tomas Vondra <tomas.vondra@2ndquadrant.com>\nwrote:\n\n> On Tue, May 07, 2019 at 04:28:36PM +1200, Thomas Munro wrote:\n> >On Tue, May 7, 2019 at 3:15 PM Tomas Vondra\n> ><tomas.vondra@2ndquadrant.com> wrote:\n> >> On Tue, May 07, 2019 at 01:48:40PM +1200, Thomas Munro wrote:\n> >> Switching to some other algorithm during execution moves the goal posts\n> >> to the next galaxy, I'm afraid.\n> >\n> >The main problem I'm aware of with sort-merge join is: not all that is\n> >hashable is sortable. So BNL is actually the only solution I'm aware\n> >of for problem B that doesn't involve changing a fundamental thing\n> >about PostgreSQL's data type requirements.\n> >\n>\n> Sure, each of those algorithms has limitations. But I think that's mostly\n> irrelevant to the main issue - switching between algorithms mid-execution.\n> At that point some of the tuples might have been already sent sent to the\n> other nodes, and I have no idea how to \"resume\" the tuple stream short of\n> buffering everything locally until the join completes. And that would be\n> rather terrible, I guess.\n>\n>\nWhat if you switched to NLJ on a batch-by-batch basis and did it before\nstarting\nexecution of the join but after building the inner side of the hash table.\nThat\nway, no tuples will have been sent to other nodes yet.\n\n-- \nMelanie Plageman\n\n",
"msg_date": "Tue, 7 May 2019 17:43:56 -0700",
"msg_from": "Melanie Plageman <melanieplageman@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: accounting for memory used for BufFile during hash joins"
},
{
"msg_contents": "On Tue, May 07, 2019 at 05:43:56PM -0700, Melanie Plageman wrote:\n> On Tue, May 7, 2019 at 6:59 AM Tomas Vondra <tomas.vondra@2ndquadrant.com>\n> wrote:\n>\n> On Tue, May 07, 2019 at 04:28:36PM +1200, Thomas Munro wrote:\n> >On Tue, May 7, 2019 at 3:15 PM Tomas Vondra\n> ><tomas.vondra@2ndquadrant.com> wrote:\n> >> On Tue, May 07, 2019 at 01:48:40PM +1200, Thomas Munro wrote:\n> >> Switching to some other algorithm during execution moves the goal\n> posts\n> >> to the next galaxy, I'm afraid.\n> >\n> >The main problem I'm aware of with sort-merge join is: not all that is\n> >hashable is sortable.� So BNL is actually the only solution I'm aware\n> >of for problem B that doesn't involve changing a fundamental thing\n> >about PostgreSQL's data type requirements.\n> >\n>\n> Sure, each of those algorithms has limitations. But I think that's\n> mostly\n> irrelevant to the main issue - switching between algorithms\n> mid-execution.\n> At that point some of the tuples might have been already sent sent to\n> the\n> other nodes, and I have no idea how to \"resume\" the tuple stream short\n> of\n> buffering everything locally until the join completes. And that would be\n> rather terrible, I guess.\n>\n> What if you switched to NLJ on a batch-by-batch basis and did it before\n> starting\n> execution of the join but after building the inner side of the hash\n> table.� That\n> way, no tuples will have been sent to other nodes yet.\n>\n\nInteresting idea! I think you're right doing it on a per-batch basis\nwould solve that problem. Essentially, if all (or >95%) of the tuples\nhas the same hash value, we could switch to a special \"degraded\" mode\ndoing something like a NL. At that point the hash table benefits are\nlost anyway, because all the tuples are in a single chain, so it's not\ngoing to be much slower.\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services \n\n\n",
"msg_date": "Wed, 8 May 2019 15:34:38 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: accounting for memory used for BufFile during hash joins"
},
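Editor's note: the "degraded mode" Tomas sketches above hinges on detecting that nearly all tuples in a batch share one hash value (so further splitting cannot help). A hypothetical Python sketch of that detection step, under the >95% threshold mentioned in the message (function name and structure are invented here, not PostgreSQL code):

```python
from collections import Counter

def should_degrade_to_nl(batch_hash_values, threshold=0.95):
    # If nearly all tuples in a batch share one hash value, splitting the
    # batch again just moves the whole chain into one child batch, so a
    # nested-loop-style scan of the chain is the sensible fallback.
    if not batch_hash_values:
        return False
    _, top_count = Counter(batch_hash_values).most_common(1)[0]
    return top_count / len(batch_hash_values) >= threshold

print(should_degrade_to_nl([7] * 99 + [3]))    # 99% duplicates -> True
print(should_degrade_to_nl(list(range(100))))  # no skew -> False
```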
{
"msg_contents": "On Tue, May 07, 2019 at 05:30:27PM -0700, Melanie Plageman wrote:\n> On Mon, May 6, 2019 at 8:15 PM Tomas Vondra <tomas.vondra@2ndquadrant.com>\n> wrote:\n>\n> Nope, that's not how it works. It's the array of batches that gets\n> sliced, not the batches themselves.\n>\n> It does slightly increase the amount of data we need to shuffle between\n> the temp files, because we can't write the data directly to batches in\n> \"future\" slices. But that amplification is capped to ~2.2x (compared to\n> the ~1.4x in master) - I've shared some measurements in [1].\n>\n> [1]\n> https://www.postgresql.org/message-id/20190428141901.5dsbge2ka3rxmpk6%40development\n>\n> Cool, I misunderstood. I looked at the code again today, and, at the email\n> thread where you measured \"amplification\".\n>\n\nOh! I hope you're not too disgusted by the code in that PoC patch ;-)\n\n> In terms of how many times you write each tuple, is it accurate to\n> say that a tuple can now be spilled three times (in the worst case)\n> whereas, before, it could be spilled only twice?\n>\n> 1 - when building the inner side hashtable, tuple is spilled to a \"slice\"\n> file\n> 2 - (assuming the number of batches was increased) during execution, when\n> a tuple belonging to a later slice's spill file is found, it is re-spilled\n> to that slice's spill file\n> 3 - during execution, when reading from its slice file, it is re-spilled\n> (again) to its batch's spill file\n>\n\nYes, that's mostly accurate understanding. Essentially this might add\none extra step of \"reshuffling\" from the per-slice to per-batch files.\n\n> Is it correct that the max number of BufFile structs you will have\n> is equal to the number of slices + number of batches in a slice\n> because that is the max number of open BufFiles you would have at a\n> time?\n\nYes. 
With the caveat that we need twice that number of BufFile structs,\nbecause we need them on both sides of the join.\n\n> By the way, applying v4 patch on master, in an assert build, I am tripping\n> some\n> asserts -- starting with\n> Assert(!file->readOnly);\n> in BufFileWrite\n\nWhoooops :-/\n\n> One thing I was a little confused by was the nbatch_inmemory member\n> of the hashtable. The comment in ExecChooseHashTableSize says that\n> it is determining the number of batches we can fit in memory. I\n> thought that the problem was the amount of space taken up by the\n> BufFile data structure itself--which is related to the number of\n> open BufFiles you need at a time. This comment in\n> ExecChooseHashTableSize makes it sound like you are talking about\n> fitting more than one batch of tuples into memory at a time. I was\n> under the impression that you could only fit one batch of tuples in\n> memory at a time.\n\nI suppose you mean this chunk:\n\n    /*\n     * See how many batches we can fit into memory (driven mostly by size\n     * of BufFile, with PGAlignedBlock being the largest part of that).\n     * We need one BufFile for inner and outer side, so we count it twice\n     * for each batch, and we stop once we exceed (work_mem/2).\n     */\n    while ((nbatch_inmemory * 2) * sizeof(PGAlignedBlock) * 2\n           <= (work_mem * 1024L / 2))\n        nbatch_inmemory *= 2;\n\nYeah, that comment is a bit confusing. 
What the code actually does is\ncomputing the largest \"slice\" of batches for which we can keep the\nBufFile structs in memory, without exceeding work_mem/2.\n\nMaybe the nbatch_inmemory should be renamed to nbatch_slice, not sure.\n\n> So, I was stepping through the code with work_mem set to the lower\n> bound, and in ExecHashIncreaseNumBatches, I got confused.\n> hashtable->nbatch_inmemory was 2 for me, thus, nbatch_tmp was 2 so,\n> I didn't meet this condition if (nbatch_tmp >\n> hashtable->nbatch_inmemory) since I just set nbatch_tmp using\n> hashtable->nbatch_inmemory So, I didn't increase the number of\n> slices, which is what I was expecting. What happens when\n> hashtable->nbatch_inmemory is equal to nbatch_tmp?\n>\n\nAh, good catch. The condition you're referring to\n\n    if (nbatch_tmp > hashtable->nbatch_inmemory)\n\nshould actually be\n\n    if (nbatch > hashtable->nbatch_inmemory)\n\nbecause the point is to initialize BufFile structs for the overflow\nfiles, and we need to do that once we cross nbatch_inmemory.\n\nAnd it turns out this actually causes the assert failures in regression\ntests, you reported earlier. It failed to initialize the overflow files\nin some cases, so the readOnly flag seemed to be set.\n\nAttached is an updated patch, fixing this. I tried to clarify some of\nthe comments too, and I fixed another bug I found while running the\nregression tests. It's still very much a crappy PoC code, though.\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services",
"msg_date": "Wed, 8 May 2019 17:08:44 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: accounting for memory used for BufFile during hash joins"
},
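Editor's note: the slice computation quoted in the message above is easy to check standalone. A minimal Python rendition of that loop, assuming PGAlignedBlock is the usual 8kB (this mirrors, but is not, the ExecChooseHashTableSize code):

```python
PGALIGNED_BLOCK = 8192  # assumed sizeof(PGAlignedBlock) in bytes

def slice_size(work_mem_kb):
    # Largest power-of-two number of batches whose BufFile structs fit in
    # work_mem/2: one BufFile per batch on each side of the join (one
    # factor of 2), counted twice per the quoted comment (the other).
    nbatch_inmemory = 1
    while (nbatch_inmemory * 2) * PGALIGNED_BLOCK * 2 <= (work_mem_kb * 1024 // 2):
        nbatch_inmemory *= 2
    return nbatch_inmemory

print(slice_size(4096))  # work_mem = 4MB  -> 128 batches per slice
print(slice_size(64))    # work_mem = 64kB -> 2
```

With work_mem=4MB this yields a slice of 128 batches, matching the figure quoted earlier in the thread; with the 64kB lower bound it yields 2, matching the nbatch_inmemory value Melanie reports stepping through below.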
{
"msg_contents": "On Wed, May 8, 2019 at 8:08 AM Tomas Vondra <tomas.vondra@2ndquadrant.com>\nwrote:\n\n> On Tue, May 07, 2019 at 05:30:27PM -0700, Melanie Plageman wrote:\n> > One thing I was a little confused by was the nbatch_inmemory member\n> > of the hashtable. The comment in ExecChooseHashTableSize says that\n> > it is determining the number of batches we can fit in memory. I\n> > thought that the problem was the amount of space taken up by the\n> > BufFile data structure itself--which is related to the number of\n> > open BufFiles you need at a time. This comment in\n> > ExecChooseHashTableSize makes it sound like you are talking about\n> > fitting more than one batch of tuples into memory at a time. I was\n> > under the impression that you could only fit one batch of tuples in\n> > memory at a time.\n>\n> I suppose you mean this chunk:\n>\n> /*\n> * See how many batches we can fit into memory (driven mostly by size\n> * of BufFile, with PGAlignedBlock being the largest part of that).\n> * We need one BufFile for inner and outer side, so we count it twice\n> * for each batch, and we stop once we exceed (work_mem/2).\n> */\n> while ((nbatch_inmemory * 2) * sizeof(PGAlignedBlock) * 2\n> <= (work_mem * 1024L / 2))\n> nbatch_inmemory *= 2;\n>\n> Yeah, that comment is a bit confusing. 
What the code actually does is\n> computing the largest \"slice\" of batches for which we can keep the\n> BufFile structs in memory, without exceeding work_mem/2.\n>\n> Maybe the nbatch_inmemory should be renamed to nbatch_slice, not sure.\n>\n\nI definitely would prefer to see hashtable->nbatch_inmemory renamed to\nhashtable->nbatch_slice--or maybe hashtable->nbuff_inmemory?\n\nI've been poking around the code for a while today, and, even though I\nknow that the nbatch_inmemory is referring to the buffiles that can\nfit in memory, I keep forgetting and thinking it is referring to the\ntuple data that can fit in memory.\n\nIt might be worth explicitly calling out somewhere in the comments\nthat overflow slices will only be created either when the number of\nbatches was underestimated as part of ExecHashIncreaseNumBatches and\nthe new number of batches exceeds the value for\nhashtable->nbatch_inmemory or when creating the hashtable initially\nand the number of batches exceeds the value for\nhashtable->nbatch_inmemory (the name confuses this for me at hashtable\ncreation time especially) -- the number of actual buffiles that can be\nmanaged in memory.\n\n\n>\n> Attached is an updated patch, fixing this. I tried to clarify some of\n> the comments too, and I fixed another bug I found while running the\n> regression tests. 
It's still very much a crappy PoC code, though.\n>\n>\nSo, I ran the following example on master and with your patch.\n\ndrop table foo;\ndrop table bar;\ncreate table foo(a int, b int);\ncreate table bar(c int, d int);\ninsert into foo select i, i from generate_series(1,10000)i;\ninsert into bar select 1, 1 from generate_series(1,1000)i;\ninsert into bar select i%3, i%3 from generate_series(1000,10000)i;\ninsert into foo select 1,1 from generate_series(1,1000)i;\nanalyze foo; analyze bar;\nset work_mem=64;\n\nOn master, explain analyze looked like this\n\npostgres=# explain analyze verbose select * from foo, bar where a = c;\n QUERY PLAN\n\n--------------------------------------------------------------------------------------------------------------------------\n Hash Join (cost=339.50..53256.27 rows=4011001 width=16) (actual\ntime=28.962..1048.442 rows=4008001 loops=1)\n Output: foo.a, foo.b, bar.c, bar.d\n Hash Cond: (bar.c = foo.a)\n -> Seq Scan on public.bar (cost=0.00..145.01 rows=10001 width=8)\n(actual time=0.030..1.777 rows=10001 loops=1)\n Output: bar.c, bar.d\n -> Hash (cost=159.00..159.00 rows=11000 width=8) (actual\ntime=12.285..12.285 rows=11000 loops=1)\n Output: foo.a, foo.b\n Buckets: 2048 (originally 2048) Batches: 64 (originally 16)\n Memory Usage: 49kB\n -> Seq Scan on public.foo (cost=0.00..159.00 rows=11000 width=8)\n(actual time=0.023..3.786 rows=11000 loops=1)\n Output: foo.a, foo.b\n Planning Time: 0.435 ms\n Execution Time: 1206.904 ms\n(12 rows)\n\nand with your patch, it looked like this.\n\npostgres=# explain analyze verbose select * from foo, bar where a = c;\n QUERY PLAN\n\n--------------------------------------------------------------------------------------------------------------------------\n Hash Join (cost=339.50..53256.27 rows=4011001 width=16) (actual\ntime=28.256..1102.026 rows=4008001 loops=1)\n Output: foo.a, foo.b, bar.c, bar.d\n Hash Cond: (bar.c = foo.a)\n -> Seq Scan on public.bar (cost=0.00..145.01 rows=10001 
width=8)\n(actual time=0.040..1.717 rows=10001 loops=1)\n Output: bar.c, bar.d\n -> Hash (cost=159.00..159.00 rows=11000 width=8) (actual\ntime=12.327..12.327 rows=11000 loops=1)\n Output: foo.a, foo.b\n Buckets: 2048 (originally 2048) Batches: 16384 (originally 16,\nin-memory 2) Memory Usage: 131160kB\n -> Seq Scan on public.foo (cost=0.00..159.00 rows=11000 width=8)\n(actual time=0.029..3.569 rows=11000 loops=1)\n Output: foo.a, foo.b\n Planning Time: 0.260 ms\n Execution Time: 1264.995 ms\n(12 rows)\n\nI noticed that the number of batches is much higher with the patch,\nand, I was checking $PGDATA/base/pgsql_tmp and saw that the number of\ntemp files which are the overflow files any given time was quite high.\n\nI would imagine that the desired behaviour is to keep memory usage\nwithin work_mem.\nIn this example, the number of slices is about 8000, each of which\nwould have an overflow file. Is this the case you mention in the\ncomment in ExecChooseHashTableSize ?\n\n* We ignore (per-slice)\n* overflow files, because those serve as \"damage control\" for cases\n* when per-batch BufFiles would exceed work_mem. Given enough batches\n* it's impossible to enforce work_mem strictly, because the overflow\n* files alone will consume more memory.\n\n-- \nMelanie Plageman",
"msg_date": "Tue, 21 May 2019 17:38:50 -0700",
"msg_from": "Melanie Plageman <melanieplageman@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: accounting for memory used for BufFile during hash joins"
},
{
"msg_contents": "On Tue, May 21, 2019 at 05:38:50PM -0700, Melanie Plageman wrote:\n>On Wed, May 8, 2019 at 8:08 AM Tomas Vondra <tomas.vondra@2ndquadrant.com>\n>wrote:\n>\n>> On Tue, May 07, 2019 at 05:30:27PM -0700, Melanie Plageman wrote:\n>> > One thing I was a little confused by was the nbatch_inmemory member\n>> > of the hashtable. The comment in ExecChooseHashTableSize says that\n>> > it is determining the number of batches we can fit in memory. I\n>> > thought that the problem was the amount of space taken up by the\n>> > BufFile data structure itself--which is related to the number of\n>> > open BufFiles you need at a time. This comment in\n>> > ExecChooseHashTableSize makes it sound like you are talking about\n>> > fitting more than one batch of tuples into memory at a time. I was\n>> > under the impression that you could only fit one batch of tuples in\n>> > memory at a time.\n>>\n>> I suppose you mean this chunk:\n>>\n>> /*\n>> * See how many batches we can fit into memory (driven mostly by size\n>> * of BufFile, with PGAlignedBlock being the largest part of that).\n>> * We need one BufFile for inner and outer side, so we count it twice\n>> * for each batch, and we stop once we exceed (work_mem/2).\n>> */\n>> while ((nbatch_inmemory * 2) * sizeof(PGAlignedBlock) * 2\n>> <= (work_mem * 1024L / 2))\n>> nbatch_inmemory *= 2;\n>>\n>> Yeah, that comment is a bit confusing. 
What the code actually does is\n>> computing the largest \"slice\" of batches for which we can keep the\n>> BufFile structs in memory, without exceeding work_mem/2.\n>>\n>> Maybe the nbatch_inmemory should be renamed to nbatch_slice, not sure.\n>>\n>\n>I definitely would prefer to see hashtable->nbatch_inmemory renamed to\n>hashtable->nbatch_slice--or maybe hashtable->nbuff_inmemory?\n>\n>I've been poking around the code for awhile today, and, even though I\n>know that the nbatch_inmemory is referring to the buffiles that can\n>fit in memory, I keep forgetting and thinking it is referring to the\n>tuple data that can fit in memory.\n>\n\nThat's a fair point. I think nbatch_slice is a good name.\n\n>It might be worth explicitly calling out somewhere in the comments\n>that overflow slices will only be created either when the number of\n>batches was underestimated as part of ExecHashIncreaseNumBatches and\n>the new number of batches exceeds the value for\n>hashtable->nbatch_inmemory or when creating the hashtable initially\n>and the number of batches exceeds the value for\n>hashtable->nbatch_inmemory (the name confuses this for me at hashtable\n>creation time especially) -- the number of actual buffiles that can be\n>managed in memory.\n>\n\nYes, this definitely needs to be explained somewhere - possibly in a\ncomment at the beginning of nodeHash.c or something like that.\n\nFWIW I wonder if this \"slicing\" would be useful even with correct\nestimates. E.g. let's say we can fit 128 batches into work_mem, but we\nexpect to need 256 (and it's accurate). At that point it's probably too\naggressive to disable hash joins - a merge join is likely more expensive\nthan just using the slicing. But that should be a cost-based decision.\n\n>\n>>\n>> Attached is an updated patch, fixing this. I tried to clarify some of\n>> the comments too, and I fixed another bug I found while running the\n>> regression tests. 
It's still very much a crappy PoC code, though.\n>>\n>>\n>So, I ran the following example on master and with your patch.\n>\n>drop table foo;\n>drop table bar;\n>create table foo(a int, b int);\n>create table bar(c int, d int);\n>insert into foo select i, i from generate_series(1,10000)i;\n>insert into bar select 1, 1 from generate_series(1,1000)i;\n>insert into bar select i%3, i%3 from generate_series(1000,10000)i;\n>insert into foo select 1,1 from generate_series(1,1000)i;\n>analyze foo; analyze bar;\n>set work_mem=64;\n>\n>On master, explain analyze looked like this\n>\n>postgres=# explain analyze verbose select * from foo, bar where a = c;\n> QUERY PLAN\n>\n>--------------------------------------------------------------------------------------------------------------------------\n> Hash Join (cost=339.50..53256.27 rows=4011001 width=16) (actual\n>time=28.962..1048.442 rows=4008001 loops=1)\n> Output: foo.a, foo.b, bar.c, bar.d\n> Hash Cond: (bar.c = foo.a)\n> -> Seq Scan on public.bar (cost=0.00..145.01 rows=10001 width=8)\n>(actual time=0.030..1.777 rows=10001 loops=1)\n> Output: bar.c, bar.d\n> -> Hash (cost=159.00..159.00 rows=11000 width=8) (actual\n>time=12.285..12.285 rows=11000 loops=1)\n> Output: foo.a, foo.b\n> Buckets: 2048 (originally 2048) Batches: 64 (originally 16)\n> Memory Usage: 49kB\n> -> Seq Scan on public.foo (cost=0.00..159.00 rows=11000 width=8)\n>(actual time=0.023..3.786 rows=11000 loops=1)\n> Output: foo.a, foo.b\n> Planning Time: 0.435 ms\n> Execution Time: 1206.904 ms\n>(12 rows)\n>\n>and with your patch, it looked like this.\n>\n>postgres=# explain analyze verbose select * from foo, bar where a = c;\n> QUERY PLAN\n>\n>--------------------------------------------------------------------------------------------------------------------------\n> Hash Join (cost=339.50..53256.27 rows=4011001 width=16) (actual\n>time=28.256..1102.026 rows=4008001 loops=1)\n> Output: foo.a, foo.b, bar.c, bar.d\n> Hash Cond: (bar.c = foo.a)\n> -> Seq Scan on 
public.bar (cost=0.00..145.01 rows=10001 width=8)\n>(actual time=0.040..1.717 rows=10001 loops=1)\n> Output: bar.c, bar.d\n> -> Hash (cost=159.00..159.00 rows=11000 width=8) (actual\n>time=12.327..12.327 rows=11000 loops=1)\n> Output: foo.a, foo.b\n> Buckets: 2048 (originally 2048) Batches: 16384 (originally 16,\n>in-memory 2) Memory Usage: 131160kB\n> -> Seq Scan on public.foo (cost=0.00..159.00 rows=11000 width=8)\n>(actual time=0.029..3.569 rows=11000 loops=1)\n> Output: foo.a, foo.b\n> Planning Time: 0.260 ms\n> Execution Time: 1264.995 ms\n>(12 rows)\n>\n>I noticed that the number of batches is much higher with the patch,\n>and, I was checking $PGDATA/base/pgsql_tmp and saw that the number of\n>temp files which are the overflow files any given time was quite high.\n>\n>I would imagine that the desired behaviour is to keep memory usage\n>within work_mem.\n\nThere's definitely something fishy going on. I suspect it's either because\nof the duplicate values (which might fit into 64kB on master, but not when\naccounting for BufFile). Or maybe it's because the initial 16 batches\ncan't possibly fit into work_mem.\n\nIf you try with a larger work_mem, say 256kB, does that behave OK?\n\n>In this example, the number of slices is about 8000, each of which\n>would have an overflow file. Is this the case you mention in the\n>comment in ExecChooseHashTableSize ?\n>\n>* We ignore (per-slice)\n>* overflow files, because those serve as \"damage control\" for cases\n>* when per-batch BufFiles would exceed work_mem. Given enough batches\n>* it's impossible to enforce work_mem strictly, because the overflow\n>* files alone will consume more memory.\n>\n\nYes. 8000 slices is ~64MB, so considering we need them on both sides of\nthe join that'd be ~128MB. Which is pretty much exactly 131160kB.\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n\n",
"msg_date": "Wed, 22 May 2019 13:19:23 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: accounting for memory used for BufFile during hash joins"
},
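Editor's note: the overflow-file arithmetic Tomas does at the end of the message above can be reproduced with a short sketch. This hypothetical Python fragment (per-BufFile overhead assumed to be one 8kB PGAlignedBuffer; names are invented) shows why 16384 batches with a 2-batch slice lands near the 131160kB Memory Usage reported in the EXPLAIN output:

```python
BUFFILE_BYTES = 8192  # assumed per-BufFile overhead, dominated by PGAlignedBuffer

def overflow_file_memory_kb(nbatch, nbatch_slice):
    # One overflow BufFile per slice, needed on both the inner and outer
    # side of the join -- memory that cannot be capped by work_mem.
    nslices = nbatch // nbatch_slice
    return nslices * BUFFILE_BYTES * 2 // 1024

print(overflow_file_memory_kb(16384, 2))  # -> 131072 (kB)
```

16384 batches / 2 per slice = 8192 slices, ~64MB of overflow BufFiles per side, ~128MB total: within rounding of the reported 131160kB.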
{
"msg_contents": "On Sat, May 4, 2019 at 8:34 AM Tomas Vondra <tomas.vondra@2ndquadrant.com>\nwrote:\n\n> The root cause is that hash join treats batches as pretty much free, but\n> that's not really true - we do allocate two BufFile structs per batch,\n> and each BufFile is ~8kB as it includes PGAlignedBuffer.\n>\n> The OOM is not very surprising, because with 524288 batches it'd need\n> about 8GB of memory, and the system only has 8GB RAM installed.\n>\n> The second patch tries to enforce work_mem more strictly. That would be\n> impossible if we were to keep all the BufFile structs in memory, so\n> instead it slices the batches into chunks that fit into work_mem, and\n> then uses a single \"overflow\" file for slices currently not in memory.\n> These extra slices can't be counted into work_mem, but we should need\n> just very few of them. For example with work_mem=4MB the slice is 128\n> batches, so we need 128x less overflow files (compared to per-batch).\n>\n>\nHi Tomas\n\nI read your second patch which uses overflow buf files to reduce the total\nnumber of batches.\nIt would solve the hash join OOM problem what you discussed above: 8K per\nbatch leads to batch bloating problem.\n\nI mentioned in another thread:\n\nhttps://www.postgresql.org/message-id/flat/CAB0yrekv%3D6_T_eUe2kOEvWUMwufcvfd15SFmCABtYFOkxCFdfA%40mail.gmail.com\nThere is another hashjoin OOM problem which disables splitting batches too\nearly. PG uses a flag hashtable->growEnable to determine whether to split\nbatches. Once one splitting failed(all the tuples are assigned to only one\nbatch of two split ones) The growEnable flag would be turned off forever.\n\nThe is an opposite side of batch bloating problem. 
It only contains too few\nbatches and makes the in-memory hash table too large to fit into memory.\n\nHere is the tradeoff: one batch takes more than 8KB (8KB makes sense, due to\nperformance), the in-memory hash table takes memory as well, and splitting\nbatches may (but need not) reduce the in-memory hash table size while introducing\nmore batches (and thus more memory usage, 8KB*#batch).\nCan we conclude that splitting is worthwhile only if:\n(The reduced memory of in-memory hash table) - (8KB * number of new\nbatches) > 0\n\nSo I'm considering combining our patch with your patch to fix the join OOM\nproblem, no matter whether the OOM is introduced by (the memory usage of in-memory\nhash table) or (8KB * number of batches).\n\nnbatch_inmemory in your patch could also be redefined using the rule above.\n\nWhat's your opinion?\n\nThanks\n\nHubert Zhang",
"msg_date": "Tue, 28 May 2019 15:40:01 +0800",
"msg_from": "Hubert Zhang <hzhang@pivotal.io>",
"msg_from_op": false,
"msg_subject": "Re: accounting for memory used for BufFile during hash joins"
},
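Editor's note: Hubert's proposed splitting criterion above can be written down directly. A hypothetical sketch of the decision rule (function and parameter names are invented; the 8KB figure is the per-batch BufFile overhead discussed in the thread):

```python
BUFFILE_BYTES = 8192  # per-batch BufFile overhead, as discussed in the thread

def worth_splitting(hashtable_bytes_freed, new_batches):
    # Split batches only if the memory released from the in-memory hash
    # table exceeds the BufFile overhead added by the new batches.
    return hashtable_bytes_freed - BUFFILE_BYTES * new_batches > 0

print(worth_splitting(1_000_000, 64))  # frees ~1MB, costs 512kB -> True
print(worth_splitting(100_000, 64))    # frees 100kB, costs 512kB -> False
```

This captures why a failed split (all tuples land in one child batch, so hashtable_bytes_freed is ~0) should disable further growth, while also charging each split for the 8KB-per-batch cost that the growEnable flag alone ignores.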
{
"msg_contents": "Hi Tomas,\n\nHere is the patch, it's could be compatible with your patch and it focus on\nwhen to regrow the batch.\n\n\nOn Tue, May 28, 2019 at 3:40 PM Hubert Zhang <hzhang@pivotal.io> wrote:\n\n> On Sat, May 4, 2019 at 8:34 AM Tomas Vondra <tomas.vondra@2ndquadrant.com>\n> wrote:\n>\n>> The root cause is that hash join treats batches as pretty much free, but\n>> that's not really true - we do allocate two BufFile structs per batch,\n>> and each BufFile is ~8kB as it includes PGAlignedBuffer.\n>>\n>> The OOM is not very surprising, because with 524288 batches it'd need\n>> about 8GB of memory, and the system only has 8GB RAM installed.\n>>\n>> The second patch tries to enforce work_mem more strictly. That would be\n>> impossible if we were to keep all the BufFile structs in memory, so\n>> instead it slices the batches into chunks that fit into work_mem, and\n>> then uses a single \"overflow\" file for slices currently not in memory.\n>> These extra slices can't be counted into work_mem, but we should need\n>> just very few of them. For example with work_mem=4MB the slice is 128\n>> batches, so we need 128x less overflow files (compared to per-batch).\n>>\n>>\n> Hi Tomas\n>\n> I read your second patch which uses overflow buf files to reduce the total\n> number of batches.\n> It would solve the hash join OOM problem what you discussed above: 8K per\n> batch leads to batch bloating problem.\n>\n> I mentioned in another thread:\n>\n> https://www.postgresql.org/message-id/flat/CAB0yrekv%3D6_T_eUe2kOEvWUMwufcvfd15SFmCABtYFOkxCFdfA%40mail.gmail.com\n> There is another hashjoin OOM problem which disables splitting batches too\n> early. PG uses a flag hashtable->growEnable to determine whether to split\n> batches. Once one splitting failed(all the tuples are assigned to only one\n> batch of two split ones) The growEnable flag would be turned off forever.\n>\n> The is an opposite side of batch bloating problem. 
It only contains too\n> few batches and makes the in-memory hash table too large to fit into memory.\n>\n> Here is the tradeoff: one batch takes more than 8KB(8KB makes sense, due\n> to performance), in-memory hash table takes memory as well and splitting\n> batched may(not must) reduce the in-memory hash table size but introduce\n> more batches(and thus more memory usage 8KB*#batch).\n> Can we conclude that it would be worth to splitting if satisfy:\n> (The reduced memory of in-memory hash table) - (8KB * number of new\n> batches) > 0\n>\n> So I'm considering to combine our patch with your patch to fix join OOM\n> problem. No matter the OOM is introduced by (the memory usage of\n> in-memory hash table) or (8KB * number of batches).\n>\n> nbatch_inmemory in your patch could also use the upper rule to redefine.\n>\n> What's your opinion?\n>\n> Thanks\n>\n> Hubert Zhang\n>\n\n\n-- \nThanks\n\nHubert Zhang",
"msg_date": "Tue, 28 May 2019 17:39:58 +0800",
"msg_from": "Hubert Zhang <hzhang@pivotal.io>",
"msg_from_op": false,
"msg_subject": "Re: accounting for memory used for BufFile during hash joins"
},
{
"msg_contents": "On Tue, May 28, 2019 at 03:40:01PM +0800, Hubert Zhang wrote:\n>On Sat, May 4, 2019 at 8:34 AM Tomas Vondra <tomas.vondra@2ndquadrant.com>\n>wrote:\n>\n>Hi Tomas\n>\n>I read your second patch which uses overflow buf files to reduce the total\n>number of batches.\n>It would solve the hash join OOM problem that you discussed above: 8K per\n>batch leads to the batch bloating problem.\n>\n>I mentioned in another thread:\n>\n>https://www.postgresql.org/message-id/flat/CAB0yrekv%3D6_T_eUe2kOEvWUMwufcvfd15SFmCABtYFOkxCFdfA%40mail.gmail.com\n>There is another hashjoin OOM problem which disables splitting batches too\n>early. PG uses a flag hashtable->growEnable to determine whether to split\n>batches. Once one splitting failed (all the tuples are assigned to only one\n>batch of the two split ones), the growEnable flag would be turned off forever.\n>\n>This is the opposite side of the batch bloating problem. It only contains too few\n>batches and makes the in-memory hash table too large to fit into memory.\n>\n\nYes. There are definitely multiple separate issues in the hashjoin code,\nand the various improvements discussed in this (and other) thread usually\naddress just a subset of them. We need to figure out how to combine them\nor maybe devise some more generic solution.\n\nSo I think we need to take a step back, and figure out how to combine\nthese improvements - otherwise we might commit a fix for one issue, making\nit much harder/impossible to improve the other issues.\n\nThe other important question is whether we see these cases as outliers\n(and the solutions as last-resort-attempt-to-survive kind of fix) or more\nwidely applicable optimizations. I've seen some interesting speedups with\nthe overflow-batches patch, but my feeling is we should really treat it as\na last resort to survive.\n\nI had a chat about this with Thomas Munro yesterday. 
Unfortunately, some\nbeer was involved but I do vaguely remember he more or less convinced me\nthe BNL (block nested loop join) might be the right approach here. We\ndon't have any patch for that yet, though :-(\n\n>Here is the tradeoff: one batch takes more than 8KB(8KB makes sense, due to\n>performance), in-memory hash table takes memory as well and splitting\n>batched may(not must) reduce the in-memory hash table size but introduce\n>more batches(and thus more memory usage 8KB*#batch).\n>Can we conclude that it would be worth to splitting if satisfy:\n>(The reduced memory of in-memory hash table) - (8KB * number of new\n>batches) > 0\n>\n\nSomething like that, yes.\n\n>So I'm considering to combine our patch with your patch to fix join OOM\n>problem. No matter the OOM is introduced by (the memory usage of in-memory\n>hash table) or (8KB * number of batches).\n>\n>nbatch_inmemory in your patch could also use the upper rule to redefine.\n>\n>What's your opinion?\n>\n\nOne of the issues with my \"overflow batches\" patch, pointed out to me by\nThomas yesterday, is that it only works with non-parallel hash join. And\nwe don't know how to make it work in the parallel mode :-(\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n\n",
"msg_date": "Wed, 29 May 2019 17:11:53 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: accounting for memory used for BufFile during hash joins"
},
{
"msg_contents": "Okay, so, while I do have specific, actual code review/commitfest-y\nfeedback for the patch in this thread registered for this commitfest,\nI wanted to defer that for a later email and use this one to cover off\non a few higher level issues.\n\n1) How this patch's approach fits into the wider set of problems with\nhybrid hashjoin.\n\n2) Parallel HashJoin implementation of this patch's approach\n\nI think implementing support for parallel hashjoin or explicitly\ndisabling it would be the bare minimum for this patch, which is why I\nmade 2 its own item. I've marked it as returned to author for this\nreason.\n\nI do think that accounting for Buffile overhead when estimating the\nsize of the hashtable during ExecChooseHashTableSize() so it can be\nused during planning is a worthwhile patch by itself (though I know it\nis not even part of this patch).\n\nI'll start with 2 since I have less to say there.\n\nFrom comments upthread, I take it this would not work with parallel\nhashjoin as expected. Is this because each worker operates on batches\nindependently and now batches are lumped into slices?\n\nThinking through a parallel-aware implementation, it seems like you\nwould use slice-based barriers for the build phase but batch-based\nbarriers for the probe phase to avoid getting out of sync (workers\nwith outer tuples from one batch should not try and join those with\ntuples from another batch, even if in the same slice).\n\nYou would, of course, need to add code to make slices work with\nSharedTuplestore--caveat here is I still haven't tried to understand\nhow parallel-aware hashjoin works/uses SharedTuplestore.\n\nNow, addressing 1, how this patch fits into the wider set of problems\nwith current hybrid hashjoin:\n\nThomas Munro nicely summarized roughly what I'm about to lay out like\nthis (upthread) -- he called them \"three separate but related\nproblems\":\n\n> A. 
Broken escape valve: sometimes we generate a huge number of\n> batches while trying to split up many duplicates, because of the\n> presence of other more uniformly distributed keys. We could fix that\n> with (say) a 95% rule.\n> B. Lack of good alternative execution strategy when the escape valve\n> is triggered. A batch cannot be split effectively, but cannot fit in\n> work_mem, so for now we decide to ignore work_mem.\n> C. Unmetered explosion of batches and thus BufFiles, probably usually\n> caused by problem A, but theoretically also due to a real need for\n> partitions.\n\nHowever, I would like to lay out the problem space a little bit\ndifferently (using the end of the alphabet to differentiate).\n\nThe following scenarios are how you could end up running out of\nmemory:\n\nY. Plan-time underestimation of the number of required batches with\nrelatively uniform data distribution\n\nIn this case, the best join execution strategy is a plain hashjoin\nwith spilling as needed.\nnbatches should be increased as needed, because the data is ~evenly\ndistributed.\nslicing should be employed when buffile overhead exceeds some\nthreshold for the ratio of work_mem to be used for buffile overhead\n\nZ. 
Plan- and/or execution-time underestimation of the number of\nrequired batches with skewed data\n\nIf you knew this at planning time, you could have picked another\njoin type, though there might be cases where it would actually be\nless costly to use plain hashjoin for all batches except the bad batch\nand fall back to hash block nested loop join just for the duplicate\nvalues.\n\nIf you could not have known this at planning time, the best join\nexecution strategy is a hybrid hashjoin/hash block nested loop join.\n\nTo do this, preview if increasing nbatches would move tuples, and, if\nit would, do this (also, employing slicing if buffile overhead exceeds\nthe threshold)\n\nIf increasing nbatches wouldn't move tuples, process this batch with\nhash block nested loop join.\n\nEssentially, what we want is logical units of tuples which are\nwork_mem-sized. In some cases, each unit may contain multiple batches\n(a slice in Tomas' patch) and in other cases, each unit may contain\nonly part of a batch (a chunk is the term I used in my hash block\nnested loop join patch [1]).\n\nFor slicing, each unit, a slice, has multiple batches but one spill\nfile.\nFor hbnlj, each unit, a chunk, is one of multiple chunks in a single\nbatch, all of which are in the same spill file (1 batch = 1 spill\nfile).\n\nThinking through it, it seems to make the most sense to split the work\ninto ~ 3 independent pieces:\n\npatch1 - \"preview\" a batch increase (not yet written [I think])\npatch2 - slicing (Tomas' patch [2] but add in a threshold for the portion of\nwork_mem that buffile overhead is using)\npatch3 - hash block nested loop join (my patch [1])\n\npatch1 allows us to re-enable growth and was mentioned upthread, but I\nwill quote it here for simplicity:\n\n> I think we can fix A by relaxing the escape valve condition, and then\n> rechecking it once in a while. 
So we fill work_mem, realize it didn't\n> actually reduce the batch size significantly and disable nbatch growth.\n> But at the same time we increase the threshold to 2x work_mem, and after\n> reaching it we \"consider\" a nbatch increase. That is, we walk the batch\n> and see how many tuples would move if we increased nbatch (that should be\n> fairly cheap) - if it helps, great, enable growth and split the batch. If\n> not, double the threshold again. Rinse and repeat.\n\nWe don't want to fill up work_mem with buffile overhead after\nincreasing nbatches many times just to move a few tuples for one batch\nand end up disabling growth thus making it so that later we can't\nincrease nbatches and repartition for a batch that would nicely\npartition (like Hubert's case, I believe [3]).\n\nWe want to identify when re-partitioning would help and only do it\nthen and, for times when it wouldn't help, use a fallback strategy\nthat still allows progress on the hashjoin, and, for some spiky data,\nwhere we have re-partitioned for the right reasons, but there are\nstill a lot of batches that are small enough that they could all fit\nin memory at once, we want to track them with as little overhead as\npossible -- lump them into slices.\n\nWe should probably consider deciding to use slices based on some\nthreshold for the portion of work_mem which is allowed to be occupied\nby buffile overhead instead of waiting until the buffile overhead is\nliterally taking up most of work_mem.\n\nThe above summary is to address the concern in this thread about a\nholistic solution.\n\nI think the slicing patch is independent of both the hash block nested\nloop join patch and the \"preview\" mode for batch increasing.\n\nIf slicing is made to work for parallel-aware hashjoin and the code is\nin a committable state (and probably has the threshold I mentioned\nabove), then I think that this patch should go 
in.\n\n[1]\nhttps://www.postgresql.org/message-id/CAAKRu_ZkkukQgXCK8ADe-PmvcmpZh6G1Uin8pqqovL4x7P30mQ%40mail.gmail.com\n[2]\nhttps://www.postgresql.org/message-id/20190508150844.rij36rtuk4lhvztw%40development\n[3]\nhttps://www.postgresql.org/message-id/CAB0yre%3De8ysPyoUvZqjKYAxc6-VB%3DJKHL-7XKZSxy0FT5vY7BQ%40mail.gmail.com\n\n-- \nMelanie Plageman",
"msg_date": "Wed, 10 Jul 2019 16:51:02 -0700",
"msg_from": "Melanie Plageman <melanieplageman@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: accounting for memory used for BufFile during hash joins"
},
{
"msg_contents": "On Wed, Jul 10, 2019 at 04:51:02PM -0700, Melanie Plageman wrote:\n>Okay, so, while I do have specific, actual code review/commitfest-y\n>feedback for the patch in this thread registered for this commitfest,\n>I wanted to defer that for a later email and use this one to cover off\n>on a few higher level issues.\n>\n>1) How this patch's approach fits into the wider set of problems with\n>hybrid hashjoin.\n>\n>2) Parallel HashJoin implementation of this patch's approach\n>\n>I think implementing support for parallel hashjoin or explicitly\n>disabling it would be the bare minimum for this patch, which is why I\n>made 2 its own item. I've marked it as returned to author for this\n>reason.\n>\n\nOK. I'm a bit confused / unsure what exactly our solution to the various\nhashjoin issues is. I have not been paying attention to all the various\nthreads, but I thought we kinda pivoted to the BNL approach, no? I'm not\nagainst pushing this patch (the slicing one) forward and then maybe add\nBNL on top.\n\n>I do think that accounting for Buffile overhead when estimating the\n>size of the hashtable during ExecChooseHashTableSize() so it can be\n>used during planning is a worthwhile patch by itself (though I know it\n>is not even part of this patch).\n>\n\n+1 to that\n\n>I'll start with 2 since I have less to say there.\n>\n>From comments upthread, I take it this would not work with parallel\n>hashjoin as expected. 
Is this because each worker operates on batches\n>independently and now batches are lumped into slices?\n>\n>Thinking through a parallel-aware implementation, it seems like you\n>would use slice-based barriers for the build phase but batch-based\n>barriers for the probe phase to avoid getting out of sync (workers\n>with outer tuples from one batch should not try and join those with\n>tuples from another batch, even if in the same slice).\n>\n>You would, of course, need to add code to make slices work with\n>SharedTuplestore--caveat here is I still haven't tried to understand\n>how parallel-aware hashjoin works/uses SharedTuplestore.\n>\n\nI don't know. I haven't thought about the parallel version very much. I\nwonder if Thomas Munro has some thoughts about it ...\n\n>Now, addressing 1, how this patch fits into the wider set of problem's\n>with current hybrid hashjoin:\n>\n>Thomas Munro nicely summarized roughly what I'm about to lay out like\n>this (upthread) -- he called them \"three separate but related\n>problems\":\n>\n>> A. Broken escape valve: sometimes we generate a huge number of\n>> batches while trying to split up many duplicates, because of the\n>> presence of other more uniformly distributed keys. We could fix that\n>> with (say) a 95% rule.\n>> B. Lack of good alternative execution strategy when the escape valve\n>> is triggered. A batch cannot be split effectively, but cannot fit in\n>> work_mem, so for now we decide to ignore work_mem.\n>> C. Unmetered explosion of batches and thus BufFiles, probably usually\n>> caused by problem A, but theoretically also due to a real need for\n>> partitions.\n>\n>However, I would like to lay out the problem space a little bit\n>differently. (using the end of the alphabet to differentiate).\n>\n>The following scenarios are how you could end up running out of\n>memory:\n>\n>Y. 
Plan-time underestimation of the number of required batches with\n>relatively uniform data distribution\n>\n>In this case, the best join execution strategy is a plain hashjoin\n>with spilling as needed.\n>nbatches should be increased as needed, because the data is ~evenly\n>distributed.\n>slicing should be employed when buffile overhead exceeds some\n>threshhold for the ratio of work_mem to be used for buffile overhead\n>\n\nOK, makes sense. But at some point we get so many slices the overflow\nfiles alone use more than work_mem. Of course, to hit that the\nunderestimate needs to be sufficiently serious. My understanding was we'll\nroll until that point and then switch to BNL.\n\n>Z. Plan and or execution time underestimation of the number of\n>required batches with skewed data\n>\n>If you knew this at planning time, you could have picked another\n>join-type, though, there might be cases where it would actually be\n>less costly to use plain hashjoin for all batches except the bad batch\n>and fall back to hash block nested loop join just for the duplicate\n>values.\n>\n>If you could not have known this at planning time, the best join\n>execution strategy is a hybrid hashjoin/hash block nested loop join.\n>\n>To do this, preview if increasing nbatches would move tuples, and, if\n>it would, do this (also, employing slicing if buffile overhead exceeds\n>the threshold)\n>\n>If increasing nbatches wouldn't move tuples, process this batch with\n>hash block nested loop join.\n>\n\nOK.\n\n>Essentially, what we want is logical units of tuples which are\n>work_mem-sized. In some cases, each unit may contain multiple batches\n>(a slice in Tomas' patch) and in other cases, each unit may contain\n>only part of a batch (a chunk is the term I used in my hash block\n>nested loop join patch [1]).\n>\n\nOK, although with slicing the work_mem-sized unit is still one batch. 
The\nslice just ensures the metadata we need to keep in memory does not grow as\nO(N) with the number of batches (instead it's O(log(N)) I think).\n\n>For slicing, each unit, a slice, has multiple batches but one spill\n>file.\n>For hbnlj, each unit, a chunk, is one of multiple chunks in a single\n>batch, all of which are in the same spill file (1 batch = 1 spill\n>file).\n>\n\nYep.\n\n>Thinking through it, it seems to make the most sense to split the work\n>into ~ 3 independent pieces:\n>\n>patch1 - \"preview\" a batch increase (not yet written [I think])\n>patch2 - slicing (Tomas' patch [2] but add in threshhold for portion of\n>work_mem buffile overhead is using)\n>patch3 - hash block nested loop join (my patch [1])\n>\n>patch1 allows us to re-enable growth and was mentioned upthread, but I\n>will quote it here for simplicity:\n>\n>> I think we can fix A by relaxing the escape valve condition, and then\n>> rechecking it once in a while. So we fill work_mem, realize it didn't\n>> actually reduce the batch size significantly and disable nbatch growth.\n>> But at the same time we increase the threshold to 2x work_mem, and after\n>> reaching it we \"consider\" a nbatch increase. That is, we walk the batch\n>> and see how many tuples would move if we increased nbatch (that should be\n>> fairly cheap) - if it helps, great, enable growth and split the batch. If\n>> not, double the threshold again. Rinse and repeat.\n>\n>We don't want to fill up work_mem with buffile overhead after\n>increasing nbatches many times just to move a few tuples for one batch\n>and end up disabling growth thus making it so that later we can't\n>increase nbatches and repartition for a batch that would nicely\n>partition (like Hubert's case, I believe [3]).\n>\n\nYes, this seems like a very reasonable plan. Also, I now see it actually\nexplains what the plan with BNL vs. 
slicing is.\n\n>We want to identify when re-partitioning would help and only do it\n>then and, for times when it wouldn't help, use a fallback strategy\n>that still allows progress on the hashjoin, and, for some spiky data,\n>where we have re-partitioned for the right reasons, but there are\n>still a lot of batches that are small enough that they could all fit\n>in memory at once, we want to track them with as little overhead as\n>possible -- lump them into slices.\n>\n>We should probably consider deciding to use slices based on some\n>threshold for the portion of work_mem which is allowed to be occupied\n>by buffile overhead instead of waiting until the buffile overhead is\n>literally taking up most of work_mem.\n>\n\nBut that heuristics is already there, no? That's the \"Don't use more than\n2*work_mem/3 for batch BufFiles\" at which point we start adding slices.\n\n>The above summary is to address the concern in this thread about a\n>holistic solution.\n>\n>I think the slicing patch is independent of both the hash block nested\n>loop join patch and the \"preview\" mode for batch increasing.\n>\n>If slicing is made to work for parallel-aware hashjoin and the code is\n>in a committable state (and probably has the threshold I mentioned\n>above), then I think that this patch should go in.\n>\n\nYes, I think this seems like a good plan.\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n\n",
"msg_date": "Thu, 11 Jul 2019 17:40:11 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: accounting for memory used for BufFile during hash joins"
},
{
"msg_contents": "On Mon, May 6, 2019 at 9:49 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> Stepping back a bit, I think there is something fishy about the way we\n> detect extreme skew. Is that a factor in this case? Right now we\n> wait until we have a batch that gets split into child batches\n> containing exactly 0% and 100% of the tuples before we give up.\n> Previously I had thought of that as merely a waste of time, but\n> clearly it's also a waste of unmetered memory. Oops.\n>\n> I think our extreme skew detector should go off sooner, because\n> otherwise if you have N nicely distributed unique keys and also M\n> duplicates of one bad egg key that'll never fit in memory, we keep\n> repartitioning until none of the N keys fall into the batch containing\n> the key for the M duplicates before we give up! You can use\n> balls-into-bins maths to figure out the number, but I think that means\n> we expect to keep splitting until we have N * some_constant batches,\n> and that's just silly and liable to create massive numbers of\n> partitions proportional to N, even though we're trying to solve a\n> problem with M. In another thread I suggested we should stop when\n> (say) 95% of the tuples go to one child batch. I'm not sure how you\n> pick the number.\n\nAnother thing that is fishy about this is that we can't split a batch\nor a bucket without splitting them all. Let's say that nbatches *\nnbuckets = 16 million. One bucket in one batch contains 90% of the\ntuples. Splitting *that* bucket might be a good idea if only 5% of the\ntuples end up moving, perhaps even if only 1% end up moving. But, if\nyou have to double the total number of batches to get that benefit,\nit's a lot less compelling, because now you have to rescan the outer\nside more times.\n\nI wonder whether we should be dividing things into batches unevenly,\nbased on the distribution of tuples we've seen so far. For example,\nsuppose we've gotten to 1024 buckets and that's all we can fit in\nmemory. 
If we decide to go to 2 batches, we'll use the next bit of the\nhash key to decide which things go into batch 0 and which things go\ninto batch 1. But if we know that 50% of the data is in bucket 17, why\nare we not making bucket 17 into a batch and everything else into\nanother batch? Then, when we process the batch that was derived from\nbucket-17, we can use 10 completely new bits from the hash key to\nslice the data from that bucket as finely as possible.\n\nNow the bucket might be entirely duplicates, in which case no number\nof additional bits will help. However, even in that case it's still a\ngood idea to make it its own batch, and then use some other algorithm\nto process that batch. And if it's *not* entirely duplicates, but\nthere are say 2 or 3 really common values that unluckily hash to the\nsame bucket, then being able to use a lot more bits for that portion\nof the data gives us the best chance of managing to spread it out into\ndifferent buckets.\n\nSimilarly, if we split the hash join into four batches, and batch 0\nfits in memory but batch 1 does not, we cannot further split batch 1\nwithout splitting batch 2 and batch 3 also. That's not good either,\nbecause those batches might be small and not need splitting.\n\nI guess what I'm trying to say is that our algorithms for dealing with\nmis-estimation seem to be largely oblivious to the problem of skew,\nand I don't think the problem is confined to extreme skew. Suppose you\nhave some data that is only moderately skewed, so that when you build\na hash table with 1024 buckets, 25% of the data is in buckets 0-19,\n25% in buckets 20-768, 25% in buckets 769-946, and the last 25% in\nbuckets 947-1023. If you knew that, then when you discover that the\ndata is 4x too large to fit in memory, you can divide the data into 4\nbatches using those bucket number ranges, and get it done in exactly 4\nbatches. 
As it is, you'll need to split until every uniform range of\nbuckets fits in memory: 0-31 is going to be too big a range, so you're\ngoing to go with 0-15, which means you'll have 64 batches instead of\n4.\n\nIt seems to me that a good chunk of what's being proposed right now\nbasically ignores the fact that we're not really responding to the\nskew in a very effective way. Thomas wants to stop splitting all the\nbuckets when splitting one of the buckets produces only a very small\nbenefit rather than when it produces no benefit at all, but he's not\nasking why we're splitting all of the buckets in the first place.\nTomas wants to slice the array of batches because there are so many of\nthem, but why are there so many? As he said himself, \"it gets to that\nmany batches because some of the values are very common and we don't\ndisable the growth earlier.\" Realistically, I don't see how there can\nbe so many batches that we can't even fit the metadata about the\nmatches into memory unless we're unnecessarily creating a lot of\nlittle tiny batches that we don't really need.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Thu, 11 Jul 2019 13:16:11 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: accounting for memory used for BufFile during hash joins"
},
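Robert's 25%-per-range example can be checked numerically. Below is a minimal Python sketch (the absolute tuple counts are invented; only the proportions follow the example above) comparing the current uniform split-every-bucket-range scheme with a greedy skew-aware packing of consecutive buckets into batches:

```python
TOTAL = 1_000_000
MEM = TOTAL // 4                      # data is 4x too large for memory
# Robert's distribution: 25% of the tuples in each of four uneven ranges
ranges = [(0, 19), (20, 768), (769, 946), (947, 1023)]
counts = [0] * 1024
for lo, hi in ranges:
    for b in range(lo, hi + 1):
        counts[b] = (TOTAL // 4) // (hi - lo + 1)   # ignore rounding crumbs

def uniform_batches(counts, mem):
    """Current scheme: double nbatch (halving every bucket range)
    until each uniform range of buckets fits in memory."""
    nbatch = 1
    while True:
        width = len(counts) // nbatch
        if all(sum(counts[i * width:(i + 1) * width]) <= mem
               for i in range(nbatch)):
            return nbatch
        nbatch *= 2

def skew_aware_batches(counts, mem):
    """Hypothetical alternative: greedily pack consecutive buckets into
    a batch until the next bucket would overflow memory.  (A single
    bucket larger than memory would still need finer splitting.)"""
    batches, cur = 1, 0
    for c in counts:
        if cur + c > mem:
            batches, cur = batches + 1, 0
        cur += c
    return batches

print(uniform_batches(counts, MEM))     # 64 batches, as predicted above
print(skew_aware_batches(counts, MEM))  # 4 batches
```

Running this reproduces the "64 batches instead of 4" outcome: the uniform scheme is forced down to 16-bucket ranges by the dense 0-19 region alone.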
{
"msg_contents": "On Fri, Jul 12, 2019 at 1:16 AM Robert Haas <robertmhaas@gmail.com> wrote:\n\n> On Mon, May 6, 2019 at 9:49 PM Thomas Munro <thomas.munro@gmail.com>\n> wrote:\n> > Stepping back a bit, I think there is something fishy about the way we\n> > detect extreme skew. Is that a factor in this case? Right now we\n> > wait until we have a batch that gets split into child batches\n> > containing exactly 0% and 100% of the tuples before we give up.\n> > Previously I had thought of that as merely a waste of time, but\n> > clearly it's also a waste of unmetered memory. Oops.\n> >\n> > I think our extreme skew detector should go off sooner, because\n> > otherwise if you have N nicely distributed unique keys and also M\n> > duplicates of one bad egg key that'll never fit in memory, we keep\n> > repartitioning until none of the N keys fall into the batch containing\n> > the key for the M duplicates before we give up! You can use\n> > balls-into-bins maths to figure out the number, but I think that means\n> > we expect to keep splitting until we have N * some_constant batches,\n> > and that's just silly and liable to create massive numbers of\n> > partitions proportional to N, even though we're trying to solve a\n> > problem with M. In another thread I suggested we should stop when\n> > (say) 95% of the tuples go to one child batch. I'm not sure how you\n> > pick the number.\n>\n> Another thing that is fishy about this is that we can't split a batch\n> or a bucket without splitting them all. Let's say that nbatches *\n> nbuckets = 16 million. One bucket in one batch contains 90% of the\n> tuples. Splitting *that* bucket might be a good idea if only 5% of the\n> tuples end up moving, perhaps even if only 1% end up moving. 
But, if\n> you have to double the total number of batches to get that benefit,\n> it's a lot less compelling, because now you have to rescan the outer\n> side more times.\n\nIt seems to me that a good chunk of what's being proposed right now\n> basically ignores the fact that we're not really responding to the\n> skew in a very effective way. Thomas wants to stop splitting all the\n> buckets when splitting one of the buckets produces only a very small\n> benefit rather than when it produces no benefit at all, but he's not\n> asking why we're splitting all of the buckets in the first place.\n> Tomas wants to slice the array of batches because there are so many of\n> them, but why are there so many? As he said himself, \"it gets to that\n> many batches because some of the values are very common and we don't\n> disable the growth earlier.\" Realistically, I don't see how there can\n> be so many batches that we can't even fit the metadata about the\n> matches into memory unless we're unnecessarily creating a lot of\n> little tiny batches that we don't really need.\n>\n>\n+1 on Robert's suggestion. It's worth finding the root cause of the batch\nexplosion problem.\nAs Robert pointed out, \"we can't split a batch without splitting them all\".\nIn fact, the hybrid hash join algorithm should only split the overflow\nbatch and avoid splitting the small batches which could be processed in\nmemory. The planner should calculate an initial batch number which ensures the\naverage-sized batch can be processed in memory given different data\ndistributions. The executor should split skewed batches in a one-batch-at-a-time\nmanner.\n\nI will first show an example to help understand the batch explosion problem.\nSuppose we are going to join R and S and the planner calculates the initial\nnbatch as 4.\nIn the first batch run, during HJ_BUILD_HASHTABLE state we scan R and build\nan in-memory hash table for batch1 and spill other tuples of R into different\nbatch files (R2-R4). 
During HJ_NEED_NEW_OUTER and HJ_SCAN_BUCKET state, we\ndo two things: 1. if tuple in S belong to current batch, match it with in\nmemory R1 and emit result to parent plan node; 2. if tuple in S doesn't\nbelong to current batch, spill it to batch files of S2-S4. As a result,\nafter the first batch run we get:\n6 disk files: batch2(R2,S2), batch3(R3,S3) batch4(R4,S4)\n\nNow we run into HJ_NEED_NEW_BATCH state and begin to process R2 and S2.\nSuppose the second batch R2 is skewed and need to split batch number to 8.\nWhen building in-memory hash table for R2, we also split some tuples in R2\ninto spill file R6.(Based on our hash function, tuples belong to R2 will\nnot be shuffled to batches except R6). After R2's hash table is built, we\nbegin to probe tuples in S2. Since batch number is changed from 4 to 8,\nsome of tuples in S2 now belong to S6 and we spilt them to disk file S6.\nFor other tuples in S2, we match them with R2 and output the result to\nparent plannode. After the second batch processed, we got:\ndisk files: batch3(R3,S3), batch4(R4,S4),batch(R6,S6)\n\nNext, we begin to process R3 and S3. The third batch R3 is not skewed, but\nsince our hash function depends on batch number, which is 8 now. So we have\nto split some tuples in R3 to disk file R7, *which is not necessary*. When\nProbing S3, we also need to spilt some tuples in S3 into S7, *which is not\nnecessary either*. Since R3 could be loaded into memory entirely, spill\npart of R3 to disk file not only introduce more file and file buffers(which\nis problem Tomas try to solve), but also slow down the performance. After\nthe third batch processed, we got:\ndisk files: batch4(R4,S4),batch(R6,S6),batch(R7,S7)\n\nNext, we begin to process R4 and S4. Similar to R3, some tuples in R4 also\nneed to be spilled to file R8. But after this splitting, suppose R4 is\nstill skewed, and we increase the batch number again to 16. As a result,\nsome tuples in R4 will be spilled to file R12 and R16. 
When probing S4,\nsimilarly we need to split some tuples in S4 into S8,S12 and S16. After\nthis step, we get:\n disk files:\nbatch(R6,S6),batch(R7,S7),batch(R8,S8),batch(R12,S12),batch(R16,S16).\n\nNext, when we begin to process R6 and S6, even if we could build hash table\nfor R6 all in memory, but we have to spilt R6 based on new batch number 16\nand spill to file: R14. *It's not necessary.*\n\nNow we could conclude that increasing batch number would introduce\nunnecessary repeated spill not only on original batch(R3,S3) but also on\nnew generated batch(R6,S6) in a cascade way. *In a worse case, suppose R2\nis super skew and need to split 10 times, while R3 is OK to build hash\ntable all in memory. In this case, we have to introduce R7,R11,....,R4095,\ntotal 1023 unnecessary spill files. Each of these files may only contain\nless than ten tuples. Also, we need to palloc file buffer(512KB) for these\nspill files. This is the so called batch explosion problem.*\n\n*Solutions:*\nTo avoid these unnecessary repeated spill, I propose to make function\nExecHashGetBucketAndBatch\nas a hash function chain to determine the batchno.\nHere is the original implementation of ExecHashGetBucketAndBatch\n```\n\n//nbatch is the global batch number\n\n*batchno = (hashvalue >> hashtable->log2_nbuckets) & (nbatch - 1);\n```\nWe can see the original hash function basically calculate MOD of global\nbatch number(IBN).\n\nA real hybrid hash join should use a hash function chain to determine the\nbatchno. In the new algorithm, the component of hash function chain is\ndefined as: MOD of #IBN, MOD of #IBN*2, MOD of #IBN*4,MOD of #IBN*8\n....etc. 
A small batch will just use the first hash function in chain,\nwhile the skew batch will use the same number of hash functions in chain as\nthe times it is split.\nHere is the new implementation of ExecHashGetBucketAndBatch\n```\n/* i is the current batchno we are processing */\n/* hashChainLen record the times batch i is spilt */\nfor (j=0;j<hashChainLen[i];j++)\n{\n batchno = (hashvalue >> hashtable->log2_nbuckets) & ((#initialBatch)*\n(2^j) - 1);\n /* if the calculated batchno is still i, we need to call more hash\nfunctions\n * in chain to determine the final bucketno, else we could return\ndirectly.\n */\n if ( batchno != i )\n return batchno;\n}\nreturn batchno;\n```\n\nA quick example, Suppose R3's input is 3,7,11,15,19,23,27,31,35,15,27(we\ncould ensure they MOD4=3)\nSuppose Initial batch number is 4 and memory could contain 4 tuples, the\n5th tuple need to do batch spilt.\nStep1: batch3 process 3,7,11,15,19 and now need to split,\n chainLen[3]=2\n batch3: 3,11,19\n batch7: 7,15\nStep2: 23,27,31 coming\n batch3: 3,11,19,27\n batch7: 7,15,23,31\nStep 3: 35 coming, batch3 need to split again\n chainLen[3]=3\n batch3: 3,19,35\n batch7: 7,15,23,31\n batch11: 11,27\nStep 4 15 coming, HashFun1 15%4=3, HashFun2 15%8=7;\n since 7!=3 spill 15 to batch7.\nStep 5 27 coming, 27%4=3, 27%8=3, 27%16 =11\n since 27!=3 spill 27 to batch 11.\nFinal state:\n chainLen[3]=3\n batch3: 3,19,35\n batch7: 7,15,23,31,15\n batch11: 11,27,27\n\nHere is pseudo code of processing of batch i:\n```\n/*Step 1: build hash table for Ri*/\ntuple = ReadFromFile(Ri);\n/* get batchno by the new function*/\nbatchno =NewExecHashGetBucketAndBatch()\n/* do spill if not belong to current batch*/\nif(batchno != i)\n spill to file[batchno]\nflag = InsertTupleToHashTable(HT, tuple);\nif (flag == NEED_SPILT)\n{\n hashChainLen[i] ++;\n /* then call ExecHashIncreaseNumBatches() to do the real spill */\n}\n\n/* probe stage */\ntuple = ReadFromFile(S[i+Bi*k]);\nbatchno = NewExecHashGetBucketAndBatch()\nif 
(batchno == curbatch)\n probe and match\nelse\n spillToFile(tuple, batchno)\n}\n```\n\nThis solution only splits the batch which needs to be split, in a lazy way.\nIf this solution makes sense, I would like to write the real patch.\nAny comment?\n\n\n-- \nThanks\n\nHubert Zhang",
"msg_date": "Wed, 14 Aug 2019 18:30:26 +0800",
"msg_from": "Hubert Zhang <hzhang@pivotal.io>",
"msg_from_op": false,
"msg_subject": "Re: accounting for memory used for BufFile during hash joins"
},
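Hubert's per-batch hash-function chain can be written down as a runnable sketch. Assumptions (mine, for illustration only): the key itself stands in for the hash value, the `hashvalue >> log2_nbuckets` shift is dropped, and the initial batch number is 4, so the numbers reproduce the batch-3 walkthrough upthread:

```python
def chain_batchno(hashvalue, cur_batch, chain_len, initial_nbatch=4):
    """Walk the chain MOD IBN, MOD 2*IBN, MOD 4*IBN, ... (chain_len
    entries) and return the first batch number that differs from the
    batch being split; a tuple that never escapes stays in cur_batch."""
    batchno = cur_batch
    for j in range(chain_len):
        batchno = hashvalue & (initial_nbatch * (1 << j) - 1)
        if batchno != cur_batch:
            return batchno
    return batchno

# Batch 3 after being split twice, i.e. hashChainLen[3] == 3:
for key in (3, 15, 27, 35):
    print(key, "->", chain_batchno(key, cur_batch=3, chain_len=3))
# 15 escapes to batch 7, 27 escapes to batch 11, 3 and 35 stay in batch 3
```

Note how a batch that was never split (chain_len of 1) keeps every one of its tuples, which is exactly the property that spares small batches from cascading splits.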
{
"msg_contents": "On 2019-Jul-11, Tomas Vondra wrote:\n\n> On Wed, Jul 10, 2019 at 04:51:02PM -0700, Melanie Plageman wrote:\n\n> > I think implementing support for parallel hashjoin or explicitly\n> > disabling it would be the bare minimum for this patch, which is why I\n> > made 2 its own item. I've marked it as returned to author for this\n> > reason.\n> \n> OK. I'm a bit confused / unsure what exactly our solution to the various\n> hashjoin issues is. I have not been paying attention to all the various\n> threads, but I thought we kinda pivoted to the BNL approach, no? I'm not\n> against pushing this patch (the slicing one) forward and then maybe add\n> BNL on top.\n\nSo what's a good way forward for this patch? Stalling forever like a\nglacier is not an option; it'll probably end up melting. There's a lot\nof discussion on this thread which I haven't read, and it's not\nimmediately clear to me whether this patch should just be thrown away in\nfavor of something completely different, or it can be considered a first\nstep in a long road.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Tue, 3 Sep 2019 12:36:33 -0400",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: accounting for memory used for BufFile during hash joins"
},
{
"msg_contents": "On Tue, Sep 3, 2019 at 9:36 AM Alvaro Herrera <alvherre@2ndquadrant.com>\nwrote:\n\n> On 2019-Jul-11, Tomas Vondra wrote:\n>\n> > On Wed, Jul 10, 2019 at 04:51:02PM -0700, Melanie Plageman wrote:\n>\n> > > I think implementing support for parallel hashjoin or explicitly\n> > > disabling it would be the bare minimum for this patch, which is why I\n> > > made 2 its own item. I've marked it as returned to author for this\n> > > reason.\n> >\n> > OK. I'm a bit confused / unsure what exactly our solution to the various\n> > hashjoin issues is. I have not been paying attention to all the various\n> > threads, but I thought we kinda pivoted to the BNL approach, no? I'm not\n> > against pushing this patch (the slicing one) forward and then maybe add\n> > BNL on top.\n>\n> So what's a good way forward for this patch? Stalling forever like a\n> glacier is not an option; it'll probably end up melting. There's a lot\n> of discussion on this thread which I haven't read, and it's not\n> immediately clear to me whether this patch should just be thrown away in\n> favor of something completely different, or it can be considered a first\n> step in a long road.\n>\n\nSo, I have been working on the fallback to block nested loop join\npatch--latest non-parallel version posted here [1]. I am currently\nstill working on the parallel version but don't have a complete\nworking patch yet. I am hoping to finish it and solicit feedback in\nthe next couple weeks.\n\nMy patch chunks up a bad inner side batch and processes it a chunk\nat a time. I haven't spent too much time yet thinking about Hubert's\nsuggestion proposed upthread. In the past I had asked Tomas about the\nidea of splitting up only \"bad batches\" to avoid having other batches\nwhich are very small. 
It seemed like this introduced additional\ncomplexity for future spilled tuples finding a home, however, I had\nnot considered the hash function chain method Hubert is mentioning.\n\nEven if we implemented additional strategies like the one Hubert is\nsuggesting, I still think that both the slicing patch originally\nproposed in this thread as well as a BNLJ fallback option could all\nwork together, as I believe they solve slightly different problems.\n\nIf Tomas or someone else has time to pick up and modify the BufFile\naccounting patch, committing that still seems like the next logical\nstep.\n\nI will work on getting a complete (parallel-aware) BNLJ patch posted\nsoon.\n\n[1]\nhttps://www.postgresql.org/message-id/CAAKRu_ZsRU%2BnszShs3AGVorx%3De%2B2jYkL7X%3DjiNO6%2Bqbho7vRpw%40mail.gmail.com\n\n-- \nMelanie Plageman",
"msg_date": "Thu, 5 Sep 2019 09:54:33 -0700",
"msg_from": "Melanie Plageman <melanieplageman@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: accounting for memory used for BufFile during hash joins"
},
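Melanie's chunk-at-a-time fallback for an oversized inner batch can be sketched in miniature. This is an illustrative Python toy (hypothetical names; a real implementation must also preserve outer/semi/anti join semantics across chunks), not the patch itself:

```python
def block_nested_loop_join(inner, outer, key, mem_tuples):
    """Process an oversized inner batch a memory-sized chunk at a time,
    rescanning the outer side once per chunk (those extra outer scans
    are the cost Robert mentioned upthread)."""
    out = []
    for start in range(0, len(inner), mem_tuples):
        table = {}
        for row in inner[start:start + mem_tuples]:   # build phase
            table.setdefault(key(row), []).append(row)
        for orow in outer:                            # probe phase
            for irow in table.get(key(orow), []):
                out.append((irow, orow))
    return out

# Inner batch of 4 tuples joined with memory for only 2 at a time:
pairs = block_nested_loop_join([1, 1, 2, 3], [1, 2, 4], lambda x: x, 2)
print(pairs)  # [(1, 1), (1, 1), (2, 2)]
```

The memory bound now holds regardless of skew, at the price of scanning the outer side ceil(len(inner) / mem_tuples) times for that one batch.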
{
"msg_contents": "On Thu, Sep 05, 2019 at 09:54:33AM -0700, Melanie Plageman wrote:\n>On Tue, Sep 3, 2019 at 9:36 AM Alvaro Herrera <alvherre@2ndquadrant.com>\n>wrote:\n>\n>> On 2019-Jul-11, Tomas Vondra wrote:\n>>\n>> > On Wed, Jul 10, 2019 at 04:51:02PM -0700, Melanie Plageman wrote:\n>>\n>> > > I think implementing support for parallel hashjoin or explicitly\n>> > > disabling it would be the bare minimum for this patch, which is why I\n>> > > made 2 its own item. I've marked it as returned to author for this\n>> > > reason.\n>> >\n>> > OK. I'm a bit confused / unsure what exactly our solution to the various\n>> > hashjoin issues is. I have not been paying attention to all the various\n>> > threads, but I thought we kinda pivoted to the BNL approach, no? I'm not\n>> > against pushing this patch (the slicing one) forward and then maybe add\n>> > BNL on top.\n>>\n>> So what's a good way forward for this patch? Stalling forever like a\n>> glacier is not an option; it'll probably end up melting. There's a lot\n>> of discussion on this thread which I haven't read, and it's not\n>> immediately clear to me whether this patch should just be thrown away in\n>> favor of something completely different, or it can be considered a first\n>> step in a long road.\n>>\n>\n>So, I have been working on the fallback to block nested loop join\n>patch--latest non-parallel version posted here [1]. I am currently\n>still working on the parallel version but don't have a complete\n>working patch yet. I am hoping to finish it and solicit feedback in\n>the next couple weeks.\n>\n>My patch chunks up a bad inner side batch and processes it a chunk\n>at a time. I haven't spent too much time yet thinking about Hubert's\n>suggestion proposed upthread. In the past I had asked Tomas about the\n>idea of splitting up only \"bad batches\" to avoid having other batches\n>which are very small. 
It seemed like this introduced additional\n>complexity for future spilled tuples finding a home, however, I had\n>not considered the hash function chain method Hubert is mentioning.\n>\n>Even if we implemented additional strategies like the one Hubert is\n>suggesting, I still think that both the slicing patch originally\n>proposed in this thread as well as a BNLJ fallback option could all\n>work together, as I believe they solve slightly different problems.\n>\n\nI have to admit I kinda lost track of how exactly all the HJ patches\nposted in various -hackers threads shall work together in the end. We have\nfar too many in-flight patches dealing with this part of the code at the\nmoment. It's a bit like with the buses - for years there were no patches\nfixing those issues, and now we have 17 ;-)\n\nMy feeling is that we should get the BNLJ committed first, and then maybe\nuse some of those additional strategies as fallbacks (depending on which\nissues are still unsolved by the BNLJ).\n\n>If Tomas or someone else has time to pick up and modify BufFile\n>accounting patch, committing that still seems like the nest logical\n>step.\n>\n\nOK, I'll look into that (i.e. considering BufFile memory during planning,\nand disabling HJ if not possible).\n\n>I will work on getting a complete (parallel-aware) BNLJ patch posted\n>soon.\n>\n\nGood!\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n\n",
"msg_date": "Tue, 10 Sep 2019 15:47:51 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: accounting for memory used for BufFile during hash joins"
},
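The BufFile overhead Tomas intends to account for during planning grows linearly with the batch count, so it is easy to estimate when it swamps work_mem. A back-of-the-envelope sketch, assuming one 8 kB (BLCKSZ-sized) buffer per open BufFile and one inner plus one outer spill file per batch:

```python
BLCKSZ = 8192                 # default block size; one buffer per BufFile
WORK_MEM = 4 * 1024 * 1024    # e.g. work_mem = 4MB

def buffile_buffers(nbatch):
    # one inner and one outer spill file per batch
    return 2 * nbatch * BLCKSZ

nbatch = 1
while buffile_buffers(nbatch) <= WORK_MEM:
    nbatch *= 2
# first power-of-two batch count whose file buffers alone exceed work_mem
print(nbatch)
```

With these (assumed) numbers, at 256 batches the per-file buffers alone already equal work_mem, so a planner-side cap would refuse hash joins needing 512 or more batches under this budget.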
{
"msg_contents": "On Tue, Sep 10, 2019 at 03:47:51PM +0200, Tomas Vondra wrote:\n> My feeling is that we should get the BNLJ committed first, and then maybe\n> use some of those additional strategies as fallbacks (depending on which\n> issues are still unsolved by the BNLJ).\n\nThe glacier is melting more. Tomas, what's your status here? The\npatch has been waiting on author for two months now. If you are not\nplanning to work more on this one, then it should be marked as\nreturned with feedback?\n--\nMichael",
"msg_date": "Mon, 25 Nov 2019 17:33:35 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: accounting for memory used for BufFile during hash joins"
},
{
"msg_contents": "On Mon, Nov 25, 2019 at 05:33:35PM +0900, Michael Paquier wrote:\n>On Tue, Sep 10, 2019 at 03:47:51PM +0200, Tomas Vondra wrote:\n>> My feeling is that we should get the BNLJ committed first, and then maybe\n>> use some of those additional strategies as fallbacks (depending on which\n>> issues are still unsolved by the BNLJ).\n>\n>The glacier is melting more. Tomas, what's your status here? The\n>patch has been waiting on author for two months now. If you are not\n>planning to work more on this one, then it should be marked as\n>returned with feedback?\n\nI'm not planning to do any immediate work on this, so I agree with\nmarking it as RWF. I think Melanie is working on the BNL patch, which\nseems like the right solution.\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services \n\n\n",
"msg_date": "Mon, 25 Nov 2019 19:11:19 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: accounting for memory used for BufFile during hash joins"
},
{
"msg_contents": "On Mon, Nov 25, 2019 at 07:11:19PM +0100, Tomas Vondra wrote:\n> I'm not planning to do any any immediate work on this, so I agree with\n> marking it as RWF. I think Melanie is working on the BNL patch, which\n> seems like the right solution.\n\nThanks, I have switched the patch as returned with feedback.\n--\nMichael",
"msg_date": "Tue, 26 Nov 2019 13:59:14 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: accounting for memory used for BufFile during hash joins"
},
{
"msg_contents": "On Mon, Nov 25, 2019 at 10:11 AM Tomas Vondra <tomas.vondra@2ndquadrant.com>\nwrote:\n\n> On Mon, Nov 25, 2019 at 05:33:35PM +0900, Michael Paquier wrote:\n> >On Tue, Sep 10, 2019 at 03:47:51PM +0200, Tomas Vondra wrote:\n> >> My feeling is that we should get the BNLJ committed first, and then\n> maybe\n> >> use some of those additional strategies as fallbacks (depending on which\n> >> issues are still unsolved by the BNLJ).\n> >\n> >The glacier is melting more. Tomas, what's your status here? The\n> >patch has been waiting on author for two months now. If you are not\n> >planning to work more on this one, then it should be marked as\n> >returned with feedback?\n>\n> I'm not planning to do any any immediate work on this, so I agree with\n> marking it as RWF. I think Melanie is working on the BNL patch, which\n> seems like the right solution.\n>\n>\nSorry for the delay. I have posted the parallel-aware version BNLJ\n(adaptive HJ) of this in the thread which originally had all of the\npatches for it [1]. It's not near committable, so I wasn't going to\nregister it for a commitfest yet, but I would love feedback on the\nprototype.\n\n[1]\nhttps://www.postgresql.org/message-id/CAAKRu_YsWm7gc_b2nBGWFPE6wuhdOLfc1LBZ786DUzaCPUDXCA%40mail.gmail.com\n-- \nMelanie Plageman",
"msg_date": "Fri, 3 Jan 2020 09:44:58 -0800",
"msg_from": "Melanie Plageman <melanieplageman@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: accounting for memory used for BufFile during hash joins"
}
] |
[
{
"msg_contents": "\n\n\n\n\nHi,I am trying to efficiently rollback a manually selectedd subset of committed SQL transactions by scanning an SQL transaction log. This feature is useful when a database administrator wants to rollback not the entire database system, but only particular SQL statements that affect a certain set of SQL tables. Unfortunately, this is impossible in the current PostgreSQL setup, because PostgreSQL's WAL(Write-Ahead Log) file doesn't provide any SQL statement-level redo records, but only physical block-level redo records.To this end, my goal is to improve PostgreSQL to produce augmented transaction logs. In particular, the augmented transaction log's every committed transaction ID will contain an additional section called \"rollback SQL statements\", which is a minimal series of DELETE & INSERT SQL statements that effectively rolls back one transaction to its immediately previous transaction. For example, suppose that we have the following SQL table:\n\n=================\nTable1\ncolumn1 | column2 \n1 | 20\n2 | 30\n3 | 40\n3 | 40\n4 | 50\n=================\n\nAnd suppose that the following 100th transaction was committed:\n\nUPDATE Table1\n SET column1 = 10, column2 = 20\n WHERE colum2 > 20 ;\n\n\n\nThen, the augmented transaction log file will generate the following log entry for the above committed transaction: \n\n\nCommitted Transaction ID = 100\nRollback SQL Statements = \n- DELETE FROM Table1 WHERE column1 = 2 AND column2 = 30\n- INSERT INTO TABLE Table1 VALUES(column1, column2) (10, 20)\n- DELETE FROM Table1 WHERE column1 = 3 AND column2 = 40\n- INSERT INTO TABLE Table1 VALUES(column1, column2) (10, 20)\n- DELETE FROM Table1 WHERE column1 = 4 AND column2 = 50\n- INSERT INTO TABLE Table1 VALUES(column1, column2) (10, 20)\n\nNote that the above Rollback SQL statements are in the simplest forms without involving any complex SQL operations such as JOIN or sub-queries. 
Also note that we cannot create the above Rollback SQL statements purely based on original consecutive SQL transactions, because we don't know which rows of Table1 will need to be DELETED without actually scanning the entire Table1 and evaluating Transction #100's WHERE clause (i.e., colum2 > 20) on every single row of Table1. Therefore, to generate a list of simple Rollback SQL statements like the above, we have no choice but to embed this logging feature in the PostgreSQL's source code where the WAL(Write-Ahead Log) file is being updated. \n\nSince the current PostgreSQL doesn't support this feature, I plan to implement the above feature in the source code. But I have never worked on PostgreSQL source code in my life, and I wonder if anybody could give me a hint on which source code files (and functions) are about recording redo records in the WAL file. In particular, when the SQL server records the information of updated block location & values into the WAL file for each SQL statement that modifies any relations, we can additionally make the SQL server also write the list of the simplest INSERT & DELETE SQL statements that effectively enforces such SQL table write operations. If such an SQL-level inforcement information is available in the WAL file, one can easily conjecture what will be the corresponding Rollback (i.e., inverse) SQL statements from there.\nThanks for anybody's comments. Ronny\n\n\n\n\n",
"msg_date": "Sat, 4 May 2019 14:32:06 +0900 (KST)",
"msg_from": "Ronny Ko <gogo9th@hanmail.net>",
"msg_from_op": true,
"msg_subject": "Logging the feature of SQL-level read/write commits"
},
{
"msg_contents": "On Sat, May 04, 2019 at 02:32:06PM +0900, Ronny Ko wrote:\n> Hi, \n> \n> I am trying to efficiently rollback a manually selectedd subset of \n> committed SQL transactions by scanning an SQL transaction log. This \n> feature is useful when a database administrator wants to rollback not the \n> entire database system, but only particular SQL statements that affect a \n> certain set of SQL tables. Unfortunately, this is impossible in the \n> current PostgreSQL setup, because PostgreSQL's WAL(Write-Ahead Log) file \n> doesn't provide any SQL statement-level redo records, but only physical \n> block-level redo records. \n> \n> To this end, my goal is to improve PostgreSQL to produce augmented \n> transaction logs. In particular, the augmented transaction log's every \n> committed transaction ID will contain an additional section called \n> \"rollback SQL statements\", which is a minimal series of DELETE & INSERT \n> SQL statements that effectively rolls back one transaction to its \n> immediately previous transaction. 
For example, suppose that we have the \n> following SQL table: \n> \n> ================= \n> \n> Table1 \n> \n> column1 | column2 \n> \n> 1 | 20 \n> \n> 2 | 30 \n> \n> 3 | 40 \n> \n> 3 | 40 \n> \n> 4 | 50 \n> \n> ================= \n> \n> And suppose that the following 100th transaction was committed: \n> \n> UPDATE Table1 \n> SET column1 = 10, column2 = 20 \n> WHERE colum2 > 20 ; \n> \n> Then, the augmented transaction log file will generate the following log \n> entry for the above committed transaction: \n> \n> Committed Transaction ID = 100 \n> \n> Rollback SQL Statements = \n> \n> - DELETE FROM Table1 WHERE column1 = 2 AND column2 = 30 \n> \n> - INSERT INTO TABLE Table1 VALUES(column1, column2) (10, 20) \n> \n> - DELETE FROM Table1 WHERE column1 = 3 AND column2 = 40 \n> \n> - INSERT INTO TABLE Table1 VALUES(column1, column2) (10, 20) \n> \n> - DELETE FROM Table1 WHERE column1 = 4 AND column2 = 50 \n> \n> - INSERT INTO TABLE Table1 VALUES(column1, column2) (10, 20) \n> \n> Note that the above Rollback SQL statements are in the simplest forms \n> without involving any complex SQL operations such as JOIN or sub-queries. \n> Also note that we cannot create the above Rollback SQL statements purely \n> based on original consecutive SQL transactions, because we don't know \n> which rows of Table1 will need to be DELETED without actually scanning the \n> entire Table1 and evaluating Transction #100's WHERE clause (i.e., colum2 \n> > 20) on every single row of Table1. Therefore, to generate a list of \n> simple Rollback SQL statements like the above, we have no choice but to \n> embed this logging feature in the PostgreSQL's source code where the \n> WAL(Write-Ahead Log) file is being updated. \n> \n> \n> \n> Since the current PostgreSQL doesn't support this feature, I plan to \n> implement the above feature in the source code. 
But I have never worked on \n> PostgreSQL source code in my life, and I wonder if anybody could give me \n> a hint on which source code files (and functions) are about recording redo \n> records in the WAL file. In particular, when the SQL server records the \n> information of updated block location & values into the WAL file for each \n> SQL statement that modifies any relations, we can additionally make the \n> SQL server also write the list of the simplest INSERT & DELETE SQL \n> statements that effectively enforces such SQL table write operations. If \n> such an SQL-level inforcement information is available in the WAL file, \n> one can easily conjecture what will be the corresponding Rollback (i.e., \n> inverse) SQL statements from there. \n> \n\nYou probably need to look at ./src/backend/access/transam/, particularly\nxlog.c and xact.c. That being said, this seems like a rather difficult\ntask for someone who has never worked with the PostgreSQL source code.\n\nWhat's worse, I have serious doubts it's possible to implement the\nfeature you proposed - for a number of reasons. Firstly, WAL is a redo\nlog, so it seems like a rather poor fit for what is essentially an undo.\nSecondly, those \"undo\" records may require quite a bit of space, and I\nwonder if generating all of that at commit time may cause issues e.g.\nfor very large transactions (it'll certainly add significant overhead to\nall transactions, because you don't know what might be rolled back\nlater). And if it's in WAL, it's subject to WAL rotation, limits, etc.\nSo if the admin does not initiate the rollback within three checkpoints,\nit's pretty much game over because the WAL may be already gone.\n\nBut the most serious problem is that this assumes the database knows\nhow to generate the \"undo commands\". And it can't, because it's very\napplication specific.\n\nFor example, if you have two transactions, updating the same row:\n\n T1: UPDATE t SET v = 100 WHERE id = 1;\n T1: COMMIT\n\n T2: UPDATE t SET v = 200 WHERE id = 1;\n T2: COMMIT\n\nand then you decide to \"rollback\" T1, how do you do that? That depends\non whether T2 just wrote an entirely new value into column \"v\" (ignoring\nthe original value written by T1) or whether it for example incremented\nthe original value. In the first case the \"undo\" is \"do nothing\", in the\nsecond case it's \"UPDATE t SET v = v - 100 WHERE id = 1\". Or perhaps\nsomething even more complex.\n\nAnd no, I don't think the database can deduce this - those operations\noften happen in the application code, outside the database. This is why\napplications (e.g. banking systems) implement this functionality as\npretty much a new transaction, doing application-specific things.\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services",
"msg_date": "Sun, 5 May 2019 01:00:48 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Logging the feature of SQL-level read/write commits"
},
{
"msg_contents": "Hello,\n\nmay be you can find more informations regarding WAL concepts in\nWrite Ahead Logging — WAL\nhttp://www.interdb.jp/pg/pgsql09.html\n\nIt seems very complicated to change WAL format ...\n\nMaybe there are other solutions to answer your need,\nI found many interesting solutions in postgres archives searching\nwith words \"flashback\", \"timetravel\", \"tablelog\", ...\n\nMy prefered is \"AS OF queries\"\nhttps://www.postgresql.org/message-id/flat/78aadf6b-86d4-21b9-9c2a-51f1efb8a499%40postgrespro.ru\n\nand more specificaly syntax\nVERSIONS BETWEEN SYSTEM TIME ... AND ...\nthat would permit to get history of modifications for some rows\nand help fixing current data\n\nRegards\nPAscal\nRegards\nPAscal\n\n\n\n--\nSent from: http://www.postgresql-archive.org/PostgreSQL-hackers-f1928748.html\n\n\n",
"msg_date": "Sun, 5 May 2019 03:14:36 -0700 (MST)",
"msg_from": "legrand legrand <legrand_legrand@hotmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Logging the feature of SQL-level read/write commits"
},
{
"msg_contents": "On Sat, May 4, 2019 at 02:32:06PM +0900, Ronny Ko wrote:\n> Hi,\n> \n> I am trying to efficiently rollback a manually selectedd subset of committed\n> SQL transactions by scanning an SQL transaction log. This feature is useful\n> when a database administrator wants to rollback not the entire database system,\n> but only particular SQL statements that affect a certain set of SQL tables.\n> Unfortunately, this is impossible in the current PostgreSQL setup, because\n> PostgreSQL's WAL(Write-Ahead Log) file doesn't provide any SQL statement-level\n> redo records, but only physical block-level redo records.\n\nMy blog entry covers some of this:\n\n\thttps://momjian.us/main/blogs/pgblog/2019.html#March_6_2019\n\n-- \n Bruce Momjian <bruce@momjian.us> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n+ As you are, so once was I. As I am, so you will be. +\n+ Ancient Roman grave inscription +\n\n\n",
"msg_date": "Sun, 5 May 2019 08:09:02 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: Logging the feature of SQL-level read/write commits"
},
{
"msg_contents": "\n\n\n\n\nHi Legrand & Bruce,\nThanks for your thoughts. I think MariaDB's temporal queries could work. \nIf I want to use it in PostgreSQL, I could use Logical Decoding plugins for recording all INSERT/UPDATE/DELETE history of each table:\n- https://debezium.io/docs/install/postgres-plugins/\n- wal2json\n\nDo you think PostgreSQL's wal2json already implemented the feature I want?\n\n\n\n",
"msg_date": "Mon, 6 May 2019 06:56:41 +0900 (KST)",
"msg_from": "Ronny Ko <gogo9th@hanmail.net>",
"msg_from_op": true,
"msg_subject": "RE: Re: Logging the feature of SQL-level read/write commits"
},
{
"msg_contents": "Hi,\n\ngood point !\nwal2Json seems to correspond to your needs, \nthis is first designed for Change Data Capture, \ntaht could generate a (very) big size of logs.\n\nYou didn't tell us much about your use case ...\nand maybe, if the number of data modifications \nis not too big, and the number of tables to track \nlimited, an history_table fed with (after insert, \nafter update, before delete) \ntriggers an other (simpler) solution ?\n\nRegards\nPAscal\n\n\n\n--\nSent from: http://www.postgresql-archive.org/PostgreSQL-hackers-f1928748.html\n\n\n",
"msg_date": "Sun, 5 May 2019 15:57:09 -0700 (MST)",
"msg_from": "legrand legrand <legrand_legrand@hotmail.com>",
"msg_from_op": false,
"msg_subject": "RE: Re: Logging the feature of SQL-level read/write commits"
}
] |
[
{
"msg_contents": "Hi,\n\nWhile reading vacuumdb code, I just noticed that it can return 0 if an\nerror happen when -j is used, if errors happen on the last batch of\ncommands.\n\nFor instance:\nsession 1\nalter database postgres set lock_timeout = 1;\nbegin;\nlock table pg_extension;\n\nsession 2\n$ vacuumdb -d postgres -t pg_extension -t pg_extension\nvacuumdb: vacuuming database \"postgres\"\nvacuumdb: error: vacuuming of table \"pg_catalog.pg_extension\" in\ndatabase \"postgres\" failed: ERROR: canceling statement due to lock\ntimeout\n\n$ echo $?\n1\n\n$ vacuumdb -d postgres -t pg_extension -t pg_extension -j2\nvacuumdb: vacuuming database \"postgres\"\nvacuumdb: error: vacuuming of database \"postgres\" failed: ERROR:\ncanceling statement due to lock timeout\n\n$ echo $?\n0\n\nbut\n\n$ vacuumdb -d postgres -t pg_extension -t pg_extension -t pg_extension -j2\nvacuumdb: vacuuming database \"postgres\"\nvacuumdb: error: vacuuming of database \"postgres\" failed: ERROR:\ncanceling statement due to lock timeout\n\n$ echo $?\n1\n\nThis behavior exists since 9.5. Trivial patch attached. I'm not sure\nthat a TAP test is required here, so I didn't add one. I'll be happy\nto do so though if needed.",
"msg_date": "Sat, 4 May 2019 10:35:23 +0200",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": true,
"msg_subject": "Wrong return code in vacuumdb when multiple jobs are used"
},
{
"msg_contents": "On Sat, May 04, 2019 at 10:35:23AM +0200, Julien Rouhaud wrote:\n> While reading vacuumdb code, I just noticed that it can return 0 if an\n> error happen when -j is used, if errors happen on the last batch of\n> commands.\n\nYes, I agree that this is wrong. GetIdleSlot() is much more careful\nabout that than vacuum_one_database(), so your patch looks good at\nquick glance.\n\n> This behavior exists since 9.5. Trivial patch attached. I'm not sure\n> that a TAP test is required here, so I didn't add one. I'll be happy\n> to do so though if needed.\n\nYou could make that reliable by getting a lock on a table using a\ntwo-phase transaction, and your test case from upthread won't fly high\nas we have no facility in PostgresNode.pm to keep around a session's\nstate using psql. FWIW, I am not convinced that it is a case worth\nbothering, so no tests is fine.\n--\nMichael",
"msg_date": "Sat, 4 May 2019 18:15:04 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Wrong return code in vacuumdb when multiple jobs are used"
},
{
"msg_contents": "On Sat, May 4, 2019 at 11:15 AM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> > I'm not sure\n> > that a TAP test is required here, so I didn't add one. I'll be happy\n> > to do so though if needed.\n>\n> You could make that reliable by getting a lock on a table using a\n> two-phase transaction, and your test case from upthread won't fly high\n> as we have no facility in PostgresNode.pm to keep around a session's\n> state using psql. FWIW, I am not convinced that it is a case worth\n> bothering, so no tests is fine.\n\nYes, adding a test for this case looked like requiring a lot of\ncreativity using TAP infrastructure, that's the main reason why I\ndidn't add one. 2PC is a good idea though.\n\n\n",
"msg_date": "Sat, 4 May 2019 11:22:22 +0200",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Wrong return code in vacuumdb when multiple jobs are used"
},
{
"msg_contents": "On Sat, May 4, 2019 at 2:45 PM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Sat, May 04, 2019 at 10:35:23AM +0200, Julien Rouhaud wrote:\n> > While reading vacuumdb code, I just noticed that it can return 0 if an\n> > error happen when -j is used, if errors happen on the last batch of\n> > commands.\n>\n> Yes, I agree that this is wrong. GetIdleSlot() is much more careful\n> about that than vacuum_one_database(), so your patch looks good at\n> quick glance.\n>\n\nThe fix looks good to me as well.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Sat, 4 May 2019 16:34:59 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Wrong return code in vacuumdb when multiple jobs are used"
},
{
"msg_contents": "On Sat, May 04, 2019 at 04:34:59PM +0530, Amit Kapila wrote:\n> The fix looks good to me as well.\n\nWe are very close to the next minor release, so it may not be that\nwise to commit a fix for that issue now as we should have a couple of\nclean buildfarm clean runs. Are there any objections to wait after\nthe release? Or would folks prefer if this is fixed before the\nrelease?\n--\nMichael",
"msg_date": "Sat, 4 May 2019 21:17:53 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Wrong return code in vacuumdb when multiple jobs are used"
},
{
"msg_contents": "On Sat, May 4, 2019 at 2:17 PM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Sat, May 04, 2019 at 04:34:59PM +0530, Amit Kapila wrote:\n> > The fix looks good to me as well.\n>\n> We are very close to the next minor release, so it may not be that\n> wise to commit a fix for that issue now as we should have a couple of\n> clean buildfarm clean runs.\n\nAgreed.\n\n> Are there any objections to wait after\n> the release? Or would folks prefer if this is fixed before the\n> release?\n\nNo objection from me. It's been broken since introduction in 9.5 and\nhas never been noticed since, so it can wait until next release.\nShould I register the patch in the next commitfest to keep track of\nit?\n\n\n",
"msg_date": "Sat, 4 May 2019 14:28:48 +0200",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Wrong return code in vacuumdb when multiple jobs are used"
},
{
"msg_contents": "On Sat, May 04, 2019 at 02:28:48PM +0200, Julien Rouhaud wrote:\n> No objection from me. It's been broken since introduction in 9.5 and\n> has never been noticed since, so it can wait until next release.\n> Should I register the patch in the next commitfest to keep track of\n> it?\n\nNo need to. I am marking on my agenda to have an extra look at it\nnext week and potentially commit it after the release.\n--\nMichael",
"msg_date": "Sat, 4 May 2019 21:41:52 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Wrong return code in vacuumdb when multiple jobs are used"
},
{
"msg_contents": "On Sat, May 4, 2019 at 2:41 PM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Sat, May 04, 2019 at 02:28:48PM +0200, Julien Rouhaud wrote:\n> > No objection from me. It's been broken since introduction in 9.5 and\n> > has never been noticed since, so it can wait until next release.\n> > Should I register the patch in the next commitfest to keep track of\n> > it?\n>\n> No need to. I am marking on my agenda to have an extra look at it\n> next week and potentially commit it after the release.\n\nOk, thanks!\n\n\n",
"msg_date": "Sat, 4 May 2019 14:43:29 +0200",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Wrong return code in vacuumdb when multiple jobs are used"
},
{
"msg_contents": "Julien Rouhaud <rjuju123@gmail.com> writes:\n> On Sat, May 4, 2019 at 2:17 PM Michael Paquier <michael@paquier.xyz> wrote:\n>> We are very close to the next minor release, so it may not be that\n>> wise to commit a fix for that issue now as we should have a couple of\n>> clean buildfarm clean runs.\n\n> Agreed.\n\n+1, waiting till after the minor releases are tagged seems wisest.\nWe can still push it before 12beta1, so it will get tested in the beta\nperiod.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 04 May 2019 11:48:53 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Wrong return code in vacuumdb when multiple jobs are used"
},
{
"msg_contents": "On Sat, May 04, 2019 at 11:48:53AM -0400, Tom Lane wrote:\n> +1, waiting till after the minor releases are tagged seems wisest.\n> We can still push it before 12beta1, so it will get tested in the beta\n> period.\n\nThe new minor releases have been tagged, so committed.\n--\nMichael",
"msg_date": "Thu, 9 May 2019 10:32:51 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Wrong return code in vacuumdb when multiple jobs are used"
},
{
"msg_contents": "On Thu, May 9, 2019 at 3:32 AM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Sat, May 04, 2019 at 11:48:53AM -0400, Tom Lane wrote:\n> > +1, waiting till after the minor releases are tagged seems wisest.\n> > We can still push it before 12beta1, so it will get tested in the beta\n> > period.\n>\n> The new minor releases have been tagged, so committed.\n\nThanks a lot!\n\n\n",
"msg_date": "Thu, 9 May 2019 08:19:44 +0200",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Wrong return code in vacuumdb when multiple jobs are used"
}
] |
[
{
"msg_contents": "I am interesting in working for a PostgreSQL project, If it's not already\ntaken, my preference is \"Write a PostgreSQL technical mumbo-jumbo\ndictionary \".\n\nMy name is Federico Razzoli. Some facts about me:\n* I am a database consultant.\n* In the past as a DBA I administered PostgreSQL for several companies\n(mostly for analytics, to be honest)\n* I have some experience in writing. I authored \"Mastering MariaDB\" and\n\"MariaDB Essentials\". My blog (federico-razzoli.com) has several technical\narticles. I occasionally speak at conferences, like Percona Live.\n* Located in London.\n\nI regularly contribute to some other communities, but I've never been\ninvolved in the PostgreSQL community. I would be happy to start with this\nproject.\n\nCheers,\nFederico\n\nI am interesting in working for a PostgreSQL project, If it's not already taken, my preference is \"Write a PostgreSQL technical mumbo-jumbo dictionary \".My name is Federico Razzoli. Some facts about me:* I am a database consultant.* In the past as a DBA I administered PostgreSQL for several companies (mostly for analytics, to be honest)* I have some experience in writing. I authored \"Mastering MariaDB\" and \"MariaDB Essentials\". My blog (federico-razzoli.com) has several technical articles. I occasionally speak at conferences, like Percona Live.* Located in London.I regularly contribute to some other communities, but I've never been involved in the PostgreSQL community. I would be happy to start with this project.Cheers,Federico",
"msg_date": "Sat, 4 May 2019 12:38:41 +0100",
"msg_from": "Federico Razzoli <federico.razzoli.dba@gmail.com>",
"msg_from_op": true,
"msg_subject": "season of docs proposal"
}
] |
[
{
"msg_contents": "In side-note in another thread Tom pointed out the speed improvements of\nusing an autoconf cache when re-building, which sounded nice to me as\nconfig takes an annoyingly long time and is not parallelized.\n\nBut the config.cache files gets deleted by make maintainer-clean. Doesn't\nthat mostly defeat the purpose of having a cache? Am I doing something\nwrong here, or just thinking about it wrong?\n\ntime ./configure --config-cache > /dev/null\nreal 0m21.538s\n\ntime ./configure --config-cache > /dev/null\nreal 0m3.425s\n\nmake maintainer-clean > /dev/null ;\n## presumably git checkout a new commit here\ntime ./configure --config-cache > /dev/null\nreal 0m21.260s\n\nCheers,\n\nJeff\n\nIn side-note in another thread Tom pointed out the speed improvements of using an autoconf cache when re-building, which sounded nice to me as config takes an annoyingly long time and is not parallelized.But the config.cache files gets deleted by make maintainer-clean. Doesn't that mostly defeat the purpose of having a cache? Am I doing something wrong here, or just thinking about it wrong?time ./configure --config-cache > /dev/null real 0m21.538stime ./configure --config-cache > /dev/null real 0m3.425smake maintainer-clean > /dev/null ; ## presumably git checkout a new commit heretime ./configure --config-cache > /dev/null real 0m21.260sCheers,Jeff",
"msg_date": "Sat, 4 May 2019 09:16:57 -0400",
"msg_from": "Jeff Janes <jeff.janes@gmail.com>",
"msg_from_op": true,
"msg_subject": "make maintainer-clean and config.cache"
},
{
"msg_contents": "Jeff Janes <jeff.janes@gmail.com> writes:\n> But the config.cache files gets deleted by make maintainer-clean. Doesn't\n> that mostly defeat the purpose of having a cache? Am I doing something\n> wrong here, or just thinking about it wrong?\n\nWell, a few things about that:\n\n(1) distclean *must* remove config.cache to be sure we don't accidentally\ninclude one in tarballs;\n\n(2) it's also a good idea for distclean to remove it so that the common\npattern \"make distclean; git pull\" doesn't leave you with a stale cache\nfile if somebody updated configure;\n\n(3) IMO, letting it default to config.cache isn't best practice anyway.\n\nI always use --cache-file to select a cache file, which I keep outside\nthe source directory. The main advantage of that is that it's possible\nto switch between different CFLAGS settings, different compilers, etc,\nwithout having to lose your cache, by instead specifying a different\ncache file for each set of settings. I don't take this as far as some\npeople might want to --- I just keep one for default CFLAGS and one for\nnon-default, per branch. Somebody who was more into performance testing\nthan I might have a more complex rule for that.\n\nFor amusement's sake, here's (most of) my standard shell script for\ninvoking PG's configure. 
This goes along with a setup script that\nsets $PGINSTROOT and some other variables depending on which branch\nI'm testing:\n\n# This provides caching for just one non-default CFLAGS setting per branch,\n# but that seems like enough for now.\nif [ x\"$CFLAGS\" = x\"\" ]; then\n ACCACHEFILE=\"$HOME/accache/config-`basename $PGINSTROOT`.cache\"\nelse\n ACCACHEFILE=\"$HOME/accache/config-`basename $PGINSTROOT`-cflags.cache\"\nfi\n\n# Trash the cache file when configure script changes.\nif [ ./configure -nt \"$ACCACHEFILE\" ]; then\n rm -f \"$ACCACHEFILE\"\nfi\n\n./configure --with-pgport=\"$DEFPORT\" --prefix=\"$PGINSTROOT\" \\\n\t--cache-file=\"$ACCACHEFILE\" \\\n\t--enable-debug --enable-cassert $PGCONFIGOPTS \"$@\"\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 04 May 2019 11:39:04 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: make maintainer-clean and config.cache"
}
] |
[
{
"msg_contents": "Hi\n\nI am Raghav Jajodia, a software engineer from Bangalore, India. I have been\na diligent open source contributor and have been a student under the\nfollowing:\n1. Google Summer of Code 2017 student\n2. OWASP Code Sprint Winner\n3. FOSSASIA CodeHeat 2018 Grand Prize winner\n\nI have mentored more that 250 students in technology, majority being\nfemales to improve diversity in tech space. I have served as a mentor in\nvarious programs like\n1. Google Summer of Code 2018\n2. Google Code In 2018\n3. IIT Kharagpur Winter of Code 2017\n4. Rails Girls Summer of Code Coach\n5. Wootech Mentorship Program Singapore\n6. GirlScript Summer of Code 2017\n7. LearnITGirl (4th Edition) 2019\n\nI have made code contributions to challenging technology organisations like\nZulip, FOSSASIA, NRNB, OWASP etc. I am proficient in the technologies used\nin the development by the organisations. I am fluent in written and verbal\nEnglish. I have been making code and documentation contributions since last\n4 years. I have gone through the wiki for the Google Season of Docs 2019.\n\nApart from building a solid proposal, I would like to know, what exactly\nshould I do in this period to increase my chances of selection. I am\nlooking forward to be a long term contributor, both in terms of code and\ndocumentation for PostgreSQL organisation. Looking forward to your response.\n\nThanks,\nRaghav Jajodia\nGithub: https://github.com/jajodiaraghav\nLinkedin: https://www.linkedin.com/in/jajodiaraghav\n\nHiI am Raghav Jajodia, a software engineer from Bangalore, India. I have been a diligent open source contributor and have been a student under the following:1. Google Summer of Code 2017 student2. OWASP Code Sprint Winner3. FOSSASIA CodeHeat 2018 Grand Prize winnerI have mentored more that 250 students in technology, majority being females to improve diversity in tech space. I have served as a mentor in various programs like1. Google Summer of Code 20182. Google Code In 20183. 
IIT Kharagpur Winter of Code 20174. Rails Girls Summer of Code Coach5. Wootech Mentorship Program Singapore6. GirlScript Summer of Code 20177. LearnITGirl (4th Edition) 2019I have made code contributions to challenging technology organisations like Zulip, FOSSASIA, NRNB, OWASP etc. I am proficient in the technologies used in the development by the organisations. I am fluent in written and verbal English. I have been making code and documentation contributions since last 4 years. I have gone through the wiki for the Google Season of Docs 2019.Apart from building a solid proposal, I would like to know, what exactly should I do in this period to increase my chances of selection. I am looking forward to be a long term contributor, both in terms of code and documentation for PostgreSQL organisation. Looking forward to your response.Thanks,Raghav JajodiaGithub: https://github.com/jajodiaraghavLinkedin: https://www.linkedin.com/in/jajodiaraghav",
"msg_date": "Sun, 5 May 2019 23:02:12 +0530",
"msg_from": "Raghav Jajodia <jajodia.raghav@gmail.com>",
"msg_from_op": true,
"msg_subject": "Google Season of Docs 2019 - PostgreSQL"
},
{
"msg_contents": "Greetings,\n\n* Raghav Jajodia (jajodia.raghav@gmail.com) wrote:\n> I am Raghav Jajodia, a software engineer from Bangalore, India. I have been\n> a diligent open source contributor and have been a student under the\n> following:\n> 1. Google Summer of Code 2017 student\n> 2. OWASP Code Sprint Winner\n> 3. FOSSASIA CodeHeat 2018 Grand Prize winner\n\nGSoD, based on my understanding, is not intended as an internship and is\nnot comparable to GSoC in that regard. Instead, it's for experienced\ntechnical writers.\n\n> Apart from building a solid proposal, I would like to know, what exactly\n> should I do in this period to increase my chances of selection. I am\n> looking forward to be a long term contributor, both in terms of code and\n> documentation for PostgreSQL organisation. Looking forward to your response.\n\nI would suggest that you reach out to Google to discuss if GSoD is a\ngood fit for you. Note also that the PG GSoD wiki page asks for\nproposals to be sent to the pgsql-docs mailing list, not here.\n\nThanks,\n\nStephen",
"msg_date": "Mon, 20 May 2019 09:46:37 -0400",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: Google Season of Docs 2019 - PostgreSQL"
}
] |
[
{
"msg_contents": "Hi,\n\nConsider the following test scenario:\n\ncreate table test ( c1 serial, c2 int not null ) partition by list (c2);\ncreate table test_p1 partition of test for values in ( 1);\ncreate table test_p2 partition of test for values in ( 2);\n\nrushabh@rushabh:postgresql$ ./db/bin/pg_dump db1 > dump.sql\n\nWhile restoring above dump it's throwing a below error:\n\nCREATE TABLE\npsql:dump.sql:66: ERROR: column \"c1\" in child table must be marked NOT NULL\nALTER TABLE\nCREATE TABLE\npsql:dump.sql:79: ERROR: column \"c1\" in child table must be marked NOT NULL\nALTER TABLE\n\nProblem got introduced with below commit:\n\ncommit 3b23552ad8bbb1384381b67f860019d14d5b680e\nAuthor: Alvaro Herrera <alvherre@alvh.no-ip.org>\nDate: Wed Apr 24 15:30:37 2019 -0400\n\n Make pg_dump emit ATTACH PARTITION instead of PARTITION OF\n\nAbove commit use ATTACH PARTITION instead of PARTITION OF, that\nmeans CREATE TABLE get build with attributes for each child. I found\nthat NOT NULL constraints not getting dump for the child table and that\nis the reason restore end up with above error.\n\nLooking at code found the below code which skip the NULL NULL\nconstraint for the inherited table - and which is the reason it also\nit end up not emitting the NOT NULL constraint for child table:\n\n /*\n * Not Null constraint --- suppress if inherited, except\n * in binary-upgrade case where that won't work.\n */\n bool has_notnull = (tbinfo->notnull[j] &&\n (!tbinfo->inhNotNull[j] ||\n dopt->binary_upgrade));\n\nPFA patch to fix the issue, which allow to dump the NOT NULL\nfor partition table.\n\nPS: we also need to backport this to v11.\n\nThanks,\n-- \nRushabh Lathia\nwww.EnterpriseDB.com",
"msg_date": "Mon, 6 May 2019 11:13:51 +0530",
"msg_from": "Rushabh Lathia <rushabh.lathia@gmail.com>",
"msg_from_op": true,
"msg_subject": "pg_dump: fail to restore partition table with serial type"
},
{
    "msg_contents": "Found another scenario where check constraint is not getting\ndump for the child table.\n\nTestcase:\n\ncreate table test ( c1 serial, c2 int not null, c3 integer CHECK (c3 > 0))\npartition by list (c2);\ncreate table test_p1 partition of test for values in ( 1);\ncreate table test_p2 partition of test for values in ( 2);\n\nIn the above test, check constraint for column c3 is not getting\ndump with CREATE TABLE, and that is the reason ATTACH\nPARTITION is failing.\n\nSeems like need to handle NOT NULL and CHECK CONSTRAINT\ndifferently than the inheritance table.\n\n\n\n\nOn Mon, May 6, 2019 at 11:13 AM Rushabh Lathia <rushabh.lathia@gmail.com>\nwrote:\n\n> Hi,\n>\n> Consider the following test scenario:\n>\n> create table test ( c1 serial, c2 int not null ) partition by list (c2);\n> create table test_p1 partition of test for values in ( 1);\n> create table test_p2 partition of test for values in ( 2);\n>\n> rushabh@rushabh:postgresql$ ./db/bin/pg_dump db1 > dump.sql\n>\n> While restoring above dump it's throwing a below error:\n>\n> CREATE TABLE\n> psql:dump.sql:66: ERROR: column \"c1\" in child table must be marked NOT\n> NULL\n> ALTER TABLE\n> CREATE TABLE\n> psql:dump.sql:79: ERROR: column \"c1\" in child table must be marked NOT\n> NULL\n> ALTER TABLE\n>\n> Problem got introduced with below commit:\n>\n> commit 3b23552ad8bbb1384381b67f860019d14d5b680e\n> Author: Alvaro Herrera <alvherre@alvh.no-ip.org>\n> Date: Wed Apr 24 15:30:37 2019 -0400\n>\n> Make pg_dump emit ATTACH PARTITION instead of PARTITION OF\n>\n> Above commit use ATTACH PARTITION instead of PARTITION OF, that\n> means CREATE TABLE get build with attributes for each child. I found\n> that NOT NULL constraints not getting dump for the child table and that\n> is the reason restore end up with above error.\n>\n> Looking at code found the below code which skip the NULL NULL\n> constraint for the inherited table - and which is the reason it also\n> it end up not emitting the NOT NULL constraint for child table:\n>\n> /*\n> * Not Null constraint --- suppress if inherited,\n> except\n> * in binary-upgrade case where that won't work.\n> */\n> bool has_notnull = (tbinfo->notnull[j] &&\n> (!tbinfo->inhNotNull[j] ||\n> dopt->binary_upgrade));\n>\n> PFA patch to fix the issue, which allow to dump the NOT NULL\n> for partition table.\n>\n> PS: we also need to backport this to v11.\n>\n> Thanks,\n> --\n> Rushabh Lathia\n> www.EnterpriseDB.com\n>\n>\n\n-- \nRushabh Lathia",
"msg_date": "Mon, 6 May 2019 14:25:39 +0530",
"msg_from": "Rushabh Lathia <rushabh.lathia@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: pg_dump: fail to restore partition table with serial type"
},
{
    "msg_contents": "On 2019-May-06, Rushabh Lathia wrote:\n\n> Found another scenario where check constraint is not getting\n> dump for the child table.\n\nYou're right, the patched code is bogus; I'm reverting it all for\ntoday's minors. Thanks for reporting.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Mon, 6 May 2019 12:13:31 -0400",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_dump: fail to restore partition table with serial type"
},
{
    "msg_contents": "On 2019-May-06, Alvaro Herrera wrote:\n\n> On 2019-May-06, Rushabh Lathia wrote:\n> \n> > Found another scenario where check constraint is not getting\n> > dump for the child table.\n> \n> You're right, the patched code is bogus; I'm reverting it all for\n> today's minors. Thanks for reporting.\n\nHere's another version of this patch. This time, I added some real\ntests in pg_dump's suite, including a SERIAL column and NOT NULL\nconstraints. The improved test verifies that the partition is created\nseparately and later attached, and it includes constraints from the\nparent as well as some locally defined ones. I also added tests with\nlegacy inheritance, which was not considered previously in pg_dump tests\nas far as I could see.\n\nI looked for other cases that could have been broken by changing the\npartition creation methodology in pg_dump, and didn't find anything.\nThat part of pg_dump (dumpTableSchema) is pretty spaghettish, though;\nthe fact that shouldPrintColumn makes some partitioned-related decisions\nand then dumpTableSchema make them again is notoriously confusing. I\ncould have easily missed something.\n\n\nOne weird thing about pg_dump's output of the serial column in a\npartitioned table is that it emits the parent table itself first without\na DEFAULT clause, then the sequence and marks it as owned by the column;\nthen it emits the partition *with* the default clause, and finally it\nalters the parent table's column to set the default. Now there is some\nmethod in this madness (the OWNED BY clause for the sequence is mingled\ntogether with the sequence itself), but I think this arrangement makes\na partial restore of the partition fail.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services",
"msg_date": "Fri, 7 Jun 2019 14:36:41 -0400",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_dump: fail to restore partition table with serial type"
},
{
    "msg_contents": "On 2019-Jun-07, Alvaro Herrera wrote:\n\n> I looked for other cases that could have been broken by changing the\n> partition creation methodology in pg_dump, and didn't find anything.\n> That part of pg_dump (dumpTableSchema) is pretty spaghettish, though;\n> the fact that shouldPrintColumn makes some partitioned-related decisions\n> and then dumpTableSchema make them again is notoriously confusing. I\n> could have easily missed something.\n\nThere was indeed one more problem, that only the pg10 pg_upgrade test\ndetected. Namely, binary-upgrade dump didn't restore for locally\ndefined constraints: they were dumped twice, first in the table\ndefinition and later by the ALTER TABLE ADD CONSTRAINT bit for binary\nupgrade that I had failed to notice. Ooops. The reason pg10 detected\nit and the other branches didn't, is that the only constraint of this\nilk that remained after running regress was removed by 05bd889904e0 :-(\n\nPushed to the three branches. Hopefully it won't explode as\nspectacularly this time ...\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Mon, 10 Jun 2019 19:07:52 -0400",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_dump: fail to restore partition table with serial type"
},
{
"msg_contents": "Alvaro Herrera <alvherre@2ndquadrant.com> writes:\n> There was indeed one more problem, that only the pg10 pg_upgrade test\n> detected. Namely, binary-upgrade dump didn't restore for locally\n> defined constraints: they were dumped twice, first in the table\n> definition and later by the ALTER TABLE ADD CONSTRAINT bit for binary\n> upgrade that I had failed to notice. Ooops. The reason pg10 detected\n> it and the other branches didn't, is that the only constraint of this\n> ilk that remained after running regress was removed by 05bd889904e0 :-(\n\nSeems like we'd better put back some coverage for that case, no?\nBut I'm confused by your reference to 05bd889904e0. It looks like\nthat didn't change anything about tables that weren't getting dropped\nanyhow.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 12 Jun 2019 20:46:57 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: pg_dump: fail to restore partition table with serial type"
},
{
    "msg_contents": "On 2019-Jun-12, Tom Lane wrote:\n\n> Alvaro Herrera <alvherre@2ndquadrant.com> writes:\n> > There was indeed one more problem, that only the pg10 pg_upgrade test\n> > detected. Namely, binary-upgrade dump didn't restore for locally\n> > defined constraints: they were dumped twice, first in the table\n> > definition and later by the ALTER TABLE ADD CONSTRAINT bit for binary\n> > upgrade that I had failed to notice. Ooops. The reason pg10 detected\n> > it and the other branches didn't, is that the only constraint of this\n> > ilk that remained after running regress was removed by 05bd889904e0 :-(\n> \n> Seems like we'd better put back some coverage for that case, no?\n\nI'll work on that.\n\n> But I'm confused by your reference to 05bd889904e0. It looks like\n> that didn't change anything about tables that weren't getting dropped\n> anyhow.\n\nAh ... yeah, I pasted the wrong commit ID. That commit indeed removed\none occurrence of constraint check_b, but it wasn't the one that\ndetected the failure -- the one that did (also named check_b) was\nremoved by commit 6f6b99d1335b (pg11 only).\n\nCommit aa56671836e6 (in pg10, two months after 05bd889904e0) changed\nthose tables afterwards so that they wouldn't be dropped. \n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Wed, 12 Jun 2019 23:26:43 -0400",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_dump: fail to restore partition table with serial type"
}
] |
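The suppression condition quoted in the thread above (from pg_dump's dumpTableSchema) is compact enough to model outside of C. The sketch below is a hypothetical Python rendering of that condition, not pg_dump's actual code; it illustrates why an inherited NOT NULL on a partition column (such as the serial `c1` in the test case) was silently dropped once partitions were dumped as CREATE TABLE plus ATTACH PARTITION. The `is_partition` parameter reflects the direction of the fix discussed in the thread, not the exact committed logic.

```python
def should_emit_not_null(notnull, inherited, binary_upgrade, is_partition=False):
    """Model of the quoted pg_dump condition: suppress NOT NULL when it
    is inherited, except in binary-upgrade mode -- and, per the fix
    direction discussed in the thread, except for partitions, whose
    CREATE TABLE must now repeat the constraint so that the later
    ATTACH PARTITION succeeds."""
    if is_partition:
        # ATTACH PARTITION requires the child definition to carry its
        # own NOT NULL markings, so never suppress here.
        return notnull
    return notnull and (not inherited or binary_upgrade)


def buggy_should_emit_not_null(notnull, inherited, binary_upgrade):
    # The pre-fix condition, verbatim in spirit: an inherited NOT NULL
    # on a partition column is suppressed, producing the
    # 'column "c1" in child table must be marked NOT NULL' restore error.
    return notnull and (not inherited or binary_upgrade)
```

With the pre-fix condition, a partition's serial column (notnull=True, inherited=True) gets no NOT NULL in its dumped CREATE TABLE, matching the restore failures shown at the start of the thread.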
[
{
"msg_contents": "Hi,\n\nIt seems that 582edc369cd caused $subject.\n\nTrivial fix attached, though I obviously didn't actually test it\nagainst such server.",
"msg_date": "Mon, 6 May 2019 10:04:45 +0200",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": true,
"msg_subject": "reindexdb & clusterdb broken against pre-7.3 servers"
},
{
"msg_contents": "On Mon, May 6, 2019 at 10:04 AM Julien Rouhaud <rjuju123@gmail.com> wrote:\n>\n> Hi,\n>\n> It seems that 582edc369cd caused $subject.\n>\n> Trivial fix attached, though I obviously didn't actually test it\n> against such server.\n\nAhem, correct fix attached. I'm going to get a coffee and hide for\nthe rest of the day.",
"msg_date": "Mon, 6 May 2019 10:17:05 +0200",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: reindexdb & clusterdb broken against pre-7.3 servers"
},
{
"msg_contents": "\nOn 5/6/19 4:17 AM, Julien Rouhaud wrote:\n> On Mon, May 6, 2019 at 10:04 AM Julien Rouhaud <rjuju123@gmail.com> wrote:\n>> Hi,\n>>\n>> It seems that 582edc369cd caused $subject.\n>>\n>> Trivial fix attached, though I obviously didn't actually test it\n>> against such server.\n> Ahem, correct fix attached. I'm going to get a coffee and hide for\n> the rest of the day.\n\n\n\nWhy do we even have code referring to pre-7.3 servers? Wouldn't it be\nsimpler just to remove that code?\n\n\ncheers\n\n\nandrew\n\n\n-- \nAndrew Dunstan https://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n\n",
"msg_date": "Mon, 6 May 2019 08:32:44 -0400",
"msg_from": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: reindexdb & clusterdb broken against pre-7.3 servers"
},
{
"msg_contents": "On Mon, May 06, 2019 at 08:32:44AM -0400, Andrew Dunstan wrote:\n> Why do we even have code referring to pre-7.3 servers? Wouldn't it be\n> simpler just to remove that code?\n\nEven for pg_dump, we only support servers down to 8.0. Let's nuke\nthis code.\n--\nMichael",
"msg_date": "Mon, 6 May 2019 21:49:47 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: reindexdb & clusterdb broken against pre-7.3 servers"
},
{
"msg_contents": "Greetings,\n\n* Michael Paquier (michael@paquier.xyz) wrote:\n> On Mon, May 06, 2019 at 08:32:44AM -0400, Andrew Dunstan wrote:\n> > Why do we even have code referring to pre-7.3 servers? Wouldn't it be\n> > simpler just to remove that code?\n> \n> Even for pg_dump, we only support servers down to 8.0. Let's nuke\n> this code.\n\nAgreed.\n\nSeems like we should probably have all of our client tools in-sync\nregarding what version they support down to. There's at least some\ncode in psql that tries to work with pre-8.0 too (around tablespaces and\nsavepoints, specifically, it looks like), but I have doubts that recent\nchanges to psql have been tested back to pre-8.0.\n\nAt least... for the client tools that support multiple major versions.\nSeems a bit unfortunate that we don't really define formally anywhere\nwhich tools in src/bin/ work with multiple major versions and which\ndon't, even though that's a pretty big distinction and one that matters\nto packagers and users.\n\nThanks,\n\nStephen",
"msg_date": "Mon, 6 May 2019 09:45:39 -0400",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: reindexdb & clusterdb broken against pre-7.3 servers"
},
{
"msg_contents": "Michael Paquier <michael@paquier.xyz> writes:\n> On Mon, May 06, 2019 at 08:32:44AM -0400, Andrew Dunstan wrote:\n>> Why do we even have code referring to pre-7.3 servers? Wouldn't it be\n>> simpler just to remove that code?\n\n> Even for pg_dump, we only support servers down to 8.0. Let's nuke\n> this code.\n\n+1. I think psql claims to support down to 7.4, but that's still not\na precedent for trying to handle pre-7.3. Also, the odds that we'd\nnot break this code path again in future seem pretty bad.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 06 May 2019 09:49:07 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: reindexdb & clusterdb broken against pre-7.3 servers"
},
{
"msg_contents": "On Mon, May 6, 2019 at 3:49 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Michael Paquier <michael@paquier.xyz> writes:\n> > On Mon, May 06, 2019 at 08:32:44AM -0400, Andrew Dunstan wrote:\n> >> Why do we even have code referring to pre-7.3 servers? Wouldn't it be\n> >> simpler just to remove that code?\n>\n> > Even for pg_dump, we only support servers down to 8.0. Let's nuke\n> > this code.\n>\n> +1. I think psql claims to support down to 7.4, but that's still not\n> a precedent for trying to handle pre-7.3. Also, the odds that we'd\n> not break this code path again in future seem pretty bad.\n\nWFM. Updated patch attached, I also removed another similar chunk in\nthe same file while at it.",
"msg_date": "Mon, 6 May 2019 17:31:17 +0200",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: reindexdb & clusterdb broken against pre-7.3 servers"
},
{
"msg_contents": "Julien Rouhaud <rjuju123@gmail.com> writes:\n> WFM. Updated patch attached, I also removed another similar chunk in\n> the same file while at it.\n\nUh, that looks backwards:\n\n@@ -146,10 +146,6 @@ connectDatabase(const char *dbname, const char *pghost,\n \t\texit(1);\n \t}\n \n-\tif (PQserverVersion(conn) >= 70300)\n-\t\tPQclear(executeQuery(conn, ALWAYS_SECURE_SEARCH_PATH_SQL,\n-\t\t\t\t\t\t\t progname, echo));\n-\n \treturn conn;\n }\n \n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 06 May 2019 11:34:24 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: reindexdb & clusterdb broken against pre-7.3 servers"
},
{
"msg_contents": "On Mon, May 6, 2019 at 5:34 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Julien Rouhaud <rjuju123@gmail.com> writes:\n> > WFM. Updated patch attached, I also removed another similar chunk in\n> > the same file while at it.\n>\n> Uh, that looks backwards:\n\nArgh, sorry :(\n\nI'm definitely done for today.",
"msg_date": "Mon, 6 May 2019 17:39:18 +0200",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: reindexdb & clusterdb broken against pre-7.3 servers"
},
{
"msg_contents": "On Mon, May 06, 2019 at 05:39:18PM +0200, Julien Rouhaud wrote:\n> I'm definitely done for today.\n\nLooks good to me, so committed.\n--\nMichael",
"msg_date": "Tue, 7 May 2019 09:46:07 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: reindexdb & clusterdb broken against pre-7.3 servers"
},
{
"msg_contents": "Michael Paquier <michael@paquier.xyz> writes:\n> On Mon, May 06, 2019 at 05:39:18PM +0200, Julien Rouhaud wrote:\n>> I'm definitely done for today.\n\n> Looks good to me, so committed.\n\nThe originally-complained-of breakage exists in all active branches,\nso is it really OK to commit this only in HEAD?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 06 May 2019 22:23:07 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: reindexdb & clusterdb broken against pre-7.3 servers"
},
{
"msg_contents": "On Mon, May 06, 2019 at 10:23:07PM -0400, Tom Lane wrote:\n> The originally-complained-of breakage exists in all active branches,\n> so is it really OK to commit this only in HEAD?\n\nI did not think that it would be that critical for back-branches, but\nI don't mind going ahead and remove the code there as well. Are there\nany objections with it?\n\nAlso, wouldn't we want instead to apply on back-branches the first\npatch proposed on this thread which fixes the query generation for\nthis pre-7.3 related code?\n--\nMichael",
"msg_date": "Tue, 7 May 2019 12:18:15 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: reindexdb & clusterdb broken against pre-7.3 servers"
},
{
"msg_contents": "Michael Paquier <michael@paquier.xyz> writes:\n> On Mon, May 06, 2019 at 10:23:07PM -0400, Tom Lane wrote:\n>> The originally-complained-of breakage exists in all active branches,\n>> so is it really OK to commit this only in HEAD?\n\n> I did not think that it would be that critical for back-branches, but\n> I don't mind going ahead and remove the code there as well. Are there\n> any objections with it?\n\n> Also, wouldn't we want instead to apply on back-branches the first\n> patch proposed on this thread which fixes the query generation for\n> this pre-7.3 related code?\n\nGiven that we pushed out the bad code a year ago and nobody's complained,\nI think it's safe to assume that no one is using any supported release\nwith a pre-7.3 server.\n\nIt's reasonable to doubt that this is the only problem the affected\napplications would have with such a server, too. I don't see a lot\nof point in \"fixing\" this code unless somebody actually tests that.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 06 May 2019 23:24:31 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: reindexdb & clusterdb broken against pre-7.3 servers"
},
{
"msg_contents": "On Mon, May 06, 2019 at 11:24:31PM -0400, Tom Lane wrote:\n> It's reasonable to doubt that this is the only problem the affected\n> applications would have with such a server, too. I don't see a lot\n> of point in \"fixing\" this code unless somebody actually tests that.\n\nOkay, point taken. I'll go apply that to the back-branches as well.\n--\nMichael",
"msg_date": "Tue, 7 May 2019 12:39:13 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: reindexdb & clusterdb broken against pre-7.3 servers"
},
{
"msg_contents": "On Tue, May 07, 2019 at 12:39:13PM +0900, Michael Paquier wrote:\n> Okay, point taken. I'll go apply that to the back-branches as well.\n\nAnd done.\n--\nMichael",
"msg_date": "Tue, 7 May 2019 14:29:57 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: reindexdb & clusterdb broken against pre-7.3 servers"
}
] |
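The version gate removed in this thread relies on libpq's integer server-version encoding: for pre-10 releases, PQserverVersion() reports major*10000 + minor*100 + patch, so 7.3.0 is 70300 and 8.0.0 is 80000. The helper below is a hypothetical Python sketch of that encoding and of the thread's outcome (refuse servers older than 8.0 rather than carry a separate pre-7.3 query path); it is illustrative, not the actual client-tool code.

```python
def server_version_num(major, minor, patch=0):
    """Build the integer version number libpq's PQserverVersion()
    returns for pre-10 PostgreSQL releases: e.g. 7.3.0 -> 70300."""
    return major * 10000 + minor * 100 + patch


# Oldest server the scripts aim to support per the discussion above
# (pg_dump's floor, which the thread uses as the reference point).
MIN_SUPPORTED = server_version_num(8, 0)


def is_supported(version_num):
    # Instead of keeping a dedicated pre-7.3 code path (the one the
    # refactoring broke), simply reject anything older than the floor.
    return version_num >= MIN_SUPPORTED
```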
[
{
"msg_contents": "Is there a reason pg_checksums is plural and not singular, i.e.,\npg_checksum? I know it is being renamed for PG 12. It might have\nneeded to be plural when it was pg_verify_checksums.\n\n-- \n Bruce Momjian <bruce@momjian.us> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n+ As you are, so once was I. As I am, so you will be. +\n+ Ancient Roman grave inscription +\n\n\n",
"msg_date": "Mon, 6 May 2019 13:56:47 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": true,
"msg_subject": "Naming of pg_checksums"
},
{
"msg_contents": "On Mon, May 06, 2019 at 01:56:47PM -0400, Bruce Momjian wrote:\n> Is there a reason pg_checksums is plural and not singular, i.e.,\n> pg_checksum? I know it is being renamed for PG 12. It might have\n> needed to be plural when it was pg_verify_checksums.\n\nBecause it applies to checksums to many pages first, and potentially\nto more things than data checksums in the future if we want to extend\nit with more checksum-related things? In short I'd like to think that\nthe plural is just but fine. If somebody wishes to do again a\nrenaming, that's fine by me as well but I don't think the current name\nis an issue.\n--\nMichael",
"msg_date": "Tue, 7 May 2019 12:50:17 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Naming of pg_checksums"
},
{
"msg_contents": "On Mon, May 6, 2019 at 1:56 PM Bruce Momjian <bruce@momjian.us> wrote:\n> Is there a reason pg_checksums is plural and not singular, i.e.,\n> pg_checksum? I know it is being renamed for PG 12. It might have\n> needed to be plural when it was pg_verify_checksums.\n\nThat is a good question, IMHO. I am not sure whether pg_checksum is\nbetter, but I'm pretty sure it's not worse.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Tue, 7 May 2019 16:46:51 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Naming of pg_checksums"
}
] |
[
{
    "msg_contents": "Commit 8fa30f906be reduced the elevel of a number of \"can't happen\"\nerrors from PANIC to ERROR. These were all critical-section-adjacent\nerrors involved in nbtree page splits, and nbtree page deletion. It\nalso established the following convention within _bt_split(), which\nallowed Tom to keep the length of the critical section just as short\nas it had always been:\n\n/*\n * origpage is the original page to be split. leftpage is a temporary\n * buffer that receives the left-sibling data, which will be copied back\n * into origpage on success. rightpage is the new page that receives the\n * right-sibling data. If we fail before reaching the critical section,\n * origpage hasn't been modified and leftpage is only workspace. In\n * principle we shouldn't need to worry about rightpage either, because it\n * hasn't been linked into the btree page structure; but to avoid leaving\n * possibly-confusing junk behind, we are careful to rewrite rightpage as\n * zeroes before throwing any error.\n */\n\nThe INCLUDE indexes work looks like it subtly broke this, because it\nallocated memory after the initialization of the right page --\nallocating memory can always fail. On the other hand, even when\n8fa30f906be went in back in 2010 this \"rule\" was arguably broken,\nbecause we were already calling PageGetTempPage() after the right\nsibling page is initialized, which palloc()s a full BLCKSZ, which is\nfar more than truncation is every likely to allocate.\n\nOn the other other hand, it seems to me that the PageGetTempPage()\nthing might have been okay, because it happens before the high key is\ninserted on the new right buffer page. The same cannot be said for the\nway we generate a new high key for the left/old page via suffix\ntruncation, which happens to occur after the right buffer page is\nfirst modified by inserted its high key (the original/left page's\noriginal high key). I think that there may be a risk\nthat VACUUM's\npage deletion code will get confused by finding an errant right\nsibling page from a failed page split when there is a high key. If so,\nthat would be a risk that was introduced in Postgres 11, and made much\nmore likely in practice in Postgres 12. (I haven't got as far as doing\nan analysis of the risks to page deletion, though. The \"fastpath\"\nrightmost page insertion optimization that was also added to Postgres\n11 seems like it also might need to be considered here.)\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Mon, 6 May 2019 12:48:41 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": true,
"msg_subject": "_bt_split(), and the risk of OOM before its critical section"
},
{
    "msg_contents": "On Mon, May 6, 2019 at 12:48 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> On the other other hand, it seems to me that the PageGetTempPage()\n> thing might have been okay, because it happens before the high key is\n> inserted on the new right buffer page. The same cannot be said for the\n> way we generate a new high key for the left/old page via suffix\n> truncation, which happens to occur after the right buffer page is\n> first modified by inserted its high key (the original/left page's\n> original high key). I think that there may be a risk that VACUUM's\n> page deletion code will get confused by finding an errant right\n> sibling page from a failed page split when there is a high key. If so,\n> that would be a risk that was introduced in Postgres 11, and made much\n> more likely in practice in Postgres 12. (I haven't got as far as doing\n> an analysis of the risks to page deletion, though. The \"fastpath\"\n> rightmost page insertion optimization that was also added to Postgres\n> 11 seems like it also might need to be considered here.)\n\nIt seems like my fears about page deletion were well-founded, at least\nif you assume that the risk of an OOM at the wrong time is greater\nthan negligible.\n\nIf I simulate an OOM error during suffix truncation, then\nnon-rightmost page splits leave the tree in a state that confuses\nVACUUM/page deletion. When I simulate an OOM on page 42, we will later\nsee the dreaded \"failed to re-find parent key in index \"foo\" for\ndeletion target page 42\" error message from a VACUUM. That's not good.\n\nIt doesn't matter if the same things happens when splitting a\nrightmost page, which naturally doesn't insert a new high key on the\nnew right half. This confirms my theory that the PageGetTempPage()\nmemory allocation can fail without confusing VACUUM, since that\nallocation occurs before the critical-but-not-critical point (the\npoint that we really start to modify the new right half of the split).\n\nFortunately, this bug seems easy enough to fix: we can simply move the\n\"insert new high key on right page\" code so that it comes after suffix\ntruncation. This makes it safe for suffix truncation to have an OOM,\nor at least as safe as the PageGetTempPage() allocation that seems\nsafe to me.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Mon, 6 May 2019 14:51:38 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": true,
"msg_subject": "Re: _bt_split(), and the risk of OOM before its critical section"
},
{
"msg_contents": "Peter Geoghegan <pg@bowt.ie> writes:\n> Commit 8fa30f906be reduced the elevel of a number of \"can't happen\"\n> errors from PANIC to ERROR. These were all critical-section-adjacent\n> errors involved in nbtree page splits, and nbtree page deletion. It\n> also established the following convention within _bt_split(), which\n> allowed Tom to keep the length of the critical section just as short\n> as it had always been:\n\n> /*\n> * origpage is the original page to be split. leftpage is a temporary\n> * buffer that receives the left-sibling data, which will be copied back\n> * into origpage on success. rightpage is the new page that receives the\n> * right-sibling data. If we fail before reaching the critical section,\n> * origpage hasn't been modified and leftpage is only workspace. In\n> * principle we shouldn't need to worry about rightpage either, because it\n> * hasn't been linked into the btree page structure; but to avoid leaving\n> * possibly-confusing junk behind, we are careful to rewrite rightpage as\n> * zeroes before throwing any error.\n> */\n\n> The INCLUDE indexes work looks like it subtly broke this, because it\n> allocated memory after the initialization of the right page --\n> allocating memory can always fail.\n\nYeah, as _bt_split is currently coded, _bt_truncate has to be a \"no\nerrors\" function, which it isn't. The pfree for its result is being\ndone in an ill-chosen place, too.\n\nAnother problem now that I look at it is that the _bt_getbuf for the right\nsibling is probably not too safe. 
And the _bt_vacuum_cycleid() call seems\na bit dangerous from this standpoint as well.\n\n> On the other hand, even when\n> 8fa30f906be went in back in 2010 this \"rule\" was arguably broken,\n> because we were already calling PageGetTempPage() after the right\n> sibling page is initialized, which palloc()s a full BLCKSZ, which is\n> far more than truncation is ever likely to allocate.\n\nI'm not really concerned about that one because at that point the\nright page is still in a freshly-pageinit'd state. It's perhaps\nnot quite as nice as having it be zeroes, but it won't look like\nit has any interesting data. (But, having said that, we could\ncertainly reorder the code to construct the temp page first.)\n\nIn any case, once we've started to fill the ropaque area, it would really\nbe better if we don't call anything that could throw errors.\n\nMaybe we should bite the bullet and use two temp pages, so that none\nof the data ends up in the shared buffer arena until we reach the\ncritical section? The extra copying is slightly annoying, but\nit certainly seems like enforcing this invariant over such a\nlong stretch of code is not very maintainable.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 06 May 2019 18:29:11 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: _bt_split(), and the risk of OOM before its critical section"
},
{
"msg_contents": "On Mon, May 6, 2019 at 3:29 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Yeah, as _bt_split is currently coded, _bt_truncate has to be a \"no\n> errors\" function, which it isn't. The pfree for its result is being\n> done in an ill-chosen place, too.\n\nI am tempted to move the call to _bt_truncate() out of _bt_split()\nentirely on HEAD, possibly relocating it to\nnbtsplitloc.c/_bt_findsplitloc(). That way, there is a clearer\nseparation between how split points are chosen, suffix truncation, and\nthe mechanical process of executing a legal page split.\n\n> Another problem now that I look at it is that the _bt_getbuf for the right\n> sibling is probably not too safe. And the _bt_vacuum_cycleid() call seems\n> a bit dangerous from this standpoint as well.\n\nYeah, we can tighten those up without much difficulty.\n\n> I'm not really concerned about that one because at that point the\n> right page is still in a freshly-pageinit'd state. It's perhaps\n> not quite as nice as having it be zeroes, but it won't look like\n> it has any interesting data.\n\nThe important question is how VACUUM will recognize it. It's clearly\nnot as bad as something that causes \"failed to re-find parent key\"\nerrors, but I think that VACUUM might not be reclaiming it for the FSM\n(haven't checked). Note that _bt_unlink_halfdead_page() is perfectly\nhappy to ignore the fact that the left sibling of a half-dead page has\na rightlink that doesn't point back to the target. Because, uh, there\nmight have been a concurrent page deletion, somehow.\n\nWe have heard a lot about \"failed to re-find parent key\" errors from\nVACUUM before now because that is about the only strong cross-check\nthat it does. 
(Not that I'm arguing that we need more of that.)\n\n> In any case, once we've started to fill the ropaque area, it would really\n> be better if we don't call anything that could throw errors.\n>\n> Maybe we should bite the bullet and use two temp pages, so that none\n> of the data ends up in the shared buffer arena until we reach the\n> critical section? The extra copying is slightly annoying, but\n> it certainly seems like enforcing this invariant over such a\n> long stretch of code is not very maintainable.\n\nWhile I think that the smarts we have around deciding a split point\nwill probably improve in future releases, and that we'll probably make\n_bt_truncate() itself do more, the actual business of performing a\nsplit has no reason to change that I can think of. I would like to\nkeep _bt_split() as simple as possible anyway -- it should only be\ncopying tuples using simple primitives like the bufpage.c routines.\nLiving with what we have now (not using a temp buffer for the right\npage) seems fine.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Mon, 6 May 2019 16:11:55 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": true,
"msg_subject": "Re: _bt_split(), and the risk of OOM before its critical section"
},
{
"msg_contents": "On Mon, May 6, 2019 at 4:11 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> The important question is how VACUUM will recognize it. It's clearly\n> not as bad as something that causes \"failed to re-find parent key\"\n> errors, but I think that VACUUM might not be reclaiming it for the FSM\n> (haven't checked). Note that _bt_unlink_halfdead_page() is perfectly\n> happy to ignore the fact that the left sibling of a half-dead page has\n> a rightlink that doesn't point back to the target. Because, uh, there\n> might have been a concurrent page deletion, somehow.\n\nVACUUM asserts P_FIRSTDATAKEY(opaque) > PageGetMaxOffsetNumber(page)\nwithin _bt_mark_page_halfdead(), but doesn't test that condition in\nrelease builds. This means that the earliest modifications of the\nright page, before the high key PageAddItem(), are enough to cause a\nsubsequent \"failed to re-find parent key\" failure in VACUUM. Merely\nsetting the sibling blocks in the right page special area is enough to\ncause VACUUM to refuse to run.\n\nOf course, the problem goes away if you restart the database, because\nthe right page buffer is never marked dirty, and never can be. That\nfactor would probably make the problem appear to be an intermittent\nissue in the kinds of environments where it is most likely to be seen.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Mon, 6 May 2019 17:15:30 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": true,
"msg_subject": "Re: _bt_split(), and the risk of OOM before its critical section"
},
{
"msg_contents": "On Mon, May 6, 2019 at 5:15 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> VACUUM asserts P_FIRSTDATAKEY(opaque) > PageGetMaxOffsetNumber(page)\n> within _bt_mark_page_halfdead(), but doesn't test that condition in\n> release builds. This means that the earliest modifications of the\n> right page, before the high key PageAddItem(), are enough to cause a\n> subsequent \"failed to re-find parent key\" failure in VACUUM. Merely\n> setting the sibling blocks in the right page special area is enough to\n> cause VACUUM to refuse to run.\n\nTo be clear, my point here was that this confirms what you said about\nPageGetTempPage() failing after _bt_getbuf() has initialized the\nbuffer for the new right page -- that is not in itself a problem.\nHowever, practically any other change to the right page that might\noccur before an error is raised within _bt_split() is a problem -- not\njust adding a new item. (You were right about that, too.)\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Mon, 6 May 2019 17:26:09 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": true,
"msg_subject": "Re: _bt_split(), and the risk of OOM before its critical section"
},
{
"msg_contents": "On Mon, May 6, 2019 at 4:11 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> I am tempted to move the call to _bt_truncate() out of _bt_split()\n> entirely on HEAD, possibly relocating it to\n> nbtsplitloc.c/_bt_findsplitloc(). That way, there is a clearer\n> separation between how split points are chosen, suffix truncation, and\n> the mechanical process of executing a legal page split.\n\nI decided against that -- better to make it clear how truncation deals\nwith space overhead within _bt_split(). Besides, the resource\nmanagement around sharing a maybe-palloc()'d high key across module\nboundaries seems complicated, and best avoided.\n\nAttached draft patch for HEAD fixes the bug by organizing _bt_split()\ninto clear phases. _bt_split() now works as follows, which is a little\ndifferent:\n\n* An initial phase that is entirely concerned with the left page temp\nbuffer itself -- initializes its special area.\n\n* Suffix truncation to get left page's new high key, and then add it\nto left page.\n\n* A phase that is mostly concerned with initializing the right page\nspecial area, but also finishes off one or two details about the left\npage that needed to be delayed. This is also where the \"shadow\ncritical section\" begins. Note also that this is where\n_bt_vacuum_cycleid() is called, because its contract actually\n*requires* that caller has a buffer lock on both pages at once. This\nshould not be changed on the grounds that _bt_vacuum_cycleid() might\nfail (nor for any other reason).\n\n* Add new high key to right page if needed. (No change, other than the\nfact that it happens later now.)\n\n* Add other items to both leftpage and rightpage. Critical section\nthat copies leftpage into origpage buffer. (No changes here.)\n\nI suppose I'm biased, but I prefer the new approach anyway. Adding the\nleft high key first, and then the right high key seems simpler and\nmore logical. It emphasizes the similarities and differences between\nleftpage and rightpage. 
Furthermore, this approach fixes the\ntheoretical risk of leaving behind a minimally-initialized nbtree page\nthat has existed since 2010. We don't allocate *any* memory after the\npoint that a new rightpage buffer is acquired.\n\nI suppose that this will need to be backpatched.\n\nThoughts?\n--\nPeter Geoghegan",
"msg_date": "Tue, 7 May 2019 18:15:01 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": true,
"msg_subject": "Re: _bt_split(), and the risk of OOM before its critical section"
},
{
"msg_contents": "On Tue, May 7, 2019 at 6:15 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> I suppose I'm biased, but I prefer the new approach anyway. Adding the\n> left high key first, and then the right high key seems simpler and\n> more logical. It emphasizes the similarities and differences between\n> leftpage and rightpage.\n\nI came up with a better way of doing it in the attached revision. Now,\n_bt_split() calls _bt_findsplitloc() directly. This makes it possible\nto significantly simplify the signature of _bt_split().\n\nIt makes perfect sense for _bt_split() to call _bt_findsplitloc()\ndirectly, since _bt_findsplitloc() is already aware of almost every\n_bt_split() implementation detail, whereas those same details are not\nof interest anywhere else. _bt_findsplitloc() also knows all about\nsuffix truncation. It's also nice that the actual _bt_truncate() call\nis closely tied to the _bt_findsplitloc() call.\n\n-- \nPeter Geoghegan",
"msg_date": "Wed, 8 May 2019 15:37:50 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": true,
"msg_subject": "Re: _bt_split(), and the risk of OOM before its critical section"
},
{
"msg_contents": "On Wed, May 8, 2019 at 3:37 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> It makes perfect sense for _bt_split() to call _bt_findsplitloc()\n> directly, since _bt_findsplitloc() is already aware of almost every\n> _bt_split() implementation detail, whereas those same details are not\n> of interest anywhere else.\n\nI discovered that it even used to work like that until 1997, when\ncommit 71b3e93c505 added handling of duplicate index tuples. Tom\nripped the duplicate handling stuff out a couple of years later, for\nwhat seemed to me to be very good reasons, but _bt_findsplitloc()\nremained outside of _bt_split() until now.\n\nI intend to push ahead with the fix for both v11 and HEAD on Monday.\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Thu, 9 May 2019 15:32:52 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": true,
"msg_subject": "Re: _bt_split(), and the risk of OOM before its critical section"
}
] |
[
{
"msg_contents": "Hi\n\nCommit cc8d4151 [*] introduced a dependency between some functions in\nlibpgcommon and libpgfeutils, which is not reflected in the linker options\nprovided when building an external program using PGXS, e.g. attempting to\nbuild the attached (trivial) example results in:\n\n $ PATH=$PG_HEAD:$PATH USE_PGXS=1 make\n gcc -std=gnu99 -Wall -Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-statement -Werror=vla -Wendif-labels -Wmissing-format-attribute -Wformat-security -fno-strict-aliasing -fwrapv -fexcess-precision=standard -g -ggdb -Og -g3 -fno-omit-frame-pointer -I. -I./ -I/home/ibarwick/devel/builds/HEAD/include/postgresql/server -I/home/ibarwick/devel/builds/HEAD/include/postgresql/internal -D_GNU_SOURCE -c -o pgxs-test.o pgxs-test.c\n gcc -std=gnu99 -Wall -Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-statement -Werror=vla -Wendif-labels -Wmissing-format-attribute -Wformat-security -fno-strict-aliasing -fwrapv -fexcess-precision=standard -g -ggdb -Og -g3 -fno-omit-frame-pointer pgxs-test.o -L/home/ibarwick/devel/builds/HEAD/lib -Wl,--as-needed -Wl,-rpath,'/home/ibarwick/devel/builds/HEAD/lib',--enable-new-dtags -L/home/ibarwick/devel/builds/HEAD/lib -lpgcommon -lpgport -L/home/ibarwick/devel/builds/HEAD/lib -lpq -lpgcommon -lpgport -lpthread -lssl -lcrypto -lgssapi_krb5 -lz -lreadline -lrt -lcrypt -ldl -lm -o pgxs-test\n /home/ibarwick/devel/builds/HEAD/lib/libpgcommon.a(pgfnames.o): In function `pgfnames':\n /home/ibarwick/devel/postgresql/src/common/pgfnames.c:48: undefined reference to `__pg_log_level'\n /home/ibarwick/devel/postgresql/src/common/pgfnames.c:48: undefined reference to `pg_log_generic'\n /home/ibarwick/devel/postgresql/src/common/pgfnames.c:69: undefined reference to `__pg_log_level'\n /home/ibarwick/devel/postgresql/src/common/pgfnames.c:69: undefined reference to `pg_log_generic'\n /home/ibarwick/devel/postgresql/src/common/pgfnames.c:74: undefined reference to `__pg_log_level'\n 
/home/ibarwick/devel/postgresql/src/common/pgfnames.c:74: undefined reference to `pg_log_generic'\n collect2: error: ld returned 1 exit status\n make: *** [pgxs-test] Error 1\n\nwhich is a regression compared to PG11 and earlier.\n\nWorkaround/possible fix is to include \"pgfeutils\" in the \"libpq_pgport\" definition, i.e.:\n\n *** a/src/Makefile.global.in\n --- b/src/Makefile.global.in\n *************** libpq = -L$(libpq_builddir) -lpq\n *** 561,567 ****\n # on client link lines, since that also appears in $(LIBS).\n # libpq_pgport_shlib is the same idea, but for use in client shared libraries.\n ifdef PGXS\n ! libpq_pgport = -L$(libdir) -lpgcommon -lpgport $(libpq)\n libpq_pgport_shlib = -L$(libdir) -lpgcommon_shlib -lpgport_shlib $(libpq)\n else\n libpq_pgport = -L$(top_builddir)/src/common -lpgcommon -L$(top_builddir)/src/port -lpgport $(libpq)\n --- 561,567 ----\n # on client link lines, since that also appears in $(LIBS).\n # libpq_pgport_shlib is the same idea, but for use in client shared libraries.\n ifdef PGXS\n ! libpq_pgport = -L$(libdir) -lpgcommon -lpgport -lpgfeutils $(libpq)\n libpq_pgport_shlib = -L$(libdir) -lpgcommon_shlib -lpgport_shlib $(libpq)\n else\n libpq_pgport = -L$(top_builddir)/src/common -lpgcommon -L$(top_builddir)/src/port -lpgport $(libpq)\n\nI presume a similar modification may need to be added to the following lines in\nthat section but haven't had a chance to look in detail yet (and may be barking\nup the wrong tree entirely of course).\n\n[*] https://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=cc8d41511721d25d557fc02a46c053c0a602fed\n\n\nRegards\n\n\nIan Barwick\n\n-- \n Ian Barwick https://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services",
"msg_date": "Tue, 7 May 2019 14:24:19 +0900",
"msg_from": "Ian Barwick <ian.barwick@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "PG12, PGXS and linking pgfeutils"
},
{
"msg_contents": "Ian Barwick <ian.barwick@2ndquadrant.com> writes:\n> Commit cc8d4151 [*] introduced a dependency between some functions in\n> libpgcommon and libpgfeutils,\n\nThis seems rather seriously broken. I do not think the answer is to\ncreate a global dependency on libpgfeutils.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 07 May 2019 09:46:07 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: PG12, PGXS and linking pgfeutils"
},
{
"msg_contents": "On Tue, May 07, 2019 at 09:46:07AM -0400, Tom Lane wrote:\n>Ian Barwick <ian.barwick@2ndquadrant.com> writes:\n>> Commit cc8d4151 [*] introduced a dependency between some functions in\n>> libpgcommon and libpgfeutils,\n>\n>This seems rather seriously broken. I do not think the answer is to\n>create a global dependency on libpgfeutils.\n>\n\nYeah. I've added it to the open items.\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n\n",
"msg_date": "Tue, 7 May 2019 21:29:52 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: PG12, PGXS and linking pgfeutils"
},
{
"msg_contents": "I wrote:\n> Ian Barwick <ian.barwick@2ndquadrant.com> writes:\n>> Commit cc8d4151 [*] introduced a dependency between some functions in\n>> libpgcommon and libpgfeutils,\n\n> This seems rather seriously broken. I do not think the answer is to\n> create a global dependency on libpgfeutils.\n\nOr, to be clearer: fe_utils has had dependencies on libpgcommon since\nits inception. What we are seeing here is that libpgcommon has now\ngrown some dependencies on libpgfeutils. That can't be allowed to\nstand. We'd be better off giving up on the separation between those\nlibraries than having circular dependencies between them.\n\nI'm not especially on board with the idea of moving FE-specific error\nhandling code into libpgcommon, as that breaks the concept that\nsrc/common/ is broadly for code that can work in either frontend or\nbackend contexts. However, we already have a few violations of that\nrule: common/Makefile already has\n\n# A few files are currently only built for frontend, not server\nOBJS_FRONTEND = $(OBJS_COMMON) fe_memutils.o file_utils.o restricted_token.o\n\nSo maybe the answer is to move these logging support functions into\nsrc/common, in a file that's only built for frontend.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 09 May 2019 13:20:53 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: PG12, PGXS and linking pgfeutils"
},
{
"msg_contents": "On 2019-May-09, Tom Lane wrote:\n\n> I'm not especially on board with the idea of moving FE-specific error\n> handling code into libpgcommon, as that breaks the concept that\n> src/common/ is broadly for code that can work in either frontend or\n> backend contexts. However, we already have a few violations of that\n> rule: common/Makefile already has\n> \n> # A few files are currently only built for frontend, not server\n> OBJS_FRONTEND = $(OBJS_COMMON) fe_memutils.o file_utils.o restricted_token.o\n> \n> So maybe the answer is to move these logging support functions into\n> src/common, in a file that's only built for frontend.\n\nI wonder if a better solution isn't to move the file_utils stuff to\nfe_utils. Half of it is frontend-specific. The only one that should be\nshared to backend seems to be fsync_fname ... but instead of sharing it,\nwe have a second copy in fd.c.\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Thu, 9 May 2019 13:39:11 -0400",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: PG12, PGXS and linking pgfeutils"
},
{
"msg_contents": "Alvaro Herrera <alvherre@2ndquadrant.com> writes:\n> On 2019-May-09, Tom Lane wrote:\n>> I'm not especially on board with the idea of moving FE-specific error\n>> handling code into libpgcommon, as that breaks the concept that\n>> src/common/ is broadly for code that can work in either frontend or\n>> backend contexts. However, we already have a few violations of that\n>> rule: common/Makefile already has\n>> \n>> # A few files are currently only built for frontend, not server\n>> OBJS_FRONTEND = $(OBJS_COMMON) fe_memutils.o file_utils.o restricted_token.o\n>> \n>> So maybe the answer is to move these logging support functions into\n>> src/common, in a file that's only built for frontend.\n\n> I wonder if a better solution isn't to move the file_utils stuff to\n> fe_utils. Half of it is frontend-specific. The only one that should be\n> shared to backend seems to be fsync_fname ... but instead of sharing it,\n> we have a second copy in fd.c.\n\nHm, if file_utils is the only thing in common/ that uses this, and we\nexpect that to remain true, that would fix the issue. But ...\n\nThe thing I was looking at was mainly fe_memutils, which is justifiably\nhere on the grounds that it provides backend-like palloc support and\nthereby eases the task of making other common/ modules work in both\ncontexts. If we built elog/ereport emulations on top of Peter's logging\nfunctions, there'd be a very clear case for having that in common/.\nPeter didn't do that for v12, but I hope we get there at some point.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 09 May 2019 13:47:12 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: PG12, PGXS and linking pgfeutils"
},
{
"msg_contents": "I wrote:\n> Alvaro Herrera <alvherre@2ndquadrant.com> writes:\n>> I wonder if a better solution isn't to move the file_utils stuff to\n>> fe_utils. Half of it is frontend-specific. The only one that should be\n>> shared to backend seems to be fsync_fname ... but instead of sharing it,\n>> we have a second copy in fd.c.\n\n> Hm, if file_utils is the only thing in common/ that uses this, and we\n> expect that to remain true, that would fix the issue. But ...\n\nThumbing through commit cc8d41511, I see that it already touched\nfive common/ modules\n\ndiff --git a/src/common/controldata_utils.c b/src/common/controldata_utils.c\ndiff --git a/src/common/file_utils.c b/src/common/file_utils.c\ndiff --git a/src/common/pgfnames.c b/src/common/pgfnames.c\ndiff --git a/src/common/restricted_token.c b/src/common/restricted_token.c\ndiff --git a/src/common/rmtree.c b/src/common/rmtree.c\n\nSeveral of those have substantial backend components, so moving them\nto fe_utils is a nonstarter. I think moving fe_utils/logging.[hc] to\ncommon/ is definitely the way to get out of this problem.\n\n\nI started working on a patch to do that, and soon noticed that there\nare pre-existing files logging.[hc] in src/bin/pg_rewind/. This seems\nlike a Bad Thing, in fact the #includes in pg_rewind/ are already a\nlittle confused due to this. I think we should either rename those\ntwo pg_rewind files to something else, or rename the generic ones,\nperhaps to \"fe_logging.[hc]\". The latter could be done nearly\ntrivially as part of the movement patch, but on cosmetic grounds\nI'd be more inclined to do the former instead. Thoughts?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 13 May 2019 15:33:17 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: PG12, PGXS and linking pgfeutils"
},
{
"msg_contents": "On 2019-May-13, Tom Lane wrote:\n\n> I started working on a patch to do that, and soon noticed that there\n> are pre-existing files logging.[hc] in src/bin/pg_rewind/. This seems\n> like a Bad Thing, in fact the #includes in pg_rewind/ are already a\n> little confused due to this. I think we should either rename those\n> two pg_rewind files to something else, or rename the generic ones,\n> perhaps to \"fe_logging.[hc]\". The latter could be done nearly\n> trivially as part of the movement patch, but on cosmetic grounds\n> I'd be more inclined to do the former instead. Thoughts?\n\nI'd rename both :-)\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Mon, 13 May 2019 15:58:51 -0400",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: PG12, PGXS and linking pgfeutils"
},
{
"msg_contents": "Alvaro Herrera <alvherre@2ndquadrant.com> writes:\n> On 2019-May-13, Tom Lane wrote:\n>> I started working on a patch to do that, and soon noticed that there\n>> are pre-existing files logging.[hc] in src/bin/pg_rewind/. This seems\n>> like a Bad Thing, in fact the #includes in pg_rewind/ are already a\n>> little confused due to this. I think we should either rename those\n>> two pg_rewind files to something else, or rename the generic ones,\n>> perhaps to \"fe_logging.[hc]\". The latter could be done nearly\n>> trivially as part of the movement patch, but on cosmetic grounds\n>> I'd be more inclined to do the former instead. Thoughts?\n\n> I'd rename both :-)\n\nOn closer inspection, there's so little left in pg_rewind's logging.h/.c\n(one function and a couple of global variables) that the better answer\nis probably just to move those objects somewhere else and nuke the\nseparate files altogether. As attached.\n\n\t\t\tregards, tom lane",
"msg_date": "Mon, 13 May 2019 18:51:54 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: PG12, PGXS and linking pgfeutils"
},
{
"msg_contents": "I wrote:\n> I think moving fe_utils/logging.[hc] to\n> common/ is definitely the way to get out of this problem.\n\nI've pushed that, so Ian's problem should be gone as of HEAD.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 14 May 2019 14:38:58 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: PG12, PGXS and linking pgfeutils"
},
{
"msg_contents": "On 5/15/19 3:38 AM, Tom Lane wrote:\n> I wrote:\n>> I think moving fe_utils/logging.[hc] to\n>> common/ is definitely the way to get out of this problem.\n> \n> I've pushed that, so Ian's problem should be gone as of HEAD.\n\nThanks, that resolves the issue!\n\n\nRegards\n\nIan Barwick\n\n\n-- \n Ian Barwick https://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n\n",
"msg_date": "Wed, 15 May 2019 08:50:46 +0900",
"msg_from": "Ian Barwick <ian.barwick@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: PG12, PGXS and linking pgfeutils"
}
] |
[
{
"msg_contents": "Folks,\n\nIt can get a little tedious turning on (or off) all the boolean\noptions to EXPLAIN, so please find attached a shortcut.\n\nBest,\nDavid.\n-- \nDavid Fetter <david(at)fetter(dot)org> http://fetter.org/\nPhone: +1 415 235 3778\n\nRemember to vote!\nConsider donating to Postgres: http://www.postgresql.org/about/donate",
"msg_date": "Tue, 7 May 2019 09:30:47 +0200",
"msg_from": "David Fetter <david@fetter.org>",
"msg_from_op": true,
"msg_subject": "New EXPLAIN option: ALL"
},
{
"msg_contents": "On Tue, May 07, 2019 at 09:30:47AM +0200, David Fetter wrote:\n> Folks,\n> \n> It can get a little tedious turning on (or off) all the boolean\n> options to EXPLAIN, so please find attached a shortcut.\n> \n> Best,\n> David.\n\nIt helps to have a working patch for this.\n\nBest,\nDavid.\n-- \nDavid Fetter <david(at)fetter(dot)org> http://fetter.org/\nPhone: +1 415 235 3778\n\nRemember to vote!\nConsider donating to Postgres: http://www.postgresql.org/about/donate",
"msg_date": "Tue, 7 May 2019 09:50:47 +0200",
"msg_from": "David Fetter <david@fetter.org>",
"msg_from_op": true,
"msg_subject": "Re: New EXPLAIN option: ALL"
},
{
"msg_contents": "Hi\n\nI liked this idea.\n\n+\t\t\ttiming_set = true;\n+\t\t\tes->timing = defGetBoolean(opt);\n+\t\t\tsummary_set = true;\n+\t\t\tes->timing = defGetBoolean(opt);\n\nsecond es->timing should be es->summary, right?\n\nregards, Sergei\n\n\n",
"msg_date": "Tue, 07 May 2019 11:13:15 +0300",
"msg_from": "Sergei Kornilov <sk@zsrv.org>",
"msg_from_op": false,
"msg_subject": "Re: New EXPLAIN option: ALL"
},
{
"msg_contents": "On Tue, 7 May 2019 at 09:30, David Fetter <david@fetter.org> wrote:\n>\n> Folks,\n>\n> It can get a little tedious turning on (or off) all the boolean\n> options to EXPLAIN, so please find attached a shortcut.\n>\n\nI don't understand this, do you mind explaining a bit may be with an\nexample on how you want it to work.\n-- \nRegards,\nRafia Sabih\n\n\n",
"msg_date": "Tue, 7 May 2019 11:03:23 +0200",
"msg_from": "Rafia Sabih <rafia.pghackers@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: New EXPLAIN option: ALL"
},
{
"msg_contents": "Hi,\n\nOn 2019-05-07 09:30:47 +0200, David Fetter wrote:\n> It can get a little tedious turning on (or off) all the boolean\n> options to EXPLAIN, so please find attached a shortcut.\n\nI'm not convinced this is a good idea - it seems likely that we'll add\nconflicting options at some point, and then this will just be a pain in\nthe neck.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 7 May 2019 08:41:47 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: New EXPLAIN option: ALL"
},
{
"msg_contents": "On Tue, May 07, 2019 at 11:13:15AM +0300, Sergei Kornilov wrote:\n> Hi\n> \n> I liked this idea.\n> \n> +\t\t\ttiming_set = true;\n> +\t\t\tes->timing = defGetBoolean(opt);\n> +\t\t\tsummary_set = true;\n> +\t\t\tes->timing = defGetBoolean(opt);\n> \n> second es->timing should be es->summary, right?\n\nYou are correct! Sorry about the copy-paste-o.\n\nBest,\nDavid.\n-- \nDavid Fetter <david(at)fetter(dot)org> http://fetter.org/\nPhone: +1 415 235 3778\n\nRemember to vote!\nConsider donating to Postgres: http://www.postgresql.org/about/donate",
"msg_date": "Tue, 7 May 2019 18:28:22 +0200",
"msg_from": "David Fetter <david@fetter.org>",
"msg_from_op": true,
"msg_subject": "Re: New EXPLAIN option: ALL"
},
{
"msg_contents": "On Tue, May 07, 2019 at 11:03:23AM +0200, Rafia Sabih wrote:\n> On Tue, 7 May 2019 at 09:30, David Fetter <david@fetter.org> wrote:\n> >\n> > Folks,\n> >\n> > It can get a little tedious turning on (or off) all the boolean\n> > options to EXPLAIN, so please find attached a shortcut.\n> \n> I don't understand this, do you mind explaining a bit may be with an\n> example on how you want it to work.\n\nIf you're tuning a query interactively, it's a lot simpler to prepend,\nfor example,\n\n EXPLAIN (ALL, FORMAT JSON)\n\nto it than to prepend something along the lines of\n\n EXPLAIN(ANALYZE, VERBOSE, COSTS, BUFFERS, SETTINGS, TIMING, SUMMARY, PARTRIDGE_IN_A_PEAR_TREE, FORMAT JSON)\n\nto it.\n\nBest,\nDavid.\n-- \nDavid Fetter <david(at)fetter(dot)org> http://fetter.org/\nPhone: +1 415 235 3778\n\nRemember to vote!\nConsider donating to Postgres: http://www.postgresql.org/about/donate\n\n\n",
"msg_date": "Tue, 7 May 2019 18:31:29 +0200",
"msg_from": "David Fetter <david@fetter.org>",
"msg_from_op": true,
"msg_subject": "Re: New EXPLAIN option: ALL"
},
{
"msg_contents": "On Tue, May 07, 2019 at 08:41:47AM -0700, Andres Freund wrote:\n> Hi,\n> \n> On 2019-05-07 09:30:47 +0200, David Fetter wrote:\n> > It can get a little tedious turning on (or off) all the boolean\n> > options to EXPLAIN, so please find attached a shortcut.\n> \n> I'm not convinced this is a good idea - it seems likely that we'll\n> add conflicting options at some point, and then this will just be a\n> pain in the neck.\n\nI already left out FORMAT for a similar reason, namely that it's not a\nboolean, so it's not part of flipping on (or off) all the switches.\n\nAre you seeing a point in the future where there'd be both mutually\nexclusive boolean options and no principled reason to choose among\nthem? If so, what might it look like?\n\nBest,\nDavid.\n-- \nDavid Fetter <david(at)fetter(dot)org> http://fetter.org/\nPhone: +1 415 235 3778\n\nRemember to vote!\nConsider donating to Postgres: http://www.postgresql.org/about/donate\n\n\n",
"msg_date": "Tue, 7 May 2019 18:34:11 +0200",
"msg_from": "David Fetter <david@fetter.org>",
"msg_from_op": true,
"msg_subject": "Re: New EXPLAIN option: ALL"
},
{
"msg_contents": "Hi,\n\nOn 2019-05-07 18:34:11 +0200, David Fetter wrote:\n> On Tue, May 07, 2019 at 08:41:47AM -0700, Andres Freund wrote:\n> > On 2019-05-07 09:30:47 +0200, David Fetter wrote:\n> > > It can get a little tedious turning on (or off) all the boolean\n> > > options to EXPLAIN, so please find attached a shortcut.\n> > \n> > I'm not convinced this is a good idea - it seems likely that we'll\n> > add conflicting options at some point, and then this will just be a\n> > pain in the neck.\n> \n> I already left out FORMAT for a similar reason, namely that it's not a\n> boolean, so it's not part of flipping on (or off) all the switches.\n\nWhich is already somewhat hard to explain.\n\nImagine if we had CPU_PROFILE = on (which'd be *extremely*\nuseful). Would you want that to be switched on automatically? How about\nRECORD_IO_TRACE? How about DISTINCT_BUFFERS which'd be like BUFFERS\nexcept that we'd track how many different buffers are accessed using HLL\nor such? Would also be extremely useful.\n\n\n> Are you seeing a point in the future where there'd be both mutually\n> exclusive boolean options and no principled reason to choose among\n> them? If so, what might it look like?\n\nYes. CPU_PROFILE_PERF, CPU_PROFILE_VTUNE. And lots more.\n\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 7 May 2019 09:44:30 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: New EXPLAIN option: ALL"
},
{
"msg_contents": "On Tue, May 07, 2019 at 09:44:30AM -0700, Andres Freund wrote:\n> Hi,\n> \n> On 2019-05-07 18:34:11 +0200, David Fetter wrote:\n> > On Tue, May 07, 2019 at 08:41:47AM -0700, Andres Freund wrote:\n> > > On 2019-05-07 09:30:47 +0200, David Fetter wrote:\n> > > > It can get a little tedious turning on (or off) all the boolean\n> > > > options to EXPLAIN, so please find attached a shortcut.\n> > > \n> > > I'm not convinced this is a good idea - it seems likely that we'll\n> > > add conflicting options at some point, and then this will just be a\n> > > pain in the neck.\n> > \n> > I already left out FORMAT for a similar reason, namely that it's not a\n> > boolean, so it's not part of flipping on (or off) all the switches.\n> \n> Which is already somewhat hard to explain.\n> \n> Imagine if we had CPU_PROFILE = on (which'd be *extremely*\n> useful). Would you want that to be switched on automatically? How about\n> RECORD_IO_TRACE? How about DISTINCT_BUFFERS which'd be like BUFFERS\n> except that we'd track how many different buffers are accessed using HLL\n> or such? Would also be extremely useful.\n> \n> \n> > Are you seeing a point in the future where there'd be both mutually\n> > exclusive boolean options and no principled reason to choose among\n> > them? If so, what might it look like?\n> \n> Yes. CPU_PROFILE_PERF, CPU_PROFILE_VTUNE. And lots more.\n\nThanks for clarifying.\n\nWould you agree that there's a problem here as I described as\nmotivation for people who operate databases?\n\nIf so, do you have one or more abbreviations in mind that aren't\ncalled ALL? I realize that Naming Things™ is one of two hard problems\nin computer science, but it's still one we have to tackle.\n\nBest,\nDavid.\n-- \nDavid Fetter <david(at)fetter(dot)org> http://fetter.org/\nPhone: +1 415 235 3778\n\nRemember to vote!\nConsider donating to Postgres: http://www.postgresql.org/about/donate\n\n\n",
"msg_date": "Tue, 7 May 2019 23:23:55 +0200",
"msg_from": "David Fetter <david@fetter.org>",
"msg_from_op": true,
"msg_subject": "Re: New EXPLAIN option: ALL"
},
{
"msg_contents": "On Tue, May 7, 2019 at 9:31 AM David Fetter <david@fetter.org> wrote:\n> If you're tuning a query interactively, it's a lot simpler to prepend,\n> for example,\n>\n> EXPLAIN (ALL, FORMAT JSON)\n>\n> to it than to prepend something along the lines of\n>\n> EXPLAIN(ANALYZE, VERBOSE, COSTS, BUFFERS, SETTINGS, TIMING, SUMMARY, PARTRIDGE_IN_A_PEAR_TREE, FORMAT JSON)\n>\n> to it.\n\nFWIW, I have the following in my psqlrc:\n\n\\set ea 'EXPLAIN (ANALYZE, SETTINGS, VERBOSE, BUFFERS) '\n\nThe idea behind that is that I can prepend \":ea\" as needed, rather\nthan doing a lot of typing each time, as in:\n\n:ea SELECT ...\n\n\n--\nPeter Geoghegan\n\n\n",
"msg_date": "Tue, 7 May 2019 14:45:54 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: New EXPLAIN option: ALL"
},
{
"msg_contents": "Hi,\n\nOn 2019-05-07 23:23:55 +0200, David Fetter wrote:\n> Would you agree that there's a problem here as I described as\n> motivation for people who operate databases?\n\nYea, but I don't think the solution is where you seek it. I think the\nproblem is that our defaults for EXPLAIN, in particular EXPLAIN ANALYZE,\nare dumb. And that your desire for ALL stems from that, rather than it\nbeing desirable on its own.\n\nWe really e.g. should just enable BUFFERS by default. The reason we\ncan't is that right now we have checks like:\nEXPLAIN (BUFFERS) SELECT 1;\nERROR: 22023: EXPLAIN option BUFFERS requires ANALYZE\nLOCATION: ExplainQuery, explain.c:206\n\nbut we ought to simply remove them. There's no benefit, and besides\npreventing from enabling BUFFERS by default it means that\nenabling/disabling ANALYZE is more work than necessary.\n\n\n> If so, do you have one or more abbreviations in mind that aren't\n> called ALL? I realize that Naming Things™ is one of two hard problems\n> in computer science, but it's still one we have to tackle.\n\nAs I said, I don't think ALL is a good idea under any name. Like it\njust makes no sense to have ANALYZE, SUMMARY, VERBOSE, BUFFERS,\nSETTINGS, FORMAT controlled by one option, unless you call it DWIM. It's\nseveral separate axis (query is executed or not (ANALYZE), verbosity\n(SUMMARY, VERBOSE), collecting additional information (BUFFERS, TIMING),\noutput format).\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 7 May 2019 14:54:49 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: New EXPLAIN option: ALL"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> As I said, I don't think ALL is a good idea under any name. Like it\n> just makes no sense to have ANALYZE, SUMMARY, VERBOSE, BUFFERS,\n> SETTINGS, FORMAT controlled by one option, unless you call it DWIM. It's\n> several separate axis (query is executed or not (ANALYZE), verbosity\n> (SUMMARY, VERBOSE), collecting additional information (BUFFERS, TIMING),\n> output format).\n\nFWIW, I find this line of argument fairly convincing. There may well\nbe a case for rethinking just how EXPLAIN's options behave, but \"ALL\"\ndoesn't seem like a good conceptual model.\n\nOne idea that comes to mind is that VERBOSE could be redefined as some\nsort of package of primitive options, including all of the \"additional\ninformation\" options, with the ability to turn individual ones off again\nif you wanted. So for example (VERBOSE, BUFFERS OFF) would give you\neverything except buffer stats. We'd need a separate flag/flags to\ncontrol what VERBOSE originally did, but that doesn't bother me ---\nit's an opportunity for more clarity of definition, anyway.\n\nI do feel that it's a good idea to keep ANALYZE separate. \"Execute\nthe query or not\" is a mighty fundamental thing. I've never liked\nthat name for the option though --- maybe we could deprecate it\nin favor of EXECUTE?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 07 May 2019 18:06:43 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: New EXPLAIN option: ALL"
},
{
"msg_contents": "Greetings,\n\n* Tom Lane (tgl@sss.pgh.pa.us) wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > As I said, I don't think ALL is a good idea under any name. Like it\n> > just makes no sense to have ANALYZE, SUMMARY, VERBOSE, BUFFERS,\n> > SETTINGS, FORMAT controlled by one option, unless you call it DWIM. It's\n> > several separate axis (query is executed or not (ANALYZE), verbosity\n> > (SUMMARY, VERBOSE), collecting additional information (BUFFERS, TIMING),\n> > output format).\n> \n> FWIW, I find this line of argument fairly convincing. There may well\n> be a case for rethinking just how EXPLAIN's options behave, but \"ALL\"\n> doesn't seem like a good conceptual model.\n> \n> One idea that comes to mind is that VERBOSE could be redefined as some\n> sort of package of primitive options, including all of the \"additional\n> information\" options, with the ability to turn individual ones off again\n> if you wanted. So for example (VERBOSE, BUFFERS OFF) would give you\n> everything except buffer stats. We'd need a separate flag/flags to\n> control what VERBOSE originally did, but that doesn't bother me ---\n> it's an opportunity for more clarity of definition, anyway.\n\nI'm generally in favor of doing something like what Tom is suggesting\nwith VERBOSE, but I also feel like it should be the default for formats\nlike JSON. If you're asking for the output in JSON, then we really\nshould include everything that a flag like VERBOSE would contain because\nyou're pretty clearly planning to copy/paste that output into something\nelse to read it anyway.\n\n> I do feel that it's a good idea to keep ANALYZE separate. \"Execute\n> the query or not\" is a mighty fundamental thing. 
I've never liked\n> that name for the option though --- maybe we could deprecate it\n> in favor of EXECUTE?\n\nLet's not fool ourselves by saying we'd 'deprecate' it because that\nimplies, at least to me, that there's some intention of later on\nremoving it and people will potentially propose patches to do that,\nwhich we will then almost certainly spend hours arguing about with the\nresult being that we don't actually remove it.\n\nI'm all in favor of adding an alias for analyze called 'execute', as\nthat makes a lot more sense and then updating our documentation to use\nit, with 'analyze is accepted as an alias' as a footnote.\n\nThanks,\n\nStephen",
"msg_date": "Tue, 7 May 2019 18:12:56 -0400",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: New EXPLAIN option: ALL"
},
{
"msg_contents": "Stephen Frost <sfrost@snowman.net> writes:\n> I'm generally in favor of doing something like what Tom is suggesting\n> with VERBOSE, but I also feel like it should be the default for formats\n> like JSON. If you're asking for the output in JSON, then we really\n> should include everything that a flag like VERBOSE would contain because\n> you're pretty clearly planning to copy/paste that output into something\n> else to read it anyway.\n\nMeh --- I don't especially care for non-orthogonal behaviors like that.\nIf you wanted JSON but *not* all of the additional info, how would you\nspecify that? (The implementation I had in mind would make VERBOSE OFF\nmore or less a no-op, so that wouldn't get you there.)\n\n>> I do feel that it's a good idea to keep ANALYZE separate. \"Execute\n>> the query or not\" is a mighty fundamental thing. I've never liked\n>> that name for the option though --- maybe we could deprecate it\n>> in favor of EXECUTE?\n\n> Let's not fool ourselves by saying we'd 'deprecate' it because that\n> implies, at least to me, that there's some intention of later on\n> removing it\n\nTrue, the odds of ever actually removing it are small :-(. I meant\nmostly changing all of our docs to use the other spelling, except\nfor some footnote. Maybe we could call ANALYZE a \"legacy spelling\"\nof EXECUTE.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 07 May 2019 18:25:12 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: New EXPLAIN option: ALL"
},
{
"msg_contents": "On Tue, May 07, 2019 at 06:12:56PM -0400, Stephen Frost wrote:\n> Greetings,\n> \n> * Tom Lane (tgl@sss.pgh.pa.us) wrote:\n> > \n> > One idea that comes to mind is that VERBOSE could be redefined as\n> > some sort of package of primitive options, including all of the\n> > \"additional information\" options, with the ability to turn\n> > individual ones off again if you wanted. So for example (VERBOSE,\n> > BUFFERS OFF) would give you everything except buffer stats. We'd\n> > need a separate flag/flags to control what VERBOSE originally did,\n> > but that doesn't bother me --- it's an opportunity for more\n> > clarity of definition, anyway.\n> \n> I'm generally in favor of doing something like what Tom is\n> suggesting with VERBOSE, but I also feel like it should be the\n> default for formats like JSON. If you're asking for the output in\n> JSON, then we really should include everything that a flag like\n> VERBOSE would contain because you're pretty clearly planning to\n> copy/paste that output into something else to read it anyway.\n\nSo basically, every format but text gets the full treatment for\n(essentially) the functionality I wrapped up in ALL? That makes a lot\nof sense.\n\n> > I do feel that it's a good idea to keep ANALYZE separate. \"Execute\n> > the query or not\" is a mighty fundamental thing. 
I've never liked\n> > that name for the option though --- maybe we could deprecate it\n> > in favor of EXECUTE?\n> \n> Let's not fool ourselves by saying we'd 'deprecate' it because that\n> implies, at least to me, that there's some intention of later on\n> removing it and people will potentially propose patches to do that,\n> which we will then almost certainly spend hours arguing about with the\n> result being that we don't actually remove it.\n\nExcellent point.\n\n> I'm all in favor of adding an alias for analyze called 'execute', as\n> that makes a lot more sense and then updating our documentation to\n> use it, with 'analyze is accepted as an alias' as a footnote.\n\nHow about making ANALYZE a backward-compatibility feature in the sense\nof replacing examples, docs, etc., with EXECUTE? If most of our users\nare in the future, this makes those same users's better without\nqualification, and helps some positive fraction of our current users.\n\nOn a slightly related topic, we haven't, to date, made any promises\nabout what EXPLAIN will put out, but as we make more machine-readable\nversions, we should at least think about its schema and versioning\nof same.\n\nBest,\nDavid.\n-- \nDavid Fetter <david(at)fetter(dot)org> http://fetter.org/\nPhone: +1 415 235 3778\n\nRemember to vote!\nConsider donating to Postgres: http://www.postgresql.org/about/donate\n\n\n",
"msg_date": "Wed, 8 May 2019 00:39:01 +0200",
"msg_from": "David Fetter <david@fetter.org>",
"msg_from_op": true,
"msg_subject": "Re: New EXPLAIN option: ALL"
},
{
"msg_contents": "Greetings,\n\n* Tom Lane (tgl@sss.pgh.pa.us) wrote:\n> Stephen Frost <sfrost@snowman.net> writes:\n> > I'm generally in favor of doing something like what Tom is suggesting\n> > with VERBOSE, but I also feel like it should be the default for formats\n> > like JSON. If you're asking for the output in JSON, then we really\n> > should include everything that a flag like VERBOSE would contain because\n> > you're pretty clearly planning to copy/paste that output into something\n> > else to read it anyway.\n> \n> Meh --- I don't especially care for non-orthogonal behaviors like that.\n> If you wanted JSON but *not* all of the additional info, how would you\n> specify that? (The implementation I had in mind would make VERBOSE OFF\n> more or less a no-op, so that wouldn't get you there.)\n\nYou'd do it the same way you proposed for verbose- eg: BUFFERS OFF, et\nal, but, really, the point here is that what you're doing with the JSON\nresult is fundamentally different- you're going to paste it into some\nother tool and it should be that tool's job to manage the visualization\nof it and what's included or not in what you see. Passing the\ninformation about what should be seen in the json-based EXPLAIN viewer\nby way of omitting things from the JSON output strikes me as downright\nodd, and doesn't give that other tool the ability to show that data if\nthe users ends up wanting it without rerunning the query.\n\n> >> I do feel that it's a good idea to keep ANALYZE separate. \"Execute\n> >> the query or not\" is a mighty fundamental thing. I've never liked\n> >> that name for the option though --- maybe we could deprecate it\n> >> in favor of EXECUTE?\n> \n> > Let's not fool ourselves by saying we'd 'deprecate' it because that\n> > implies, at least to me, that there's some intention of later on\n> > removing it\n> \n> True, the odds of ever actually removing it are small :-(. 
I meant\n> mostly changing all of our docs to use the other spelling, except\n> for some footnote. Maybe we could call ANALYZE a \"legacy spelling\"\n> of EXECUTE.\n\nSure, that'd be fine too.\n\nThanks,\n\nStephen",
"msg_date": "Tue, 7 May 2019 18:51:48 -0400",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: New EXPLAIN option: ALL"
},
{
"msg_contents": "Greetings,\n\n* David Fetter (david@fetter.org) wrote:\n> On Tue, May 07, 2019 at 06:12:56PM -0400, Stephen Frost wrote:\n> > * Tom Lane (tgl@sss.pgh.pa.us) wrote:\n> > > One idea that comes to mind is that VERBOSE could be redefined as\n> > > some sort of package of primitive options, including all of the\n> > > \"additional information\" options, with the ability to turn\n> > > individual ones off again if you wanted. So for example (VERBOSE,\n> > > BUFFERS OFF) would give you everything except buffer stats. We'd\n> > > need a separate flag/flags to control what VERBOSE originally did,\n> > > but that doesn't bother me --- it's an opportunity for more\n> > > clarity of definition, anyway.\n> > \n> > I'm generally in favor of doing something like what Tom is\n> > suggesting with VERBOSE, but I also feel like it should be the\n> > default for formats like JSON. If you're asking for the output in\n> > JSON, then we really should include everything that a flag like\n> > VERBOSE would contain because you're pretty clearly planning to\n> > copy/paste that output into something else to read it anyway.\n> \n> So basically, every format but text gets the full treatment for\n> (essentially) the functionality I wrapped up in ALL? That makes a lot\n> of sense.\n\nSomething along those lines is what I was thinking, yes, and it's what\nat least some other projects do (admittedly, that's at least partially\nmy fault because I'm thinking of the 'info' command for pgbackrest, but\nDavid Steele seemed to think it made sense also, at least).\n\n> > I'm all in favor of adding an alias for analyze called 'execute', as\n> > that makes a lot more sense and then updating our documentation to\n> > use it, with 'analyze is accepted as an alias' as a footnote.\n> \n> How about making ANALYZE a backward-compatibility feature in the sense\n> of replacing examples, docs, etc., with EXECUTE? 
If most of our users\n> are in the future, this makes those same users's better without\n> qualification, and helps some positive fraction of our current users.\n\nI'd rather not refer to it as a backwards-compatibility feature since\nwe, thankfully, don't typically do that and I generally think that's the\nright way to go- but in some cases, like this one, having a 'legacy'\nspelling or an alias seems to be darn near free without opening the box\nof trying to provide backwards compatibility for everything.\n\n> On a slightly related topic, we haven't, to date, made any promises\n> about what EXPLAIN will put out, but as we make more machine-readable\n> versions, we should at least think about its schema and versioning\n> of same.\n\nNot really sure that I agree on this point. Do you see a reason to need\nversioning or schema when the schema is, essentially, included in each\nresult since it's JSON or the other machine-readable formats? I can\nimagine that we might need a version if we decided to redefine some\nexisting field in a non-compatible or non-sensible way, but is that\npossibility likely enough to warrent adding versioning and complicating\neverything downstream? I have a hard time seeing that.\n\nThanks,\n\nStephen",
"msg_date": "Tue, 7 May 2019 18:58:24 -0400",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: New EXPLAIN option: ALL"
},
{
"msg_contents": "On Tue, May 7, 2019 at 6:25 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Meh --- I don't especially care for non-orthogonal behaviors like that.\n> If you wanted JSON but *not* all of the additional info, how would you\n> specify that? (The implementation I had in mind would make VERBOSE OFF\n> more or less a no-op, so that wouldn't get you there.)\n\n+1. Assuming we know which information the user wants on the basis of\ntheir choice of output format seems like a bad idea. I mean, suppose\nwe introduced a new option that gathered lots of additional detail but\nmade the query run 3x slower. Would everyone want that enabled all\nthe time any time they chose a non-text format? Probably not.\n\nIf people want BUFFERS turned on essentially all the time, then let's\njust flip the default for that, so that EXPLAIN ANALYZE does the\nequivalent of what EXPLAIN (ANALYZE, BUFFERS) currently does, and make\npeople say EXPLAIN (ANALYZE, BUFFERS OFF) if they don't want all that\ndetail. I think that's more or less what Andres was suggesting\nupthread.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Wed, 8 May 2019 16:03:08 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: New EXPLAIN option: ALL"
},
{
"msg_contents": "On Tue, May 7, 2019 at 12:31 PM David Fetter <david@fetter.org> wrote:\n> If you're tuning a query interactively, it's a lot simpler to prepend,\n> for example,\n>\n> EXPLAIN (ALL, FORMAT JSON)\n>\n> to it than to prepend something along the lines of\n>\n> EXPLAIN(ANALYZE, VERBOSE, COSTS, BUFFERS, SETTINGS, TIMING, SUMMARY, PARTRIDGE_IN_A_PEAR_TREE, FORMAT JSON)\n>\n> to it.\n\nThis is something of an exaggeration of what could ever be necessary,\nbecause COSTS and TIMING default to TRUE and SUMMARY defaults to TRUE\nwhen ANALYZE is specified, and the PARTRIDGE_IN_A_PEAR_TREE option\nseems not to have made it into the tree this cycle.\n\nBut you could need EXPLAIN (ANALYZE, VERBOSE, BUFFERS, SETTINGS,\nFORMAT JSON), which is not quite so long, but admittedly still\nsomewhat long. Flipping some of the defaults seems like it might be\nthe way to go. I think turning SETTINGS and BUFFERS on by default\nwould be pretty sensible.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Wed, 8 May 2019 16:09:17 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: New EXPLAIN option: ALL"
},
{
"msg_contents": "On 07/05/2019 09:30, David Fetter wrote:\n> Folks,\n> \n> It can get a little tedious turning on (or off) all the boolean\n> options to EXPLAIN, so please find attached a shortcut.\n\nI would rather have a set of gucs such as default_explain_buffers,\ndefault_explain_summary, and default_explain_format.\n\nOf course if you default BUFFERS to on(*) and don't do ANALYZE, that\nshould not result in an error.\n\n(*) Defaulting BUFFERS to on is something I want regardless of anything\nelse we do.\n-- \nVik Fearing +33 6 46 75 15 36\nhttp://2ndQuadrant.fr PostgreSQL : Expertise, Formation et Support\n\n\n",
"msg_date": "Wed, 8 May 2019 23:22:10 +0200",
"msg_from": "Vik Fearing <vik.fearing@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: New EXPLAIN option: ALL"
},
{
"msg_contents": "\r\n> On May 8, 2019, at 4:22 PM, Vik Fearing <vik.fearing@2ndquadrant.com> wrote:\r\n> \r\n> On 07/05/2019 09:30, David Fetter wrote:\r\n>> Folks,\r\n>> \r\n>> It can get a little tedious turning on (or off) all the boolean\r\n>> options to EXPLAIN, so please find attached a shortcut.\r\n> \r\n> I would rather have a set of gucs such as default_explain_buffers,\r\n> default_explain_summary, and default_explain_format.\r\n> \r\n> Of course if you default BUFFERS to on(*) and don't do ANALYZE, that\r\n> should not result in an error.\r\n> \r\n> (*) Defaulting BUFFERS to on is something I want regardless of anything\r\n> else we do.\r\n\r\nI think this, plus Tom’s suggesting of changing what VERBOSE does, is the best way to handle this. Especially since VERBOSE is IMHO pretty useless...\r\n\r\nI’m +1 on trying to move away from ANALYZE as well, though I think it’s mostly orthogonal...\r\n\r\n",
"msg_date": "Wed, 8 May 2019 22:31:15 +0000",
"msg_from": "\"Nasby, Jim\" <nasbyj@amazon.com>",
"msg_from_op": false,
"msg_subject": "Re: New EXPLAIN option: ALL"
},
{
"msg_contents": "On Tue, May 07, 2019 at 06:25:12PM -0400, Tom Lane wrote:\n> Stephen Frost <sfrost@snowman.net> writes:\n> > I'm generally in favor of doing something like what Tom is suggesting\n> > with VERBOSE, but I also feel like it should be the default for formats\n> > like JSON. If you're asking for the output in JSON, then we really\n> > should include everything that a flag like VERBOSE would contain because\n> > you're pretty clearly planning to copy/paste that output into something\n> > else to read it anyway.\n> \n> Meh --- I don't especially care for non-orthogonal behaviors like that.\n> If you wanted JSON but *not* all of the additional info, how would you\n> specify that? (The implementation I had in mind would make VERBOSE OFF\n> more or less a no-op, so that wouldn't get you there.)\n> \n> >> I do feel that it's a good idea to keep ANALYZE separate. \"Execute\n> >> the query or not\" is a mighty fundamental thing. I've never liked\n> >> that name for the option though --- maybe we could deprecate it\n> >> in favor of EXECUTE?\n> \n> > Let's not fool ourselves by saying we'd 'deprecate' it because that\n> > implies, at least to me, that there's some intention of later on\n> > removing it\n> \n> True, the odds of ever actually removing it are small :-(. I meant\n> mostly changing all of our docs to use the other spelling, except\n> for some footnote. Maybe we could call ANALYZE a \"legacy spelling\"\n> of EXECUTE.\n\nI tried changing it to EXEC (EXPLAIN EXECUTE is already a thing), but\ngot a giant flock of reduce-reduce conflicts along with a few\nshift-reduce conflicts.\n\nHow do I fix this?\n\nBest,\nDavid.\n-- \nDavid Fetter <david(at)fetter(dot)org> http://fetter.org/\nPhone: +1 415 235 3778\n\nRemember to vote!\nConsider donating to Postgres: http://www.postgresql.org/about/donate",
"msg_date": "Mon, 13 May 2019 07:51:12 +0200",
"msg_from": "David Fetter <david@fetter.org>",
"msg_from_op": true,
"msg_subject": "Re: New EXPLAIN option: ALL"
},
{
"msg_contents": "On Mon, May 13, 2019 at 07:51:12AM +0200, David Fetter wrote:\n> On Tue, May 07, 2019 at 06:25:12PM -0400, Tom Lane wrote:\n> > Stephen Frost <sfrost@snowman.net> writes:\n> > > I'm generally in favor of doing something like what Tom is suggesting\n> > > with VERBOSE, but I also feel like it should be the default for formats\n> > > like JSON. If you're asking for the output in JSON, then we really\n> > > should include everything that a flag like VERBOSE would contain because\n> > > you're pretty clearly planning to copy/paste that output into something\n> > > else to read it anyway.\n> > \n> > Meh --- I don't especially care for non-orthogonal behaviors like that.\n> > If you wanted JSON but *not* all of the additional info, how would you\n> > specify that? (The implementation I had in mind would make VERBOSE OFF\n> > more or less a no-op, so that wouldn't get you there.)\n> > \n> > >> I do feel that it's a good idea to keep ANALYZE separate. \"Execute\n> > >> the query or not\" is a mighty fundamental thing. I've never liked\n> > >> that name for the option though --- maybe we could deprecate it\n> > >> in favor of EXECUTE?\n> > \n> > > Let's not fool ourselves by saying we'd 'deprecate' it because that\n> > > implies, at least to me, that there's some intention of later on\n> > > removing it\n> > \n> > True, the odds of ever actually removing it are small :-(. I meant\n> > mostly changing all of our docs to use the other spelling, except\n> > for some footnote. 
Maybe we could call ANALYZE a \"legacy spelling\"\n> > of EXECUTE.\n> \n> I tried changing it to EXEC (EXPLAIN EXECUTE is already a thing), but\n> got a giant flock of reduce-reduce conflicts along with a few\n> shift-reduce conflicts.\n> \n> How do I fix this?\n\nFixed it.\n\nI hope the patch is a little easier to digest as now attached.\n\nBest,\nDavid.\n-- \nDavid Fetter <david(at)fetter(dot)org> http://fetter.org/\nPhone: +1 415 235 3778\n\nRemember to vote!\nConsider donating to Postgres: http://www.postgresql.org/about/donate",
"msg_date": "Wed, 15 May 2019 08:02:14 +0200",
"msg_from": "David Fetter <david@fetter.org>",
"msg_from_op": true,
"msg_subject": "Re: New EXPLAIN option: ALL"
},
{
"msg_contents": "David Fetter <david@fetter.org> writes:\n> I hope the patch is a little easier to digest as now attached.\n\nTo be blunt, I find 500K worth of changes in the regression test\noutputs to be absolutely unacceptable, especially when said changes\nare basically worthless from a diagnostic standpoint. There are\nat least two reasons why this won't fly:\n\n* Such a change would be a serious obstacle to back-patching\nregression test cases that involve explain output.\n\n* Some buildfarm members use nonstandard settings (notably\nforce_parallel_mode, but I don't think that's the only one).\nWe are *not* going to maintain variant output files to try to cope\nwith all those combinations. It'd be even more disastrous for\nprivate forks that might have their own affected settings.\n\nI don't know how to make progress towards the original goal\nwithout having a regression-test disaster, but what we have\nhere is one.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 15 May 2019 09:32:31 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: New EXPLAIN option: ALL"
},
{
"msg_contents": "On Wed, May 15, 2019 at 09:32:31AM -0400, Tom Lane wrote:\n> David Fetter <david@fetter.org> writes:\n> > I hope the patch is a little easier to digest as now attached.\n> \n> To be blunt, I find 500K worth of changes in the regression test\n> outputs to be absolutely unacceptable, especially when said changes\n> are basically worthless from a diagnostic standpoint.\n\nYou're right, of course. The fundamental problem is that our\nregression tests depend on (small sets of) fixed strings. TAP is an\nalternative, and could test the structure of the output rather than\nwhat really should be completely inconsequential changes in its form.\n\n> There are\n> at least two reasons why this won't fly:\n> \n> * Such a change would be a serious obstacle to back-patching\n> regression test cases that involve explain output.\n> \n> * Some buildfarm members use nonstandard settings (notably\n> force_parallel_mode, but I don't think that's the only one).\n> We are *not* going to maintain variant output files to try to cope\n> with all those combinations. It'd be even more disastrous for\n> private forks that might have their own affected settings.\n\nIndeed. I think we should move our regression tests to TAP and\ndispense with this.\n\n> I don't know how to make progress towards the original goal without\n> having a regression-test disaster, but what we have here is one.\n\nThis just highlights a disaster already in progress. I'm volunteering\nto help fix it.\n\nBest,\nDavid.\n-- \nDavid Fetter <david(at)fetter(dot)org> http://fetter.org/\nPhone: +1 415 235 3778\n\nRemember to vote!\nConsider donating to Postgres: http://www.postgresql.org/about/donate\n\n\n",
"msg_date": "Wed, 15 May 2019 16:20:34 +0200",
"msg_from": "David Fetter <david@fetter.org>",
"msg_from_op": true,
"msg_subject": "Re: New EXPLAIN option: ALL"
},
{
"msg_contents": "Hi,\n\nOn May 15, 2019 7:20:34 AM PDT, David Fetter <david@fetter.org> wrote:\n>On Wed, May 15, 2019 at 09:32:31AM -0400, Tom Lane wrote:\n>> David Fetter <david@fetter.org> writes:\n>> > I hope the patch is a little easier to digest as now attached.\n>> \n>> To be blunt, I find 500K worth of changes in the regression test\n>> outputs to be absolutely unacceptable, especially when said changes\n>> are basically worthless from a diagnostic standpoint.\n>\n>You're right, of course. The fundamental problem is that our\n>regression tests depend on (small sets of) fixed strings. TAP is an\n>alternative, and could test the structure of the output rather than\n>what really should be completely inconsequential changes in its form.\n>> There are\n>> at least two reasons why this won't fly:\n>> \n>> * Such a change would be a serious obstacle to back-patching\n>> regression test cases that involve explain output.\n>> \n>> * Some buildfarm members use nonstandard settings (notably\n>> force_parallel_mode, but I don't think that's the only one).\n>> We are *not* going to maintain variant output files to try to cope\n>> with all those combinations. It'd be even more disastrous for\n>> private forks that might have their own affected settings.\n>\n>Indeed. I think we should move our regression tests to TAP and\n>dispense with this.\n\n-inconceivably much\n\nThe effort to write tap tests over our main tests is much much higher. And they're usually much slower. Of course tap is more powerful, so it's good to have the option.\n\nAnd it'd be many months if not years worth of work, and would make backpatching much harder.\n\nAndres\n-- \nSent from my Android device with K-9 Mail. Please excuse my brevity.\n\n\n",
"msg_date": "Wed, 15 May 2019 07:29:26 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: New EXPLAIN option: ALL"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On May 15, 2019 7:20:34 AM PDT, David Fetter <david@fetter.org> wrote:\n>> Indeed. I think we should move our regression tests to TAP and\n>> dispense with this.\n\n> -inconceivably much\n\nYeah, that's not happening.\n\nJust eyeing the patch again, it seems like most of the test-output churn\nis from a decision to make printing of planner options be on-by-default;\nwhich is also what creates the buildfarm-variant-options hazard. So\nI suggest reconsidering that. TBH, even without the regression test\nangle, I suspect that such a change would receive a lot of pushback.\nIt's a pretty big delta in the verbosity of EXPLAIN, and it is frankly\nof no value to a lot of people a lot of the time.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 15 May 2019 10:46:39 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: New EXPLAIN option: ALL"
},
{
"msg_contents": "On 2019-May-13, David Fetter wrote:\n\n> I tried changing it to EXEC (EXPLAIN EXECUTE is already a thing), but\n> got a giant flock of reduce-reduce conflicts along with a few\n> shift-reduce conflicts.\n\nAfter eyeballing the giant patch set you sent[1], I think EXEC is a\nhorrible keyword to use -- IMO it should either be the complete word\nEXECUTE, or we should pick some other word. I realize that we do not\nwant to have different sets of keywords when using the legacy syntax (no\nparens) vs. new-style (with parens), but maybe we should just not\nsupport the EXECUTE keyword in the legacy syntax; there's already a\nnumber of options we don't support in the legacy syntax (BUFFERS,\nTIMING), so this isn't much of a stretch.\n\nIOW if we want to change ANALYZE to EXECUTE, I propose we change it in\nthe new-style syntax only and not the legacy one. So:\n\nEXPLAIN ANALYZE SELECT ...\t-- legacy syntax\nEXPLAIN (EXECUTE) SELECT ...\t-- new-style\nEXPLAIN (ANALYZE) SELECT ...\t-- we still support ANALYZE as an alias, for compatibility\n\nthis should not cause a conflict with EXPLAIN EXECUTE, so these all\nshould work:\n\nEXPLAIN ANALYZE EXECUTE ...\nEXPLAIN (EXECUTE) EXECUTE ...\nEXPLAIN (ANALYZE) EXECUTE ...\n\n\n[1] I think if you just leave out the GUC print from the changes, it\nbecomes a reasonable patch series.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Wed, 15 May 2019 11:05:31 -0400",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: New EXPLAIN option: ALL"
},
{
"msg_contents": "Hi,\n\nOn 2019-05-15 11:05:31 -0400, Alvaro Herrera wrote:\n> After eyeballing the giant patch set you sent[1], I think EXEC is a\n> horrible keyword to use -- IMO it should either be the complete word\n> EXECUTE, or we should pick some other word. I realize that we do not\n> want to have different sets of keywords when using the legacy syntax (no\n> parens) vs. new-style (with parens), but maybe we should just not\n> support the EXECUTE keyword in the legacy syntax; there's already a\n> number of options we don't support in the legacy syntax (BUFFERS,\n> TIMING), so this isn't much of a stretch.\n\nThat seems too confusing.\n\n\n> [1] I think if you just leave out the GUC print from the changes, it\n> becomes a reasonable patch series.\n\nYea, it really should be small incremental changes instead of proposals\nto \"just change everything\".\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 15 May 2019 09:26:17 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: New EXPLAIN option: ALL"
},
{
"msg_contents": "Hello,\n\nOn 2019-May-15, Andres Freund wrote:\n> On 2019-05-15 11:05:31 -0400, Alvaro Herrera wrote:\n\n> > After eyeballing the giant patch set you sent[1], I think EXEC is a\n> > horrible keyword to use -- IMO it should either be the complete word\n> > EXECUTE, or we should pick some other word. I realize that we do not\n> > want to have different sets of keywords when using the legacy syntax (no\n> > parens) vs. new-style (with parens), but maybe we should just not\n> > support the EXECUTE keyword in the legacy syntax; there's already a\n> > number of options we don't support in the legacy syntax (BUFFERS,\n> > TIMING), so this isn't much of a stretch.\n> \n> That seems too confusing.\n\nOk. Are you voting for using EXEC as a keyword to replace ANALYZE?\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Wed, 15 May 2019 13:05:30 -0400",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: New EXPLAIN option: ALL"
},
{
"msg_contents": "Alvaro Herrera <alvherre@2ndquadrant.com> writes:\n> On 2019-May-15, Andres Freund wrote:\n>> On 2019-05-15 11:05:31 -0400, Alvaro Herrera wrote:\n>>> After eyeballing the giant patch set you sent[1], I think EXEC is a\n>>> horrible keyword to use -- IMO it should either be the complete word\n>>> EXECUTE, or we should pick some other word. I realize that we do not\n>>> want to have different sets of keywords when using the legacy syntax (no\n>>> parens) vs. new-style (with parens), but maybe we should just not\n>>> support the EXECUTE keyword in the legacy syntax; there's already a\n>>> number of options we don't support in the legacy syntax (BUFFERS,\n>>> TIMING), so this isn't much of a stretch.\n\n>> That seems too confusing.\n\n> Ok. Are you voting for using EXEC as a keyword to replace ANALYZE?\n\nFWIW, given the conflict against \"EXPLAIN EXECUTE prepared_stmt_name\",\nwe should probably just drop the whole idea. It seemed like a great\nidea at the time, but it's going to confuse people not just Bison.\n\nThis is such a fundamental option that it doesn't make sense to not\nhave it available in the simplified syntax. It also doesn't make sense\nto use different names for it in the simplified and extended syntaxes.\nAnd \"EXEC\", or other weird spellings, is in the end not an improvement\non \"ANALYZE\".\n\nSo ... never mind that suggestion. Can we get anywhere with the\nrest of it?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 15 May 2019 13:53:26 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: New EXPLAIN option: ALL"
},
{
"msg_contents": "Hi,\n\nOn 2019-05-15 13:53:26 -0400, Tom Lane wrote:\n> FWIW, given the conflict against \"EXPLAIN EXECUTE prepared_stmt_name\",\n> we should probably just drop the whole idea. It seemed like a great\n> idea at the time, but it's going to confuse people not just Bison.\n\nI'm not particularly invested in the idea of renaming ANALYZE - but I\nthink we might be able to come up with something less ambiguous than\nEXECUTE. Even EXECUTION might be better.\n\n\n> So ... never mind that suggestion. Can we get anywhere with the\n> rest of it?\n\nYes, please. I still think getting rid of\n\n\tif (es->buffers && !es->analyze)\n\t\tereport(ERROR,\n\t\t\t\t(errcode(ERRCODE_INVALID_PARAMETER_VALUE),\n\t\t\t\t errmsg(\"EXPLAIN option BUFFERS requires ANALYZE\")));\nand\n\t/* check that timing is used with EXPLAIN ANALYZE */\n\tif (es->timing && !es->analyze)\n\t\tereport(ERROR,\n\t\t\t\t(errcode(ERRCODE_INVALID_PARAMETER_VALUE),\n\t\t\t\t errmsg(\"EXPLAIN option TIMING requires ANALYZE\")));\n\nand then changing the default for BUFFERs would be good. I assume they'd\nstill only apply to query execution.\n\nAlthough, in the case of BUFFERS, I more than once wished we'd track the\nplan-time stats for buffers as well. But that's a significantly more\ncomplicated change.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 15 May 2019 10:58:06 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: New EXPLAIN option: ALL"
},
{
"msg_contents": "On Wed, May 15, 2019 at 10:46:39AM -0400, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n>> On May 15, 2019 7:20:34 AM PDT, David Fetter <david@fetter.org> wrote:\n>>> Indeed. I think we should move our regression tests to TAP and\n>>> dispense with this.\n> \n>> -inconceivably much\n> \n> Yeah, that's not happening.\n\n+1 to the we-shall-not-move part.\n--\nMichael",
"msg_date": "Thu, 16 May 2019 15:23:40 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: New EXPLAIN option: ALL"
},
{
"msg_contents": "On Wed, May 15, 2019 at 09:32:31AM -0400, Tom Lane wrote:\n> David Fetter <david@fetter.org> writes:\n> > I hope the patch is a little easier to digest as now attached.\n> \n> To be blunt, I find 500K worth of changes in the regression test\n> outputs to be absolutely unacceptable, especially when said changes\n> are basically worthless from a diagnostic standpoint. There are at\n> least two reasons why this won't fly:\n\nHere's a patch set with a much smaller change. Will that fly?\n\nBest,\nDavid.\n-- \nDavid Fetter <david(at)fetter(dot)org> http://fetter.org/\nPhone: +1 415 235 3778\n\nRemember to vote!\nConsider donating to Postgres: http://www.postgresql.org/about/donate",
"msg_date": "Sat, 18 May 2019 20:39:08 +0200",
"msg_from": "David Fetter <david@fetter.org>",
"msg_from_op": true,
"msg_subject": "Re: New EXPLAIN option: ALL"
},
{
"msg_contents": "On Wed, May 15, 2019 at 1:53 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >> That seems too confusing.\n>\n> > Ok. Are you voting for using EXEC as a keyword to replace ANALYZE?\n>\n> FWIW, given the conflict against \"EXPLAIN EXECUTE prepared_stmt_name\",\n> we should probably just drop the whole idea. It seemed like a great\n> idea at the time, but it's going to confuse people not just Bison.\n\n+1. I think trying to replace ANALYZE with something else is setting\nourselves up for years, possibly decades, worth of confusion. And\nwithout any real benefit.\n\nDefaulting BUFFERS to ON is probably a reasonable change, though.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Tue, 21 May 2019 12:32:21 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: New EXPLAIN option: ALL"
},
{
"msg_contents": "On Tue, May 21, 2019 at 12:32:21PM -0400, Robert Haas wrote:\n> On Wed, May 15, 2019 at 1:53 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > >> That seems too confusing.\n> >\n> > > Ok. Are you voting for using EXEC as a keyword to replace ANALYZE?\n> >\n> > FWIW, given the conflict against \"EXPLAIN EXECUTE prepared_stmt_name\",\n> > we should probably just drop the whole idea. It seemed like a great\n> > idea at the time, but it's going to confuse people not just Bison.\n> \n> +1. I think trying to replace ANALYZE with something else is setting\n> ourselves up for years, possibly decades, worth of confusion. And\n> without any real benefit.\n> \n> Defaulting BUFFERS to ON is probably a reasonable change, though.\n\nWould this be worth back-patching? I ask because adding it will cause\nfairly large (if mechanical) churn in the regression tests.\n\nBest,\nDavid.\n-- \nDavid Fetter <david(at)fetter(dot)org> http://fetter.org/\nPhone: +1 415 235 3778\n\nRemember to vote!\nConsider donating to Postgres: http://www.postgresql.org/about/donate\n\n\n",
"msg_date": "Tue, 21 May 2019 19:38:57 +0200",
"msg_from": "David Fetter <david@fetter.org>",
"msg_from_op": true,
"msg_subject": "Re: New EXPLAIN option: ALL"
},
{
"msg_contents": "David Fetter <david@fetter.org> writes:\n> On Tue, May 21, 2019 at 12:32:21PM -0400, Robert Haas wrote:\n>> Defaulting BUFFERS to ON is probably a reasonable change, though.\n\n> Would this be worth back-patching? I ask because adding it will cause\n> fairly large (if mechanical) churn in the regression tests.\n\nIt really doesn't matter how much churn it causes in the regression tests.\nBack-patching a significant non-bug behavioral change like that is exactly\nthe kind of thing we don't do, because it will cause our users pain.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 21 May 2019 13:47:44 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: New EXPLAIN option: ALL"
},
{
"msg_contents": "On Tue, May 21, 2019 at 1:38 PM David Fetter <david@fetter.org> wrote:\n> Would this be worth back-patching? I ask because adding it will cause\n> fairly large (if mechanical) churn in the regression tests.\n\nNo. I can't believe you're even asking that question.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Tue, 21 May 2019 14:03:50 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: New EXPLAIN option: ALL"
},
{
"msg_contents": "Hi,\n\nOn 2019-05-21 19:38:57 +0200, David Fetter wrote:\n> On Tue, May 21, 2019 at 12:32:21PM -0400, Robert Haas wrote:\n> > Defaulting BUFFERS to ON is probably a reasonable change, though.\n> \n> Would this be worth back-patching? I ask because adding it will cause\n> fairly large (if mechanical) churn in the regression tests.\n\nThis is obviously a no. But I don't even know what large mechanical\nchurn you're talking about? There's not that many files with EXPLAIN\n(ANALYZE) in the tests - we didn't have any until recently, when we\nadded SUMMARY OFF, to turn off non-deterministic details (f9b1a0dd4).\n\n$ grep -irl 'summary off' src/test/regress/{sql,input}\nsrc/test/regress/sql/select.sql\nsrc/test/regress/sql/partition_prune.sql\nsrc/test/regress/sql/tidscan.sql\nsrc/test/regress/sql/subselect.sql\nsrc/test/regress/sql/select_parallel.sql\n\nadding a bunch of BUFFERS OFF to those wouldn't be particularly\npainful. And if we decided it somehow were painful, we could infer it\nfrom COSTS or such.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 21 May 2019 11:12:56 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: New EXPLAIN option: ALL"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2019-05-21 19:38:57 +0200, David Fetter wrote:\n>> On Tue, May 21, 2019 at 12:32:21PM -0400, Robert Haas wrote:\n>>> Defaulting BUFFERS to ON is probably a reasonable change, though.\n\n>> Would this be worth back-patching? I ask because adding it will cause\n>> fairly large (if mechanical) churn in the regression tests.\n\n> This is obviously a no. But I don't even know what large mechanical\n> churn you're talking about? There's not that many files with EXPLAIN\n> (ANALYZE) in the tests - we didn't have any until recently, when we\n> added SUMMARY OFF, to turn off non-deterministic details (f9b1a0dd4).\n\npartition_prune.sql has got kind of a lot of them though :-(\n\nsrc/test/regress/sql/tidscan.sql:3\nsrc/test/regress/sql/partition_prune.sql:46\nsrc/test/regress/sql/select_parallel.sql:3\nsrc/test/regress/sql/select.sql:1\nsrc/test/regress/sql/subselect.sql:1\n\nStill, if we're adding BUFFERS OFF in the same places we have\nSUMMARY OFF, I agree that it won't create much new hazard for\nback-patching --- all those places already have a limit on\nhow far they can be back-patched.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 21 May 2019 14:33:50 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: New EXPLAIN option: ALL"
},
{
"msg_contents": "On 2019-05-15 19:58, Andres Freund wrote:\n> On 2019-05-15 13:53:26 -0400, Tom Lane wrote:\n>> FWIW, given the conflict against \"EXPLAIN EXECUTE prepared_stmt_name\",\n>> we should probably just drop the whole idea. It seemed like a great\n>> idea at the time, but it's going to confuse people not just Bison.\n> I'm not particularly invested in the idea of renaming ANALYZE - but I\n> think we might be able to come up with something less ambiguous than\n> EXECUTE. Even EXECUTION might be better.\n\nThe GQL draft uses PROFILE as a separate top-level command, so it would be\n\n PROFILE SELECT ...\n\nThat seems nice and clear.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Tue, 18 Jun 2019 23:08:31 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: New EXPLAIN option: ALL"
},
{
"msg_contents": "On Tue, Jun 18, 2019 at 11:08:31PM +0200, Peter Eisentraut wrote:\n> On 2019-05-15 19:58, Andres Freund wrote:\n> > On 2019-05-15 13:53:26 -0400, Tom Lane wrote:\n> >> FWIW, given the conflict against \"EXPLAIN EXECUTE prepared_stmt_name\",\n> >> we should probably just drop the whole idea. It seemed like a great\n> >> idea at the time, but it's going to confuse people not just Bison.\n> > I'm not particularly invested in the idea of renaming ANALYZE - but I\n> > think we might be able to come up with something less ambiguous than\n> > EXECUTE. Even EXECUTION might be better.\n> \n> The GQL draft uses PROFILE as a separate top-level command, so it would be\n> \n> PROFILE SELECT ...\n> \n> That seems nice and clear.\n\nAre you proposing something along the lines of this?\n\nPROFILE [statement]; /* Shows the plan */\nPROFILE RUN [statement]; /* Actually executes the query */\n\nBest,\nDavid.\n-- \nDavid Fetter <david(at)fetter(dot)org> http://fetter.org/\nPhone: +1 415 235 3778\n\nRemember to vote!\nConsider donating to Postgres: http://www.postgresql.org/about/donate\n\n\n",
"msg_date": "Tue, 18 Jun 2019 23:15:25 +0200",
"msg_from": "David Fetter <david@fetter.org>",
"msg_from_op": true,
"msg_subject": "Re: New EXPLAIN option: ALL"
},
{
"msg_contents": "On 2019-06-18 23:15, David Fetter wrote:\n> Are you proposing something along the lines of this?\n> \n> PROFILE [statement]; /* Shows the plan */\n> PROFILE RUN [statement]; /* Actually executes the query */\n\nNo, it would be\n\nEXPLAIN statement; /* Shows the plan */\nPROFILE statement; /* Actually executes the query */\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Wed, 19 Jun 2019 08:15:50 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: New EXPLAIN option: ALL"
},
{
"msg_contents": "On 19/06/2019 18:15, Peter Eisentraut wrote:\n> On 2019-06-18 23:15, David Fetter wrote:\n>> Are you proposing something along the lines of this?\n>>\n>> PROFILE [statement]; /* Shows the plan */\n>> PROFILE RUN [statement]; /* Actually executes the query */\n> No, it would be\n>\n> EXPLAIN statement; /* Shows the plan */\n> PROFILE statement; /* Actually executes the query */\n>\nI think that looks good, and the verbs seem well appropriate. IMnsHO\n\n\n\n",
"msg_date": "Wed, 19 Jun 2019 20:18:56 +1200",
"msg_from": "Gavin Flower <GavinFlower@archidevsys.co.nz>",
"msg_from_op": false,
"msg_subject": "Re: New EXPLAIN option: ALL"
},
{
"msg_contents": "> On 19 Jun 2019, at 08:15, Peter Eisentraut <peter.eisentraut@2ndquadrant.com> wrote:\n> \n> On 2019-06-18 23:15, David Fetter wrote:\n>> Are you proposing something along the lines of this?\n>> \n>> PROFILE [statement]; /* Shows the plan */\n>> PROFILE RUN [statement]; /* Actually executes the query */\n> \n> No, it would be\n> \n> EXPLAIN statement; /* Shows the plan */\n> PROFILE statement; /* Actually executes the query */\n\nThat makes a lot of sense.\n\ncheers ./daniel\n\n",
"msg_date": "Wed, 19 Jun 2019 14:08:21 +0200",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: New EXPLAIN option: ALL"
},
{
"msg_contents": "On Wed, Jun 19, 2019 at 02:08:21PM +0200, Daniel Gustafsson wrote:\n> > On 19 Jun 2019, at 08:15, Peter Eisentraut <peter.eisentraut@2ndquadrant.com> wrote:\n> > \n> > On 2019-06-18 23:15, David Fetter wrote:\n> >> Are you proposing something along the lines of this?\n> >> \n> >> PROFILE [statement]; /* Shows the plan */\n> >> PROFILE RUN [statement]; /* Actually executes the query */\n> > \n> > No, it would be\n> > \n> > EXPLAIN statement; /* Shows the plan */\n> > PROFILE statement; /* Actually executes the query */\n> \n> That makes a lot of sense.\n> \n> cheers ./daniel\n\n+1\n\nThanks for clarifying.\n\nBest,\nDavid.\n-- \nDavid Fetter <david(at)fetter(dot)org> http://fetter.org/\nPhone: +1 415 235 3778\n\nRemember to vote!\nConsider donating to Postgres: http://www.postgresql.org/about/donate\n\n\n",
"msg_date": "Wed, 19 Jun 2019 16:53:41 +0200",
"msg_from": "David Fetter <david@fetter.org>",
"msg_from_op": true,
"msg_subject": "Re: New EXPLAIN option: ALL"
},
{
"msg_contents": "On 2019-05-18 19:39, David Fetter wrote:\n> On Wed, May 15, 2019 at 09:32:31AM -0400, Tom Lane wrote:\n>> David Fetter <david@fetter.org> writes:\n>>> I hope the patch is a little easier to digest as now attached.\n>>\n>> To be blunt, I find 500K worth of changes in the regression test\n>> outputs to be absolutely unacceptable, especially when said changes\n>> are basically worthless from a diagnostic standpoint. There are at\n>> least two reasons why this won't fly:\n> \n> Here's a patch set with a much smaller change. Will that fly?\n\nThis appears to be the patch of record for this commit fest.\n\nI don't sense much enthusiasm for this change. What is the exact\nrationale for this proposal?\n\nI think using a new keyword EXEC that is similar to an existing one\nEXECUTE will likely just introduce a new class of confusion. (ANALYZE\nEXEC EXECUTE ...?)\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Tue, 2 Jul 2019 15:06:52 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: New EXPLAIN option: ALL"
},
{
"msg_contents": "On Tue, Jul 02, 2019 at 03:06:52PM +0100, Peter Eisentraut wrote:\n> On 2019-05-18 19:39, David Fetter wrote:\n> > On Wed, May 15, 2019 at 09:32:31AM -0400, Tom Lane wrote:\n> >> David Fetter <david@fetter.org> writes:\n> >>> I hope the patch is a little easier to digest as now attached.\n> >>\n> >> To be blunt, I find 500K worth of changes in the regression test\n> >> outputs to be absolutely unacceptable, especially when said changes\n> >> are basically worthless from a diagnostic standpoint. There are at\n> >> least two reasons why this won't fly:\n> > \n> > Here's a patch set with a much smaller change. Will that fly?\n> \n> This appears to be the patch of record for this commit fest.\n> \n> I don't sense much enthusiasm for this change.\n\nNeither do I, so withdrawn.\n\nI do hope we can go with EXPLAIN and PROFILE, as opposed to\nEXPLAIN/EXPLAIN ANALYZE.\n\nBest,\nDavid.\n-- \nDavid Fetter <david(at)fetter(dot)org> http://fetter.org/\nPhone: +1 415 235 3778\n\nRemember to vote!\nConsider donating to Postgres: http://www.postgresql.org/about/donate\n\n\n",
"msg_date": "Tue, 2 Jul 2019 20:06:18 +0200",
"msg_from": "David Fetter <david@fetter.org>",
"msg_from_op": true,
"msg_subject": "Re: New EXPLAIN option: ALL"
}
] |
[
{
"msg_contents": "Attached is an attempt to match surrounding code. More broadly,\nthough, it seems the \"ID info\" comments belong with the SET_LOCKTAG_*\nmacros rather than with the LockTagType enum members.\n\n-- \nJohn Naylor https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services",
"msg_date": "Tue, 7 May 2019 15:41:50 +0800",
"msg_from": "John Naylor <john.naylor@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "copy-past-o comment in lock.h"
},
{
"msg_contents": "On Tue, May 07, 2019 at 03:41:50PM +0800, John Naylor wrote:\n> Attached is an attempt to match surrounding code. More broadly,\n> though, it seems the \"ID info\" comments belong with the SET_LOCKTAG_*\n> macros rather than with the LockTagType enum members.\n\n+ LOCKTAG_SPECULATIVE_TOKEN, /* for speculative insertion */\n+ /* ID info for a speculative token is TRANSACTION info + token */\nShouldn't the first comment be just \"speculative insertion\"? And the\nsecond one \"ID info for a speculative insertion is transaction ID +\nits speculative insert counter\"?\n--\nMichael",
"msg_date": "Tue, 7 May 2019 17:00:24 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: copy-past-o comment in lock.h"
},
{
"msg_contents": "On Tue, May 7, 2019 at 4:00 PM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Tue, May 07, 2019 at 03:41:50PM +0800, John Naylor wrote:\n> > Attached is an attempt to match surrounding code. More broadly,\n> > though, it seems the \"ID info\" comments belong with the SET_LOCKTAG_*\n> > macros rather than with the LockTagType enum members.\n>\n> + LOCKTAG_SPECULATIVE_TOKEN, /* for speculative insertion */\n> + /* ID info for a speculative token is TRANSACTION info + token */\n> Shouldn't the first comment be just \"speculative insertion\"?\n\nThat's probably better.\n\n> And the\n> second one \"ID info for a speculative insertion is transaction ID +\n> its speculative insert counter\"?\n\nI was just going by the variable name at hand, but more precision may be good.\n\n-- \nJohn Naylor https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Tue, 7 May 2019 16:12:31 +0800",
"msg_from": "John Naylor <john.naylor@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: copy-past-o comment in lock.h"
},
{
"msg_contents": "On Tue, May 07, 2019 at 04:12:31PM +0800, John Naylor wrote:\n> That's probably better.\n\nWould you like to send an updated patch? Perhaps you have a better\nidea?\n--\nMichael",
"msg_date": "Wed, 8 May 2019 16:10:25 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: copy-past-o comment in lock.h"
},
{
"msg_contents": "On Wed, May 8, 2019 at 3:10 PM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Tue, May 07, 2019 at 04:12:31PM +0800, John Naylor wrote:\n> > That's probably better.\n>\n> Would you like to send an updated patch? Perhaps you have a better\n> idea?\n> --\n> Michael\n\nIn the attached, I've used your language, and also moved the comments\ncloser to the code they are describing. That seems more logical and\nfuture proof.\n\n-- \nJohn Naylor https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services",
"msg_date": "Wed, 8 May 2019 15:59:36 +0800",
"msg_from": "John Naylor <john.naylor@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: copy-past-o comment in lock.h"
},
{
"msg_contents": "On Wed, May 08, 2019 at 03:59:36PM +0800, John Naylor wrote:\n> In the attached, I've used your language, and also moved the comments\n> closer to the code they are describing. That seems more logical and\n> future proof.\n\nGood idea to move the comments, so what you propose looks fine to me.\nAre there any objections?\n--\nMichael",
"msg_date": "Wed, 8 May 2019 17:03:31 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: copy-past-o comment in lock.h"
},
{
"msg_contents": "On Wed, May 08, 2019 at 05:03:31PM +0900, Michael Paquier wrote:\n> Good idea to move the comments so what you proposes looks fine to me.\n> Are there any objections?\n\nOkay, committed.\n--\nMichael",
"msg_date": "Fri, 10 May 2019 09:36:26 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: copy-past-o comment in lock.h"
}
] |
[
{
"msg_contents": "Spotted two minor typos when skimming through code, and a sentence on\nthe return value which seemed a bit odd since executeJsonPath() can exit on\nereport(). The attached diff fixes the typos and suggests a new wording.\n\ncheers ./daniel",
"msg_date": "Tue, 7 May 2019 14:38:57 +0200",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": true,
"msg_subject": "Typos and wording in jsonpath-exec.c"
},
{
"msg_contents": "On Tue, May 7, 2019 at 2:39 PM Daniel Gustafsson <daniel@yesql.se> wrote:\n\n> Spotted two minor typos when skimming through code, and a sentence on\n> returnvalue which seemed a bit odd since executeJsonPath() can exit on\n> ereport(). The attached diff fixes the typos and suggests a new wording.\n>\n\nPushed. Thanks!\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/ <http://www.hagander.net/>\n Work: https://www.redpill-linpro.com/ <http://www.redpill-linpro.com/>",
"msg_date": "Tue, 7 May 2019 18:27:08 +0200",
"msg_from": "Magnus Hagander <magnus@hagander.net>",
"msg_from_op": false,
"msg_subject": "Re: Typos and wording in jsonpath-exec.c"
}
] |
[
{
"msg_contents": "Hi,\n\nvacuumdb command supports the corresponding options to\nany VACUUM parameters except INDEX_CLEANUP and TRUNCATE\nthat were added recently. Should vacuumdb also support those\nnew parameters, i.e., add --index-cleanup and --truncate options\nto the command?\n\nRegards,\n\n-- \nFujii Masao\n\n\n",
"msg_date": "Wed, 8 May 2019 02:40:41 +0900",
"msg_from": "Fujii Masao <masao.fujii@gmail.com>",
"msg_from_op": true,
"msg_subject": "vacuumdb and new VACUUM options"
},
{
"msg_contents": "On Wed, May 8, 2019 at 2:41 AM Fujii Masao <masao.fujii@gmail.com> wrote:\n>\n> Hi,\n>\n> vacuumdb command supports the corresponding options to\n> any VACUUM parameters except INDEX_CLEANUP and TRUNCATE\n> that were added recently. Should vacuumdb also support those\n> new parameters, i.e., add --index-cleanup and --truncate options\n> to the command?\n\nI think it's a good idea to add new options of these parameters for\nvacuumdb. While making INDEX_CLEANUP option patch I also attached the\npatch for INDEX_CLEANUP parameter before[1], although it adds\n--disable-index-cleanup option instead.\n\n[1] 0002 patch on\nhttps://www.postgresql.org/message-id/CAD21AoBtM%3DHGLkMKBgch37mf0-epa3_o%3DY1PU0_m9r5YmtS-NQ%40mail.gmail.com\n\nRegards,\n\n--\nMasahiko Sawada\nNIPPON TELEGRAPH AND TELEPHONE CORPORATION\nNTT Open Source Software Center\n\n\n",
"msg_date": "Wed, 8 May 2019 09:26:35 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: vacuumdb and new VACUUM options"
},
{
"msg_contents": "On Wed, May 08, 2019 at 09:26:35AM +0900, Masahiko Sawada wrote:\n> I think it's a good idea to add new options of these parameters for\n> vacuumdb. While making INDEX_CLEANUP option patch I also attached the\n> patch for INDEX_CLEANUP parameter before[1], although it adds\n> --disable-index-cleanup option instead.\n\nI have added an open item for that. I think that we should add\nthese options.\n--\nMichael",
"msg_date": "Wed, 8 May 2019 16:06:09 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: vacuumdb and new VACUUM options"
},
{
"msg_contents": "On Wed, May 8, 2019 at 9:06 AM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Wed, May 08, 2019 at 09:26:35AM +0900, Masahiko Sawada wrote:\n> > I think it's a good idea to add new options of these parameters for\n> > vacuumdb. While making INDEX_CLEANUP option patch I also attached the\n> > patch for INDEX_CLEANUP parameter before[1], although it adds\n> > --disable-index-cleanup option instead.\n>\n> I have added an open item for that. I think that we should added\n> these options.\n\n+1, and thanks for adding the open item!\n\n\n",
"msg_date": "Wed, 8 May 2019 11:37:23 +0200",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: vacuumdb and new VACUUM options"
},
{
"msg_contents": "On Wed, May 8, 2019 at 9:32 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Wed, May 8, 2019 at 2:41 AM Fujii Masao <masao.fujii@gmail.com> wrote:\n> >\n> > Hi,\n> >\n> > vacuumdb command supports the corresponding options to\n> > any VACUUM parameters except INDEX_CLEANUP and TRUNCATE\n> > that were added recently. Should vacuumdb also support those\n> > new parameters, i.e., add --index-cleanup and --truncate options\n> > to the command?\n>\n> I think it's a good idea to add new options of these parameters for\n> vacuumdb. While making INDEX_CLEANUP option patch I also attached the\n> patch for INDEX_CLEANUP parameter before[1], although it adds\n> --disable-index-cleanup option instead.\n\nRegarding INDEX_CLEANUP, now VACUUM has three modes;\n\n(1) VACUUM (INDEX_CLEANUP on) does index cleanup\n whatever vacuum_index_cleanup reloption is.\n(2) VACUUM (INDEX_CLEANUP off) does not do index cleanup\n whatever vacuum_index_cleanup reloption is.\n(3) plain VACUUM decides whether to do index cleanup\n according to vacuum_index_cleanup reloption.\n\nIf no option for index cleanup is specified, vacuumdb command\nshould work in the mode (3). IMO this is intuitive.\n\nThe question is; we should support vacuumdb option for (1), i.e.,,\nsomething like --index-cleanup option is added?\nOr for (2), i.e., something like --disable-index-cleanup option is added\nas your patch does? Or for both?\n\nRegards,\n\n-- \nFujii Masao\n\n\n",
"msg_date": "Thu, 9 May 2019 02:18:48 +0900",
"msg_from": "Fujii Masao <masao.fujii@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: vacuumdb and new VACUUM options"
},
{
"msg_contents": "Em qua, 8 de mai de 2019 às 14:19, Fujii Masao <masao.fujii@gmail.com> escreveu:\n>\n> The question is; we should support vacuumdb option for (1), i.e.,,\n> something like --index-cleanup option is added?\n> Or for (2), i.e., something like --disable-index-cleanup option is added\n> as your patch does? Or for both?\n>\n--index-cleanup=BOOL\n\n\n-- \n Euler Taveira Timbira -\nhttp://www.timbira.com.br/\n PostgreSQL: Consultoria, Desenvolvimento, Suporte 24x7 e Treinamento\n\n\n",
"msg_date": "Wed, 8 May 2019 18:21:09 -0300",
"msg_from": "Euler Taveira <euler@timbira.com.br>",
"msg_from_op": false,
"msg_subject": "Re: vacuumdb and new VACUUM options"
},
{
"msg_contents": "On Wed, May 08, 2019 at 06:21:09PM -0300, Euler Taveira wrote:\n> Em qua, 8 de mai de 2019 às 14:19, Fujii Masao <masao.fujii@gmail.com> escreveu:\n>> The question is; we should support vacuumdb option for (1), i.e.,,\n>> something like --index-cleanup option is added?\n>> Or for (2), i.e., something like --disable-index-cleanup option is added\n>> as your patch does? Or for both?\n>\n> --index-cleanup=BOOL\n\nI agree with Euler's suggestion to have a 1-1 mapping between the\noption of vacuumdb and the VACUUM parameter, because that's more\nintuitive:\n- --index-cleanup=3Dfalse =3D> VACUUM (INDEX_CLEANUP=3Dfalse)\n- --index-cleanup=3Dtrue =3D> VACUUM (INDEX_CLEANUP=3Dtrue)\n- no --index-cleanup means to rely on the reloption.\n--\nMichael",
"msg_date": "Thu, 9 May 2019 10:00:56 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: vacuumdb and new VACUUM options"
},
{
"msg_contents": "On Thu, May 9, 2019 at 10:01 AM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Wed, May 08, 2019 at 06:21:09PM -0300, Euler Taveira wrote:\n> > Em qua, 8 de mai de 2019 às 14:19, Fujii Masao <masao.fujii@gmail.com> escreveu:\n> >> The question is; we should support vacuumdb option for (1), i.e.,,\n> >> something like --index-cleanup option is added?\n> >> Or for (2), i.e., something like --disable-index-cleanup option is added\n> >> as your patch does? Or for both?\n> >\n> > --index-cleanup=BOOL\n>\n> I agree with Euler's suggestion to have a 1-1 mapping between the\n> option of vacuumdb and the VACUUM parameter\n\n+1. Attached the draft version patches for both options.\n\nRegards,\n\n--\nMasahiko Sawada\nNIPPON TELEGRAPH AND TELEPHONE CORPORATION\nNTT Open Source Software Center",
"msg_date": "Thu, 9 May 2019 20:14:51 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: vacuumdb and new VACUUM options"
},
{
"msg_contents": "At Thu, 9 May 2019 20:14:51 +0900, Masahiko Sawada <sawada.mshk@gmail.com> wrote in <CAD21AoBmA9H3ZRuQFF+9io9PKhP+ePS=D+ThZ6ohRMdBm2x8Pw@mail.gmail.com>\n> On Thu, May 9, 2019 at 10:01 AM Michael Paquier <michael@paquier.xyz> wrote:\n> >\n> > On Wed, May 08, 2019 at 06:21:09PM -0300, Euler Taveira wrote:\n> > > Em qua, 8 de mai de 2019 às 14:19, Fujii Masao <masao.fujii@gmail.com> escreveu:\n> > >> The question is; we should support vacuumdb option for (1), i.e.,,\n> > >> something like --index-cleanup option is added?\n> > >> Or for (2), i.e., something like --disable-index-cleanup option is added\n> > >> as your patch does? Or for both?\n> > >\n> > > --index-cleanup=BOOL\n> >\n> > I agree with Euler's suggestion to have a 1-1 mapping between the\n> > option of vacuumdb and the VACUUM parameter\n> \n> +1. Attached the draft version patches for both options.\n\n+\tprintf(_(\" --index-cleanup=BOOLEAN do or do not index vacuuming and index cleanup\\n\"));\n+\tprintf(_(\" --truncate=BOOLEAN do or do not truncate off empty pages at the end of the table\\n\"));\n\nI *feel* that force/inhibit is suitable than true/false for the\noptions.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n\n",
"msg_date": "Thu, 09 May 2019 20:38:16 +0900 (Tokyo Standard Time)",
"msg_from": "Kyotaro HORIGUCHI <horiguchi.kyotaro@lab.ntt.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: vacuumdb and new VACUUM options"
},
{
"msg_contents": "On Thu, May 9, 2019 at 1:39 PM Kyotaro HORIGUCHI\n<horiguchi.kyotaro@lab.ntt.co.jp> wrote:\n>\n> At Thu, 9 May 2019 20:14:51 +0900, Masahiko Sawada <sawada.mshk@gmail.com> wrote in <CAD21AoBmA9H3ZRuQFF+9io9PKhP+ePS=D+ThZ6ohRMdBm2x8Pw@mail.gmail.com>\n> > On Thu, May 9, 2019 at 10:01 AM Michael Paquier <michael@paquier.xyz> wrote:\n> > >\n> > > On Wed, May 08, 2019 at 06:21:09PM -0300, Euler Taveira wrote:\n> > > > Em qua, 8 de mai de 2019 às 14:19, Fujii Masao <masao.fujii@gmail.com> escreveu:\n> > > >> The question is; we should support vacuumdb option for (1), i.e.,,\n> > > >> something like --index-cleanup option is added?\n> > > >> Or for (2), i.e., something like --disable-index-cleanup option is added\n> > > >> as your patch does? Or for both?\n> > > >\n> > > > --index-cleanup=BOOL\n> > >\n> > > I agree with Euler's suggestion to have a 1-1 mapping between the\n> > > option of vacuumdb and the VACUUM parameter\n> >\n> > +1. Attached the draft version patches for both options.\n>\n> + printf(_(\" --index-cleanup=BOOLEAN do or do not index vacuuming and index cleanup\\n\"));\n> + printf(_(\" --truncate=BOOLEAN do or do not truncate off empty pages at the end of the table\\n\"));\n>\n> I *feel* that force/inhibit is suitable than true/false for the\n> options.\n\nIndeed.\n\n+ If not specify this option\n+ the behavior depends on <literal>vacuum_index_cleanup</literal> option\n+ for the table to be vacuumed.\n\n+ If not specify this option\n+ the behavior depends on <literal>vacuum_truncate</literal> option\n+ for the table to be vacuumed.\n\nThose sentences should be rephrased to something like \"If this option\nis not specified, the bahvior...\".\n\n\n",
"msg_date": "Fri, 10 May 2019 14:03:06 +0200",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: vacuumdb and new VACUUM options"
},
{
"msg_contents": "On Fri, May 10, 2019 at 9:03 PM Julien Rouhaud <rjuju123@gmail.com> wrote:\n>\n> On Thu, May 9, 2019 at 1:39 PM Kyotaro HORIGUCHI\n> <horiguchi.kyotaro@lab.ntt.co.jp> wrote:\n> >\n> > At Thu, 9 May 2019 20:14:51 +0900, Masahiko Sawada <sawada.mshk@gmail.com> wrote in <CAD21AoBmA9H3ZRuQFF+9io9PKhP+ePS=D+ThZ6ohRMdBm2x8Pw@mail.gmail.com>\n> > > On Thu, May 9, 2019 at 10:01 AM Michael Paquier <michael@paquier.xyz> wrote:\n> > > >\n> > > > On Wed, May 08, 2019 at 06:21:09PM -0300, Euler Taveira wrote:\n> > > > > Em qua, 8 de mai de 2019 às 14:19, Fujii Masao <masao.fujii@gmail.com> escreveu:\n> > > > >> The question is; we should support vacuumdb option for (1), i.e.,,\n> > > > >> something like --index-cleanup option is added?\n> > > > >> Or for (2), i.e., something like --disable-index-cleanup option is added\n> > > > >> as your patch does? Or for both?\n> > > > >\n> > > > > --index-cleanup=BOOL\n> > > >\n> > > > I agree with Euler's suggestion to have a 1-1 mapping between the\n> > > > option of vacuumdb and the VACUUM parameter\n> > >\n> > > +1. Attached the draft version patches for both options.\n> >\n> > + printf(_(\" --index-cleanup=BOOLEAN do or do not index vacuuming and index cleanup\\n\"));\n> > + printf(_(\" --truncate=BOOLEAN do or do not truncate off empty pages at the end of the table\\n\"));\n> >\n> > I *feel* that force/inhibit is suitable than true/false for the\n> > options.\n>\n> Indeed.\n\nThe new VACUUM command option for these option take true and false as\nthe same meaning. 
What is the motivation is to change a 1-1 mapping\nname?\n\n>\n> + If not specify this option\n> + the behavior depends on <literal>vacuum_index_cleanup</literal> option\n> + for the table to be vacuumed.\n>\n> + If not specify this option\n> + the behavior depends on <literal>vacuum_truncate</literal> option\n> + for the table to be vacuumed.\n>\n> Those sentences should be rephrased to something like \"If this option\n> is not specified, the bahvior...\".\n\nThank you! I've incorporated your comment in my branch. I'll post the\nupdated version patch after the above discussion got a consensus.\n\nRegards,\n\n--\nMasahiko Sawada\nNIPPON TELEGRAPH AND TELEPHONE CORPORATION\nNTT Open Source Software Center\n\n\n",
"msg_date": "Mon, 13 May 2019 19:28:25 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: vacuumdb and new VACUUM options"
},
{
"msg_contents": "On Mon, May 13, 2019 at 07:28:25PM +0900, Masahiko Sawada wrote:\n> Thank you! I've incorporated your comment in my branch. I'll post the\n> updated version patch after the above discussion got a consensus.\n\nFujii-san, any input about the way to move forward here? Beta1 is\nplanned for next week, hence it would be nice to progress on this\nfront this week.\n--\nMichael",
"msg_date": "Tue, 14 May 2019 10:01:40 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: vacuumdb and new VACUUM options"
},
{
"msg_contents": "On Tue, May 14, 2019 at 10:01 AM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Mon, May 13, 2019 at 07:28:25PM +0900, Masahiko Sawada wrote:\n> > Thank you! I've incorporated your comment in my branch. I'll post the\n> > updated version patch after the above discussion got a consensus.\n>\n> Fujii-san, any input about the way to move forward here? Beta1 is\n> planned for next week, hence it would be nice to progress on this\n> front this week.\n\nI think that we can push \"--index-cleanup=BOOLEAN\" version into beta1,\nand then change the interface of the options if we received many\ncomplaints about \"--index-cleanup=BOOLEAN\" from users. So this week,\nI'd like to review Sawada's patch and commit it if that's ok.\n\nRegards,\n\n-- \nFujii Masao\n\n\n",
"msg_date": "Wed, 15 May 2019 02:55:27 +0900",
"msg_from": "Fujii Masao <masao.fujii@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: vacuumdb and new VACUUM options"
},
{
"msg_contents": "On Thu, May 9, 2019 at 8:20 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Thu, May 9, 2019 at 10:01 AM Michael Paquier <michael@paquier.xyz> wrote:\n> >\n> > On Wed, May 08, 2019 at 06:21:09PM -0300, Euler Taveira wrote:\n> > > Em qua, 8 de mai de 2019 às 14:19, Fujii Masao <masao.fujii@gmail.com> escreveu:\n> > >> The question is; we should support vacuumdb option for (1), i.e.,,\n> > >> something like --index-cleanup option is added?\n> > >> Or for (2), i.e., something like --disable-index-cleanup option is added\n> > >> as your patch does? Or for both?\n> > >\n> > > --index-cleanup=BOOL\n> >\n> > I agree with Euler's suggestion to have a 1-1 mapping between the\n> > option of vacuumdb and the VACUUM parameter\n>\n> +1. Attached the draft version patches for both options.\n\nThanks for the patch!\n\n+ if (strncasecmp(opt_str, \"true\", 4) != 0 &&\n+ strncasecmp(opt_str, \"false\", 5) != 0)\n\nShouldn't we allow also \"on\" and \"off\", \"1\", \"0\" as a valid boolean value,\nlike VACUUM does?\n\n+ char *index_cleanup;\n\nThe patch would be simpler if enum trivalue is used for index_cleanup\nvariable as the type.\n\nRegards,\n\n-- \nFujii Masao\n\n\n",
"msg_date": "Wed, 15 May 2019 03:19:29 +0900",
"msg_from": "Fujii Masao <masao.fujii@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: vacuumdb and new VACUUM options"
},
{
"msg_contents": "On Wed, May 15, 2019 at 03:19:29AM +0900, Fujii Masao wrote:\n> + if (strncasecmp(opt_str, \"true\", 4) != 0 &&\n> + strncasecmp(opt_str, \"false\", 5) != 0)\n> \n> Shouldn't we allow also \"on\" and \"off\", \"1\", \"0\" as a valid boolean value,\n> like VACUUM does?\n\nI am wondering, in order to keep this patch simple, if you shouldn't\naccept any value and just let the parsing logic on the backend side\ndo all the work. That's what we do for other things like the\nconnection parameter replication for example, and there is no need to\nmimic a boolean parsing equivalent on the frontend with something like\ncheck_bool_str() as presented in the patch. The main downside is that\nthe error message gets linked to VACUUM and not vacuumdb.\n\nAnother thing which you may be worth looking at would be to make\nparse_bool() frontend aware, where pg_strncasecmp() is actually\navailable.\n--\nMichael",
"msg_date": "Wed, 15 May 2019 07:51:20 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: vacuumdb and new VACUUM options"
},
{
"msg_contents": "On Wed, May 15, 2019 at 7:51 AM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Wed, May 15, 2019 at 03:19:29AM +0900, Fujii Masao wrote:\n> > + if (strncasecmp(opt_str, \"true\", 4) != 0 &&\n> > + strncasecmp(opt_str, \"false\", 5) != 0)\n> >\n> > Shouldn't we allow also \"on\" and \"off\", \"1\", \"0\" as a valid boolean value,\n> > like VACUUM does?\n>\n> I am wondering, in order to keep this patch simple, if you shouldn't\n> accept any value and just let the parsing logic on the backend side\n> do all the work. That's what we do for other things like the\n> connection parameter replication for example, and there is no need to\n> mimic a boolean parsing equivalent on the frontend with something like\n> check_bool_str() as presented in the patch. The main downside is that\n> the error message gets linked to VACUUM and not vacuumdb.\n\nI might be missing something but if the frontend code doesn't check\narguments and we let the backend parsing logic do all the work then it\nallows user to execute an arbitrary SQL command via vacuumdb.\n\n>\n> Another thing which you may be worth looking at would be to make\n> parse_bool() frontend aware, where pg_strncasecmp() is actually\n> available.\n\nOr how about add a function that parse a boolean string value, as a\ncommon routine among frontend programs, maybe in common.c or fe_utils?\nWe results in having the duplicate code between frontend and backend\nbut it may be less side effects than making parse_bool available on\nfrontend code.\n\nRegards,\n\n--\nMasahiko Sawada\nNIPPON TELEGRAPH AND TELEPHONE CORPORATION\nNTT Open Source Software Center\n\n\n",
"msg_date": "Wed, 15 May 2019 11:36:52 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: vacuumdb and new VACUUM options"
},
{
"msg_contents": "Hi,\n\nOn 2019-05-15 11:36:52 +0900, Masahiko Sawada wrote:\n> I might be missing something but if the frontend code doesn't check\n> arguments and we let the backend parsing logic do all the work then it\n> allows user to execute an arbitrary SQL command via vacuumdb.\n\nBut, so what? The user could just have used psql to do so?\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 14 May 2019 19:45:03 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: vacuumdb and new VACUUM options"
},
{
"msg_contents": "On Wed, May 15, 2019 at 11:45 AM Andres Freund <andres@anarazel.de> wrote:\n>\n> Hi,\n>\n> On 2019-05-15 11:36:52 +0900, Masahiko Sawada wrote:\n> > I might be missing something but if the frontend code doesn't check\n> > arguments and we let the backend parsing logic do all the work then it\n> > allows user to execute an arbitrary SQL command via vacuumdb.\n>\n> But, so what? The user could just have used psql to do so?\n\nIndeed. It shouldn't be a problem and we even now can do that by\nspecifying for example --table=\"t(c1);select 1\" but doesn't work.\n\nRegards,\n\n--\nMasahiko Sawada\nNIPPON TELEGRAPH AND TELEPHONE CORPORATION\nNTT Open Source Software Center\n\n\n",
"msg_date": "Wed, 15 May 2019 13:01:21 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: vacuumdb and new VACUUM options"
},
{
"msg_contents": "On Wed, May 15, 2019 at 1:01 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Wed, May 15, 2019 at 11:45 AM Andres Freund <andres@anarazel.de> wrote:\n> >\n> > Hi,\n> >\n> > On 2019-05-15 11:36:52 +0900, Masahiko Sawada wrote:\n> > > I might be missing something but if the frontend code doesn't check\n> > > arguments and we let the backend parsing logic do all the work then it\n> > > allows user to execute an arbitrary SQL command via vacuumdb.\n> >\n> > But, so what? The user could just have used psql to do so?\n>\n> Indeed. It shouldn't be a problem and we even now can do that by\n> specifying for example --table=\"t(c1);select 1\" but doesn't work.\n>\n\nI've attached new version patch that takes the way to let the backend\nparser do all work.\n\nRegards,\n\n--\nMasahiko Sawada\nNIPPON TELEGRAPH AND TELEPHONE CORPORATION\nNTT Open Source Software Center",
"msg_date": "Wed, 15 May 2019 15:44:22 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: vacuumdb and new VACUUM options"
},
{
"msg_contents": "On Wed, May 15, 2019 at 03:44:22PM +0900, Masahiko Sawada wrote:\n> I've attached new version patch that takes the way to let the backend\n> parser do all work.\n\nI was wondering how the error handling gets by not having the parsing\non the frontend, and it could be worse:\n$ vacuumdb --index-cleanup=popo\nvacuumdb: vacuuming database \"postgres\"\nvacuumdb: error: vacuuming of table \"pg_catalog.pg_proc\" in database\n\"postgres\" failed: ERROR: index_cleanup requires a Boolean value\n$ vacuumdb --truncate=popo\nvacuumdb: vacuuming database \"postgres\"\nvacuumdb: error: vacuuming of table \"pg_catalog.pg_proc\" in database\n\"postgres\" failed: ERROR: truncate requires a Boolean value\n\nFor TRUNCATE, we actually get to the same error, and INDEX_CLEANUP\njust defers with the separator between the two terms. I think that we\ncould live with that for simplicity's sake. Perhaps others have\ndifferent opinions though.\n\n+ if (vacopts.index_cleanup != NULL)\nChecking directly for NULL-ness here is inconsistent with the previous\ncallers.\n\n+$node->issues_sql_like(\n+ [ 'vacuumdb', '--index-cleanup=true', 'postgres' ],\n+ qr/statement: VACUUM \\(INDEX_CLEANUP true\\).*;/,\n+ 'vacuumdb --index-cleanup=true')\nWe should have a failure test here instead of testing two times the\nsame boolean parameter with opposite values to make sure that we still\ncomplain on invalid input values.\n\n+ Specify that <command>VACUUM</command> should attempt to remove\n+ index entries pointing to dead tuples. 
If this option is not specified\n+ the behavior depends on <literal>vacuum_index_cleanup</literal> option\n+ for the table to be vacuumed.\nThe description of other commands do not mention directly VACUUM, and\nhave a more straight-forward description about the goal of the option.\nSo the first sentence could be reworked as follows:\nRemoves index entries pointing to dead tuples.\n\nAnd the second for --truncate:\nTruncates any empty pages at the end of the relation.\n\nMy 2c.\n--\nMichael",
"msg_date": "Fri, 17 May 2019 11:09:22 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: vacuumdb and new VACUUM options"
},
{
"msg_contents": "On Fri, May 17, 2019 at 11:09 AM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Wed, May 15, 2019 at 03:44:22PM +0900, Masahiko Sawada wrote:\n> > I've attached new version patch that takes the way to let the backend\n> > parser do all work.\n>\n> I was wondering how the error handling gets by not having the parsing\n> on the frontend, and it could be worse:\n> $ vacuumdb --index-cleanup=popo\n> vacuumdb: vacuuming database \"postgres\"\n> vacuumdb: error: vacuuming of table \"pg_catalog.pg_proc\" in database\n> \"postgres\" failed: ERROR: index_cleanup requires a Boolean value\n> $ vacuumdb --truncate=popo\n> vacuumdb: vacuuming database \"postgres\"\n> vacuumdb: error: vacuuming of table \"pg_catalog.pg_proc\" in database\n> \"postgres\" failed: ERROR: truncate requires a Boolean value\n>\n> For TRUNCATE, we actually get to the same error, and INDEX_CLEANUP\n> just defers with the separator between the two terms. I think that we\n> could live with that for simplicity's sake.\n\n+1\n\n> Perhaps others have different opinions though.\n\nIs it helpful for user if we show executed SQL when fails as the\nfrontend programs does in executeCommand?\n\n$ vacuumdb --index-cleanup=foo postgres\nvacuumdb: vacuuming database \"postgres\"\nvacuumdb: error: vacuuming of table \"pg_catalog.pg_proc\" in database\n\"postgres\" failed: ERROR: index_cleanup requires a Boolean value\nvacuumdb: query was: VACUUM (INDEX_CLEANUP foo) pg_catalog.pg_proc;\n\n>\n> + if (vacopts.index_cleanup != NULL)\n> Checking directly for NULL-ness here is inconsistent with the previous\n> callers.\n>\n> +$node->issues_sql_like(\n> + [ 'vacuumdb', '--index-cleanup=true', 'postgres' ],\n> + qr/statement: VACUUM \\(INDEX_CLEANUP true\\).*;/,\n> + 'vacuumdb --index-cleanup=true')\n> We should have a failure test here instead of testing two times the\n> same boolean parameter with opposite values to make sure that we still\n> complain on invalid input values.\n\nFixed.\n\n>\n> + 
Specify that <command>VACUUM</command> should attempt to remove\n> + index entries pointing to dead tuples. If this option is not specified\n> + the behavior depends on <literal>vacuum_index_cleanup</literal> option\n> + for the table to be vacuumed.\n> The description of other commands do not mention directly VACUUM, and\n> have a more straight-forward description about the goal of the option.\n> So the first sentence could be reworked as follows:\n> Removes index entries pointing to dead tuples.\n>\n> And the second for --truncate:\n> Truncates any empty pages at the end of the relation.\n\nFixed.\n\n\nRegards,\n\n--\nMasahiko Sawada\nNIPPON TELEGRAPH AND TELEPHONE CORPORATION\nNTT Open Source Software Center\n\n\n",
"msg_date": "Fri, 17 May 2019 18:10:59 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: vacuumdb and new VACUUM options"
},
{
"msg_contents": "Hi,\n\nOn 2019-05-15 15:44:22 +0900, Masahiko Sawada wrote:\n> From de60d212b50a6412e483c995b83e28c5597089ad Mon Sep 17 00:00:00 2001\n> From: Masahiko Sawada <sawada.mshk@gmail.com>\n> Date: Thu, 9 May 2019 20:02:05 +0900\n> Subject: [PATCH v3 1/2] Add --index-cleanup option to vacuumdb.\n\n> From 59e3146f585e288d41738daa9a1d18687e2851d1 Mon Sep 17 00:00:00 2001\n> From: Masahiko Sawada <sawada.mshk@gmail.com>\n> Date: Wed, 15 May 2019 15:27:51 +0900\n> Subject: [PATCH v3 2/2] Add --truncate option to vacuumdb.\n> \n\nMy impression is that these are better treated as feature work, to be\ntackled in v13. I see no urgency to push this for v12. There's still\nsome disagreements on how parts of this are implemented, and we've beta1\ncoming up.\n\nIMO we should just register this patch for the next CF, and drop the\nopen item.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Fri, 17 May 2019 13:11:53 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: vacuumdb and new VACUUM options"
},
{
"msg_contents": "On Fri, May 17, 2019 at 01:11:53PM -0700, Andres Freund wrote:\n> My impression is that these are better treated as feature work, to be\n> tackled in v13. I see no urgency to push this for v12. There's still\n> some disagreements on how parts of this are implemented, and we've beta1\n> coming up.\n\nIt is true that we have lived without some options in vacuumdb while\nthese were already introduced at the SQL level, so I am in favor of\nwhat you suggest here. Fujii-san, what do you think?\n--\nMichael",
"msg_date": "Sat, 18 May 2019 19:18:38 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: vacuumdb and new VACUUM options"
},
{
"msg_contents": "On Sat, May 18, 2019 at 7:19 PM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Fri, May 17, 2019 at 01:11:53PM -0700, Andres Freund wrote:\n> > My impression is that these are better treated as feature work, to be\n> > tackled in v13. I see no urgency to push this for v12. There's still\n> > some disagreements on how parts of this are implemented, and we've beta1\n> > coming up.\n>\n> It is true that we have lived without some options in vacuumdb while\n> these were already introduced at the SQL level, so I am in favor of\n> what you suggest here. Fujii-san, what do you think?\n\nI'm ok to drop this from open items for v12 because this is not a bug.\nLet's work on this next CommitFest.\n\nRegards,\n\n-- \nFujii Masao\n\n\n",
"msg_date": "Mon, 20 May 2019 10:17:31 +0900",
"msg_from": "Fujii Masao <masao.fujii@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: vacuumdb and new VACUUM options"
},
{
"msg_contents": "On Mon, May 20, 2019 at 10:17:31AM +0900, Fujii Masao wrote:\n> I'm ok to drop this from open items for v12 because this is not a bug.\n> Let's work on this next CommitFest.\n\nOkay, I have moved out the item from the list of opened ones.\n--\nMichael",
"msg_date": "Mon, 20 May 2019 10:33:10 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: vacuumdb and new VACUUM options"
}
] |
[
{
"msg_contents": "After running the core regression tests with installcheck-parallel,\nthe pg_locks view sometimes shows me apparently-orphaned SIReadLock\nentries. They accumulate over repeated test runs. Right now,\nfor example, I see\n\nregression=# select * from pg_locks;\n locktype | database | relation | page | tuple | virtualxid | transactionid | classid | objid | objsubid | virtualtransaction | pid | mode | granted | fastpath \n------------+----------+----------+------+-------+------------+---------------+---------+-------+----------+--------------------+------+-----------------+---------+----------\n relation | 130144 | 12137 | | | | | | | | 3/7977 | 8924 | AccessShareLock | t | t\n virtualxid | | | | | 3/7977 | | | | | 3/7977 | 8924 | ExclusiveLock | t | t\n relation | 130144 | 136814 | | | | | | | | 22/536 | 8076 | SIReadLock | t | f\n relation | 111195 | 118048 | | | | | | | | 19/665 | 6738 | SIReadLock | t | f\n relation | 130144 | 134850 | | | | | | | | 12/3093 | 7984 | SIReadLock | t | f\n(5 rows)\n\nafter having done a couple of installcheck iterations since starting the\npostmaster.\n\nThe PIDs shown as holding those locks don't exist anymore, but digging\nin the postmaster log shows that they were session backends during the\nregression test runs. Furthermore, it seems like they usually were the\nones running either the triggers or portals tests.\n\nI don't see this behavior in v11 (though maybe I just didn't run it\nlong enough). In HEAD, a run adds one or two new entries more often\nthan not.\n\nThis is a pretty bad bug IMO --- quite aside from any ill effects\nof the entries themselves, the leak seems fast enough that it'd run\na production installation out of locktable space before very long.\n\nI'd have to say that my first suspicion falls on bb16aba50 ...\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 07 May 2019 13:46:54 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "We're leaking predicate locks in HEAD"
},
{
"msg_contents": "On Wed, May 8, 2019 at 5:46 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> After running the core regression tests with installcheck-parallel,\n> the pg_locks view sometimes shows me apparently-orphaned SIReadLock\n> entries. [...]\n\nUgh.\n\n> I'd have to say that my first suspicion falls on bb16aba50 ...\n\nInvestigating.\n\n-- \nThomas Munro\nhttps://enterprisedb.com\n\n\n",
"msg_date": "Wed, 8 May 2019 06:56:09 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: We're leaking predicate locks in HEAD"
},
{
"msg_contents": "\nOn 5/7/19 1:46 PM, Tom Lane wrote:\n> After running the core regression tests with installcheck-parallel,\n> the pg_locks view sometimes shows me apparently-orphaned SIReadLock\n> entries. They accumulate over repeated test runs. \n\n\nShould we have a test for that run at/near the end of the regression\ntests? The buildfarm will actually do multiple runs like this if set up\nto do parallel checks and test multiple locales.\n\n\ncheers\n\n\nandrew\n\n\n-- \nAndrew Dunstan https://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n\n",
"msg_date": "Tue, 7 May 2019 15:50:26 -0400",
"msg_from": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: We're leaking predicate locks in HEAD"
},
{
"msg_contents": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com> writes:\n> On 5/7/19 1:46 PM, Tom Lane wrote:\n>> After running the core regression tests with installcheck-parallel,\n>> the pg_locks view sometimes shows me apparently-orphaned SIReadLock\n>> entries. They accumulate over repeated test runs. \n\n> Should we have a test for that run at/near the end of the regression\n> tests? The buildfarm will actually do multiple runs like this if set up\n> to do parallel checks and test multiple locales.\n\nNo, I'm not excited about that idea; I think it'd have all the same\nfragility as the late lamented \"REINDEX pg_class\" test. A given test\nscript has no business assuming that other test scripts aren't\nlegitimately taking out predicate locks, nor assuming that prior test\nscripts are fully cleaned up when it runs.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 07 May 2019 15:55:23 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: We're leaking predicate locks in HEAD"
},
{
"msg_contents": "On Wed, May 8, 2019 at 6:56 AM Thomas Munro <thomas.munro@gmail.com> wrote:\n> On Wed, May 8, 2019 at 5:46 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > I'd have to say that my first suspicion falls on bb16aba50 ...\n>\n> Investigating.\n\nReproduced here. Once the system reaches a state where it's leaking\n(which happens only occasionally for me during installcheck-parallel),\nit keeps leaking for future SSI transactions. The cause is\nSxactGlobalXmin getting stuck. The attached fixes it for me. I can't\nremember why on earth I made that change, but it is quite clearly\nwrong: you have to check every transaction, or you might never advance\nSxactGlobalXmin.\n\n-- \nThomas Munro\nhttps://enterprisedb.com",
"msg_date": "Wed, 8 May 2019 15:30:32 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: We're leaking predicate locks in HEAD"
},
{
"msg_contents": "Thomas Munro <thomas.munro@gmail.com> writes:\n> Reproduced here. Once the system reaches a state where it's leaking\n> (which happens only occasionally for me during installcheck-parallel),\n> it keeps leaking for future SSI transactions. The cause is\n> SxactGlobalXmin getting stuck. The attached fixes it for me. I can't\n> remember why on earth I made that change, but it is quite clearly\n> wrong: you have to check every transaction, or you might never advance\n> SxactGlobalXmin.\n\nHm. So I don't have any opinion about whether this is a correct fix for\nthe leak, but I am quite distressed that the system failed to notice that\nit was leaking predicate locks. Shouldn't there be the same sort of\nleak-detection infrastructure that we have for most types of resources?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 07 May 2019 23:53:06 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: We're leaking predicate locks in HEAD"
},
{
"msg_contents": "On Wed, May 8, 2019 at 3:53 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Thomas Munro <thomas.munro@gmail.com> writes:\n> > Reproduced here. Once the system reaches a state where it's leaking\n> > (which happens only occasionally for me during installcheck-parallel),\n> > it keeps leaking for future SSI transactions. The cause is\n> > SxactGlobalXmin getting stuck. The attached fixes it for me. I can't\n> > remember why on earth I made that change, but it is quite clearly\n> > wrong: you have to check every transaction, or you might never advance\n> > SxactGlobalXmin.\n>\n> Hm. So I don't have any opinion about whether this is a correct fix for\n> the leak, but I am quite distressed that the system failed to notice that\n> it was leaking predicate locks. Shouldn't there be the same sort of\n> leak-detection infrastructure that we have for most types of resources?\n\nWell, it is hooked up the usual release machinery, because it's in\nReleasePredicateLocks(), which is wired into the\nRESOURCE_RELEASE_LOCKS phase of resowner.c. The thing is that lock\nlifetime is linked to the last transaction with the oldest known xmin,\nnot the transaction that created them.\n\nMore analysis: Lock clean-up is deferred until \"... the last\nserializable transaction with the oldest xmin among serializable\ntransactions completes\", but I broke that by excluding read-only\ntransactions from the check so that SxactGlobalXminCount gets out of\nsync. There's a read-only SSI transaction in\nsrc/test/regress/sql/transactions.sql, but I think the reason the\nproblem manifests only intermittently with installcheck-parallel is\nbecause sometimes the read-only optimisation kicks in (effectively\ndropping us to plain old SI because there's no concurrent serializable\nactivity) and it doesn't take any locks at all, and sometimes the\nread-only transaction doesn't have the oldest known xmin among\nserializable transactions. 
However, if a read-write SSI transaction\nhad already taken a snapshot and has the oldest xmin and then the\nread-only one starts with the same xmin, we get into trouble. When\nthe read-only one releases, we fail to decrement SxactGlobalXminCount,\nand then we'll never call ClearOldPredicateLocks().\n\n-- \nThomas Munro\nhttps://enterprisedb.com\n\n\n",
"msg_date": "Wed, 8 May 2019 16:50:02 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: We're leaking predicate locks in HEAD"
},
{
"msg_contents": "On Wed, May 8, 2019 at 4:50 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> On Wed, May 8, 2019 at 3:53 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > Thomas Munro <thomas.munro@gmail.com> writes:\n> > > Reproduced here. Once the system reaches a state where it's leaking\n> > > (which happens only occasionally for me during installcheck-parallel),\n> > > it keeps leaking for future SSI transactions. The cause is\n> > > SxactGlobalXmin getting stuck. The attached fixes it for me. I can't\n> > > remember why on earth I made that change, but it is quite clearly\n> > > wrong: you have to check every transaction, or you might never advance\n> > > SxactGlobalXmin.\n\nI pushed a version of that, thereby reverting the already-analysed\nhunk, and also another similar hunk (probably harmless).\n\nThe second hunk dates from a time in development when I was treating\nthe final clean-up at commit time as a regular commit, but that failed\nin PreCommit_CheckForSerializationFailure() because the DOOMED flag\nwas set by the earlier RO_SAFE partial release. The change was no\nlonger necessary, because final release of a partially released\nread-only transaction is now done with isCommit forced to false.\n(Before bb16aba50, it was done directly at RO_SAFE release time with\nisCommit set to false, but bb16aba50 split the operation into two\nphases, partial and then final, due to the extended object lifetime\nrequirement when sharing the SERIALIZABLEXACT with parallel workers.)\n\nI'll update the open items page.\n\n-- \nThomas Munro\nhttps://enterprisedb.com\n\n\n",
"msg_date": "Thu, 9 May 2019 20:43:06 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: We're leaking predicate locks in HEAD"
}
] |
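The SxactGlobalXmin bookkeeping bug diagnosed in the thread above can be modeled with a minimal sketch. All names and structure here are illustrative, not the actual predicate.c code: the real implementation tracks the oldest xmin among live serializable transactions plus a count of transactions holding it, and defers predicate-lock cleanup until that count reaches zero.

```c
#include <stdbool.h>

/* Toy model of SxactGlobalXmin / SxactGlobalXminCount bookkeeping. */
typedef struct
{
    int  xmin;
    bool read_only;
} ToySxact;

static int sxact_global_xmin = 0;        /* oldest xmin among live SSI xacts */
static int sxact_global_xmin_count = 0;  /* live xacts holding that xmin */

static void
toy_sxact_begin(const ToySxact *s)
{
    if (sxact_global_xmin_count == 0 || s->xmin < sxact_global_xmin)
    {
        sxact_global_xmin = s->xmin;
        sxact_global_xmin_count = 1;
    }
    else if (s->xmin == sxact_global_xmin)
        sxact_global_xmin_count++;
}

/*
 * Buggy variant: skipping read-only transactions leaves the count stuck
 * nonzero when a read-only xact shares the oldest xmin, so cleanup
 * (ClearOldPredicateLocks in the real code) never fires and locks leak.
 */
static void
toy_sxact_release_buggy(const ToySxact *s)
{
    if (!s->read_only && s->xmin == sxact_global_xmin)
        sxact_global_xmin_count--;
}

/* Fixed variant: every transaction participates in the count. */
static void
toy_sxact_release_fixed(const ToySxact *s)
{
    if (s->xmin == sxact_global_xmin)
        sxact_global_xmin_count--;
}
```

This reproduces the failure mode Thomas describes: a read-write transaction takes the oldest xmin, a read-only transaction starts with the same xmin, and under the buggy release path the count never returns to zero.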
[
{
"msg_contents": "is_publishable_class has a test \"relid >= FirstNormalObjectId\",\nwhich I think we should drop, for two reasons:\n\n1. It makes the comment claiming that this function tests the same\nthings as check_publication_add_relation a lie.\n\n2. The comment about it claims that the purpose is to reject\ninformation_schema relations, but if that's so, it's ineffective.\nWe consider it supported to drop and recreate information_schema,\nand have indeed recommended doing so for some minor-version\nupgrades. After that, the information_schema relations would no\nlonger have OIDs recognizable to this test.\n\nSo what is the motivation for this test? If there's an important\nreason for it, we need to find a less fragile way to express it.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 07 May 2019 15:25:11 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Fuzzy thinking in is_publishable_class"
},
{
"msg_contents": "I wrote:\n> is_publishable_class has a test \"relid >= FirstNormalObjectId\",\n> which I think we should drop, for two reasons:\n\n> 1. It makes the comment claiming that this function tests the same\n> things as check_publication_add_relation a lie.\n\n> 2. The comment about it claims that the purpose is to reject\n> information_schema relations, but if that's so, it's ineffective.\n> We consider it supported to drop and recreate information_schema,\n> and have indeed recommended doing so for some minor-version\n> upgrades. After that, the information_schema relations would no\n> longer have OIDs recognizable to this test.\n\n> So what is the motivation for this test? If there's an important\n> reason for it, we need to find a less fragile way to express it.\n\nAfter further digging around, I wonder whether this test wasn't\nsomehow related to the issue described in\n\nhttps://postgr.es/m/2321.1557263978@sss.pgh.pa.us\n\nThat doesn't completely make sense, since the restriction on\nrelkind should render it moot whether IsCatalogClass thinks\nthat a toast table is a catalog table, but maybe there's a link?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 07 May 2019 17:30:05 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Fuzzy thinking in is_publishable_class"
},
{
"msg_contents": "I wrote:\n> is_publishable_class has a test \"relid >= FirstNormalObjectId\",\n> which I think we should drop, for two reasons:\n> ...\n> So what is the motivation for this test? If there's an important\n> reason for it, we need to find a less fragile way to express it.\n\nI tried removing the FirstNormalObjectId check, and found that the\nreason for it seems to be \"the subscription/t/004_sync.pl test\nfalls over without it\". That's because that test supposes that\nthe *only* entry in pg_subscription_rel will be for the test table\nthat it creates. Without the FirstNormalObjectId check, the\ninformation_schema relations also show up in pg_subscription_rel,\nconfusing the script's simplistic status check.\n\nI'm of two minds what to do about that. One approach is to just\ndefine a \"FOR ALL TABLES\" publication as including the information_schema\ntables, in which case 004_sync.pl is wrong and we should fix it by\nadding a suitable WHERE restriction to its pg_subscription_rel check.\nHowever, possibly that would break some applications that are likewise\nassuming that no built-in tables appear in pg_subscription_rel.\n\nBut, if what we want is the definition that \"information_schema is\nexcluded from publishable tables\", I'm not satisfied with this\nimplementation of that rule. Dropping/recreating information_schema\nwould cause the behavior to change. We could, at the cost of an\nadditional syscache lookup, check the name of the schema that a\npotentially publishable table belongs to and exclude information_schema\nby name. I don't have much idea about how performance-critical\nis_publishable_class is, so I don't know how acceptable that seems.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 08 May 2019 22:37:05 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Fuzzy thinking in is_publishable_class"
},
{
"msg_contents": "On 2019-05-09 04:37, Tom Lane wrote:\n> I tried removing the FirstNormalObjectId check, and found that the\n> reason for it seems to be \"the subscription/t/004_sync.pl test\n> falls over without it\". That's because that test supposes that\n> the *only* entry in pg_subscription_rel will be for the test table\n> that it creates. Without the FirstNormalObjectId check, the\n> information_schema relations also show up in pg_subscription_rel,\n> confusing the script's simplistic status check.\n\nright\n\n> I'm of two minds what to do about that. One approach is to just\n> define a \"FOR ALL TABLES\" publication as including the information_schema\n> tables,\n\ncertainly not\n\n> But, if what we want is the definition that \"information_schema is\n> excluded from publishable tables\", I'm not satisfied with this\n> implementation of that rule. Dropping/recreating information_schema\n> would cause the behavior to change. We could, at the cost of an\n> additional syscache lookup, check the name of the schema that a\n> potentially publishable table belongs to and exclude information_schema\n> by name. I don't have much idea about how performance-critical\n> is_publishable_class is, so I don't know how acceptable that seems.\n\nI would classify the tables in information_schema on the side of being a\nsystem catalog, meaning that they are not replicated and they are\ncovered by whatever REINDEX SYSTEM thinks it should cover.\n\nIt would also make sense to integrate both of these concepts more\nconsistently with the user_catalog_table feature. Perhaps the\ninformation_schema tables could be made user catalogs. Really we should\njust have a single flag in pg_class that says \"I'm a catalog\",\napplicable both to built-in catalogs and to user-defined catalogs.\n\nI think we can get rid of the ability to reload the information_schema\nafter initdb. 
That was interesting in the early phase of its\ndevelopment, but now it just creates complications.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Thu, 9 May 2019 09:30:50 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Fuzzy thinking in is_publishable_class"
},
{
"msg_contents": "\n\nOn 09/05/2019 04:37, Tom Lane wrote:\n> I wrote:\n>> is_publishable_class has a test \"relid >= FirstNormalObjectId\",\n>> which I think we should drop, for two reasons:\n>> ...\n>> So what is the motivation for this test? If there's an important\n>> reason for it, we need to find a less fragile way to express it.\n> \n> I tried removing the FirstNormalObjectId check, and found that the\n> reason for it seems to be \"the subscription/t/004_sync.pl test\n> falls over without it\". That's because that test supposes that\n> the *only* entry in pg_subscription_rel will be for the test table\n> that it creates. Without the FirstNormalObjectId check, the\n> information_schema relations also show up in pg_subscription_rel,\n> confusing the script's simplistic status check.\n> \n> I'm of two minds what to do about that. One approach is to just\n> define a \"FOR ALL TABLES\" publication as including the information_schema\n> tables, in which case 004_sync.pl is wrong and we should fix it by\n> adding a suitable WHERE restriction to its pg_subscription_rel check.\n> However, possibly that would break some applications that are likewise\n> assuming that no built-in tables appear in pg_subscription_rel.\n> \n\nI was and still am worried that including information_schema in \"FOR ALL\nTABLES\" will result in breakage or at least unexpected behavior in case\nuser adjusts anything in the information_schema catalogs.\n\nIMHO only user created tables should be part of \"FOR ALL TABLES\" hence\nthe FirstNormalObjectId check.\n\nThe fact that information_schema can be recreated and is not considered\nsystem catalog by some commands but kind of is by others is more problem\nof how we added information_schema and it's definitely not ideal, we\nshould either consider it system schema like pg_catalog is or consider\nit everywhere an user catalog. 
For me the latter makes little sense\ngiven that it comes with the database.\n\n> But, if what we want is the definition that \"information_schema is\n> excluded from publishable tables\", I'm not satisfied with this\n> implementation of that rule. Dropping/recreating information_schema\n> would cause the behavior to change. We could, at the cost of an\n> additional syscache lookup, check the name of the schema that a\n> potentially publishable table belongs to and exclude information_schema\n> by name. I don't have much idea about how performance-critical\n> is_publishable_class is, so I don't know how acceptable that seems.\n> \n\nI think we need a better way of identifying what's part of system and\nwhat's user created in general. The FirstNormalObjectId seems somewhat\nokay approximation, but then we have plenty of other ways for checking,\nmaybe it's time to consolidate it into some extra column in pg_class?\n\n-- \nPetr Jelinek\n2ndQuadrant - PostgreSQL Solutions\nhttps://www.2ndQuadrant.com/\n\n\n",
"msg_date": "Thu, 9 May 2019 14:31:10 +0200",
"msg_from": "Petr Jelinek <petr.jelinek@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Fuzzy thinking in is_publishable_class"
},
{
"msg_contents": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com> writes:\n> On 2019-05-09 04:37, Tom Lane wrote:\n>> But, if what we want is the definition that \"information_schema is\n>> excluded from publishable tables\", I'm not satisfied with this\n>> implementation of that rule.\n\n> ... It would also make sense to integrate both of these concepts more\n> consistently with the user_catalog_table feature. Perhaps the\n> information_schema tables could be made user catalogs. Really we should\n> just have a single flag in pg_class that says \"I'm a catalog\",\n> applicable both to built-in catalogs and to user-defined catalogs.\n\nI do not want to go there because (a) it means that you can't tell a\ncatalog from a non-catalog without a catalog lookup, which has got\nobvious circularity problems, and (b) the idea that a user can add\na catalog without hacking the C code is silly on its face. I would\nsay that the actual important functional distinction between a catalog\nand a user table is whether the C code knows about it.\n\nPerhaps, for replication purposes, there's some value in having a\nthird category of tables that are treated more nearly like catalogs\nthan user tables in whether-to-replicate decisions. But let's not\nfuzz the issue by calling them catalogs. I think just calling it\na \"NO REPLICATE\" property would be less confusing.\n\n> I think we can get rid of the ability to reload the information_schema\n> after initdb. That was interesting in the early phase of its\n> development, but now it just creates complications.\n\nWe've relied on that more than once to allow minor-release updates of\ninformation_schema views, so I think losing the ability to do it is\na bad idea.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 09 May 2019 09:41:16 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Fuzzy thinking in is_publishable_class"
},
{
"msg_contents": "Petr Jelinek <petr.jelinek@2ndquadrant.com> writes:\n> I think we need a better way of identifying what's part of system and\n> what's user created in general. The FirstNormalObjectId seems somewhat\n> okay approximation, but then we have plenty of other ways for checking,\n> maybe it's time to consolidate it into some extra column in pg_class?\n\nI'd be on board with adding \"bool relpublishable\" or the like to pg_class.\nWe'd also need infrastructure for setting that, of course, so it's not\na five-minute fix. In the meantime I guess we have to leave the\nis_publishable_class test like it is.\n\nI am thinking though that the replication code's tests of type OIDs\nagainst FirstNormalObjectId are broken. The essential property that\nthose are after, IIUC, is \"will the remote server certainly have the\nsame definition of this type as the local one?\" That is *absolutely\nnot guaranteed* for types defined in information_schema, because\ntheir OIDs aren't locked down and could plausibly be different across\ninstallations. I forget whether we load collations before or after\ninformation_schema, so this might or might not be a live bug today,\nbut it's certainly something waiting to bite us on the rear.\n\nActually --- that's for logical replication, isn't it? And we allow\nlogical replication across versions, don't we? If so, it is a live\nbug. Only hand-assigned type OIDs should be trusted to hold still\nacross major versions.\n\nIn short I think we'd better s/FirstNormalObjectId/FirstGenbkiObjectId/\nin logical/relation.c and pgoutput/pgoutput.c, and I think that's\nprobably a back-patchable bug fix of some urgency.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 09 May 2019 10:20:17 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Fuzzy thinking in is_publishable_class"
},
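Tom's distinction between hand-assigned and later-assigned OIDs can be sketched as follows. The constant values are taken from PostgreSQL's access/transam.h as of the v12 era and should be treated as illustrative: only OIDs below FirstGenbkiObjectId are assigned by hand in the catalog headers and therefore guaranteed stable across major versions; genbki.pl and initdb assign the rest at build/initdb time.

```c
#include <stdbool.h>

/* OID range boundaries (illustrative values from access/transam.h). */
#define FirstGenbkiObjectId    10000   /* genbki.pl-assigned OIDs start here */
#define FirstBootstrapObjectId 12000   /* initdb-assigned OIDs start here */
#define FirstNormalObjectId    16384   /* user-created objects start here */

/*
 * The test Tom proposes for "will a remote server of a different major
 * version agree on this type OID?": trust only hand-assigned OIDs.
 */
static bool
type_oid_stable_across_versions(unsigned int typid)
{
    return typid < FirstGenbkiObjectId;
}

/*
 * The weaker test being replaced: it also treats genbki/initdb-assigned
 * OIDs (e.g. information_schema types) as stable, which they are not.
 */
static bool
type_oid_builtin_only(unsigned int typid)
{
    return typid < FirstNormalObjectId;
}
```

An information_schema type with an initdb-assigned OID (say 13500, hypothetical) passes the weaker check but fails the stricter one, which is exactly the cross-version hazard described above.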
{
"msg_contents": "Hi,\n\nOn 2019-05-09 09:30:50 +0200, Peter Eisentraut wrote:\n> On 2019-05-09 04:37, Tom Lane wrote:\n> > I'm of two minds what to do about that. One approach is to just\n> > define a \"FOR ALL TABLES\" publication as including the information_schema\n> > tables,\n> \n> certainly not\n\nYea, that strikes me as a bad idea too.\n\n\n> It would also make sense to integrate both of these concepts more\n> consistently with the user_catalog_table feature. Perhaps the\n> information_schema tables could be made user catalogs. Really we should\n> just have a single flag in pg_class that says \"I'm a catalog\",\n> applicable both to built-in catalogs and to user-defined catalogs.\n\nHm - I'm not convinced by that. There's some lower-level reasons why we\ncan't easily replicate changes to system catalogs, but those don't exist\nfor user catalog tables. And in fact, they can be replicated today.\n\n\n> I think we can get rid of the ability to reload the information_schema\n> after initdb. That was interesting in the early phase of its\n> development, but now it just creates complications.\n\nYea, I'm far from convinced it's worth having that available. I wonder\nif we at least could have the reordering instructions not drop\ninformation_schema, so we'd have a stable oid for that. Or use some\npg_upgrade style logic to recreate it. Or have NamespaceCreate() just\nhardcode the relevant oid for information_schema.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 9 May 2019 07:43:19 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Fuzzy thinking in is_publishable_class"
},
{
"msg_contents": "On Thu, May 9, 2019 at 9:41 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > I think we can get rid of the ability to reload the information_schema\n> > after initdb. That was interesting in the early phase of its\n> > development, but now it just creates complications.\n>\n> We've relied on that more than once to allow minor-release updates of\n> information_schema views, so I think losing the ability to do it is\n> a bad idea.\n\n+1\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Mon, 13 May 2019 11:32:40 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Fuzzy thinking in is_publishable_class"
},
{
"msg_contents": "On 2019-05-09 15:41, Tom Lane wrote:\n>> I think we can get rid of the ability to reload the information_schema\n>> after initdb. That was interesting in the early phase of its\n>> development, but now it just creates complications.\n> We've relied on that more than once to allow minor-release updates of\n> information_schema views, so I think losing the ability to do it is\n> a bad idea.\n\nIn those cases we used CREATE OR REPLACE VIEW, which preserves OIDs.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Thu, 23 May 2019 15:13:00 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Fuzzy thinking in is_publishable_class"
}
] |
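The fragility discussed in the thread above — an OID-range test that stops filtering information_schema once the schema is dropped and recreated — can be sketched with a toy model. Field names and values here are illustrative, not the actual pg_class columns consulted by is_publishable_class.

```c
#include <stdbool.h>
#include <string.h>

#define FirstNormalObjectId 16384

typedef struct
{
    unsigned int oid;
    const char  *nspname;
    char         relkind;
} ToyRel;

/*
 * The OID-range test: it excludes information_schema only while its
 * relations keep their initdb-time (sub-16384) OIDs.  After a
 * drop-and-recreate, the recreated tables get normal OIDs and slip
 * through.
 */
static bool
publishable_by_oid_range(const ToyRel *r)
{
    return r->relkind == 'r' && r->oid >= FirstNormalObjectId;
}

/*
 * The by-name alternative Tom floats: it survives a recreate of
 * information_schema, at the cost of an extra schema-name lookup.
 */
static bool
publishable_by_schema_name(const ToyRel *r)
{
    return r->relkind == 'r' &&
           strcmp(r->nspname, "information_schema") != 0;
}
```

With hypothetical OIDs, an initdb-time information_schema table (OID 13000) is excluded by both tests, but a recreated one (OID 20000) is excluded only by the name-based test.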
[
{
"msg_contents": "Hi,\n\nI was expecting the plans generated by standard_join_search to have lower costs\nthan the plans from GEQO. But after the results I have from a join order\nbenchmark show that GEQO produces plans with lower costs most of the time!\n\nI wonder what is causing this observation? From my understanding,\nstandard_join_search is doing a complete search. So I'm not sure how the GEQO\nmanaged to do better than that.\n\nThank you,\nDonald Dong\n\n",
"msg_date": "Tue, 7 May 2019 16:29:04 -0700",
"msg_from": "Donald Dong <xdong@csumb.edu>",
"msg_from_op": true,
"msg_subject": "Why could GEQO produce plans with lower costs than the\n standard_join_search?"
},
{
"msg_contents": "Donald Dong <xdong@csumb.edu> writes:\n> I was expecting the plans generated by standard_join_search to have lower costs\n> than the plans from GEQO. But after the results I have from a join order\n> benchmark show that GEQO produces plans with lower costs most of the time!\n\n> I wonder what is causing this observation? From my understanding,\n> standard_join_search is doing a complete search. So I'm not sure how the GEQO\n> managed to do better than that.\n\nstandard_join_search is *not* exhaustive; there's a heuristic that causes\nit not to consider clauseless joins unless it has to.\n\nFor the most part, GEQO uses the same heuristic (cf desirable_join()),\nbut given the right sort of query shape you can probably trick it into\nsituations where it will be forced to use a clauseless join when the\ncore code wouldn't. It'd still be surprising for that to come out with\na lower cost estimate than a join order that obeys the heuristic,\nthough. Clauseless joins are generally pretty awful.\n\nI'm a tad suspicious about the representativeness of your benchmark\nqueries if you find this is happening \"most of the time\".\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 07 May 2019 19:44:31 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Why could GEQO produce plans with lower costs than the\n standard_join_search?"
},
{
"msg_contents": "Hi,\n\nThank you very much for the explanation! I think the join order\nbenchmark I used [1] is somewhat representative, however, I probably\ndidn't use the most accurate cost estimation.\n\nI find the cost from cheapest_total_path->total_cost is different\nfrom the cost from queryDesc->planstate->total_cost. What I saw was\nthat GEQO tends to form paths with lower\ncheapest_total_path->total_cost (aka the fitness of the children).\nHowever, standard_join_search is more likely to produce a lower\nqueryDesc->planstate->total_cost, which is the cost we get using\nexplain.\n\nI wonder why those two total costs are different? If the total_cost\nfrom the planstate is more accurate, could we use that instead as the\nfitness in geqo_eval?\n\n[1] https://github.com/gregrahn/join-order-benchmark\n\nRegards,\nDonald Dong\n\n> On May 7, 2019, at 4:44 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> \n> Donald Dong <xdong@csumb.edu> writes:\n>> I was expecting the plans generated by standard_join_search to have lower costs\n>> than the plans from GEQO. But after the results I have from a join order\n>> benchmark show that GEQO produces plans with lower costs most of the time!\n> \n>> I wonder what is causing this observation? From my understanding,\n>> standard_join_search is doing a complete search. So I'm not sure how the GEQO\n>> managed to do better than that.\n> \n> standard_join_search is *not* exhaustive; there's a heuristic that causes\n> it not to consider clauseless joins unless it has to.\n> \n> For the most part, GEQO uses the same heuristic (cf desirable_join()),\n> but given the right sort of query shape you can probably trick it into\n> situations where it will be forced to use a clauseless join when the\n> core code wouldn't. It'd still be surprising for that to come out with\n> a lower cost estimate than a join order that obeys the heuristic,\n> though. 
Clauseless joins are generally pretty awful.\n> \n> I'm a tad suspicious about the representativeness of your benchmark\n> queries if you find this is happening \"most of the time\".\n> \n> \t\t\tregards, tom lane\n\n\n\n",
"msg_date": "Wed, 22 May 2019 11:35:07 -0700",
"msg_from": "Donald Dong <xdong@csumb.edu>",
"msg_from_op": true,
"msg_subject": "Re: Why could GEQO produce plans with lower costs than the\n standard_join_search?"
},
{
"msg_contents": "Donald Dong <xdong@csumb.edu> writes:\n> I find the cost from cheapest_total_path->total_cost is different\n> from the cost from queryDesc->planstate->total_cost. What I saw was\n> that GEQO tends to form paths with lower\n> cheapest_total_path->total_cost (aka the fitness of the children).\n> However, standard_join_search is more likely to produce a lower\n> queryDesc->planstate->total_cost, which is the cost we get using\n> explain.\n\n> I wonder why those two total costs are different? If the total_cost\n> from the planstate is more accurate, could we use that instead as the\n> fitness in geqo_eval?\n\nYou're still asking us to answer hypothetical questions unsupported\nby evidence. In what case does that really happen?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 22 May 2019 14:42:31 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Why could GEQO produce plans with lower costs than the\n standard_join_search?"
},
{
"msg_contents": "On May 22, 2019, at 11:42 AM, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> \n> Donald Dong <xdong@csumb.edu> writes:\n>> I find the cost from cheapest_total_path->total_cost is different\n>> from the cost from queryDesc->planstate->total_cost. What I saw was\n>> that GEQO tends to form paths with lower\n>> cheapest_total_path->total_cost (aka the fitness of the children).\n>> However, standard_join_search is more likely to produce a lower\n>> queryDesc->planstate->total_cost, which is the cost we get using\n>> explain.\n> \n>> I wonder why those two total costs are different? If the total_cost\n>> from the planstate is more accurate, could we use that instead as the\n>> fitness in geqo_eval?\n> \n> You're still asking us to answer hypothetical questions unsupported\n> by evidence. In what case does that really happen?\n\nHi,\n\nMy apologies if this is not the minimal necessary set up. But here's\nmore information about what I saw using the following query\n(JOB/1a.sql):\n\nSELECT MIN(mc.note) AS production_note,\n MIN(t.title) AS movie_title,\n MIN(t.production_year) AS movie_year\nFROM company_type AS ct,\n info_type AS it,\n movie_companies AS mc,\n movie_info_idx AS mi_idx,\n title AS t\nWHERE ct.kind = 'production companies'\n AND it.info = 'top 250 rank'\n AND mc.note NOT LIKE '%(as Metro-Goldwyn-Mayer Pictures)%'\n AND (mc.note LIKE '%(co-production)%'\n OR mc.note LIKE '%(presents)%')\n AND ct.id = mc.company_type_id\n AND t.id = mc.movie_id\n AND t.id = mi_idx.movie_id\n AND mc.movie_id = mi_idx.movie_id\n AND it.id = mi_idx.info_type_id;\n\nI attached the query plan and debug_print_rel output for GEQO and\nstandard_join_search.\n\n\t\t\tplanstate->total_cost\tcheapest_total_path\nGEQO\t\t54190.13\t\t\t\t54239.03\nSTD\t\t\t54179.02\t\t\t\t54273.73\n\nHere I observe GEQO produces a lower\ncheapest_total_path->total_cost, but its planstate->total_cost is higher\nthan what standard_join_search produces.\n\nRegards,\nDonald Dong",
"msg_date": "Wed, 22 May 2019 12:53:56 -0700",
"msg_from": "Donald Dong <xdong@csumb.edu>",
"msg_from_op": true,
"msg_subject": "Re: Why could GEQO produce plans with lower costs than the\n standard_join_search?"
},
{
"msg_contents": "Fwiw, I had an intern do some testing on the JOB last year, and he reported that geqo sometimes produced plans of lower cost than the standard planner (we were on PG10 at the time). I filed it under \"unexplained things that we need to investigate when we have time\", but alas...\r\n\r\nIn any case, Donald isn't the only one who has noticed this behavior. \r\n\r\nOn 5/22/19, 3:54 PM, \"Donald Dong\" <xdong@csumb.edu> wrote:\r\n\r\n On May 22, 2019, at 11:42 AM, Tom Lane <tgl@sss.pgh.pa.us> wrote:\r\n > \r\n > Donald Dong <xdong@csumb.edu> writes:\r\n >> I find the cost from cheapest_total_path->total_cost is different\r\n >> from the cost from queryDesc->planstate->total_cost. What I saw was\r\n >> that GEQO tends to form paths with lower\r\n >> cheapest_total_path->total_cost (aka the fitness of the children).\r\n >> However, standard_join_search is more likely to produce a lower\r\n >> queryDesc->planstate->total_cost, which is the cost we get using\r\n >> explain.\r\n > \r\n >> I wonder why those two total costs are different? If the total_cost\r\n >> from the planstate is more accurate, could we use that instead as the\r\n >> fitness in geqo_eval?\r\n > \r\n > You're still asking us to answer hypothetical questions unsupported\r\n > by evidence. In what case does that really happen?\r\n \r\n Hi,\r\n \r\n My apologies if this is not the minimal necessary set up. 
But here's\r\n more information about what I saw using the following query\r\n (JOB/1a.sql):\r\n \r\n SELECT MIN(mc.note) AS production_note,\r\n MIN(t.title) AS movie_title,\r\n MIN(t.production_year) AS movie_year\r\n FROM company_type AS ct,\r\n info_type AS it,\r\n movie_companies AS mc,\r\n movie_info_idx AS mi_idx,\r\n title AS t\r\n WHERE ct.kind = 'production companies'\r\n AND it.info = 'top 250 rank'\r\n AND mc.note NOT LIKE '%(as Metro-Goldwyn-Mayer Pictures)%'\r\n AND (mc.note LIKE '%(co-production)%'\r\n OR mc.note LIKE '%(presents)%')\r\n AND ct.id = mc.company_type_id\r\n AND t.id = mc.movie_id\r\n AND t.id = mi_idx.movie_id\r\n AND mc.movie_id = mi_idx.movie_id\r\n AND it.id = mi_idx.info_type_id;\r\n \r\n I attached the query plan and debug_print_rel output for GEQO and\r\n standard_join_search.\r\n \r\n \t\t\tplanstate->total_cost\tcheapest_total_path\r\n GEQO\t\t54190.13\t\t\t\t54239.03\r\n STD\t\t\t54179.02\t\t\t\t54273.73\r\n \r\n Here I observe GEQO produces a lower\r\n cheapest_total_path->total_cost, but its planstate->total_cost is higher\r\n than what standard_join_search produces.\r\n \r\n Regards,\r\n Donald Dong\r\n \r\n \r\n\r\n",
"msg_date": "Wed, 22 May 2019 21:03:08 +0000",
"msg_from": "\"Finnerty, Jim\" <jfinnert@amazon.com>",
"msg_from_op": false,
"msg_subject": "Re: Why could GEQO produce plans with lower costs than the\n standard_join_search?"
},
{
"msg_contents": ">>>>> \"Finnerty\" == Finnerty, Jim <jfinnert@amazon.com> writes:\n\n Finnerty> planstate-> total_cost\tcheapest_total_path\n Finnerty> GEQO\t\t54190.13\t\t54239.03\n Finnerty> STD\t\t54179.02\t\t54273.73\n\nThese differences aren't significant - the standard join search has a\n\"fuzz factor\" built into it, such that paths have to be more than 1%\nbetter in cost in order to actually be considered as being better than\nan existing path.\n\n-- \nAndrew (irc:RhodiumToad)\n\n\n",
"msg_date": "Thu, 23 May 2019 09:57:18 +0100",
"msg_from": "Andrew Gierth <andrew@tao11.riddles.org.uk>",
"msg_from_op": false,
"msg_subject": "Re: Why could GEQO produce plans with lower costs than the\n standard_join_search?"
},
{
"msg_contents": "Donald Dong <xdong@csumb.edu> writes:\n> On May 22, 2019, at 11:42 AM, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> You're still asking us to answer hypothetical questions unsupported\n>> by evidence. In what case does that really happen?\n\n> I attached the query plan and debug_print_rel output for GEQO and\n> standard_join_search.\n\n> \t\t\tplanstate->total_cost\tcheapest_total_path\n> GEQO\t\t54190.13\t\t\t\t54239.03\n> STD\t\t\t54179.02\t\t\t\t54273.73\n\n> Here I observe GEQO produces a lower\n> cheapest_total_path->total_cost, but its planstate->total_cost is higher\n> than what standard_join_search produces.\n\nWell,\n\n(1) the plan selected by GEQO is in fact more expensive than\nthe one found by the standard search. Not by much --- as Andrew\nobserves, this difference is less than what the planner considers\n\"fuzzily the same\" --- but nonetheless 54190.13 > 54179.02.\n\n(2) the paths you show do not correspond to the finally selected\nplans --- they aren't even the same shape. (The Gathers are in\ndifferent places, to start with.) I'm not sure where you were\ncapturing the path data, but it looks like you missed top-level\nparallel-aggregation planning, and that managed to find some\nplans that were marginally cheaper than the ones you captured.\nKeep in mind that GEQO only considers join planning, not\ngrouping/aggregation.\n\nAndrew's point about fuzzy cost comparison is also a good one,\nthough we needn't invoke it to explain these particular numbers.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 23 May 2019 12:02:18 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Why could GEQO produce plans with lower costs than the\n standard_join_search?"
},
{
"msg_contents": "On May 23, 2019, at 9:02 AM, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> \n> Donald Dong <xdong@csumb.edu> writes:\n>> On May 22, 2019, at 11:42 AM, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>>> You're still asking us to answer hypothetical questions unsupported\n>>> by evidence. In what case does that really happen?\n> \n>> I attached the query plan and debug_print_rel output for GEQO and\n>> standard_join_search.\n> \n>> \t\t\tplanstate->total_cost\tcheapest_total_path\n>> GEQO\t\t54190.13\t\t\t\t54239.03\n>> STD\t\t\t54179.02\t\t\t\t54273.73\n> \n>> Here I observe GEQO produces a lower\n>> cheapest_total_path->total_cost, but its planstate->total_cost is higher\n>> than what standard_join_search produces.\n> \n> Well,\n> \n> (1) the plan selected by GEQO is in fact more expensive than\n> the one found by the standard search. Not by much --- as Andrew\n> observes, this difference is less than what the planner considers\n> \"fuzzily the same\" --- but nonetheless 54190.13 > 54179.02.\n> \n> (2) the paths you show do not correspond to the finally selected\n> plans --- they aren't even the same shape. (The Gathers are in\n> different places, to start with.) I'm not sure where you were\n> capturing the path data, but it looks like you missed top-level\n> parallel-aggregation planning, and that managed to find some\n> plans that were marginally cheaper than the ones you captured.\n> Keep in mind that GEQO only considers join planning, not\n> grouping/aggregation.\n> \n> Andrew's point about fuzzy cost comparison is also a good one,\n> though we needn't invoke it to explain these particular numbers.\n\nOh, that's very good to know! I captured the path at the end of the\njoin_search_hook. If I understood correctly, top-level\nparallel-aggregation will be applied later, so GEQO is not taking it\ninto consideration during the join searching?\n\nBy looking at the captured costs, I thought GEQO found a better join\norder than the standard_join_search. 
However, the final plan using\nthe join order produced by GEQO turns out to be more expansive. Would\nthat imply if GEQO sees a join order which is identical to the one\nproduced by standard_join_search, it will discard it since the\ncheapest_total_path has a higher cost, though the final plan may be\ncheaper?\n\nHere is another query (JOB/27a.sql) which has more significant cost\ndifferences:\n\n\t\tplanstate->total_cost\tcheapest_total_path\nGEQO\t343016.77\t\t\t343016.75\nSTD\t\t342179.13\t\t\t344137.33\n\nRegards,\nDonald Dong\n\n\n",
"msg_date": "Thu, 23 May 2019 10:05:29 -0700",
"msg_from": "Donald Dong <xdong@csumb.edu>",
"msg_from_op": true,
"msg_subject": "Re: Why could GEQO produce plans with lower costs than the\n standard_join_search?"
},
{
"msg_contents": "Donald Dong <xdong@csumb.edu> writes:\n> On May 23, 2019, at 9:02 AM, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> (2) the paths you show do not correspond to the finally selected\n>> plans --- they aren't even the same shape. (The Gathers are in\n>> different places, to start with.) I'm not sure where you were\n>> capturing the path data, but it looks like you missed top-level\n>> parallel-aggregation planning, and that managed to find some\n>> plans that were marginally cheaper than the ones you captured.\n>> Keep in mind that GEQO only considers join planning, not\n>> grouping/aggregation.\n\n> By looking at the captured costs, I thought GEQO found a better join\n> order than the standard_join_search. However, the final plan using\n> the join order produced by GEQO turns out to be more expansive. Would\n> that imply if GEQO sees a join order which is identical to the one\n> produced by standard_join_search, it will discard it since the\n> cheapest_total_path has a higher cost, though the final plan may be\n> cheaper?\n\nI suspect what's really going on is that you're looking at the wrong\npaths. The planner remembers more paths for each rel than just the\ncheapest-total-cost one, the reason being that total cost is not the\nonly figure of merit. The plan that is winning in the end, it looks\nlike, is parallelized aggregation on top of a non-parallel join plan,\nbut the cheapest_total_path uses up the opportunity for a Gather on\na parallelized scan/join. If we were just doing a scan/join and\nno aggregation, that path would have been the basis for the final\nplan, but it's evidently not being chosen here; the planner is going\nto some other scan/join path that is not parallelized.\n\nI haven't looked closely at whether the parallel-query hacking has\npaid any attention to GEQO. 
It's entirely likely that GEQO is still\nchoosing its join order on the basis of cheapest-total scan/join cost\nwithout regard to parallelizability, which would lead to an apparently\nbetter cost for the cheapest_total_path even though the path that\nwill end up being used is some other one.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 23 May 2019 13:43:11 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Why could GEQO produce plans with lower costs than the\n standard_join_search?"
},
{
"msg_contents": "On May 23, 2019, at 10:43 AM, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Donald Dong <xdong@csumb.edu> writes:\n>> On May 23, 2019, at 9:02 AM, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>>> (2) the paths you show do not correspond to the finally selected\n>>> plans --- they aren't even the same shape. (The Gathers are in\n>>> different places, to start with.) I'm not sure where you were\n>>> capturing the path data, but it looks like you missed top-level\n>>> parallel-aggregation planning, and that managed to find some\n>>> plans that were marginally cheaper than the ones you captured.\n>>> Keep in mind that GEQO only considers join planning, not\n>>> grouping/aggregation.\n> \n>> By looking at the captured costs, I thought GEQO found a better join\n>> order than the standard_join_search. However, the final plan using\n>> the join order produced by GEQO turns out to be more expansive. Would\n>> that imply if GEQO sees a join order which is identical to the one\n>> produced by standard_join_search, it will discard it since the\n>> cheapest_total_path has a higher cost, though the final plan may be\n>> cheaper?\n> \n> I suspect what's really going on is that you're looking at the wrong\n> paths. The planner remembers more paths for each rel than just the\n> cheapest-total-cost one, the reason being that total cost is not the\n> only figure of merit. The plan that is winning in the end, it looks\n> like, is parallelized aggregation on top of a non-parallel join plan,\n> but the cheapest_total_path uses up the opportunity for a Gather on\n> a parallelized scan/join. 
If we were just doing a scan/join and\n> no aggregation, that path would have been the basis for the final\n> plan, but it's evidently not being chosen here; the planner is going\n> to some other scan/join path that is not parallelized.\n\nSeems the paths in the final rel (path list, cheapest parameterized\npaths, cheapest startup path, and cheapest total path) are the same\nidentical path for this particular query (JOB/1a.sql). Am I missing\nanything?\n\nSince the total cost of the cheapest-total-path is what GEQO is\ncurrently using to evaluate the fitness (minimizing), I'm expecting\nthe cheapest-total-cost to measure how good is a join order. So a\njoin order from standard_join_search, with higher\ncheapest-total-cost, ends up to be better is pretty surprising to me.\n\nPerhaps the cheapest-total-cost should not be the best/only choice\nfor fitness?\n\nRegards,\nDonald Dong\n\n\n",
"msg_date": "Thu, 23 May 2019 13:15:02 -0700",
"msg_from": "Donald Dong <xdong@csumb.edu>",
"msg_from_op": true,
"msg_subject": "Re: Why could GEQO produce plans with lower costs than the\n standard_join_search?"
},
{
"msg_contents": "Donald Dong <xdong@csumb.edu> writes:\n> Perhaps the cheapest-total-cost should not be the best/only choice\n> for fitness?\n\nWell, really the GEQO code should be thrown out and rewritten from\nthe ground up ... but that hasn't quite gotten done yet.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 23 May 2019 18:33:21 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Why could GEQO produce plans with lower costs than the\n standard_join_search?"
}
] |
[
{
"msg_contents": "Hello hackers,\n\nOn another thread, lots of undo log-related patches have been traded.\nBuried deep in the stack is one that I'd like to highlight and discuss\nin a separate thread, because it relates to a parallel thread of\ndevelopment and it'd be good to get feedback on it.\n\nIn commit 3eb77eba, Shawn Debnath and I extended the checkpointer\nfsync machinery to support more kinds of files. Next, we'd like to\nteach the buffer pool to deal with more kinds of buffers. The context\nfor this collaboration is that he's working on putting things like\nCLOG into shared buffers, and my EDB colleagues and I are working on\nputting undo logs into shared buffers. We want a simple way to put\nany block-structured stuff into shared buffers, not just plain\n\"relations\".\n\nThe questions are: how should buffer tags distinguish different kinds\nof buffers, and how should SMGR direct IO traffic to the right place\nwhen it needs to schlepp pages in and out?\n\nIn earlier prototype code, I'd been using a special database number\nfor undo logs. In a recent thread[1], Tom and others didn't like that\nidea much, and Shawn mentioned his colleague's idea of stealing unused\nbits from the fork number so that there is no net change in tag size,\nbut we have entirely separate namespaces for each kind of buffered\ndata.\n\nHere's a patch that does that, and then makes changes in the main\nplaces I have found so far that need to be aware of the new SMGR ID\nfield.\n\nThoughts?\n\n[1] https://www.postgresql.org/message-id/flat/CA%2BhUKG%2BDE0mmiBZMtZyvwWtgv1sZCniSVhXYsXkvJ_Wo%2B83vvw%40mail.gmail.com\n\n-- \nThomas Munro\nhttps://enterprisedb.com",
"msg_date": "Wed, 8 May 2019 18:31:04 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Adding SMGR discriminator to buffer tags"
},
{
"msg_contents": "On Wed, May 08, 2019 at 06:31:04PM +1200, Thomas Munro wrote:\n\n> The questions are: how should buffer tags distinguish different kinds\n> of buffers, and how should SMGR direct IO traffic to the right place\n> when it needs to schlepp pages in and out?\n> \n> In earlier prototype code, I'd been using a special database number\n> for undo logs. In a recent thread[1], Tom and others didn't like that\n> idea much, and Shawn mentioned his colleague's idea of stealing unused\n> bits from the fork number so that there is no net change in tag size,\n> but we have entirely separate namespaces for each kind of buffered\n> data.\n> \n> Here's a patch that does that, and then makes changes in the main\n> places I have found so far that need to be aware of the new SMGR ID\n> field.\n\nLooks good to me. Minor nit: update the comment for XLogRecGetBlockTag:\n\ndiff --git a/src/backend/access/transam/xlogreader.c b/src/backend/access/transam/xlogreader.c\nindex 9196aa3aae..9ee086f00b 100644\n--- a/src/backend/access/transam/xlogreader.c\n+++ b/src/backend/access/transam/xlogreader.c\n@@ -1349,12 +1353,13 @@ err:\n /*\n * Returns information about the block that a block reference refers to.\n *\n- * If the WAL record contains a block reference with the given ID, *rnode,\n+ * If the WAL record contains a block reference with the given ID, *smgrid, *rnode,\n * *forknum, and *blknum are filled in (if not NULL), and returns true.\n * Otherwise returns false.\n */\n bool\n XLogRecGetBlockTag(XLogReaderState *record, uint8 block_id,\n+ SmgrId *smgrid,\n RelFileNode *rnode, ForkNumber *forknum, BlockNumber *blknum)\n {\n DecodedBkpBlock *bkpb;\n\n-- \nShawn Debnath\nAmazon Web Services (AWS)\n\n\n",
"msg_date": "Thu, 9 May 2019 13:54:49 -0700",
"msg_from": "Shawn Debnath <sdn@amazon.com>",
"msg_from_op": false,
"msg_subject": "Re: Adding SMGR discriminator to buffer tags"
},
{
"msg_contents": "On Fri, May 10, 2019 at 8:54 AM Shawn Debnath <sdn@amazon.com> wrote:\n> On Wed, May 08, 2019 at 06:31:04PM +1200, Thomas Munro wrote:\n> Looks good to me. Minor nit: update the comment for XLogRecGetBlockTag:\n\nFixed. Also fixed broken upgrade scripts for pg_buffercache\nextension, as pointed out by Robert[1] on the main thread where undo\nstuff is being discussed. Attempts to keep subtopics separated have so\nfar failed, so the thread ostensibly about orphaned file cleanup is\nnow about undo work allocation, but I figured it'd be useful to\nhighlight this patch separately as it'll be the first to go in, and\nit's needed by your work Shawn. So I hope we're still on the same\npage with this refactoring patch.\n\nOne thing I'm not sure about is the TODO message in parsexlog.c's\nextractPageInfo() function.\n\n[1] https://www.postgresql.org/message-id/CA%2BTgmob4htT-9Tq7eHG3wS%3DdpKFbQZOyqgSr1iWmV_65Duz6Pw%40mail.gmail.com\n\n-- \nThomas Munro\nhttps://enterprisedb.com",
"msg_date": "Fri, 12 Jul 2019 10:16:21 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Adding SMGR discriminator to buffer tags"
},
{
"msg_contents": "On Fri, Jul 12, 2019 at 10:16:21AM +1200, Thomas Munro wrote:\n> Attempts to keep subtopics separated have so\n> far failed, so the thread ostensibly about orphaned file cleanup is\n> now about undo work allocation, but I figured it'd be useful to\n> highlight this patch separately as it'll be the first to go in, and\n> it's needed by your work Shawn. So I hope we're still on the same\n> page with this refactoring patch.\n\nThanks for reminding me about this thread - I will revisit this again, \nhad some more feedback after doing my PoC for the pgCon. Need to find \nthat too...\n\n> One thing I'm not sure about is the TODO message in parsexlog.c's\n> extractPageInfo() function.\n> \n> [1] https://www.postgresql.org/message-id/CA%2BTgmob4htT-9Tq7eHG3wS%3DdpKFbQZOyqgSr1iWmV_65Duz6Pw%40mail.gmail.com\n\n+\n+ /* TODO: How should we handle other smgr IDs? */\n+ if (smgrid != SMGR_MD)\n continue;\n\nAll files are copied verbatim from source to target except for relation \nfiles. So this would include slru data and undo data. From what I read \nin the docs, I do not believe we need any special handling for either \nnew SMGRs and your current code should suffice.\n\nprocess_block_change() is very relation specific so if different \nhandling is required by different SMGRs, it would make sense to call on \nsmgr specific functions instead.\n\nCan't wait for the SMGR_MD to SMGR_REL change :-) It will make \nunderstanding this code a tad bit easier.\n\n-- \nShawn Debnath\nAmazon Web Services (AWS)\n\n\n",
"msg_date": "Thu, 11 Jul 2019 16:19:19 -0700",
"msg_from": "Shawn Debnath <sdn@amazon.com>",
"msg_from_op": false,
"msg_subject": "Re: Adding SMGR discriminator to buffer tags"
},
{
"msg_contents": "On Fri, Jul 12, 2019 at 11:19 AM Shawn Debnath <sdn@amazon.com> wrote:\n> On Fri, Jul 12, 2019 at 10:16:21AM +1200, Thomas Munro wrote:\n> +\n> + /* TODO: How should we handle other smgr IDs? */\n> + if (smgrid != SMGR_MD)\n> continue;\n>\n> All files are copied verbatim from source to target except for relation\n> files. So this would include slru data and undo data. From what I read\n> in the docs, I do not believe we need any special handling for either\n> new SMGRs and your current code should suffice.\n>\n> process_block_change() is very relation specific so if different\n> handling is required by different SMGRs, it would make sense to call on\n> smgr specific functions instead.\n\nRight. And since undo and slru etc data will be WAL-logged with block\nreferences, it's entirely possible to teach it to scan them properly,\nthough it's not clear whether it's worth doing that. Ok, good, TODO\nremoved.\n\n> Can't wait for the SMGR_MD to SMGR_REL change :-) It will make\n> understanding this code a tad bit easier.\n\nOr could we retrofit different words that start with M and D?\n\nHere's a new version of the patch set (ie the first 3 patches in the\nundo patch set, and the part that I think you need for slru work),\nthis time with the pg_buffercache changes as a separate commit since\nit's somewhat independent and has a different (partial) reviewer.\n\nI was starting to think about whether I might be able to commit these,\nbut now I see that this increase in WAL size is probably not\nacceptable:\n\n@@ -727,6 +734,8 @@ XLogRecordAssemble(RmgrId rmid, uint8 info,\n }\n if (!samerel)\n {\n+ memcpy(scratch, ®buf->smgrid, sizeof(SmgrId));\n+ scratch += sizeof(SmgrId);\n memcpy(scratch, ®buf->rnode, sizeof(RelFileNode));\n scratch += sizeof(RelFileNode);\n }\n\n@@ -1220,8 +1221,10 @@ DecodeXLogRecord(XLogReaderState *state,\nXLogRecord *record, char **errormsg)\n }\n if (!(fork_flags & BKPBLOCK_SAME_REL))\n {\n+ COPY_HEADER_FIELD(&blk->smgrid, 
sizeof(SmgrId));\n COPY_HEADER_FIELD(&blk->rnode,\nsizeof(RelFileNode));\n rnode = &blk->rnode;\n+ smgrid = blk->smgrid;\n }\n\nThat's an enum, so it works out to a word per record. The obvious way\nto avoid increasing the size is shove the SMGR ID into the same space\nthat holds the forknum. Unlike BufferTag, where forknum currently\nswims in 32 bits which this patch chops in half, XLogRecorBlockHeader\nis already crammed into a uint8 fork_flags of which it has only the\nlower nibble, and the upper nibble is used for eg BKP_BLOCK_xxx flag\nbits, and there isn't even a spare bit to say 'has non-zero SMGR ID'.\nRats. I suppose I could change it to a byte. I wonder if one extra\nbyte per WAL record is acceptable. Anyone?\n\n-- \nThomas Munro\nhttps://enterprisedb.com",
"msg_date": "Mon, 15 Jul 2019 22:58:16 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Adding SMGR discriminator to buffer tags"
},
{
"msg_contents": "On Mon, Jul 15, 2019 at 6:59 AM Thomas Munro <thomas.munro@gmail.com> wrote:\n> That's an enum, so it works out to a word per record. The obvious way\n> to avoid increasing the size is shove the SMGR ID into the same space\n> that holds the forknum. Unlike BufferTag, where forknum currently\n> swims in 32 bits which this patch chops in half, XLogRecorBlockHeader\n> is already crammed into a uint8 fork_flags of which it has only the\n> lower nibble, and the upper nibble is used for eg BKP_BLOCK_xxx flag\n> bits, and there isn't even a spare bit to say 'has non-zero SMGR ID'.\n> Rats. I suppose I could change it to a byte. I wonder if one extra\n> byte per WAL record is acceptable. Anyone?\n\nOK, I'll bite: I don't like it. I think this patch is more about how\npeople feel about things than it is about a technically necessary\nchange, and I'm absolutely OK with that up to the point where it\nstarts to inflict measurable costs on our users. Making WAL records\nbigger in common cases, even by 1 byte, is a measurable cost. And\nthere are a few other minor costs too: we whack around a bunch of\ninternal APIs, and we force a pg_buffercache version bump. And I am\nof the opinion that none of those costs, big or small, are buying us\nanything technically. I am OK with being convinced otherwise, but\nright now I am not convinced.\n\nTo set forth my argument: I think magic database OIDs are just fine.\nThe contrary arguments as I understand them are (1) stuff might break\nif there's no matching entry in pg_database, or if there is, and (2)\nsome hypothetical smgr might need the database OID as a discriminator.\nMy counter-arguments are (1) we can fix that by writing the\nappropriate code and it doesn't even seem very hard and (2) tough\nnoogies. 
To expand on (2) slightly, the proposals on the table do not\nneed that, the existing smgr does not need that, and there's no reason\nto suppose that future proposals would require that either, because\n2^32 relfilenodes of up to 2^32 blocks each is a lot, and you\nshouldn't need another 2^32 bits. If someone does come up with a\nproposal that needs those bits, perhaps because it lives within a\ndatabase rather than being a global object like SLRU or undo data,\nmaybe it should be a new kind of AM rather than a new smgr. And if\nnot, then maybe we should leave it to that hypothetical patch to solve\nthat hypothetical problem, because right now we're just speculating\nthat another 32 bits will fix it, which we can't really know, because\nif we're hypothesizing the existence of a patch that needs more bits,\nwe could also hypothesize that it needs more than 32 of them.\n\nIf we absolutely have to keep driving down this course, you could\nprobably steal a bit from the fork number nibble to indicate a\nnon-default smgr. Even if there are only 2 bits there, you could use\n1 for non-default smgr and 1 for non-default fork number, and then in\nthe common case of references to the default block of the default\nsmgr, you wouldn't be spending anything additional ... assuming you\ndon't count the CPU cycles to encode and decode a more complex WAL\nrecord format.\n\nBut how about just using a magic database OID?\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Mon, 15 Jul 2019 09:49:32 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Adding SMGR discriminator to buffer tags"
},
{
"msg_contents": "On Tue, Jul 16, 2019 at 1:49 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> [long form -1]\n>\n> But how about just using a magic database OID?\n\nThis patch was just an experiment based on discussion here:\n\nhttps://www.postgresql.org/message-id/flat/CA%2BhUKG%2BDE0mmiBZMtZyvwWtgv1sZCniSVhXYsXkvJ_Wo%2B83vvw%40mail.gmail.com\n\nI learned some things. The main one is that you don't just need space\nthe buffer tag (which has plenty of spare bits) but also in WAL block\nreferences, and that does seem to be a strike against the idea. I\ndon't want lack of agreement here to hold up other work. So here's\nwhat I propose:\n\nI'll go and commit the simple refactoring bits of this work, which\njust move some stuff belonging to md.c out of smgr.c (see attached).\nI'll go back to using a magic database OID for the undo log patch set\nfor now. We could always reconsider the SMGR discriminator later.\nFor now I'm not going to consider this question a blocker for the\nlater undo code when it's eventually ready for commit.\n\n-- \nThomas Munro\nhttps://enterprisedb.com",
"msg_date": "Tue, 16 Jul 2019 10:49:39 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Adding SMGR discriminator to buffer tags"
},
{
"msg_contents": "On Tue, Jul 16, 2019 at 10:49 AM Thomas Munro <thomas.munro@gmail.com> wrote:\n> I'll go and commit the simple refactoring bits of this work, which\n> just move some stuff belonging to md.c out of smgr.c (see attached).\n\nPushed. The rest of that earlier patch set is hereby abandoned (at\nleast for now). I'll be posting a new-and-improved undo log patch set\nsoon, now a couple of patches smaller but back to magic database 9. I\nthink I'll probably do that with a new catalog header file that\ndefines pseudo-database OIDs.\n\n-- \nThomas Munro\nhttps://enterprisedb.com\n\n\n",
"msg_date": "Wed, 17 Jul 2019 15:01:47 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Adding SMGR discriminator to buffer tags"
},
{
"msg_contents": "On Tue, Jul 16, 2019 at 10:49:39AM +1200, Thomas Munro wrote:\n> On Tue, Jul 16, 2019 at 1:49 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> > [long form -1]\n> >\n> > But how about just using a magic database OID?\n> \n> This patch was just an experiment based on discussion here:\n> \n> https://www.postgresql.org/message-id/flat/CA%2BhUKG%2BDE0mmiBZMtZyvwWtgv1sZCniSVhXYsXkvJ_Wo%2B83vvw%40mail.gmail.com\n> \n> I learned some things. The main one is that you don't just need space\n> the buffer tag (which has plenty of spare bits) but also in WAL block\n> references, and that does seem to be a strike against the idea. I\n> don't want lack of agreement here to hold up other work. So here's\n> what I propose:\n> \n> I'll go and commit the simple refactoring bits of this work, which\n> just move some stuff belonging to md.c out of smgr.c (see attached).\n> I'll go back to using a magic database OID for the undo log patch set\n> for now. We could always reconsider the SMGR discriminator later.\n> For now I'm not going to consider this question a blocker for the\n> later undo code when it's eventually ready for commit.\n\nAgree that we should move on at this point. The magic OIDs do not block \nus from moving to this model later if needed.\n\n\n-- \nShawn Debnath\nAmazon Web Services (AWS)\n\n\n",
"msg_date": "Wed, 17 Jul 2019 09:50:11 -0700",
"msg_from": "Shawn Debnath <sdn@amazon.com>",
"msg_from_op": false,
"msg_subject": "Re: Adding SMGR discriminator to buffer tags"
},
{
"msg_contents": "On Wed, Jul 17, 2019 at 03:01:47PM +1200, Thomas Munro wrote:\n> On Tue, Jul 16, 2019 at 10:49 AM Thomas Munro <thomas.munro@gmail.com> wrote:\n> > I'll go and commit the simple refactoring bits of this work, which\n> > just move some stuff belonging to md.c out of smgr.c (see attached).\n> \n> Pushed. The rest of that earlier patch set is hereby abandoned (at\n> least for now). I'll be posting a new-and-improved undo log patch set\n> soon, now a couple of patches smaller but back to magic database 9. I\n> think I'll probably do that with a new catalog header file that\n> defines pseudo-database OIDs.\n\nOne suggestion, let's expose the magic oids via a dedicated catalog \npg_smgr so that they can be reserved and accounted for via the scripts \nas discussed in [1]. There were suggestions in the thread to use pg_am, \nbut with the revised pg_am [2], it seems we will be stretching the \nmeaning of access methods quite a bit, in my opinion, incorrectly.\nThe benefit of having a dedicated catalog is that we can expose data \nparticular to smgrs that do not fit in the access methods scope.\n\n\n[1] \nhttps://www.postgresql.org/message-id/20180821184835.GA1032%4060f81dc409fc.ant.amazon.com\n[2] https://www.postgresql.org/docs/devel/catalog-pg-am.html\n\n-- \nShawn Debnath\nAmazon Web Services (AWS)\n\n\n",
"msg_date": "Wed, 17 Jul 2019 10:02:43 -0700",
"msg_from": "Shawn Debnath <sdn@amazon.com>",
"msg_from_op": false,
"msg_subject": "Re: Adding SMGR discriminator to buffer tags"
}
] |
[
{
"msg_contents": "In commit d50d172e51, which adds support for FINAL relation pushdown\nin postgres_fdw, I forgot to update the FDW documentation about\nGetForeignUpperPaths to mention that the extra parameter of that\nfunction points to a FinalPathExtraData structure introduced by that\ncommit in the case of FINAL relation pushdown. Attached is a patch\nfor that.\n\nBest regards,\nEtsuro Fujita",
"msg_date": "Wed, 8 May 2019 15:51:25 +0900",
"msg_from": "Etsuro Fujita <etsuro.fujita@gmail.com>",
"msg_from_op": true,
"msg_subject": "Missing FDW documentation about GetForeignUpperPaths"
},
{
"msg_contents": "On Wed, May 8, 2019 at 3:51 PM Etsuro Fujita <etsuro.fujita@gmail.com> wrote:\n> In commit d50d172e51, which adds support for FINAL relation pushdown\n> in postgres_fdw, I forgot to update the FDW documentation about\n> GetForeignUpperPaths to mention that the extra parameter of that\n> function points to a FinalPathExtraData structure introduced by that\n> commit in the case of FINAL relation pushdown. Attached is a patch\n> for that.\n\nThere seems to be no objections, so I've committed the patch.\n\nBest regards,\nEtsuro Fujita\n\n\n",
"msg_date": "Thu, 9 May 2019 20:09:33 +0900",
"msg_from": "Etsuro Fujita <etsuro.fujita@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Missing FDW documentation about GetForeignUpperPaths"
}
] |
[
{
"msg_contents": "The general theme for table function names seem to be\n\"table_<am_callback_name>\". For example table_scan_getnextslot() and its\ncorresponding callback scan_getnextslot(). Most of the table functions and\ncallbacks follow mentioned convention except following ones\n\n table_beginscan\n table_endscan\n table_rescan\n table_fetch_row_version\n table_get_latest_tid\n table_insert\n table_insert_speculative\n table_complete_speculative\n table_delete\n table_update\n table_lock_tuple\n\nthe corresponding callback names for them are\n\n scan_begin\n scan_end\n scan_rescan\n tuple_fetch_row_version\n tuple_get_latest_tid\n tuple_insert\n tuple_insert_speculative\n tuple_delete\n tuple_update\n tuple_lock\n\nIt confuses while browsing through the code and hence I would like to\npropose we make them consistent. Either fix the callback names or table\nfunctions but all should follow the same convention, makes it easy to\nbrowse around and refer to as well. Personally, I would say fix the table\nfunction names as callback names seem fine. So, for example, make it\ntable_scan_begin().\n\nAlso, some of these table function names read little odd\n\ntable_relation_set_new_filenode\ntable_relation_nontransactional_truncate\ntable_relation_copy_data\ntable_relation_copy_for_cluster\ntable_relation_vacuum\ntable_relation_estimate_size\n\nCan we drop relation word from callback names and as a result from these\nfunction names as well? Just have callback names as set_new_filenode,\ncopy_data, estimate_size?\n\nAlso, a question about comments. Currently, redundant comments are written\nabove callback functions as also above table functions. They differ\nsometimes a little bit on descriptions but majority content still being the\nsame. Should we have only one place finalized to have the comments to keep\nthem in sync and also know which one to refer to?\n\nPlus, file name amapi.h now seems very broad, if possible should be renamed\nto indexamapi.h or indexam.h to follow tableam.h. No idea what's our policy\naround header file renames.",
"msg_date": "Wed, 8 May 2019 00:32:22 -0700",
"msg_from": "Ashwin Agrawal <aagrawal@pivotal.io>",
"msg_from_op": true,
"msg_subject": "Inconsistency between table am callback and table function names"
},
{
"msg_contents": "Hi,\n\nOn 2019-05-08 00:32:22 -0700, Ashwin Agrawal wrote:\n> The general theme for table function names seem to be\n> \"table_<am_callback_name>\". For example table_scan_getnextslot() and its\n> corresponding callback scan_getnextslot(). Most of the table functions and\n> callbacks follow mentioned convention except following ones\n> \n> table_beginscan\n> table_endscan\n> table_rescan\n> table_fetch_row_version\n> table_get_latest_tid\n> table_insert\n> table_insert_speculative\n> table_complete_speculative\n> table_delete\n> table_update\n> table_lock_tuple\n> \n> the corresponding callback names for them are\n> \n> scan_begin\n> scan_end\n> scan_rescan\n\nThe mismatch here is just due of backward compat with the existing\nfunction names.\n\n\n> tuple_fetch_row_version\n> tuple_get_latest_tid\n\nHm, I'd not object to adding a tuple_ to the wrapper.\n\n\n> tuple_insert\n> tuple_insert_speculative\n> tuple_delete\n> tuple_update\n> tuple_lock\n\nThat again is to keep the naming similar to the existing functions.\n\n\n\n> Also, some of these table function names read little odd\n> \n> table_relation_set_new_filenode\n> table_relation_nontransactional_truncate\n> table_relation_copy_data\n> table_relation_copy_for_cluster\n> table_relation_vacuum\n> table_relation_estimate_size\n> \n> Can we drop relation word from callback names and as a result from these\n> function names as well? Just have callback names as set_new_filenode,\n> copy_data, estimate_size?\n\nI'm strongly against that. These all work on a full relation size,\nrather than on individual tuples, and that seems worth pointing out.\n\n\n> Also, a question about comments. Currently, redundant comments are written\n> above callback functions as also above table functions. They differ\n> sometimes a little bit on descriptions but majority content still being the\n> same. Should we have only one place finalized to have the comments to keep\n> them in sync and also know which one to refer to?\n\nNote that the non-differing comments usually just refer to the other\nplace. And there's legitimate differences in most of the ones that are\nboth at the callback and the external functions - since the audience of\nboth are difference, that IMO makes sense.\n\n\n> Plus, file name amapi.h now seems very broad, if possible should be renamed\n> to indexamapi.h or indexam.h to follow tableam.h. No idea what's our policy\n> around header file renames.\n\nWe probably should rename it, but not in 12...\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 8 May 2019 14:51:35 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Inconsistency between table am callback and table function names"
},
{
"msg_contents": "On Wed, May 8, 2019 at 2:51 PM Andres Freund <andres@anarazel.de> wrote:\n>\n> Hi,\n>\n> On 2019-05-08 00:32:22 -0700, Ashwin Agrawal wrote:\n> > The general theme for table function names seem to be\n> > \"table_<am_callback_name>\". For example table_scan_getnextslot() and its\n> > corresponding callback scan_getnextslot(). Most of the table functions and\n> > callbacks follow mentioned convention except following ones\n> >\n> > table_beginscan\n> > table_endscan\n> > table_rescan\n> > table_fetch_row_version\n> > table_get_latest_tid\n> > table_insert\n> > table_insert_speculative\n> > table_complete_speculative\n> > table_delete\n> > table_update\n> > table_lock_tuple\n> >\n> > the corresponding callback names for them are\n> >\n> > scan_begin\n> > scan_end\n> > scan_rescan\n>\n> The mismatch here is just due of backward compat with the existing\n> function names.\n\nI am missing something here, would like to know more. table_ seem all\nnew fresh naming. Hence IMO having consistency with surrounding and\nrelated code carries more weight as I don't know backward compat\nserving what purpose. Heap function names can continue to call with\nsame old names for backward compat if required.\n\n\n> > Also, a question about comments. Currently, redundant comments are written\n> > above callback functions as also above table functions. They differ\n> > sometimes a little bit on descriptions but majority content still being the\n> > same. Should we have only one place finalized to have the comments to keep\n> > them in sync and also know which one to refer to?\n>\n> Note that the non-differing comments usually just refer to the other\n> place. And there's legitimate differences in most of the ones that are\n> both at the callback and the external functions - since the audience of\n> both are difference, that IMO makes sense.\n>\n\nNot having consistency is the main aspect I wish to bring to\nattention. Like for some callback functions the comment is\n\n /* see table_insert() for reference about parameters */\n void (*tuple_insert) (Relation rel, TupleTableSlot *slot,\n CommandId cid, int options,\n struct BulkInsertStateData *bistate);\n\n /* see table_insert_speculative() for reference about parameters\n*/\n void (*tuple_insert_speculative) (Relation rel,\n TupleTableSlot *slot,\n CommandId cid,\n int options,\n struct\nBulkInsertStateData *bistate,\n uint32 specToken);\n\nWhereas for some other callbacks the parameter explanation exist in\nboth the places. Seems we should be consistent.\nI feel in long run becomes pain to keep them in sync as comments\nevolve. Like for example\n\n /*\n * Estimate the size of shared memory needed for a parallel scan\nof this\n * relation. The snapshot does not need to be accounted for.\n */\n Size (*parallelscan_estimate) (Relation rel);\n\nparallescan_estimate is not having snapshot argument passed to it, but\ntable_parallescan_estimate does. So, this way chances are high they\ngoing out of sync and missing to modify in both the places. Agree\nthough on audience being different for both. So, seems going with the\nrefer XXX for parameters seems fine approach for all the callbacks and\nthen only specific things to flag out for the AM layer to be aware can\nlive above the callback function.\n\n> > Plus, file name amapi.h now seems very broad, if possible should be renamed\n> > to indexamapi.h or indexam.h to follow tableam.h. No idea what's our policy\n> > around header file renames.\n>\n> We probably should rename it, but not in 12...\n\nOkay good to know.\n\n\n",
"msg_date": "Wed, 8 May 2019 17:05:07 -0700",
"msg_from": "Ashwin Agrawal <aagrawal@pivotal.io>",
"msg_from_op": true,
"msg_subject": "Re: Inconsistency between table am callback and table function names"
},
{
"msg_contents": "Hi,\n\nOn 2019-05-08 17:05:07 -0700, Ashwin Agrawal wrote:\n> On Wed, May 8, 2019 at 2:51 PM Andres Freund <andres@anarazel.de> wrote:\n> > On 2019-05-08 00:32:22 -0700, Ashwin Agrawal wrote:\n> > > The general theme for table function names seem to be\n> > > \"table_<am_callback_name>\". For example table_scan_getnextslot() and its\n> > > corresponding callback scan_getnextslot(). Most of the table functions and\n> > > callbacks follow mentioned convention except following ones\n> > >\n> > > table_beginscan\n> > > table_endscan\n> > > table_rescan\n> > > table_fetch_row_version\n> > > table_get_latest_tid\n> > > table_insert\n> > > table_insert_speculative\n> > > table_complete_speculative\n> > > table_delete\n> > > table_update\n> > > table_lock_tuple\n> > >\n> > > the corresponding callback names for them are\n> > >\n> > > scan_begin\n> > > scan_end\n> > > scan_rescan\n> >\n> > The mismatch here is just due of backward compat with the existing\n> > function names.\n> \n> I am missing something here, would like to know more. table_ seem all\n> new fresh naming. Hence IMO having consistency with surrounding and\n> related code carries more weight as I don't know backward compat\n> serving what purpose. Heap function names can continue to call with\n> same old names for backward compat if required.\n\nThe changes necessary for tableam were already huge. Changing naming\nschemes for functions that are used all over the backend (e.g. ~80 calls\nto table_beginscan), and where there's other wrapper functions that also\nwidely used (237 calls to systable_beginscan) which didn't have to be\ntouched, at the same time would have made it even harder to review.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 9 May 2019 07:34:15 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Inconsistency between table am callback and table function names"
},
{
"msg_contents": "On Thu, May 9, 2019 at 8:52 AM Andres Freund <andres@anarazel.de> wrote:\n> The changes necessary for tableam were already huge. Changing naming\n> schemes for functions that are used all over the backend (e.g. ~80 calls\n> to table_beginscan), and where there's other wrapper functions that also\n> widely used (237 calls to systable_beginscan) which didn't have to be\n> touched, at the same time would have made it even harder to review.\n\nIf there are no objections to renaming now, as separate independent\npatch, I am happy to do the same and send it across. Will rename to\nmake it consistent as mentioned at start of the thread leaving\ntable_relation_xxx() ones as is today.\n\n\n",
"msg_date": "Fri, 10 May 2019 10:43:44 -0700",
"msg_from": "Ashwin Agrawal <aagrawal@pivotal.io>",
"msg_from_op": true,
"msg_subject": "Re: Inconsistency between table am callback and table function names"
},
{
"msg_contents": "Hi,\n\nOn 2019-05-10 10:43:44 -0700, Ashwin Agrawal wrote:\n> On Thu, May 9, 2019 at 8:52 AM Andres Freund <andres@anarazel.de> wrote:\n> > The changes necessary for tableam were already huge. Changing naming\n> > schemes for functions that are used all over the backend (e.g. ~80 calls\n> > to table_beginscan), and where there's other wrapper functions that also\n> > widely used (237 calls to systable_beginscan) which didn't have to be\n> > touched, at the same time would have made it even harder to review.\n> \n> If there are no objections to renaming now, as separate independent\n> patch, I am happy to do the same and send it across. Will rename to\n> make it consistent as mentioned at start of the thread leaving\n> table_relation_xxx() ones as is today.\n\nWhat would you want to rename precisely? Don't think it's useful to\nstart sending patches before we agree on something concrete. I'm not on\nboard with patching hundreds systable_beginscan calls (that'll break a\nlot of external code, besides the churn of in-core code), nor with the\nAPIs around that having a diverging name scheme.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Fri, 10 May 2019 10:51:41 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Inconsistency between table am callback and table function names"
},
{
"msg_contents": "On Fri, May 10, 2019 at 10:51 AM Andres Freund <andres@anarazel.de> wrote:\n>\n> Hi,\n>\n> On 2019-05-10 10:43:44 -0700, Ashwin Agrawal wrote:\n> > On Thu, May 9, 2019 at 8:52 AM Andres Freund <andres@anarazel.de> wrote:\n> > > The changes necessary for tableam were already huge. Changing naming\n> > > schemes for functions that are used all over the backend (e.g. ~80 calls\n> > > to table_beginscan), and where there's other wrapper functions that also\n> > > widely used (237 calls to systable_beginscan) which didn't have to be\n> > > touched, at the same time would have made it even harder to review.\n> >\n> > If there are no objections to renaming now, as separate independent\n> > patch, I am happy to do the same and send it across. Will rename to\n> > make it consistent as mentioned at start of the thread leaving\n> > table_relation_xxx() ones as is today.\n>\n> What would you want to rename precisely? Don't think it's useful to\n> start sending patches before we agree on something concrete. I'm not on\n> board with patching hundreds systable_beginscan calls (that'll break a\n> lot of external code, besides the churn of in-core code), nor with the\n> APIs around that having a diverging name scheme.\n\nMeant to stick the question mark in that email, somehow missed. Yes\nnot planning to spend any time on it if objections. Here is the list\nof renames I wish to perform.\n\nLets start with low hanging ones.\n\ntable_rescan -> table_scan_rescan\ngit grep table_rescan | wc -l\n6\n\ntable_insert -> table_tuple_insert\ngit grep tuple_insert | wc -l\n13\n\ntable_insert_speculative -> table_tuple_insert_speculative\ngit grep tuple_insert_speculative | wc -l\n5\n\ntable_delete -> table_tuple_delete (table_delete reads incorrect as\nnot deleting the table)\ngit grep tuple_delete | wc -l\n8\n\ntable_update -> table_tuple_update\ngit grep tuple_update | wc -l\n5\n\ntable_lock_tuple -> table_tuple_lock\ngit grep tuple_lock | wc -l\n26\n\n\nBelow two you already mentioned no objections to rename\ntable_fetch_row_version -> table_tuple_fetch_row_version\ntable_get_latest_tid -> table_tuple_get_latest_tid\n\n\nNow, table_beginscan and table_endscan are the ones which are\nwide-spread. Desire seems we should keep it consistent with\nsystable_beginscan. Understand the constraints and churn aspect, given\nthat diverging naming scheme is unavoidable. Why not leave\nsystable_beginscan as it is and only rename table_beginscan and its\ncounterparts table_beginscan_xxx() atleast?\n\nIndex interfaces and table interfaces have some diverged naming scheme\nalready like index_getnext_slot and table_scan_getnextslot. Not\nproposing to change them. But at least reducing wherever possible\nsooner would be helpful.\n\n\n",
"msg_date": "Fri, 10 May 2019 12:43:06 -0700",
"msg_from": "Ashwin Agrawal <aagrawal@pivotal.io>",
"msg_from_op": true,
"msg_subject": "Re: Inconsistency between table am callback and table function names"
},
{
"msg_contents": "Hi,\n\nOn 2019-05-10 12:43:06 -0700, Ashwin Agrawal wrote:\n> On Fri, May 10, 2019 at 10:51 AM Andres Freund <andres@anarazel.de> wrote:\n> >\n> > Hi,\n> >\n> > On 2019-05-10 10:43:44 -0700, Ashwin Agrawal wrote:\n> > > On Thu, May 9, 2019 at 8:52 AM Andres Freund <andres@anarazel.de> wrote:\n> > > > The changes necessary for tableam were already huge. Changing naming\n> > > > schemes for functions that are used all over the backend (e.g. ~80 calls\n> > > > to table_beginscan), and where there's other wrapper functions that also\n> > > > widely used (237 calls to systable_beginscan) which didn't have to be\n> > > > touched, at the same time would have made it even harder to review.\n> > >\n> > > If there are no objections to renaming now, as separate independent\n> > > patch, I am happy to do the same and send it across. Will rename to\n> > > make it consistent as mentioned at start of the thread leaving\n> > > table_relation_xxx() ones as is today.\n> >\n> > What would you want to rename precisely? Don't think it's useful to\n> > start sending patches before we agree on something concrete. I'm not on\n> > board with patching hundreds systable_beginscan calls (that'll break a\n> > lot of external code, besides the churn of in-core code), nor with the\n> > APIs around that having a diverging name scheme.\n> \n> Meant to stick the question mark in that email, somehow missed. Yes\n> not planning to spend any time on it if objections. Here is the list\n> of renames I wish to perform.\n> \n> Lets start with low hanging ones.\n> \n> table_rescan -> table_scan_rescan\n> git grep table_rescan | wc -l\n> 6\n> \n> table_insert -> table_tuple_insert\n> git grep tuple_insert | wc -l\n> 13\n> \n> table_insert_speculative -> table_tuple_insert_speculative\n> git grep tuple_insert_speculative | wc -l\n> 5\n> \n> table_delete -> table_tuple_delete (table_delete reads incorrect as\n> not deleting the table)\n> git grep tuple_delete | wc -l\n> 8\n> \n> table_update -> table_tuple_update\n> git grep tuple_update | wc -l\n> 5\n> \n> table_lock_tuple -> table_tuple_lock\n> git grep tuple_lock | wc -l\n> 26\n> \n> \n> Below two you already mentioned no objections to rename\n> table_fetch_row_version -> table_tuple_fetch_row_version\n> table_get_latest_tid -> table_tuple_get_latest_tid\n> \n> \n> Now, table_beginscan and table_endscan are the ones which are\n> wide-spread. Desire seems we should keep it consistent with\n> systable_beginscan. Understand the constraints and churn aspect, given\n> that diverging naming scheme is unavoidable. Why not leave\n> systable_beginscan as it is and only rename table_beginscan and its\n> counterparts table_beginscan_xxx() atleast?\n> \n> Index interfaces and table interfaces have some diverged naming scheme\n> already like index_getnext_slot and table_scan_getnextslot. Not\n> proposing to change them. But at least reducing wherever possible\n> sooner would be helpful.\n\nMy personal opinion is that this is more churn than I think is useful to\ntackle after feature freeze, with not sufficient benefits. If others\nchime in, voting to do this, I'm OK with doing that, but otherwise I\nthink there's more important stuff to do.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Fri, 10 May 2019 12:50:49 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Inconsistency between table am callback and table function names"
},
{
"msg_contents": "On 2019-May-10, Andres Freund wrote:\n\n> My personal opinion is that this is more churn than I think is useful to\n> tackle after feature freeze, with not sufficient benefits. If others\n> chime in, voting to do this, I'm OK with doing that, but otherwise I\n> think there's more important stuff to do.\n\nOne issue is that if we don't change things now, we can never change it\nafterwards, so we should make some effort to ensure that naming is\nsensible. And we already changed the names of the whole interface.\n\nI'm not voting to accept all of Ashwin's proposals right away, only to\nhave the names considered.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Fri, 10 May 2019 16:18:32 -0400",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Inconsistency between table am callback and table function names"
},
{
"msg_contents": "Hi,\n\nOn 2019-05-10 16:18:32 -0400, Alvaro Herrera wrote:\n> On 2019-May-10, Andres Freund wrote:\n> \n> > My personal opinion is that this is more churn than I think is useful to\n> > tackle after feature freeze, with not sufficient benefits. If others\n> > chime in, voting to do this, I'm OK with doing that, but otherwise I\n> > think there's more important stuff to do.\n> \n> One issue is that if we don't change things now, we can never change it\n> afterwards, so we should make some effort to ensure that naming is\n> sensible. And we already changed the names of the whole interface.\n\nWell, the point is that there's symmetry with a lot of similar functions\nthat were *not* affected by the tableam changes. Cf. systable_beginscan\net al. We could add wrappers etc to make it less painful, but then\nthere's no urgency either.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Fri, 10 May 2019 13:26:47 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Inconsistency between table am callback and table function names"
},
{
"msg_contents": "Alvaro Herrera <alvherre@2ndquadrant.com> writes:\n> On 2019-May-10, Andres Freund wrote:\n>> My personal opinion is that this is more churn than I think is useful to\n>> tackle after feature freeze, with not sufficient benefits. If others\n>> chime in, voting to do this, I'm OK with doing that, but otherwise I\n>> think there's more important stuff to do.\n\n> One issue is that if we don't change things now, we can never change it\n> afterwards, so we should make some effort to ensure that naming is\n> sensible. And we already changed the names of the whole interface.\n\nYeah. I do not have an opinion on whether these changes are actually\nimprovements, but renaming right now is way less painful than it would\nbe to rename post-v12. Let's try to get it right the first time,\nespecially with functions we already renamed in this cycle.\n\nI do think that the \"too much churn\" argument has merit for places\nthat were *not* already changed in v12. In particular I'd vote against\nrenaming the systable_xxx functions.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 10 May 2019 16:28:32 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Inconsistency between table am callback and table function names"
},
{
"msg_contents": "On Fri, May 10, 2019 at 3:43 PM Ashwin Agrawal <aagrawal@pivotal.io> wrote:\n> Meant to stick the question mark in that email, somehow missed. Yes\n> not planning to spend any time on it if objections. Here is the list\n> of renames I wish to perform.\n>\n> Lets start with low hanging ones.\n>\n> table_rescan -> table_scan_rescan\n> table_insert -> table_tuple_insert\n> table_insert_speculative -> table_tuple_insert_speculative\n> table_delete -> table_tuple_delete\n> table_update -> table_tuple_update\n> table_lock_tuple -> table_tuple_lock\n>\n> Below two you already mentioned no objections to rename\n> table_fetch_row_version -> table_tuple_fetch_row_version\n> table_get_latest_tid -> table_tuple_get_latest_tid\n>\n> Now, table_beginscan and table_endscan are the ones which are\n> wide-spread.\n\nI vote to rename all the ones where the new name would contain \"tuple\"\nand to leave the others alone. i.e. leave table_beginscan,\ntable_endscan, and table_rescan as they are. I think that there's\nlittle benefit in standardizing table_rescan but not the other two,\nand we seem to agree that tinkering with the other two gets into a\npainful amount of churn.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Mon, 13 May 2019 15:50:56 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Inconsistency between table am callback and table function names"
},
{
"msg_contents": "On Mon, May 13, 2019 at 12:51 PM Robert Haas <robertmhaas@gmail.com> wrote:\n\n> On Fri, May 10, 2019 at 3:43 PM Ashwin Agrawal <aagrawal@pivotal.io>\n> wrote:\n> > Meant to stick the question mark in that email, somehow missed. Yes\n> > not planning to spend any time on it if objections. Here is the list\n> > of renames I wish to perform.\n> >\n> > Lets start with low hanging ones.\n> >\n> > table_rescan -> table_scan_rescan\n> > table_insert -> table_tuple_insert\n> > table_insert_speculative -> table_tuple_insert_speculative\n> > table_delete -> table_tuple_delete\n> > table_update -> table_tuple_update\n> > table_lock_tuple -> table_tuple_lock\n> >\n> > Below two you already mentioned no objections to rename\n> > table_fetch_row_version -> table_tuple_fetch_row_version\n> > table_get_latest_tid -> table_tuple_get_latest_tid\n> >\n> > Now, table_beginscan and table_endscan are the ones which are\n> > wide-spread.\n>\n> I vote to rename all the ones where the new name would contain \"tuple\"\n> and to leave the others alone. i.e. leave table_beginscan,\n> table_endscan, and table_rescan as they are. I think that there's\n> little benefit in standardizing table_rescan but not the other two,\n> and we seem to agree that tinkering with the other two gets into a\n> painful amount of churn.\n>\n\nThank you. Please find the patch to rename the agreed functions. It would\nbe good to make all consistent instead of applying exception to three\nfunctions but seems no consensus on it. Given table_ api are new, we could\nmodify them leaving systable_ ones as is, but as objections left that as is.",
"msg_date": "Tue, 14 May 2019 12:11:46 -0700",
"msg_from": "Ashwin Agrawal <aagrawal@pivotal.io>",
"msg_from_op": true,
"msg_subject": "Re: Inconsistency between table am callback and table function names"
},
{
"msg_contents": "On Wed, May 8, 2019 at 5:05 PM Ashwin Agrawal <aagrawal@pivotal.io> wrote:\n\n> Not having consistency is the main aspect I wish to bring to\n> attention. Like for some callback functions the comment is\n>\n> /* see table_insert() for reference about parameters */\n> void (*tuple_insert) (Relation rel, TupleTableSlot *slot,\n> CommandId cid, int options,\n> struct BulkInsertStateData *bistate);\n>\n> /* see table_insert_speculative() for reference about parameters\n> */\n> void (*tuple_insert_speculative) (Relation rel,\n> TupleTableSlot *slot,\n> CommandId cid,\n> int options,\n> struct\n> BulkInsertStateData *bistate,\n> uint32 specToken);\n>\n> Whereas for some other callbacks the parameter explanation exist in\n> both the places. Seems we should be consistent.\n> I feel in long run becomes pain to keep them in sync as comments\n> evolve. Like for example\n>\n> /*\n> * Estimate the size of shared memory needed for a parallel scan\n> of this\n> * relation. The snapshot does not need to be accounted for.\n> */\n> Size (*parallelscan_estimate) (Relation rel);\n>\n> parallescan_estimate is not having snapshot argument passed to it, but\n> table_parallescan_estimate does. So, this way chances are high they\n> going out of sync and missing to modify in both the places. Agree\n> though on audience being different for both. So, seems going with the\n> refer XXX for parameters seems fine approach for all the callbacks and\n> then only specific things to flag out for the AM layer to be aware can\n> live above the callback function.\n>\n\nThe topic of consistency for comment style for all wrappers seems got lost\nwith other discussion, would like to seek opinion on the same.",
"msg_date": "Tue, 14 May 2019 12:17:35 -0700",
"msg_from": "Ashwin Agrawal <aagrawal@pivotal.io>",
"msg_from_op": true,
"msg_subject": "Re: Inconsistency between table am callback and table function names"
},
{
"msg_contents": "On 2019-May-14, Ashwin Agrawal wrote:\n\n> Thank you. Please find the patch to rename the agreed functions. It would\n> be good to make all consistent instead of applying exception to three\n> functions but seems no consensus on it. Given table_ api are new, we could\n> modify them leaving systable_ ones as is, but as objections left that as is.\n\nHmm .. I'm surprised to find out that we only have one caller of\nsimple_table_insert, simple_table_delete, simple_table_update. I'm not\nsure I agree to the new names those got in the renaming patch, since\nthey're not really part of table AM proper ... do we really want to\noffer those as separate entry points? Why not just remove those routines?\n\nSomewhat related: it's strange that CatalogTupleUpdate etc use\nsimple_heap_update instead of the tableam variants wrappers (I suppose\nthat's either because of bootstrapping concerns, or performance). Would\nit be too strange to have CatalogTupleInsert call heap_insert()\ndirectly, and do away with simple_heap_insert? (Equivalently for\nupdate, delete). I think those wrappers made perfect sense when we had\nsimple_heap_insert all around the place ... but now that we introduced\nthe CatalogTupleFoo wrappers, I don't think it does any longer.\n\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Tue, 14 May 2019 16:27:47 -0400",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Inconsistency between table am callback and table function names"
},
{
"msg_contents": "Hi,\n\nOn 2019-05-14 16:27:47 -0400, Alvaro Herrera wrote:\n> On 2019-May-14, Ashwin Agrawal wrote:\n> \n> > Thank you. Please find the patch to rename the agreed functions. It would\n> > be good to make all consistent instead of applying exception to three\n> > functions but seems no consensus on it. Given table_ api are new, we could\n> > modify them leaving systable_ ones as is, but as objections left that as is.\n> \n> Hmm .. I'm surprised to find out that we only have one caller of\n> simple_table_insert, simple_table_delete, simple_table_update. I'm not\n> sure I agree to the new names those got in the renaming patch, since\n> they're not really part of table AM proper ... do we really want to\n> offer those as separate entry points? Why not just remove those routines?\n\nI don't think it'd be better if execReplication.c has them inline - we'd\njust have the exact same code inline. There's plenty extension out there\nthat use simple_heap_*, and without such wrappers, they'll all have to\ncopy the contents of simple_table_* too. Also we'll probably want to\nswitch CatalogTuple* over to them at some point.\n\n\n> Somewhat related: it's strange that CatalogTupleUpdate etc use\n> simple_heap_update instead of the tableam variants wrappers (I suppose\n> that's either because of bootstrapping concerns, or performance).\n\nIt's because the callers currently expect to work with heap tuples,\nrather than slots. And changing that would have been a *LOT* of work (as\nin: prohibitively much for v12). I didn't want to create a slot for\neach insertion, because that'd make them slower. But as Robert said on\nIM (discussing something else), we already create a slot in most cases,\nvia CatalogIndexInsert(). Not sure if HOT updates and deletes are\ncommon enough to make the slot creation in those cases measurable.\n\n\n> Would it be too strange to have CatalogTupleInsert call heap_insert()\n> directly, and do away with simple_heap_insert? (Equivalently for\n> update, delete). I think those wrappers made perfect sense when we had\n> simple_heap_insert all around the place ... but now that we introduced\n> the CatalogTupleFoo wrappers, I don't think it does any longer.\n\nI don't really see the advantage. Won't that just break a lot of code\nthat will continue to work otherwise, as long as you just use heap\ntables? With the sole benefit of moving a bit of code from one place to\nanother?\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 14 May 2019 13:37:14 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Inconsistency between table am callback and table function names"
},
{
"msg_contents": "Hi,\n\nOn 2019-05-14 12:11:46 -0700, Ashwin Agrawal wrote:\n> Thank you. Please find the patch to rename the agreed functions. It would\n> be good to make all consistent instead of applying exception to three\n> functions but seems no consensus on it. Given table_ api are new, we could\n> modify them leaving systable_ ones as is, but as objections left that as is.\n\nI've pushed a slightly modified version (rebase, some additional\nnewlines due to the longer function names) now. Thanks for the patch!\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 23 May 2019 16:32:24 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Inconsistency between table am callback and table function names"
},
{
"msg_contents": "On Thu, May 23, 2019 at 4:32 PM Andres Freund <andres@anarazel.de> wrote:\n\n> Hi,\n>\n> On 2019-05-14 12:11:46 -0700, Ashwin Agrawal wrote:\n> > Thank you. Please find the patch to rename the agreed functions. It would\n> > be good to make all consistent instead of applying exception to three\n> > functions but seems no consensus on it. Given table_ api are new, we\n> could\n> > modify them leaving systable_ ones as is, but as objections left that as\n> is.\n>\n> I've pushed a slightly modified version (rebase, some additional\n> newlines due to the longer function names) now. Thanks for the patch!\n>\n\nThanks a lot Andres. With pg_intend run before the patch on master, I can\nimagine possibly generated additional work for you on this.",
"msg_date": "Fri, 24 May 2019 09:40:46 -0700",
"msg_from": "Ashwin Agrawal <aagrawal@pivotal.io>",
"msg_from_op": true,
"msg_subject": "Re: Inconsistency between table am callback and table function names"
}
] |
[
{
"msg_contents": "I can get the following log randomly and I am not which commit caused it.\nI spend one day but failed at last.\n\n\n2019-05-08 21:37:46.692 CST [60110] WARNING: problem in alloc set index\ninfo: req size > alloc size for chunk 0x2a33a78 in block 0x2a33a18\n2019-05-08 21:37:46.692 CST [60110] WARNING: idx: 2 problem in alloc set\nindex info: bad single-chunk 0x2a33a78 in block 0x2a33a18, chsize: 1408,\nchunkLimit: 1024, chunkHeaderSize: 24, block_used: 768 request size: 2481\n2019-05-08 21:37:46.692 CST [60110] WARNING: problem in alloc set index\ninfo: found inconsistent memory block 0x2a33a18\n\n it looks like the memory which is managed by \"index info\" memory context\nis written by some other wrong codes.\n\nI didn't change any AllocSetXXX related code and I think I just use it\nwrong in some way.\n\nThanks",
"msg_date": "Wed, 8 May 2019 21:53:21 +0800",
"msg_from": "Alex <zhihui.fan1213@gmail.com>",
"msg_from_op": true,
"msg_subject": "any suggestions to detect memory corruption"
},
{
"msg_contents": "Alex <zhihui.fan1213@gmail.com> writes:\n> I can get the following log randomly and I am not which commit caused it.\n\n> 2019-05-08 21:37:46.692 CST [60110] WARNING: problem in alloc set index\n> info: req size > alloc size for chunk 0x2a33a78 in block 0x2a33a18\n\nI've had success in finding memory stomp causes fairly quickly by setting\na hardware watchpoint in gdb on the affected location. Then you just let\nit run to see when the value changes, and check whether that's a \"legit\"\nor \"not legit\" modification point.\n\nThe hard part of that, of course, is to know in advance where the affected\nlocation is. You may be able to make things sufficiently repeatable by\ndoing the problem query in a fresh session each time.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 08 May 2019 10:34:41 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: any suggestions to detect memory corruption"
},
{
"msg_contents": "On Wed, May 8, 2019 at 10:34 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Alex <zhihui.fan1213@gmail.com> writes:\n> > I can get the following log randomly and I am not which commit caused it.\n>\n> > 2019-05-08 21:37:46.692 CST [60110] WARNING: problem in alloc set index\n> > info: req size > alloc size for chunk 0x2a33a78 in block 0x2a33a18\n>\n> I've had success in finding memory stomp causes fairly quickly by setting\n> a hardware watchpoint in gdb on the affected location. Then you just let\n> it run to see when the value changes, and check whether that's a \"legit\"\n> or \"not legit\" modification point.\n>\n> The hard part of that, of course, is to know in advance where the affected\n> location is. You may be able to make things sufficiently repeatable by\n> doing the problem query in a fresh session each time.\n\nvalgrind might also be a possibility, although that has a lot of overhead.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Wed, 8 May 2019 13:21:13 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: any suggestions to detect memory corruption"
},
{
"msg_contents": "Thanks you Tom and Robert! I tried valgrind, and looks it help me fix\nthe issue.\n\nSomeone add some code during backend init which used palloc. but at that\ntime, the CurrentMemoryContext is PostmasterContext. at the end of\nbackend initialization, the PostmasterContext is deleted, then the error\nhappens. the reason why it happens randomly is before the palloc, there\nare some other if clause which may skip the palloc.\n\nI still can't explain why PostmasterContext may have impact \"index info\"\nMemoryContext sometime, but now I just can't reproduce it (before the\nfix, it may happen in 30% cases).\n\nOn Thu, May 9, 2019 at 1:21 AM Robert Haas <robertmhaas@gmail.com> wrote:\n\n> On Wed, May 8, 2019 at 10:34 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > Alex <zhihui.fan1213@gmail.com> writes:\n> > > I can get the following log randomly and I am not which commit caused\n> it.\n> >\n> > > 2019-05-08 21:37:46.692 CST [60110] WARNING: problem in alloc set\n> index\n> > > info: req size > alloc size for chunk 0x2a33a78 in block 0x2a33a18\n> >\n> > I've had success in finding memory stomp causes fairly quickly by setting\n> > a hardware watchpoint in gdb on the affected location. Then you just let\n> > it run to see when the value changes, and check whether that's a \"legit\"\n> > or \"not legit\" modification point.\n> >\n> > The hard part of that, of course, is to know in advance where the\n> affected\n> > location is. You may be able to make things sufficiently repeatable by\n> > doing the problem query in a fresh session each time.\n>\n> valgrind might also be a possibility, although that has a lot of overhead.\n>\n> --\n> Robert Haas\n> EnterpriseDB: http://www.enterprisedb.com\n> The Enterprise PostgreSQL Company\n>",
"msg_date": "Thu, 9 May 2019 13:48:49 +0800",
"msg_from": "Alex <zhihui.fan1213@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: any suggestions to detect memory corruption"
},
{
"msg_contents": "Alex <zhihui.fan1213@gmail.com> writes:\n> Someone add some code during backend init which used palloc. but at that\n> time, the CurrentMemoryContext is PostmasterContext. at the end of\n> backend initialization, the PostmasterContext is deleted, then the error\n> happens. the reason why it happens randomly is before the palloc, there\n> are some other if clause which may skip the palloc.\n\n> I still can't explain why PostmasterContext may have impact \"index info\"\n> MemoryContext sometime, but now I just can't reproduce it (before the\n> fix, it may happen in 30% cases).\n\nWell, once the context is deleted, that memory is available for reuse.\nEverything will seem fine until it *is* reused, and then boom!\n\nThe error would have been a lot more obvious if you'd enabled\nMEMORY_CONTEXT_CHECKING, which would overwrite freed data with garbage.\nThat is normally turned on in --enable-cassert builds. Anybody who's been\nhacking Postgres for more than a week does backend code development in\n--enable-cassert mode as a matter of course; it turns on a *lot* of\nhelpful cross-checks.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 09 May 2019 09:30:12 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: any suggestions to detect memory corruption"
},
{
"msg_contents": "On Thu, May 9, 2019 at 9:30 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Alex <zhihui.fan1213@gmail.com> writes:\n> > Someone add some code during backend init which used palloc. but at that\n> > time, the CurrentMemoryContext is PostmasterContext. at the end of\n> > backend initialization, the PostmasterContext is deleted, then the error\n> > happens. the reason why it happens randomly is before the palloc, there\n> > are some other if clause which may skip the palloc.\n>\n> > I still can't explain why PostmasterContext may have impact \"index info\"\n> > MemoryContext sometime, but now I just can't reproduce it (before the\n> > fix, it may happen in 30% cases).\n>\n> Well, once the context is deleted, that memory is available for reuse.\n> Everything will seem fine until it *is* reused, and then boom!\n>\n> The error would have been a lot more obvious if you'd enabled\n> MEMORY_CONTEXT_CHECKING, which would overwrite freed data with garbage.\n>\n\nThanks! I didn't know this before and \" once the context is deleted,\nthat memory is available for reuse.\nEverything will seem fine until it *is* reused\". I have enabled\n enable-cassert now.\n\nThat is normally turned on in --enable-cassert builds. Anybody who's been\n> hacking Postgres for more than a week does backend code development in\n> --enable-cassert mode as a matter of course; it turns on a *lot* of\n> helpful cross-checks.\n>\n>\n\n\n> regards, tom lane\n>",
"msg_date": "Fri, 10 May 2019 08:52:45 +0800",
"msg_from": "Alex <zhihui.fan1213@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: any suggestions to detect memory corruption"
}
] |
[
{
"msg_contents": "Just now, while running a parallel check-world on HEAD according to the\nsame script I've been using for quite some time, one of the TAP tests\ndied during initdb:\n\nselecting dynamic shared memory implementation ... posix\nselecting default max_connections ... 100\nselecting default shared_buffers ... 128MB\nselecting default timezone ... America/New_York\ncreating configuration files ... ok\nrunning bootstrap script ... ok\nperforming post-bootstrap initialization ... 2019-05-08 13:59:19.963 EDT [18351] FATAL: pre-existing shared memory block (key 5440004, ID 1734475802) is still in use\n2019-05-08 13:59:19.963 EDT [18351] HINT: Terminate any old server processes associated with data directory \"/home/postgres/pgsql/src/test/subscription/tmp_check/t_004_sync_publisher_data/pgdata\".\nchild process exited with exit code 1\ninitdb: removing data directory \"/home/postgres/pgsql/src/test/subscription/tmp_check/t_004_sync_publisher_data/pgdata\"\nBail out! system initdb failed\n\nI have never seen this happen before in the TAP tests.\n\nI think the odds are very high that this implies something wrong with\ncommit c09850992.\n\nMy immediate guess after eyeballing that patch quickly is that it was\nnot a good idea to redefine the rules used by bootstrap/standalone\nbackends. In particular, it seems somewhat plausible that the bootstrap\nprocess hadn't yet completely died when the standalone backend for the\npost-bootstrap phase came along and decided there was a conflict (which\nit never would have before).\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 08 May 2019 14:32:46 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Unexpected \"shared memory block is still in use\""
},
{
"msg_contents": "On Wed, May 08, 2019 at 02:32:46PM -0400, Tom Lane wrote:\n> Just now, while running a parallel check-world on HEAD according to the\n> same script I've been using for quite some time, one of the TAP tests\n> died during initdb:\n> \n> selecting dynamic shared memory implementation ... posix\n> selecting default max_connections ... 100\n> selecting default shared_buffers ... 128MB\n> selecting default timezone ... America/New_York\n> creating configuration files ... ok\n> running bootstrap script ... ok\n> performing post-bootstrap initialization ... 2019-05-08 13:59:19.963 EDT [18351] FATAL: pre-existing shared memory block (key 5440004, ID 1734475802) is still in use\n> 2019-05-08 13:59:19.963 EDT [18351] HINT: Terminate any old server processes associated with data directory \"/home/postgres/pgsql/src/test/subscription/tmp_check/t_004_sync_publisher_data/pgdata\".\n> child process exited with exit code 1\n> initdb: removing data directory \"/home/postgres/pgsql/src/test/subscription/tmp_check/t_004_sync_publisher_data/pgdata\"\n> Bail out! system initdb failed\n> \n> I have never seen this happen before in the TAP tests.\n> \n> I think the odds are very high that this implies something wrong with\n> commit c09850992.\n\nThe odds are very high that you would not have gotten that error before that\ncommit. But if the cause matches your guess, it's not something wrong with\nthe commit ...\n\n> My immediate guess after eyeballing that patch quickly is that it was\n> not a good idea to redefine the rules used by bootstrap/standalone\n> backends. In particular, it seems somewhat plausible that the bootstrap\n> process hadn't yet completely died when the standalone backend for the\n> post-bootstrap phase came along and decided there was a conflict (which\n> it never would have before).\n\nIf so, I would sure try to fix the initdb sequence to not let that happen. I\nwould not trust such a conflict to be harmless.\n\nWhat OS, OS version, and filesystem?\n\n\n",
"msg_date": "Wed, 8 May 2019 22:54:14 -0700",
"msg_from": "Noah Misch <noah@leadboat.com>",
"msg_from_op": false,
"msg_subject": "Re: Unexpected \"shared memory block is still in use\""
},
{
"msg_contents": "At Wed, 8 May 2019 22:54:14 -0700, Noah Misch <noah@leadboat.com> wrote in <20190509055414.GB1066859@rfd.leadboat.com>\n> On Wed, May 08, 2019 at 02:32:46PM -0400, Tom Lane wrote:\n> > Just now, while running a parallel check-world on HEAD according to the\n> > same script I've been using for quite some time, one of the TAP tests\n> > died during initdb:\n> > \n> > selecting dynamic shared memory implementation ... posix\n> > selecting default max_connections ... 100\n> > selecting default shared_buffers ... 128MB\n> > selecting default timezone ... America/New_York\n> > creating configuration files ... ok\n> > running bootstrap script ... ok\n> > performing post-bootstrap initialization ... 2019-05-08 13:59:19.963 EDT [18351] FATAL: pre-existing shared memory block (key 5440004, ID 1734475802) is still in use\n> > 2019-05-08 13:59:19.963 EDT [18351] HINT: Terminate any old server processes associated with data directory \"/home/postgres/pgsql/src/test/subscription/tmp_check/t_004_sync_publisher_data/pgdata\".\n> > child process exited with exit code 1\n> > initdb: removing data directory \"/home/postgres/pgsql/src/test/subscription/tmp_check/t_004_sync_publisher_data/pgdata\"\n> > Bail out! system initdb failed\n> > \n> > I have never seen this happen before in the TAP tests.\n> > \n> > I think the odds are very high that this implies something wrong with\n> > commit c09850992.\n> \n> The odds are very high that you would not have gotten that error before that\n> commit. But if the cause matches your guess, it's not something wrong with\n> the commit ...\n> \n> > My immediate guess after eyeballing that patch quickly is that it was\n> > not a good idea to redefine the rules used by bootstrap/standalone\n> > backends. In particular, it seems somewhat plausible that the bootstrap\n> > process hadn't yet completely died when the standalone backend for the\n> > post-bootstrap phase came along and decided there was a conflict (which\n> > it never would have before).\n> \n> If so, I would sure try to fix the initdb sequence to not let that happen. I\n> would not trust such a conflict to be harmless.\n> \n> What OS, OS version, and filesystem?\n\nPGSharedMemoryCreate shows the error in SHMSTATE_ANALYSYS_FAILURE\ncase. PGSharedMemoryAttach returns the code when, for example,\nshmat failed with ENOMEM. I'm afraid that the message is not\nshown from SHMSTATE_ATTACHED..\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n\n",
"msg_date": "Thu, 09 May 2019 15:28:36 +0900 (Tokyo Standard Time)",
"msg_from": "Kyotaro HORIGUCHI <horiguchi.kyotaro@lab.ntt.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: Unexpected \"shared memory block is still in use\""
},
{
"msg_contents": "Noah Misch <noah@leadboat.com> writes:\n> On Wed, May 08, 2019 at 02:32:46PM -0400, Tom Lane wrote:\n>> Just now, while running a parallel check-world on HEAD according to the\n>> same script I've been using for quite some time, one of the TAP tests\n>> died during initdb:\n>> performing post-bootstrap initialization ... 2019-05-08 13:59:19.963 EDT [18351] FATAL: pre-existing shared memory block (key 5440004, ID 1734475802) is still in use\n\n> The odds are very high that you would not have gotten that error before that\n> commit. But if the cause matches your guess, it's not something wrong with\n> the commit ...\n\nFair point.\n\n> What OS, OS version, and filesystem?\n\nUp-to-date RHEL6 (kernel 2.6.32-754.12.1.el6.x86_64), ext4 over LVM\non spinning rust with an LSI MegaRAID controller in front of it.\n\nSince complaining, I've done half a dozen more parallel check-worlds\nwithout issue, so the error was and still is rare. This matches the\nfact that we've not seen it in the buildfarm :-(.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 09 May 2019 09:47:34 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Unexpected \"shared memory block is still in use\""
},
{
"msg_contents": "I wrote:\n> Noah Misch <noah@leadboat.com> writes:\n>> The odds are very high that you would not have gotten that error before that\n>> commit. But if the cause matches your guess, it's not something wrong with\n>> the commit ...\n\n> Fair point.\n\nAfter more study and testing, I no longer believe my original thought\nabout a bootstrap-to-standalone-backend race condition. The bootstrap\nprocess definitely kills its SysV shmem segment before exiting.\n\nHowever, I have a new theory, after noticing that c09850992 moved the\ncheck for shm_nattch == 0. Previously, if a shmem segment had zero attach\ncount, it was unconditionally considered not-a-threat. Now, we'll try\nshmat() anyway, and if that fails for any reason other than EACCES, we say\nSHMSTATE_ANALYSIS_FAILURE which leads to the described error report.\nSo I suspect that what we hit was a race condition whereby some other\nparallel test was using the same shmem ID and we managed to see its\nsegment successfully in shmctl but then it was gone by the time we did\nshmat. This leads me to think that EINVAL and EIDRM failures from\nshmat had better be considered SHMSTATE_ENOENT not\nSHMSTATE_ANALYSIS_FAILURE.\n\nIn principle this is a longstanding race condition, but I wonder\nwhether we made it more probable by moving the shm_nattch check.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 09 May 2019 17:59:39 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Unexpected \"shared memory block is still in use\""
},
{
"msg_contents": "I wrote:\n> However, I have a new theory, after noticing that c09850992 moved the\n> check for shm_nattch == 0. Previously, if a shmem segment had zero attach\n> count, it was unconditionally considered not-a-threat. Now, we'll try\n> shmat() anyway, and if that fails for any reason other than EACCES, we say\n> SHMSTATE_ANALYSIS_FAILURE which leads to the described error report.\n> So I suspect that what we hit was a race condition whereby some other\n> parallel test was using the same shmem ID and we managed to see its\n> segment successfully in shmctl but then it was gone by the time we did\n> shmat. This leads me to think that EINVAL and EIDRM failures from\n> shmat had better be considered SHMSTATE_ENOENT not\n> SHMSTATE_ANALYSIS_FAILURE.\n> In principle this is a longstanding race condition, but I wonder\n> whether we made it more probable by moving the shm_nattch check.\n\nHah --- this is a real race condition, and I can demonstrate it very\neasily by inserting a sleep right there, as in the attached\nfor-testing-only patch.\n\nThe particular parallelism level I use is\n\nmake -s check-world -j4 PROVE_FLAGS='-j4 --quiet --nocolor --nocount'\n\non a dual-socket 4-cores-per-socket Xeon machine. With that command and\nthis patch, I frequently get multiple failures per run, and they all\nreport either EINVAL or EIDRM.\n\nThe patch generally reports that nattch had been 1, so my thought that\nthat change might've made it worse seems unfounded. But we have\nabsolutely got a hittable race condition here. The real fix should\nbe on the order of\n\n\t\tif (errno == EACCES)\n\t\t\treturn SHMSTATE_FOREIGN;\n+\t\telse if (errno == EINVAL || errno == EIDRM)\n+\t\t\treturn SHMSTATE_ENOENT;\n\t\telse\n\t\t\treturn SHMSTATE_ANALYSIS_FAILURE;\n\n(plus comments of course).\n\n\t\t\tregards, tom lane",
"msg_date": "Thu, 09 May 2019 18:47:58 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Unexpected \"shared memory block is still in use\""
},
{
"msg_contents": "On Thu, May 09, 2019 at 06:47:58PM -0400, Tom Lane wrote:\n> I wrote:\n> > However, I have a new theory, after noticing that c09850992 moved the\n> > check for shm_nattch == 0. Previously, if a shmem segment had zero attach\n> > count, it was unconditionally considered not-a-threat. Now, we'll try\n> > shmat() anyway, and if that fails for any reason other than EACCES, we say\n> > SHMSTATE_ANALYSIS_FAILURE which leads to the described error report.\n> > So I suspect that what we hit was a race condition whereby some other\n> > parallel test was using the same shmem ID and we managed to see its\n> > segment successfully in shmctl but then it was gone by the time we did\n> > shmat. This leads me to think that EINVAL and EIDRM failures from\n> > shmat had better be considered SHMSTATE_ENOENT not\n> > SHMSTATE_ANALYSIS_FAILURE.\n> > In principle this is a longstanding race condition, but I wonder\n> > whether we made it more probable by moving the shm_nattch check.\n> \n> Hah --- this is a real race condition, and I can demonstrate it very\n> easily by inserting a sleep right there, as in the attached\n> for-testing-only patch.\n> \n> The particular parallelism level I use is\n> \n> make -s check-world -j4 PROVE_FLAGS='-j4 --quiet --nocolor --nocount'\n> \n> on a dual-socket 4-cores-per-socket Xeon machine. With that command and\n> this patch, I frequently get multiple failures per run, and they all\n> report either EINVAL or EIDRM.\n> \n> The patch generally reports that nattch had been 1, so my thought that\n> that change might've made it worse seems unfounded. But we have\n> absolutely got a hittable race condition here. The real fix should\n> be on the order of\n> \n> \t\tif (errno == EACCES)\n> \t\t\treturn SHMSTATE_FOREIGN;\n> +\t\telse if (errno == EINVAL || errno == EIDRM)\n> +\t\t\treturn SHMSTATE_ENOENT;\n> \t\telse\n> \t\t\treturn SHMSTATE_ANALYSIS_FAILURE;\n> \n> (plus comments of course).\n\nLooks good. That is basically a defect in commit c09850992; the race passed\nfrom irrelevance to relevance when that commit subjected more segments to the\ntest. Thanks for diagnosing it.\n\n\n",
"msg_date": "Fri, 10 May 2019 00:22:13 -0700",
"msg_from": "Noah Misch <noah@leadboat.com>",
"msg_from_op": false,
"msg_subject": "Re: Unexpected \"shared memory block is still in use\""
},
{
"msg_contents": "Noah Misch <noah@leadboat.com> writes:\n> Looks good. That is basically a defect in commit c09850992; the race passed\n> from irrelevance to relevance when that commit subjected more segments to the\n> test. Thanks for diagnosing it.\n\nThe bug's far older than that, surely, since before c09850992 we treated\n*any* shmat failure as meaning we'd better fail. I think you're right\nthat c09850992 might've made it slightly more probable, but most likely\nthe bottom line here is just that we haven't been doing parallel\ncheck-worlds a lot until relatively recently. The buildfarm would be\nkind of unlikely to hit this I think --- AFAIK it doesn't launch multiple\npostmasters using the same port number concurrently. But parallel\ninvocation of TAP test scripts makes the hazard real.\n\nWill go fix/backpatch in a minute.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 10 May 2019 10:55:41 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Unexpected \"shared memory block is still in use\""
},
{
"msg_contents": "I wrote:\n> Will go fix/backpatch in a minute.\n\nDone now, but while thinking more about the issue, I had an idea: why is\nit that we base the shmem key on the postmaster's port number, and not\non the data directory's inode number? Using the port number not only\nincreases the risk of collisions (though admittedly only in testing\nsituations), but it *decreases* our ability to detect real conflicts.\nConsider case where DBA wants to change the installation's port number,\nand he edits postgresql.conf, but then uses \"kill -9 && rm postmaster.pid\"\nrather than some saner way of stopping the old postmaster. When he\nstarts the new one, it won't detect any remaining children of the old\npostmaster because it'll be looking in the wrong range of shmem keys.\nIt seems like something tied to the data directory's identity would\nbe much more trustworthy.\n\nI think the reason for doing it this way originally was to allow\none to identify which shmem segment is which in \"ipcs -m\" output.\nBut that was back when having to clean up shmem segments manually\nwas still a common task. It's been a long time since I can remember\nneeding to figure out which was which.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 10 May 2019 16:46:40 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Unexpected \"shared memory block is still in use\""
},
{
"msg_contents": "On Fri, May 10, 2019 at 04:46:40PM -0400, Tom Lane wrote:\n> I wrote:\n> > Will go fix/backpatch in a minute.\n> \n> Done now, but while thinking more about the issue, I had an idea: why is\n> it that we base the shmem key on the postmaster's port number, and not\n> on the data directory's inode number? Using the port number not only\n> increases the risk of collisions (though admittedly only in testing\n> situations), but it *decreases* our ability to detect real conflicts.\n> Consider case where DBA wants to change the installation's port number,\n> and he edits postgresql.conf, but then uses \"kill -9 && rm postmaster.pid\"\n> rather than some saner way of stopping the old postmaster. When he\n> starts the new one, it won't detect any remaining children of the old\n> postmaster because it'll be looking in the wrong range of shmem keys.\n> It seems like something tied to the data directory's identity would\n> be much more trustworthy.\n\nGood point. Since we now ignore (SHMSTATE_FOREIGN) any segment that bears\n(st_dev,st_ino) not matching $PGDATA, the change you describe couldn't make us\nfail to detect a real conflict or miss a cleanup opportunity. It would reduce\nthe ability to test sysv_shmem.c; I suppose one could add a debug GUC to\noverride the start of the key space.\n\n> I think the reason for doing it this way originally was to allow\n> one to identify which shmem segment is which in \"ipcs -m\" output.\n> But that was back when having to clean up shmem segments manually\n> was still a common task. It's been a long time since I can remember\n> needing to figure out which was which.\n\nI don't see that presenting a problem these days, agreed.\n\n\n",
"msg_date": "Sat, 11 May 2019 12:07:15 -0700",
"msg_from": "Noah Misch <noah@leadboat.com>",
"msg_from_op": false,
"msg_subject": "Re: Unexpected \"shared memory block is still in use\""
},
{
"msg_contents": "Noah Misch <noah@leadboat.com> writes:\n> On Fri, May 10, 2019 at 04:46:40PM -0400, Tom Lane wrote:\n>> Done now, but while thinking more about the issue, I had an idea: why is\n>> it that we base the shmem key on the postmaster's port number, and not\n>> on the data directory's inode number? Using the port number not only\n>> increases the risk of collisions (though admittedly only in testing\n>> situations), but it *decreases* our ability to detect real conflicts.\n>> Consider case where DBA wants to change the installation's port number,\n>> and he edits postgresql.conf, but then uses \"kill -9 && rm postmaster.pid\"\n>> rather than some saner way of stopping the old postmaster. When he\n>> starts the new one, it won't detect any remaining children of the old\n>> postmaster because it'll be looking in the wrong range of shmem keys.\n>> It seems like something tied to the data directory's identity would\n>> be much more trustworthy.\n\n> Good point. Since we now ignore (SHMSTATE_FOREIGN) any segment that bears\n> (st_dev,st_ino) not matching $PGDATA, the change you describe couldn't make us\n> fail to detect a real conflict or miss a cleanup opportunity. It would reduce\n> the ability to test sysv_shmem.c; I suppose one could add a debug GUC to\n> override the start of the key space.\n\nAttached is a draft patch to change both shmem and sema key selection\nto be based on data directory inode rather than port.\n\nI considered using \"st_ino ^ st_dev\", or some such, but decided that\nthat would largely just make it harder to manually correlate IPC\nkeys with running postmasters. 
It's generally easy to find out the\ndata directory inode number with \"ls\", but the extra work to find out\nand XOR in the device number is not so easy, and it's not clear what\nit'd buy us in typical scenarios.\n\nThe Windows code seems fine as-is: it's already using data directory\nname, not port, to set up shmem, and it doesn't need anything for\nsemaphores.\n\nI'm not quite sure what's going on in src/test/recovery/t/017_shm.pl.\nAs expected, the test for port number non-collision no longer sees\na failure. After fixing that, the test passes, but it takes a\nridiculously long time (minutes); apparently each postmaster start/stop\ncycle takes much longer than it ought to. I suppose this patch is\nbreaking its assumptions, but I've not studied it. We'd have to do\nsomething about that before this would be committable.\n\nI'll add this to the next commitfest.\n\n\t\t\tregards, tom lane",
"msg_date": "Tue, 13 Aug 2019 19:22:06 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Unexpected \"shared memory block is still in use\""
},
{
"msg_contents": "On 2019-08-14 01:22, Tom Lane wrote:\n> Attached is a draft patch to change both shmem and sema key selection\n> to be based on data directory inode rather than port.\n> \n> I considered using \"st_ino ^ st_dev\", or some such, but decided that\n> that would largely just make it harder to manually correlate IPC\n> keys with running postmasters. It's generally easy to find out the\n> data directory inode number with \"ls\", but the extra work to find out\n> and XOR in the device number is not so easy, and it's not clear what\n> it'd buy us in typical scenarios.\n\nFor the POSIX APIs where the numbers are just converted to a string, why\nnot use both -- or forget about the inodes and use the actual data\ndirectory string.\n\nFor the SYSV APIs, the scenario that came to my mind is if someone\nstarts a bunch of servers each on their own mount, it could happen that\nthe inodes of the data directories are very similar.\n\nThere is also the issue that AFAICT the key_t in the SYSV APIs is always\n32-bit whereas inodes are 64-bit. Probably not a big deal, but it might\nprevent an exact one-to-one mapping.\n\nOf course, ftok() is also available here as an existing solution.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Fri, 16 Aug 2019 15:09:57 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Unexpected \"shared memory block is still in use\""
},
{
"msg_contents": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com> writes:\n> On 2019-08-14 01:22, Tom Lane wrote:\n>> Attached is a draft patch to change both shmem and sema key selection\n>> to be based on data directory inode rather than port.\n\n> For the POSIX APIs where the numbers are just converted to a string, why\n> not use both -- or forget about the inodes and use the actual data\n> directory string.\n\nConsidering that we still need an operation equivalent to \"nextSemKey++\"\n(in case of a key collision), I'm not really sure how working with strings\nrather than ints would make life better.\n\n> For the SYSV APIs, the scenario that came to my mind is if someone\n> starts a bunch of servers each on their own mount, it could happen that\n> the inodes of the data directories are very similar.\n\nSure. That's why I didn't throw away any of the duplicate-key-handling\nlogic, and why we're still checking for st_dev match when inspecting\nparticular shmem blocks. (It also seems likely that somebody\nwho's doing that would be using similar pathnames on the different\nmounts, so that string-based approaches wouldn't exactly be free of\ncollision problems either.)\n\n> There is also the issue that AFAICT the key_t in the SYSV APIs is always\n> 32-bit whereas inodes are 64-bit. Probably not a big deal, but it might\n> prevent an exact one-to-one mapping.\n\nTrue, although the width of inode numbers is probably pretty platform-\nand filesystem-dependent. We could consider trying some more complicated\nmapping like xor'ing high and low halves, but I don't entirely see what\nit buys us.\n\n> Of course, ftok() is also available here as an existing solution.\n\nI looked at that briefly, but I don't really see what it'd buy us either,\nexcept for opacity which doesn't seem useful. 
The Linux man page pretty\nmuch says in so many words that it's a wrapper for st_ino and st_dev;\nand how does it help us if other platforms do it differently?\n\n(Actually, if Linux does it the way the man page suggests, it'd really\nbe a net negative, because there'd only be 24 bits of key variation\nnot 32.)\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 16 Aug 2019 10:18:58 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Unexpected \"shared memory block is still in use\""
},
{
"msg_contents": "I agree with this patch and the reasons for it.\n\nA related point, perhaps we should change the key printed into\npostmaster.pid to be in hexadecimal format (\"0x08x\") so that it matches\nwhat ipcs prints.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Wed, 4 Sep 2019 13:36:38 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Unexpected \"shared memory block is still in use\""
},
{
"msg_contents": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com> writes:\n> I agree with this patch and the reasons for it.\n\nOK, thanks for reviewing.\n\n> A related point, perhaps we should change the key printed into\n> postmaster.pid to be in hexadecimal format (\"0x08x\") so that it matches\n> what ipcs prints.\n\nHmm, that depends on whose ipcs you use :-(. A quick survey\nof my machines says it's\n\n key shmid\n\nLinux: hex decimal\nFreeBSD: decimal decimal\nNetBSD: decimal decimal\nOpenBSD: decimal decimal\nmacOS: hex decimal\nHPUX: hex (not printed)\n\nThere's certainly room to argue that hex+decimal is most popular,\nbut I'm not sure that that outweighs possible compatibility issues\nfrom changing postmaster.pid contents. (Admittedly, it's not real\nclear that anything would be paying attention to the shmem key,\nso maybe there's no compatibility issue.)\n\nIf we did want to assume that we could change postmaster.pid,\nit might be best to print the key both ways?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 04 Sep 2019 10:59:25 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Unexpected \"shared memory block is still in use\""
},
{
"msg_contents": "I wrote:\n> Attached is a draft patch to change both shmem and sema key selection\n> to be based on data directory inode rather than port.\n> ...\n> I'm not quite sure what's going on in src/test/recovery/t/017_shm.pl.\n> As expected, the test for port number non-collision no longer sees\n> a failure. After fixing that, the test passes, but it takes a\n> ridiculously long time (minutes); apparently each postmaster start/stop\n> cycle takes much longer than it ought to. I suppose this patch is\n> breaking its assumptions, but I've not studied it.\n\nAfter looking closer, the problem is pretty obvious: the initial\nloop is trying to create a cluster whose shmem key matches its\nport-number-based expectation. With this code, that will never\nhappen except by unlikely accident, so it wastes time with repeated\ninitdb/start/stop attempts. After 100 tries it gives up and presses\non with the test, resulting in the apparent pass with long runtime.\n\nI now understand the point you made upthread that this test could\nonly be preserved if we invent some way to force the choice of shmem\nkey. While it wouldn't be hard to do that (say, invent a magic\nenvironment variable), I really don't want to do so. In the field,\nsuch a behavior would have no positive use, and it could destroy our\nnewly-improved guarantees about detecting conflicting old processes.\n\nHowever, there's another way to skin this cat. We can have the\nPerl test script create a conflicting shmem segment directly,\nas in the attached second-draft patch. I simplified the test\nscript quite a bit, since I don't see any particular value in\ncreating more than one test postmaster with this approach.\n\nThis still isn't committable as-is, since the test will just curl up\nand die on machines lacking IPC::SharedMem. (It could be rewritten\nto rely only on the lower-level IPC::SysV module, but I doubt that's\nworth the trouble, since either way it'd fail on Windows.) 
I'm not\nsure whether we should just not bother to run the test at all, or\nif we should run it but skip the IPC-related parts; and my Perl-fu\nisn't really up to implementing either behavior.\n\nAnother thing that might be interesting is to do more than just create\nthe conflicting segment, ie, try to put some data into it that would\nfool the postmaster. I'm not excited about that at all, but maybe\nsomeone else is?\n\nThe attached patch is identical to the previous one except for the\nchanges in src/test/recovery/t/017_shm.pl.\n\n\t\t\tregards, tom lane",
"msg_date": "Wed, 04 Sep 2019 18:27:21 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Unexpected \"shared memory block is still in use\""
},
{
"msg_contents": "I wrote:\n> This still isn't committable as-is, since the test will just curl up\n> and die on machines lacking IPC::SharedMem.\n\nAfter a bit of research, here's a version that takes a stab at fixing\nthat. There may be cleaner ways to do it, but this successfully skips\nthe test if it can't import the needed IPC modules.\n\nThis also fixes a problem that the previous script had with leaking\na shmem segment. That's due to something that could be considered\na pre-existing bug, which is that if we use shmem key X+1, and the\npostmaster crashes, and the next start is able to get shmem key X,\nwe don't clean up the shmem segment at X+1. In principle, we could\nnote from the contents of postmaster.pid that X+1 was used before and\ntry to remove it. In practice, I doubt this is worth worrying about\ngiven how small the shmem segments are now, and the very low probability\nof key collisions in the new regime. Anyway it would be material for\na different patch.\n\nI think this could be considered committable, but if anyone wants to\nimprove the test script, step right up.\n\n\t\t\tregards, tom lane",
"msg_date": "Wed, 04 Sep 2019 20:25:22 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Unexpected \"shared memory block is still in use\""
},
{
"msg_contents": "On 2019-09-04 16:59, Tom Lane wrote:\n>> A related point, perhaps we should change the key printed into\n>> postmaster.pid to be in hexadecimal format (\"0x08x\") so that it matches\n>> what ipcs prints.\n> Hmm, that depends on whose ipcs you use :-(. A quick survey\n> of my machines says it's\n> \n> key shmid\n> \n> Linux: hex decimal\n> FreeBSD: decimal decimal\n> NetBSD: decimal decimal\n> OpenBSD: decimal decimal\n> macOS: hex decimal\n> HPUX: hex (not printed)\n> \n> There's certainly room to argue that hex+decimal is most popular,\n> but I'm not sure that that outweighs possible compatibility issues\n> from changing postmaster.pid contents. (Admittedly, it's not real\n> clear that anything would be paying attention to the shmem key,\n> so maybe there's no compatibility issue.)\n\nLet's just leave it decimal then. At least then it's easier to compare\nit to ls -i output.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Thu, 5 Sep 2019 14:28:08 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Unexpected \"shared memory block is still in use\""
}
]
[
{
"msg_contents": "I noticed you added release notes at bdf595adbca195fa54a909c74a5233ebc30641a1,\nthanks for writing them.\n\nI reviewed notes; find proposed changes attached+included.\n\nI think these should also be mentioned?\n\nf7cb284 Add plan_cache_mode setting\na6da004 Add index_get_partition convenience function\n387a5cf Add pg_dump --on-conflict-do-nothing option.\n17f206f Set pg_class.relhassubclass for partitioned indexes\n\ndiff --git a/doc/src/sgml/release-12.sgml b/doc/src/sgml/release-12.sgml\nindex 88bdcbd..ab4d1b3 100644\n--- a/doc/src/sgml/release-12.sgml\n+++ b/doc/src/sgml/release-12.sgml\n@@ -61,8 +61,8 @@ Remove the special behavior of OID columns (Andres Freund, John Naylor)\n \n <para>\n Previously, a normally-invisible OID column could be specified during table creation using WITH OIDS; that ability has been removed. Columns can still be explicitly\n-specified as type OID. pg_dump and pg_upgrade operations on databases using WITH OIDS will need adjustment. Many system tables now have an 'oid' column that will be\n-expanded with SELECT * by default.\n+specified as type OID. pg_dump and pg_upgrade operations on databases using WITH OIDS will need adjustment. The 'oid' column of many system tables will be\n+shown by default with SELECT *.\n </para>\n </listitem>\n \n@@ -115,7 +115,7 @@ Do not allow multiple different recovery_target* specifications (Peter Eisentrau\n </para>\n \n <para>\n-Previously multiple different recovery_target* variables could be specified, and last one specified was honored. Now, only one can be specified, though the same one can\n+Previously multiple different recovery_target* variables could be specified, and the last one specified was honored. 
Now, only one can be specified, though the same one can\n be specified multiple times and the last specification is honored.\n </para>\n </listitem>\n@@ -405,7 +405,7 @@ Author: Robert Haas <rhaas@postgresql.org>\n -->\n \n <para>\n-Allow ATTACH PARTITION to be performed with reduced locking requirements (Robert Haas)\n+ATTACH PARTITION is performed with lower locking requirement (Robert Haas)\n </para>\n </listitem>\n \n@@ -617,7 +617,7 @@ Have new btree indexes sort duplicate index entries in heap-storage order (Peter\n </para>\n \n <para>\n-Btree indexes pg_upgraded from previous releases will not have this ordering. This does slightly reduce the maximum length of indexed values.\n+Btree indexes pg_upgraded from previous releases will not have this ordering. This slightly reduces the maximum permitted length of indexed values.\n </para>\n </listitem>\n \n@@ -676,7 +676,7 @@ Allow CREATE STATISTICS to create most-common-value statistics for multiple colu\n </para>\n \n <para>\n-This improve optimization for columns with non-uniform distributions that often appear in WHERE clauses.\n+This improves query plans for columns with non-uniform distributions that often appear in WHERE clauses.\n </para>\n </listitem>\n \n@@ -954,21 +954,6 @@ This dramatically speeds up processing of floating-point values. 
Users who wish\n \n <listitem>\n <!--\n-Author: Amit Kapila <akapila@postgresql.org>\n-2019-02-04 [b0eaa4c51] Avoid creation of the free space map for small heap rela\n--->\n-\n-<para>\n-Avoid creation of the free space map files for small table (John Naylor, Amit Kapila)\n-</para>\n-\n-<para>\n-Such files are not useful.\n-</para>\n-</listitem>\n-\n-<listitem>\n-<!--\n Author: Thomas Munro <tmunro@postgresql.org>\n 2018-11-07 [3fd2a7932] Provide pg_pread() and pg_pwrite() for random I/O.\n Author: Thomas Munro <tmunro@postgresql.org>\n@@ -1018,7 +1003,7 @@ Allow logging of only a percentage of statements and transactions meeting log_mi\n </para>\n \n <para>\n-The parameters log_statement_sample_rate and log_transaction_sample_rate controls this.\n+The parameters log_statement_sample_rate and log_transaction_sample_rate control this.\n </para>\n </listitem>\n \n@@ -1231,7 +1216,7 @@ Author: Tom Lane <tgl@sss.pgh.pa.us>\n -->\n \n <para>\n-Allow more comparisons with information_schema text columns use indexes (Tom Lane)\n+Allow more comparisons with information_schema text columns to use indexes (Tom Lane)\n </para>\n </listitem>\n \n@@ -1310,7 +1295,7 @@ Author: Thomas Munro <tmunro@postgresql.org>\n -->\n \n <para>\n-Allow discovery of the LDAP server using DNS (Thomas Munro)\n+Allow discovery of the LDAP server using DNS SRV records (Thomas Munro)\n </para>\n \n <para>\n@@ -1446,7 +1431,7 @@ Add wal_recycle and wal_init_zero server variables to avoid WAL file recycling (\n </para>\n \n <para>\n-This can be beneficial on copy-on-write file systems like ZFS.\n+This can be beneficial on copy-on-write filesystems like ZFS.\n </para>\n </listitem>\n \n@@ -1502,7 +1487,7 @@ Add server variable to control the type of shared memory to use (Andres Freund)\n </para>\n \n <para>\n-The variable is shared_memory_type. It purpose is to allow selection of System V shared memory, if desired.\n+The variable is shared_memory_type. 
Its purpose is to allow selection of System V shared memory, if desired.\n </para>\n </listitem>",
"msg_date": "Wed, 8 May 2019 15:32:04 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "pg12 release notes"
},
{
"msg_contents": "On 2019-May-08, Justin Pryzby wrote:\n\n> I noticed you added release notes at bdf595adbca195fa54a909c74a5233ebc30641a1,\n> thanks for writing them.\n> \n> I reviewed notes; find proposed changes attached+included.\n> \n> I think these should also be mentioned?\n> \n> a6da004 Add index_get_partition convenience function\n\nI don't disagree with the other three you suggest, but this one is a C\nfunction and I think it's below the level of usefulness that we publish\nin relnotes.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Wed, 8 May 2019 16:39:47 -0400",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: pg12 release notes"
},
{
"msg_contents": "On Thu, May 9, 2019 at 8:32 AM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> I noticed you added release notes at bdf595adbca195fa54a909c74a5233ebc30641a1,\n> thanks for writing them.\n\n+1\n\n> I reviewed notes; find proposed changes attached+included.\n\n+1 to all the corrections shown.\n\nOne more: \"Allow parallel query when in SERIALIZABLE ISOLATION MODE\n(Thomas Munro)\"\n\nOnly SERIALIZABLE should be in all-caps IMHO.\n\n-- \nThomas Munro\nhttps://enterprisedb.com\n\n\n",
"msg_date": "Fri, 10 May 2019 09:52:57 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg12 release notes"
},
{
"msg_contents": "On Wed, May 8, 2019 at 03:32:04PM -0500, Justin Pryzby wrote:\n> I noticed you added release notes at bdf595adbca195fa54a909c74a5233ebc30641a1,\n> thanks for writing them.\n\n> -This improve optimization for columns with non-uniform distributions that often appear in WHERE clauses.\n> +This improves query plans for columns with non-uniform distributions that often appear in WHERE clauses.\n\nI think \"query plans\" is less clear.\n\n> -Author: Amit Kapila <akapila@postgresql.org>\n> -2019-02-04 [b0eaa4c51] Avoid creation of the free space map for small heap rela\n> --->\n> -\n> -<para>\n> -Avoid creation of the free space map files for small table (John Naylor, Amit Kapila)\n> -</para>\n> -\n> -<para>\n> -Such files are not useful.\n> -</para>\n> -</listitem>\n> -\n> -<listitem>\n> -<!--\n\nAlready removed.\n\n> -This can be beneficial on copy-on-write file systems like ZFS.\n> +This can be beneficial on copy-on-write filesystems like ZFS.\n\nWe usually spell it \"file systems\" in our docs.\n\nI have made your other changes, with adjustment, patch attached.\n\nThe results are here:\n\n\thttp://momjian.us/pgsql_docs/release-12.html\n\n-- \n Bruce Momjian <bruce@momjian.us> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n+ As you are, so once was I. As I am, so you will be. +\n+ Ancient Roman grave inscription +",
"msg_date": "Thu, 9 May 2019 19:35:18 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: pg12 release notes"
},
{
"msg_contents": "On Wed, May 8, 2019 at 03:32:04PM -0500, Justin Pryzby wrote:\n> I noticed you added release notes at bdf595adbca195fa54a909c74a5233ebc30641a1,\n> thanks for writing them.\n> \n> I reviewed notes; find proposed changes attached+included.\n> \n> I think these should also be mentioned?\n> \n> f7cb284 Add plan_cache_mode setting\n> 387a5cf Add pg_dump --on-conflict-do-nothing option.\n\nI am confused how I missed these. There is only one commit between\nthem, and I suspect I deleted them by accident. I hope those are the\nonly ones.\n\n> a6da004 Add index_get_partition convenience function\n\nA C function just isn't normally mentioned in the release notes.\n\n> 17f206f Set pg_class.relhassubclass for partitioned indexes\n\nI need help with this one. I know the system column existed in previous\nreleases, so how is it different now? Do we document system table\nchanges that are implementation-behavior in the release notes? Usually\nwe don't.\n\nApplied patch attached, docs updated:\n\n\thttp://momjian.us/pgsql_docs/release-12.html\n\n-- \n Bruce Momjian <bruce@momjian.us> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n+ As you are, so once was I. As I am, so you will be. +\n+ Ancient Roman grave inscription +",
"msg_date": "Thu, 9 May 2019 20:08:53 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: pg12 release notes"
},
{
"msg_contents": "On Thu, May 09, 2019 at 07:35:18PM -0400, Bruce Momjian wrote:\n> I have made your other changes, with adjustment, patch attached.\n> \n> The results are here:\n> \n> \thttp://momjian.us/pgsql_docs/release-12.html\n\nThanks\n\n> -Many system tables now have an 'oid' column that will be expanded with\n> -SELECT * by default.\n> +The many system tables with such columns will now display those columns\n> +with SELECT * by default.\n\nI think \"The many\" is hard to parse but YMMV.\n\nFind attached additional changes from another pass I've made.\n\n\n From 97ddf06bc9221153c52613b8c840409ee698bbad Mon Sep 17 00:00:00 2001\nFrom: Justin Pryzby <pryzbyj@telsasoft.com>\nDate: Thu, 9 May 2019 14:15:57 -0500\nSubject: [PATCH v2 1/2] pg12 relnotes v2\n\n---\n doc/src/sgml/release-12.sgml | 18 +++++++++++-------\n 1 file changed, 11 insertions(+), 7 deletions(-)\n\ndiff --git a/doc/src/sgml/release-12.sgml b/doc/src/sgml/release-12.sgml\nindex ab4d1b3..cc48960 100644\n--- a/doc/src/sgml/release-12.sgml\n+++ b/doc/src/sgml/release-12.sgml\n@@ -127,7 +127,7 @@ Author: Peter Eisentraut <peter@eisentraut.org>\n -->\n \n <para>\n-Cause recovery to recover to the latest timeline by default (Peter Eisentraut)\n+Cause recovery to advance to the latest timeline by default (Peter Eisentraut)\n </para>\n \n <para>\n@@ -205,7 +205,8 @@ Author: Alvaro Herrera <alvherre@alvh.no-ip.org>\n -->\n \n <para>\n-Require pg_restore to use \"-f -\" to output the dump contents to stdout (Euler Taveira)\n+Require specification of \"-f -\" to cause pg_restore to write to stdout (Euler\n+Taveira)\n </para>\n \n <para>\n@@ -1003,7 +1004,7 @@ Allow logging of only a percentage of statements and transactions meeting log_mi\n </para>\n \n <para>\n-The parameters log_statement_sample_rate and log_transaction_sample_rate control this.\n+This is controlled by the new parameters log_statement_sample_rate and log_transaction_sample_rate.\n </para>\n </listitem>\n \n@@ -1076,7 +1077,7 @@ Add 
tracking of global objects in system view pg_stat_database (Julien Rouhaud)\n </para>\n \n <para>\n-The system view row's datoid is reported as zero.\n+The system wide objects are shown with a datoid of zero.\n </para>\n </listitem>\n \n@@ -1132,7 +1133,8 @@ Author: Peter Eisentraut <peter@eisentraut.org>\n -->\n \n <para>\n-Allow viewers of pg_stat_ssl to only see their own rows (Peter Eisentraut)\n+Restrict visibility of rows in pg_stat_ssl by unprivileged users (Peter\n+Eisentraut)\n </para>\n </listitem>\n \n@@ -1216,7 +1218,8 @@ Author: Tom Lane <tgl@sss.pgh.pa.us>\n -->\n \n <para>\n-Allow more comparisons with information_schema text columns to use indexes (Tom Lane)\n+Allow use of indexes for more comparisons with text columns of\n+information_schema (Tom Lane)\n </para>\n </listitem>\n \n@@ -1280,7 +1283,8 @@ Author: Magnus Hagander <magnus@hagander.net>\n -->\n \n <para>\n-Allow the clientcert pg_hba.conf option to check the database user name matches the certificate common name (Julian Markwort, Marius Timmer)\n+Allow the clientcert pg_hba.conf option to check that the database user name\n+matches the certificate common name (Julian Markwort, Marius Timmer)\n </para>\n \n <para>\n\n\n\n From f9d2ee1232090d9087f110d3299bdfae3ed2eab9 Mon Sep 17 00:00:00 2001\nFrom: Justin Pryzby <pryzbyj@telsasoft.com>\nDate: Thu, 9 May 2019 18:50:42 -0500\nSubject: [PATCH v2 2/2] Add commas: \"Previously,\"\n\n---\n doc/src/sgml/release-12.sgml | 18 +++++++++---------\n 1 file changed, 9 insertions(+), 9 deletions(-)\n\ndiff --git a/doc/src/sgml/release-12.sgml b/doc/src/sgml/release-12.sgml\nindex cc48960..3f9893e 100644\n--- a/doc/src/sgml/release-12.sgml\n+++ b/doc/src/sgml/release-12.sgml\n@@ -115,7 +115,7 @@ Do not allow multiple different recovery_target* specifications (Peter Eisentrau\n </para>\n \n <para>\n-Previously multiple different recovery_target* variables could be specified, and the last one specified was honored. 
Now, only one can be specified, though the same one can\n+Previously, multiple different recovery_target* variables could be specified, and the last one specified was honored. Now, only one can be specified, though the same one can\n be specified multiple times and the last specification is honored.\n </para>\n </listitem>\n@@ -183,7 +183,7 @@ Change XML functions like xpath() to never pretty-print their output (Tom Lane)\n </para>\n \n <para>\n-Previously this happened in some rare cases. ACCURATE? HOW TO GET PRETTY PRINT OUTPUT?\n+Previously, this happened in some rare cases. ACCURATE? HOW TO GET PRETTY PRINT OUTPUT?\n </para>\n </listitem>\n \n@@ -384,7 +384,7 @@ Allow partitions bounds to be any expression (Kyotaro Horiguchi, Tom Lane, Amit\n </para>\n \n <para>\n-Expressions are evaluated at table partitioned table creation time. Previously only constants were allowed as partitions bounds.\n+Expressions are evaluated at table partitioned table creation time. Previously, only constants were allowed as partitions bounds.\n </para>\n </listitem>\n \n@@ -515,7 +515,7 @@ Allow parallel query when in SERIALIZABLE ISOLATION MODE (Thomas Munro)\n </para>\n \n <para>\n-Previously parallelism was disabled when in this mode.\n+Previously, parallelism was disabled when in this mode.\n </para>\n </listitem>\n \n@@ -793,7 +793,7 @@ Store statistics using the collation defined for each column (Tom Lane)\n </para>\n \n <para>\n-Previously the default collation was used for all statistics storage. This potentially gives better optimizer behavior for columns with non-default collations.\n+Previously, the default collation was used for all statistics storage. 
This potentially gives better optimizer behavior for columns with non-default collations.\n </para>\n </listitem>\n \n@@ -1532,7 +1532,7 @@ Allow the streaming replication timeout to be set per connection (Tsunakawa Taka\n </para>\n \n <para>\n-Previously this could only be set cluster-wide.\n+Previously, this could only be set cluster-wide.\n </para>\n </listitem>\n \n@@ -1825,7 +1825,7 @@ Use all column names when creating default foreign key constraint names (Peter E\n </para>\n \n <para>\n-Previously only the first column name was used.\n+Previously, only the first column name was used.\n </para>\n </listitem>\n \n@@ -2329,7 +2329,7 @@ Allow control of log file rotation via pg_ctl (Kyotaro Horiguchi, Alexander Kuzm\n </para>\n \n <para>\n-Previously this was only possible via an SQL function or a process signal.\n+Previously, this was only possible via an SQL function or a process signal.\n </para>\n </listitem>\n \n@@ -2697,7 +2697,7 @@ Properly honor WITH CHECK OPTION on views that reference postgres_fdw tables (Et\n \n <para>\n While CHECK OPTIONs on postgres_fdw tables are ignored (because the reference is foreign), views on such tables are considered local, so this release enforces CHECK\n-OPTIONs on them. Previously only INSERTs and UPDATEs with RETURNING clauses that returned CHECK OPTION values were validated.\n+OPTIONs on them. Previously, only INSERTs and UPDATEs with RETURNING clauses that returned CHECK OPTION values were validated.\n </para>\n </listitem>",
"msg_date": "Thu, 9 May 2019 19:13:35 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: pg12 release notes"
},
{
"msg_contents": "On Thu, May 09, 2019 at 08:08:53PM -0400, Bruce Momjian wrote:\n> On Wed, May 8, 2019 at 03:32:04PM -0500, Justin Pryzby wrote:\n> > I think these should also be mentioned?\n> > \n> > f7cb284 Add plan_cache_mode setting\n> > 387a5cf Add pg_dump --on-conflict-do-nothing option.\n> \n> Applied patch attached, docs updated:\n\nThanks, comments:\n\n> +Author: Peter Eisentraut <peter_e@gmx.net>\n> +2018-07-16 [f7cb2842b] Add plan_cache_mode setting\n> +-->\n> +\n> +<para>\n> +Allow contol over when generic plans are used for prepared statements (Pavel Stehule)\n\ncontrol\n\n> +<para>\n> +The server variable plan_cache_mode enables this control.\n\n\"This is controlled by the plan_cache_mode parameter\".\n\n> Author: Tom Lane <tgl@sss.pgh.pa.us>\n> 2018-12-30 [b5415e3c2] Support parameterized TidPaths.\n> Author: Tom Lane <tgl@sss.pgh.pa.us>\n> @@ -2456,6 +2471,21 @@ Add --exclude-database option to pg_dumpall (Andrew Dunstan)\n> \n> <listitem>\n> <!--\n> +Author: Thomas Munro <tmunro@postgresql.org>\n> +2018-07-13 [387a5cfb9] Add pg_dump - -on-conflict-do-nothing option.\n> +-->\n> +\n> +<para>\n> +Allow restore of INSERT statements to skip rows which would cause conflicts (Surafel Temesgen)\n> +</para>\n\nrestore *using* INSERT statements ?\ncause \"unique index\" conflicts ?\n\nJustin\n\n\n",
"msg_date": "Thu, 9 May 2019 19:19:59 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: pg12 release notes"
},
{
"msg_contents": "On Thu, May 9, 2019 at 07:19:59PM -0500, Justin Pryzby wrote:\n> > +Author: Thomas Munro <tmunro@postgresql.org>\n> > +2018-07-13 [387a5cfb9] Add pg_dump - -on-conflict-do-nothing option.\n> > +-->\n> > +\n> > +<para>\n> > +Allow restore of INSERT statements to skip rows which would cause conflicts (Surafel Temesgen)\n> > +</para>\n> \n> restore *using* INSERT statements ?\n> cause \"unique index\" conflicts ?\n\nI am not sure \"unique index\" helps since most people think of ON\nCONFLICT and not its implementation.\n\nApplied patch attached, URL updated:\n\n\thttp://momjian.us/pgsql_docs/build.html\n\n-- \n Bruce Momjian <bruce@momjian.us> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n+ As you are, so once was I. As I am, so you will be. +\n+ Ancient Roman grave inscription +",
"msg_date": "Thu, 9 May 2019 20:38:39 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: pg12 release notes"
},
{
"msg_contents": "On Thu, May 9, 2019 at 07:13:35PM -0500, Justin Pryzby wrote:\n> On Thu, May 09, 2019 at 07:35:18PM -0400, Bruce Momjian wrote:\n> > I have made your other changes, with adjustment, patch attached.\n> > \n> > The results are here:\n> > \n> > \thttp://momjian.us/pgsql_docs/release-12.html\n> \n> Thanks\n\nThese were all very helpful. I adjusted your changes to create the\nattached applied patch. URL updated:\n\n\thttp://momjian.us/pgsql_docs/build.html\n\n-- \n Bruce Momjian <bruce@momjian.us> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n+ As you are, so once was I. As I am, so you will be. +\n+ Ancient Roman grave inscription +",
"msg_date": "Thu, 9 May 2019 21:02:51 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: pg12 release notes"
},
{
"msg_contents": "On Thu, May 09, 2019 at 09:02:51PM -0400, Bruce Momjian wrote:\n> These were all very helpful. I adjusted your changes to create the\n> attached applied patch. URL updated:\n\nThanks.\n\n> -Allow more comparisons with information_schema text columns to use indexes (Tom Lane)\n> +Allow more use of indexes for text columns comparisons with information_schema columns (Tom Lane)\n\nI think \"columns\" should not be plural..but it could be better still:\n\n|Allow use of indexes for more comparisons with text columns of information_schema (Tom Lane)\n\nRegarding this proposed change of mine:\n-Btree indexes pg_upgraded from previous releases will not have this ordering. This slightly reduces the maximum length of indexed values.\n+Btree indexes pg_upgraded from previous releases will not have this ordering. This slightly reduces the maximum permitted length of indexed values.\n\nI think \"permitted\" is important, since otherwise it sounds like maybe for\nwhatever values are being indexed, their maximum length is reduced by this\npatch. If you overthink it, you could decide that maybe that's happening due\nto use of prefix/suffix truncation or something ..\n\nShould this one be listed twice ? I tried to tell if it was intentional but\ncouldn't decide..\n249d64999 Add support TCP user timeout in libpq and the backend se\n\nJustin\n\n\n",
"msg_date": "Thu, 9 May 2019 20:34:49 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: pg12 release notes"
},
{
"msg_contents": "On Thu, May 9, 2019 at 6:03 PM Bruce Momjian <bruce@momjian.us> wrote:\n> These were all very helpful. I adjusted your changes to create the\n> attached applied patch. URL updated:\n\nI noticed that the compatibility note for Andrew Gierth's RYU floating\npoint patch seems to simply say why the feature is useful. Shouldn't\nit be listed separately, and its impact on users upgrading be listed\nhere instead?\n\nSeparately, please find attached suggested changes for items I was\ninvolved in. I have attempted to explain them in a way that makes\ntheir relevance to users clearer. Even if you don't end up using my\nwording, you should still change the attribution along the same lines\nas the patch.\n\nAlso, I added a compatibility note for the new version of nbtree,\nwhich revises the \"1/3 of a page\" restriction downwards very slightly\n(by 8 bytes). FWIW, I think it's very unlikely that anyone will be\naffected, because tuples that are that wide are already compressed in\nalmost all cases -- it seems like it would be hard to be just at the\nedge of the limit already.\n\nThanks\n-- \nPeter Geoghegan",
"msg_date": "Thu, 9 May 2019 19:10:43 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: pg12 release notes"
},
{
"msg_contents": "On Fri, 10 May 2019 at 12:08, Bruce Momjian <bruce@momjian.us> wrote:\n> > 17f206f Set pg_class.relhassubclass for partitioned indexes\n>\n> I need help with this one. I know the system column existed in previous\n> releases, so how is it different now? Do we document system table\n> changes that are implementation-behavior in the release notes? Usually\n> we don't.\n\nThis appears to be fixing something that likely should have been done\nin PG11, where partitioned indexes were added. Originally the column\nwas for inheritance parent tables, then later used for partitioned\ntables. It seems partitioned indexes just overlooked setting it to\ntrue in PG11 and this commit fixed that. Of course, backporting that\nfix wouldn't be very useful for partitioned indexes that were already\ncreated, so it was a master only change.\n\n-- \n David Rowley http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n\n",
"msg_date": "Fri, 10 May 2019 15:18:59 +1200",
"msg_from": "David Rowley <david.rowley@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: pg12 release notes"
},
{
"msg_contents": "On Fri, 10 May 2019 at 13:03, Bruce Momjian <bruce@momjian.us> wrote:\n>\n> On Thu, May 9, 2019 at 07:13:35PM -0500, Justin Pryzby wrote:\n> > On Thu, May 09, 2019 at 07:35:18PM -0400, Bruce Momjian wrote:\n> > > I have made your other changes, with adjustment, patch attached.\n> > >\n> > > The results are here:\n> > >\n> > > http://momjian.us/pgsql_docs/release-12.html\n\nHi Bruce,\n\nJust a question about the item: \"Allow IN comparisons with arrays to\nuse IS NOT NULL partial indexes more frequently (Tom Lane)\"\n\n From what I can tell this must refer to 65ce07e0202f. If so, I think\nJames Coleman should be the author.\n\n-- \n David Rowley http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n\n",
"msg_date": "Fri, 10 May 2019 15:34:15 +1200",
"msg_from": "David Rowley <david.rowley@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: pg12 release notes"
},
{
"msg_contents": "David Rowley <david.rowley@2ndquadrant.com> writes:\n> Just a question about the item: \"Allow IN comparisons with arrays to\n> use IS NOT NULL partial indexes more frequently (Tom Lane)\"\n\n> From what I can tell this must refer to 65ce07e0202f.\n\nYou can tell for sure by looking into the SGML comments in\nrelease-12.sgml:\n\n<!--\nAuthor: Tom Lane <tgl@sss.pgh.pa.us>\n2019-02-20 [e04a3905e] Improve planner's understanding of strictness of type co\n-->\n\n<para>\nAllow IS NOT NULL with mis-matching types to use partial indexes more frequently (Tom Lane)\n</para>\n\n> If so, I think James Coleman should be the author.\n\n... and yeah, James should get the credit. But there's more wrong with\nthe summary than that, because I don't think this was about mismatched\ntypes particularly. The real motivation was to avoid failing to prove\nthe usability of WHERE-x-IS-NOT-NULL partial indexes for IN clauses with\nmore than 100 elements.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 09 May 2019 23:45:01 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: pg12 release notes"
},
{
"msg_contents": "On 2019/05/10 12:18, David Rowley wrote:\n> On Fri, 10 May 2019 at 12:08, Bruce Momjian <bruce@momjian.us> wrote:\n>>> 17f206f Set pg_class.relhassubclass for partitioned indexes\n>>\n>> I need help with this one. I know the system column existed in previous\n>> releases, so how is it different now? Do we document system table\n>> changes that are implementation-behavior in the release notes? Usually\n>> we don't.\n> \n> This appears to be fixing something that likely should have been done\n> in PG11, where partitioned indexes were added.\n\nThat's true. We (Michael and I) felt the need to do this change, because\nit allowed the pg_partition_tree() code (which is also new in v12) to use\nthe same infrastructure for both partitioned tables and indexes; checking\nthe relhassubclass flag allows to short-circuit scanning pg_inherits to\nfind out that there are no children.\n\n> Originally the column\n> was for inheritance parent tables, then later used for partitioned\n> tables. It seems partitioned indexes just overlooked setting it to\n> true in PG11 and this commit fixed that. Of course, backpacking that\n> fix wouldn't be very useful for partitioned indexes that were already\n> created, so it was a master only change.\n\nThere was no discussion on whether or not to back-patch this to v11, but\nthe above makes sense.\n\nRegarding whether or not this commit needs a release note mention, I'm not\nthat sure but maybe we should if Justin thinks it's useful information.\n\nThanks,\nAmit\n\n\n\n",
"msg_date": "Fri, 10 May 2019 12:45:54 +0900",
"msg_from": "Amit Langote <Langote_Amit_f8@lab.ntt.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: pg12 release notes"
},
{
"msg_contents": "On Fri, 10 May 2019 at 15:45, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> <!--\n> Author: Tom Lane <tgl@sss.pgh.pa.us>\n> 2019-02-20 [e04a3905e] Improve planner's understanding of strictness of type co\n> -->\n>\n> <para>\n> Allow IS NOT NULL with mis-matching types to use partial indexes more frequently (Tom Lane)\n> </para>\n>\n> > If so, I think James Coleman should be the author.\n>\n> ... and yeah, James should get the credit. But there's more wrong with\n> the summary than that, because I don't think this was about mismatched\n> types particularly. The real motivation was to avoid failing to prove\n> the usability of WHERE-x-IS-NOT-NULL partial indexes for IN clauses with\n> more than 100 elements.\n\nI think you might be mixing up two items. I'm talking about:\n\n<para>\nAllow IN comparisons with arrays to use <literal>IS NOT NULL</literal>\npartial indexes more frequently (Tom Lane)\n</para>\n\nto which the sgml references 65ce07e02.\n\n-- \n David Rowley http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n\n",
"msg_date": "Fri, 10 May 2019 15:57:11 +1200",
"msg_from": "David Rowley <david.rowley@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: pg12 release notes"
},
{
"msg_contents": "David Rowley <david.rowley@2ndquadrant.com> writes:\n> I think you might be mixing up two items. I'm talking about:\n\n> <para>\n> Allow IN comparisons with arrays to use <literal>IS NOT NULL</literal>\n> partial indexes more frequently (Tom Lane)\n> </para>\n\n> to which the sgml references 65ce07e02.\n\nWups, sorry, I was talking about 65ce07e02 also, but I managed to\ncopy-and-paste the wrong bit of release-12.sgml first :-(\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 10 May 2019 00:02:11 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: pg12 release notes"
},
{
"msg_contents": "On Thu, May 09, 2019 at 08:08:53PM -0400, Bruce Momjian wrote:\n> On Wed, May 8, 2019 at 03:32:04PM -0500, Justin Pryzby wrote:\n> > I noticed you added release notes at bdf595adbca195fa54a909c74a5233ebc30641a1,\n> > thanks for writing them.\n> > \n> > I reviewed notes; find proposed changes attached+included.\n> > \n> > I think these should also be mentioned?\n> > \n> > f7cb284 Add plan_cache_mode setting\n> > 387a5cf Add pg_dump --on-conflict-do-nothing option.\n> \n> I am confused how I missed these. There is only one commit between\n> them, and I suspect I deleted them by accident. I hope those are the\n> only ones.\n\nI'm rechecking my list from last month. What about these ?\n\n> c076f3d Remove pg_constraint.conincluding\n> bd09503 Increase the default vacuum_cost_limit from 200 to 2000\n\nI don't think this one is needed but please check:\n> 1a990b2\n> The API of the function BufFileSize() is changed by this commit, despite\n\nJustin\n\n\n",
"msg_date": "Thu, 9 May 2019 23:52:46 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: pg12 release notes"
},
{
"msg_contents": "On Fri, May 10, 2019 at 12:45:54PM +0900, Amit Langote wrote:\n> On 2019/05/10 12:18, David Rowley wrote:\n> > On Fri, 10 May 2019 at 12:08, Bruce Momjian <bruce@momjian.us> wrote:\n> >>> 17f206f Set pg_class.relhassubclass for partitioned indexes\n> >>\n> >> I need help with this one. I know the system column existed in previous\n> >> releases, so how is it different now? Do we document system table\n> >> changes that are implementation-behavior in the release notes? Usually\n> >> we don't.\n> > \n> > This appears to be fixing something that likely should have been done\n> > in PG11, where partitioned indexes were added.\n...\n\n> > Originally the column\n> > was for inheritance parent tables, then later used for partitioned\n> > tables. It seems partitioned indexes just overlooked setting it to\n> > true in PG11 and this commit fixed that. Of course, backpacking that\n> > fix wouldn't be very useful for partitioned indexes that were already\n> > created, so it was a master only change.\n...\n\n> Regarding whether or not this commit needs a release note mention, I'm not\n> that sure but maybe we should if Justin thinks it's useful information.\n\nI don't know for sure and I don't feel strongly either way.\n\nLast month, I looked through the list of commits to master ([0] rather than\nusing pgsql/src/tools/git_changelog), and made a list of commits I thought\nshould probably be mentioned. I sent to Bruce, in case he could make use of\nit, and just now triple checked that he'd included all the stuff I could see\nwas important. 
Added/changed/removed interfaces (programs, libraries, etc),\nGUCs, catalogs were all on my list (which is what caused me to include\nindex_get_partition, which I agree shouldn't actually be in the relnotes).\n\nBehavior changes should sometimes be there too, but there are\ninternal/implementation changes which shouldn't.\n\nJustin\n\n[0] git log --oneline --cherry-pick origin/REL_11_STABLE...origin/master\n\nOn Fri, Apr 12, 2019 at 02:55:38AM -0500, Justin Pryzby wrote:\n> I was thinking of starting to create release notes ; is it reasonable and\n> helpful if I put together an initial, 0th order notes document ?\n> \n> I just spent a good while identifying the interesting commits from\n> here...although I'm sure I've missed some.\n> git log --oneline --cherry-pick origin/REL_11_STABLE...origin/master\n> \n> Highlights:\n> 428b260 Speed up planning when partitions can be pruned at plan time.\n> f56f8f8 Support foreign keys that reference partitioned tables\n> 7300a69 Add support for multivariate MCV lists\n> Progress reporting:\n> - 03f9e5c Report progress of REINDEX operations\n> - ab0dfc9 Report progress of CREATE INDEX operations\n> - 280e5f1 Add progress reporting to pg_checksums\n> - 6f97457 Add progress reporting for CLUSTER and VACUUM FULL.\n> \n> Features:\n> \n> \\psql:\n> 1c5d927 psql \\dP: list partitioned tables and indexes\n> 27f3dea psql: Add documentation URL to \\help output\n> 1af25ca Improve psql's \\d display of foreign key constraints\n> d3a5fc1 Show table access methods as such in psql's \\dA.\n> \n> \\GUCs:\n> 799e220 Log all statements from a sample of transactions\n> 88bdbd3 Add log_statement_sample_rate parameter\n> 475861b Add wal_recycle and wal_init_zero GUCs.\n> f1bebef Add shared_memory_type GUC.\n> 475861b Add wal_recycle and wal_init_zero GUCs.\n> f7cb284 Add plan_cache_mode setting\n> 1a83a80 Allow fractional input values for integer GUCs, and improve rounding logic.\n> \n> \\Other:\n> 119dcfa Add vacuum_truncate reloption.\n> 
7bac3ac Add a \"SQLSTATE-only\" error verbosity option to libpq and psql.\n> ea569d6 Add SETTINGS option to EXPLAIN, to print modified settings.\n> b0b39f7 GSSAPI encryption support\n> fc22b66 Generated columns\n> 5dc92b8 REINDEX CONCURRENTLY\n> 8edd0e7 Suppress Append and MergeAppend plan nodes that have a single child.\n> 280a408 Transaction chaining\n> ed308d7 Add options to enable and disable checksums in pg_checksums\n> 0f086f8 Add DNS SRV support for LDAP server discovery.\n> dd299df Make heap TID a tiebreaker nbtree index column.\n> => and others\n> 01bde4f Implement OR REPLACE option for CREATE AGGREGATE.\n> 72b6460 Partial implementation of SQL/JSON path language\n> c6c9474 Use condition variables to wait for checkpoints.\n> f2e4038 Support for INCLUDE attributes in GiST indexes\n> 6b9e875 Track block level checksum failures in pg_stat_database\n> 898e5e3 Allow ATTACH PARTITION with only ShareUpdateExclusiveLock.\n> 7e413a0 pg_dump: allow multiple rows per insert\n> 8586bf7 tableam: introduce table AM infrastructure.\n> ac88d29 Avoid creation of the free space map for small heap relations.\n> 31f3817 Allow COPY FROM to filter data using WHERE conditions\n> 6260cc5 pgbench: add \\cset and \\gset commands\n> ca41030 Fix tablespace handling for partitioned tables\n> aa2ba50 Add CSV table output mode in psql.\n> 2dedf4d Integrate recovery.conf into postgresql.conf\n> 578b229 Remove WITH OIDS support, change oid catalog column visibility.\n> 9ccdd7f PANIC on fsync() failure.\n> 803b130 Add option SKIP_LOCKED to VACUUM and ANALYZE\n> ec74369 Implement \"pg_ctl logrotate\" command\n> \n> Interfaces:\n> b96f6b1 pg_partition_ancestors\n> a6da004 Add index_get_partition convenience function\n> f1d85aa Add support for hyperbolic functions, as well as log10().\n> 3677a0b Add pg_partition_root to display top-most parent of a partition tree\n> d5eec4e Add pg_partition_tree to display information about partitions\n> 1007465 Add pg_promote function\n> c481016 Add 
pg_ls_archive_statusdir function\n> 9cd92d1 Add pg_ls_tmpdir function\n> \n> 2970afa Add PQresultMemorySize function to report allocated size of a PGresult.\n> \n> 00d1e88 Add --min-xid-age and --min-mxid-age options to vacuumdb\n> 354e95d Add --disable-page-skipping and --skip-locked to vacuumdb\n> 2d34ad8 Add a --socketdir option to pg_upgrade.\n> 387a5cf Add pg_dump --on-conflict-do-nothing option.\n> 8a00b96 Add pg_rewind --no-sync\n> e0090c8 Add option -N/--no-sync to pg_checksums\n> f092de0 Add --exclude-database option to pg_dumpall\n> \n> 17f206f Set pg_class.relhassubclass for partitioned indexes\n> f94cec6 Include partitioned indexes to system view pg_indexes\n> b13a913 Add BKI_DEFAULT to pg_class.relrewrite\n> \n> 7fee252 Add timestamp of last received message from standby to pg_stat_replication\n> f60a0e9 Add more columns to pg_stat_ssl\n> 43cbeda Extend pg_stat_statements_reset to reset statistics specific to a particular user/db/query.\n> \n> grep for pg_ Add|remove|move|Compat|Break|Release|pq|--|\\\n> API pg_\n> \\.c \\.h\n> v1\n> \n> Compat:\n> 6dd263c Rename pg_verify_checksums to pg_checksums\n> 342cb65 Don't log incomplete startup packet if it's empty\n> 6ae578a Set fallback_application_name for a walreceiver to cluster_name\n> 689d15e Log PostgreSQL version number on startup\n> XXX 478cacb Ensure consistent name matching behavior in processSQLNamePattern().\n> bcbd940 Remove dynamic_shared_memory_type=none\n> cda6a8d Remove deprecated abstime, reltime, tinterval datatypes.\n> 2d10def Remove timetravel extension.\n> 96b00c4 Remove obsolete pg_constraint.consrc column\n> fe50382 Remove obsolete pg_attrdef.adsrc column\n> c076f3d Remove pg_constraint.conincluding\n> dd299df\n> The maximum allowed size of new tuples is reduced by an amount equal to\n> the space required to store an extra MAXALIGN()'d TID in a new high key\n> during leaf page splits. 
The user-facing definition of the \"1/3 of a\n> page\" restriction is already imprecise, and so does not need to be\n> revised. However, there should be a compatibility note in the v12\n> release notes.\n> 1a990b2\n> The API of the function BufFileSize() is changed by this commit, despite\n> cbccac3 Reduce the default value of autovacuum_vacuum_cost_delay to 2ms.\n> bd09503 Increase the default vacuum_cost_limit from 200 to 2000\n> 413ccaa pg_restore: Require \"-f -\" to mean stdout\n> \n> Perform:\n> fe28069 Scan GiST indexes in physical order during VACUUM.\n> c24dcd0 Use pg_pread() and pg_pwrite() for data files and WAL.\n> 0d5f05c Allow multi-inserts during COPY into a partitioned table\n> 959d00e Use Append rather than MergeAppend for scanning ordered partitions.\n> bbb96c3 Allow ALTER TABLE .. SET NOT NULL to skip provably unnecessary scans.\n> 3c59263 Avoid some table rewrites for ALTER TABLE .. SET DATA TYPE timestamp.\n> 3a769d8 pg_upgrade: Allow use of file cloning\n> bb16aba Enable parallel query with SERIALIZABLE isolation.\n> 2a63683 Add support for nearest-neighbor (KNN) searches to SP-GiST\n> \n> Source:\n> d9dd406 Require C99 (and thus MSCV 2013 upwards).\n> \n> Bugs:\n> 036f7d3 Avoid counting transaction stats for parallel worker cooperating transaction.\n> ? 251cf2e Fix minor deficiencies in XMLTABLE, xpath(), xmlexists()\n\n\n",
"msg_date": "Thu, 9 May 2019 23:57:30 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: pg12 release notes"
},
{
"msg_contents": "On Fri, 10 May 2019 at 16:52, Justin Pryzby <pryzby@telsasoft.com> wrote:\n> I'm rechecking my list from last month. What about these ?\n>\n> > c076f3d Remove pg_constraint.conincluding\n> > bd09503 Increase the default vacuum_cost_limit from 200 to 2000\n\nbd09503 was reverted in 52985e4fea7 and replaced by cbccac371, which\nis documented already by:\n\n<para>\nReduce the default value of\n<varname>autovacuum_vacuum_cost_delay</varname> to 2ms (Tom Lane)\n</para>\n\n-- \n David Rowley http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n\n",
"msg_date": "Fri, 10 May 2019 17:10:06 +1200",
"msg_from": "David Rowley <david.rowley@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: pg12 release notes"
},
{
"msg_contents": "On Fri, 10 May 2019 at 16:57, Justin Pryzby <pryzby@telsasoft.com> wrote:\n> > 8edd0e7 Suppress Append and MergeAppend plan nodes that have a single child.\n\nYou could say that I'm biased, but I think this should get a mention.\nIt's not just a matter of tidying up the plan by getting rid of nodes\nthat are not required, it allows plan shapes that were not possible\nbefore, for example, a parallel index scan on the index of a partition\nand the ability to not needlessly include a Materialize node in a\nMerge Join or Nested Loop Join to a partitioned table, when only 1\npartition survives pruning.\n\nI'd say wording along the lines of:\n\n* Allow the optimizer to no longer generate plans containing a single\nsub-node Append/MergeAppend node.\n\nThis allows more plan types to be considered.\n\n[...]\n\n\n> > Perform:\n> > 959d00e Use Append rather than MergeAppend for scanning ordered partitions.\n\nI also think this is worth a mention. The speedup can be quite large\nwhen the query involves a LIMIT clause, and I think it will apply\nfairly often. Most of the times I've seen partitioned tables in the wild\nthey were RANGE partitioned by a timestamp, or at least they were\ninheritance based tables partitioned by timestamp that could one day\nbe changed to a RANGE partitioned table.\n\nI'd say something like:\n\n* Allow the optimizer to exploit the ordering of RANGE and LIST\npartitioned tables when generating paths for partitioned tables.\n\nThis saves the optimizer from using a MergeAppend node to scan a\npartitioned table in order when an Append node will do.\n\n-- \n David Rowley http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n\n",
"msg_date": "Fri, 10 May 2019 17:51:09 +1200",
"msg_from": "David Rowley <david.rowley@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: pg12 release notes"
},
{
"msg_contents": "Hi David,\n\nOn 2019/05/10 14:51, David Rowley wrote:\n> On Fri, 10 May 2019 at 16:57, Justin Pryzby <pryzby@telsasoft.com> wrote:\n>>> 959d00e Use Append rather than MergeAppend for scanning ordered partitions.\n> \n> I also think this is worth a mention. The speedup can be quite large\n> when the query involves a LIMIT clause, and I think it will apply\n> fairly often. Most of the times I've seen partitioned table the wild\n> they were RANGE partitioned by a timestamp, or at least they were\n> inheritance based tables partitioned by timestamp that could one day\n> be changed to a RANGE partitioned table.\n> \n> I'd say something like:\n> \n> * Allow the optimizer to exploit the ordering of RANGE and LIST\n> partitioned tables when generating paths for partitioned tables.\n> \n> This saves the optimizer from using MergeAppend node to scan a\n> partitioned table in order when an Append node will do.\n\nFWIW, I've asked [1] Bruce to mention this commit in its own release note\nitem. 
Currently, it's buried under pruning performance improvement item,\nlike this.\n\n<listitem>\n<!--\nAuthor: Tom Lane <tgl@sss.pgh.pa.us>\n2018-11-07 [c6e4133fa] Postpone calculating total_table_pages until after\npruni\nAuthor: Tom Lane <tgl@sss.pgh.pa.us>\n2018-11-15 [34c9e455d] Improve performance of partition pruning remapping\na lit\nAuthor: Alvaro Herrera <alvherre@alvh.no-ip.org>\n2018-11-16 [3f2393ede] Redesign initialization of partition routing structures\nAuthor: Robert Haas <rhaas@postgresql.org>\n2019-02-21 [9eefba181] Delay lock acquisition for partitions until we\nroute a t\nAuthor: Tom Lane <tgl@sss.pgh.pa.us>\n2019-03-30 [428b260f8] Speed up planning when partitions can be pruned at\nplan\nAuthor: Tom Lane <tgl@sss.pgh.pa.us>\n2019-04-05 [959d00e9d] Use Append rather than MergeAppend for scanning\nordered\n-->\n\n<para>\nImprove performance of pruning many partitions (Amit Langote, David\nRowley, Tom Lane, Álvaro Herrera)\n</para>\n\n<para>\nNow thousands of partitions can be pruned efficiently.\n</para>\n</listitem>\n\nThanks,\nAmit\n\n[1]\nhttps://www.postgresql.org/message-id/3f0333be-fd32-55f2-9817-5853a6bbd233%40lab.ntt.co.jp\n\n\n\n",
"msg_date": "Fri, 10 May 2019 14:59:11 +0900",
"msg_from": "Amit Langote <Langote_Amit_f8@lab.ntt.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: pg12 release notes"
},
{
"msg_contents": "On 2019-May-10, Amit Langote wrote:\n\n> On 2019/05/10 12:18, David Rowley wrote:\n> > On Fri, 10 May 2019 at 12:08, Bruce Momjian <bruce@momjian.us> wrote:\n> >>> 17f206f Set pg_class.relhassubclass for partitioned indexes\n> >>\n> >> I need help with this one. I know the system column existed in previous\n> >> releases, so how is it different now? Do we document system table\n> >> changes that are implementation-behavior in the release notes? Usually\n> >> we don't.\n\nI very much doubt that the change is relevant to the release notes.\n\n> > This appears to be fixing something that likely should have been done\n> > in PG11, where partitioned indexes were added.\n> \n> That's true. We (Michael and I) felt the need to do this change, because\n> it allowed the pg_partition_tree() code (which is also new in v12) to use\n> the same infrastructure for both partitioned tables and indexes; checking\n> the relhassubclass flag allows to short-circuit scanning pg_inherits to\n> find out that there are no children.\n\nI'm of two minds about backpatching that fix. In pg12 it makes sense to\nhave the fix, to allow the SRF functions to work correctly. Since we\ndon't use that flag for partitioned indexes anywhere in the backend in\npg11, it seems pretty useless to have it there. On the other hand, if\nany user tool is inspecting catalogs, it might fail to point out\ndescendants for partitioned indexes, if the user asked for a report...\nbut frankly I doubt there are any tools that do that, or users that care\nfor such a report, or even that that report exist ... and even if that\nreport existed, I doubt that it would optimize out the read of\npg_inherits by checking relhassubclass beforehand. \n\nStill, it's an inconsistency in pg11. 
I vote -0 to getting it\nbackpatched, mostly because it seems like more work than is warranted.\n(I think the work consists not only of testing that the backpatched\ncommit works correctly, but also documenting for 11.4 release notes how\nto fix existing catalogs after upgrading.)\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Fri, 10 May 2019 09:59:04 -0400",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: pg12 release notes"
},
{
"msg_contents": "Alvaro Herrera <alvherre@2ndquadrant.com> writes:\n> Still, it's an inconsistency in pg11. I vote -0 to getting it\n> backpatched, mostly because it seems like more work than is warranted.\n> (I think the work consists not only of testing that the backpatched\n> commit works correctly, but also documenting for 11.4 release notes how\n> to fix existing catalogs after upgrading.)\n\nYeah, I agree. Even if we back-patched a code change, nothing could\nrely on relhassubclass for this purpose in a v11 database, because\nof not knowing whether the user actually bothered to manually update\npre-existing indexes' entries. Better to know that it doesn't work\nthan to be unsure if it does.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 10 May 2019 10:18:23 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: pg12 release notes"
},
{
"msg_contents": "On Thu, May 9, 2019 at 08:34:49PM -0500, Justin Pryzby wrote:\n> On Thu, May 09, 2019 at 09:02:51PM -0400, Bruce Momjian wrote:\n> > These were all very helpful. I adjusted your changes to create the\n> > attached applied patch. URL updated:\n> \n> Thanks.\n> \n> > -Allow more comparisons with information_schema text columns to use indexes (Tom Lane)\n> > +Allow more use of indexes for text columns comparisons with information_schema columns (Tom Lane)\n> \n> I think \"columns\" should not be plural..but it could be better still:\n\nI now realize \"columns\" is not necessary, so I removed it.\n\n> |Allow use of indexes for more comparisons with text columns of information_schema (Tom Lane)\n> \n> Regarding this proposed change of mine:\n> -Btree indexes pg_upgraded from previous releases will not have this ordering. This slightly reduces the maximum length of indexed values.\n> +Btree indexes pg_upgraded from previous releases will not have this ordering. This slightly reduces the maximum permitted length of indexed values.\n> \n> I think \"permitted\" is important, since otherwise it sounds like maybe for\n> whatever values are being indexed, their maximum length is reduced by this\n> patch. If you overthink it, you could decide that maybe that's happening due\n> to use of prefix/suffix truncation or something ..\n\nAgreed. I changed it to \"maximum-allowed length\". I also reordered the\nparagraph.\n\n> Should this one be listed twice ? I tried to tell if it was intentional but\n> couldn't decide..\n> 249d64999 Add support TCP user timeout in libpq and the backend se\n\nOne is a server variable, the other a libpq option.\n\nApplied patch attached.\n\n-- \n Bruce Momjian <bruce@momjian.us> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n+ As you are, so once was I. As I am, so you will be. +\n+ Ancient Roman grave inscription +",
"msg_date": "Fri, 10 May 2019 20:32:31 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: pg12 release notes"
},
{
"msg_contents": "On Thu, May 9, 2019 at 07:10:43PM -0700, Peter Geoghegan wrote:\n> On Thu, May 9, 2019 at 6:03 PM Bruce Momjian <bruce@momjian.us> wrote:\n> > These were all very helpful. I adjusted your changes to create the\n> > attached applied patch. URL updated:\n> \n> I noticed that the compatibility note for Andrew Gierth's RYU floating\n> point patch seems to simply say why the feature is useful. Shouldn't\n> it be listed separately, and its impact on users upgrading be listed\n> here instead?\n\nThe text is now in the incompatibility section:\n\n\thttp://momjian.us/pgsql_docs/release-12.html\n\n\tAvoid performing unnecessary rounding of REAL and DOUBLE PRECISION\n\tvalues (Andrew Gierth)\n\t\n\tThis dramatically speeds up processing of floating-point values, though\n\ttrailing digits are display slightly differently. Users who wish to have\n\t------------------------------------------------\n\toutput that is rounded can set extra_float_digits=0. \n\nDo I need more?\n\n> Separately, please find attached suggested changes for items I was\n> involved in. I have attempted to explain them in a way that makes\n> their relevance to users clearer. Even if you don't end up using my\n> wording, you should still change the attribution along the same lines\n> as the patch.\n> \n> Also, I added a compatibility note for the new version of nbtree,\n> which revises the \"1/3 of a page\" restriction downwards very slightly\n> (by 8 bytes). FWIW, I think it's very unlikely that anyone will be\n> affected, because tuples that are that wide are already compressed in\n> almost all cases -- it seems like it would be hard to be just at the\n> edge of the limit already.\n\nI have that:\n\n\tHave new btree indexes sort duplicate index entries in heap-storage\n\torder (Peter Geoghegan)\n\t\n\tThis slightly reduces the maximum-allowed length of indexed values.\n\t-------------------------------------------------------------------\n\tIndexes pg_upgraded from previous releases will not have this ordering. \n\nI don't think more details are really needed.\n\n> +</listitem>\n> +\n> +<listitem>\n> +<!--\n> +Author: Peter Geoghegan <pg@bowt.ie>\n> +2019-03-20 [dd299df8] Make heap TID a tiebreaker nbtree index column.\n> +-->\n> +\n> +<para>\n> + Lower the limit on the size of new B-tree index tuples by 8 bytes\n> + (Peter Geoghegan)\n> +</para>\n> +\n> +<para>\n> + The definition of the \"1/3 of a page\" restriction on new B-tree\n> + entries has been revised to account for the possible overhead of\n> + storing table TIDs in branch page keys. Indexes in databases that\n> + are migrated using pg_upgrade are not affected, unless and until\n> + they are reindexed.\n> +</para>\n> </listitem>\n\nSee above, already mentioned.\n\n> -Improve speed of btree index insertions (Alexander Korotkov, Peter Geoghegan)\n> + Don't re-lock B-Tree leaf pages while inserting a new entry (Alexander Korotkov)\n\nWhat we have already seems like enough detail:\n\n\tImprove speed of btree index insertions (Alexander Korotkov,\n\tPeter Geoghegan) \n\nLocking speed seems like an implementation detail.\n\n> +<para>\n> + Make B-tree index keys unique by treating table TID as a tiebreaker\n> + column internally (Peter Geoghegan, Heikki Linnakangas)\n> </para>\n> \n> <para>\n> - LOOKUP, INDEX CLEANUP IMPROVEMENTS?\n> + The new approach has more predictable performance characteristics\n> + with indexes that have many duplicate entries, particularly when\n> + there are <command>DELETE</command>s or <command>UPDATE</command>s\n> + that affect a large number of contiguous table rows.\n\nYou have given me very good detail, so the new text is:\n\n\tImprove speed of btree index insertions (Alexander Korotkov, Peter\n\tGeoghegan)\n\t\n\tThe new code improves the space-efficiency of page splits, reduces\n\tlocking\n\toverhead, and gives better performance for <command>UPDATE</command>s\n\tand <command>DELETE</command>s on indexes with many duplicates.\n\n> +<para>\n> + Make more sophisticated decisions about where to split B-tree pages\n> + (Peter Geoghegan)\n> +</para>\n> +\n> +<para>\n> + The algorithm for choosing B-tree split points now considers the\n> + overall pattern of how new entries are inserted, which can result in\n> + more free space being available where it is likely to be needed.\n> </para>\n> </listitem>\n\nSee above.\n\n> -<para>\n> -Have new btree indexes sort duplicate index entries in heap-storage order (Peter Geoghegan)\n> -</para>\n> -\n> -<para>\n> -Btree indexes pg_upgraded from previous releases will not have this ordering. This slightly reduces the maximum length of indexed values.\n> -</para>\n> -</listitem>\n\nSee above.\n\n-- \n Bruce Momjian <bruce@momjian.us> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n+ As you are, so once was I. As I am, so you will be. +\n+ Ancient Roman grave inscription +\n\n\n",
"msg_date": "Fri, 10 May 2019 21:02:41 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: pg12 release notes"
},
{
"msg_contents": "On Fri, May 10, 2019 at 03:34:15PM +1200, David Rowley wrote:\n> On Fri, 10 May 2019 at 13:03, Bruce Momjian <bruce@momjian.us> wrote:\n> >\n> > On Thu, May 9, 2019 at 07:13:35PM -0500, Justin Pryzby wrote:\n> > > On Thu, May 09, 2019 at 07:35:18PM -0400, Bruce Momjian wrote:\n> > > > I have made your other changes, with adjustment, patch attached.\n> > > >\n> > > > The results are here:\n> > > >\n> > > > http://momjian.us/pgsql_docs/release-12.html\n> \n> Hi Bruce,\n> \n> Just a question about the item: \"Allow IN comparisons with arrays to\n> use IS NOT NULL partial indexes more frequently (Tom Lane)\"\n> \n> >From what I can tell this must refer to 65ce07e0202f. If so, I think\n> James Coleman should be the author.\n\nYou are 100% correct, my apologies, fixed.\n\n-- \n Bruce Momjian <bruce@momjian.us> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n+ As you are, so once was I. As I am, so you will be. +\n+ Ancient Roman grave inscription +\n\n\n",
"msg_date": "Fri, 10 May 2019 21:06:52 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: pg12 release notes"
},
{
"msg_contents": "On Fri, May 10, 2019 at 6:02 PM Bruce Momjian <bruce@momjian.us> wrote:\n> Have new btree indexes sort duplicate index entries in heap-storage\n> order (Peter Geoghegan)\n>\n> This slightly reduces the maximum-allowed length of indexed values.\n> -------------------------------------------------------------------\n> Indexes pg_upgraded from previous releases will not have this ordering.\n>\n> I don't think more details are really needed.\n\n Whether or not you include more details is not what I care about here\n-- I *agree* that this is insignificant.\n\nI think that the maximum allowed length thing should be listed in the\ncompatibility section as a formality -- not alongside the point that\nthe feature is listed. I had better be correct in that general\nassessment, because it's not possible to opt out of the restriction\nwithin CREATE INDEX. That was my point here.\n\n> What we have already seems like enough detail:\n>\n> Improve speed of btree index insertions (Alexander Korotkov,\n> Peter Geoghegan)\n\nWhy?\n\nI think that it's unfortunate that Heikki wasn't given an authorship\ncredit here, as proposed in my patch. I think that he deserves it.\n\n> Locking speed seems like an implementation detail.\n\nThey're all implementations details, though. If that was the actual\nstandard, then few or no \"indexing\" items would be listed.\n\n> You have given me very good detail, so the new text is:\n>\n> Improve speed of btree index insertions (Alexander Korotkov, Peter\n> Geoghegan)\n>\n> The new code improves the space-efficiency of page splits, reduces\n> locking\n> overhead, and gives better performance for <command>UPDATE</command>s\n> and <command>DELETE</command>s on indexes with many duplicates.\n\nI can live with that.\n\nI'm not trying to be difficult -- reasonable people can disagree on\nthe level of detail that is appropriate (they can even have radically\ndifferent ideas about where to draw the line). And, I would expect it\nto be a little arbitrary under the best of circumstances, no matter\nwho was tasked with writing the release notes. However, I think the\nprocess would be easier and more effective if you took more time to\nunderstand the concerns behind the feedback you get. There are\nsometimes important nuances.\n\nAs things stand, I feel like the question of authorship and credit\ncomplicates the question of making the release notes useful, which is\nunfortunate. (Not sure what can be done about that.)\n\n--\nPeter Geoghegan\n\n\n",
"msg_date": "Fri, 10 May 2019 18:53:55 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: pg12 release notes"
},
{
"msg_contents": ">>>>> \"Bruce\" == Bruce Momjian <bruce@momjian.us> writes:\n\n >> I noticed that the compatibility note for Andrew Gierth's RYU\n >> floating point patch seems to simply say why the feature is useful.\n >> Shouldn't it be listed separately, and its impact on users upgrading\n >> be listed here instead?\n\n Bruce> The text is now in the incompatibility section:\n\n Bruce> http://momjian.us/pgsql_docs/release-12.html\n\n Bruce> Avoid performing unnecessary rounding of REAL and DOUBLE PRECISION\n Bruce> values (Andrew Gierth)\n\t\n Bruce> This dramatically speeds up processing of floating-point values, though\n Bruce> trailing digits are display slightly differently. Users who wish to have\n Bruce> ------------------------------------------------\n Bruce> output that is rounded can set extra_float_digits=0. \n\n Bruce> Do I need more?\n\nThat isn't quite how I'd have worded it, but I'm not sure what the best\nwording is. Something like:\n\n * Output REAL and DOUBLE PRECISION values in shortest-exact format by\n default, and change the behavior of extra_float_digits\n\n Previously, float values were output rounded to 6 or 15 decimals by\n default, with the number of decimals adjusted by extra_float_digits.\n The previous rounding behavior is no longer the default, and is now\n done only if extra_float_digits is set to zero or less; if the value\n is greater than zero (which it is by default), a shortest-precise\n representation is output (for a substantial performance improvement).\n This representation preserves the exact binary value when correctly\n read back in, even though the trailing digits will usually differ\n from the output generated by previous versions when\n extra_float_digits=3.\n\nBut I'm not 100% happy with this wording and am entirely open to\nsuggestions for improvement.\n\n(In passing I've spotted some related typos in the body of the docs\nwhich are probably my fault, I'll fix those.)\n\n-- \nAndrew (irc:RhodiumToad)\n\n\n",
"msg_date": "Sat, 11 May 2019 03:06:40 +0100",
"msg_from": "Andrew Gierth <andrew@tao11.riddles.org.uk>",
"msg_from_op": false,
"msg_subject": "Re: pg12 release notes"
},
{
"msg_contents": "On Fri, May 10, 2019 at 06:53:55PM -0700, Peter Geoghegan wrote:\n> On Fri, May 10, 2019 at 6:02 PM Bruce Momjian <bruce@momjian.us> wrote:\n> > Have new btree indexes sort duplicate index entries in heap-storage\n> > order (Peter Geoghegan)\n> >\n> > This slightly reduces the maximum-allowed length of indexed values.\n> > -------------------------------------------------------------------\n> > Indexes pg_upgraded from previous releases will not have this ordering.\n> >\n> > I don't think more details are really needed.\n> \n> Whether or not you include more details is not what I care about here\n> -- I *agree* that this is insignificant.\n> \n> I think that the maximum allowed length thing should be listed in the\n> compatibility section as a formality -- not alongside the point that\n> the feature is listed. I had better be correct in that general\n> assessment, because it's not possible to opt out of the restriction\n> within CREATE INDEX. That was my point here.\n\nWell, we can move the entire item up to the incompatibility section, but\nthat seems unbalanced since the incompatibility is so small relative to\nthe entire item, and it is rare, as you mentioned. It also seems odd to\ncreate a stand-alone incompatibility item that really is part of a later\nitem already in the release notes.\n\n> > What we have already seems like enough detail:\n> >\n> > Improve speed of btree index insertions (Alexander Korotkov,\n> > Peter Geoghegan)\n> \n> Why?\n> \n> I think that it's unfortunate that Heikki wasn't given an authorship\n> credit here, as proposed in my patch. I think that he deserves it.\n\nI did not notice that change. I have added him.\n\n> > Locking speed seems like an implementation detail.\n> \n> They're all implementations details, though. If that was the actual\n> standard, then few or no \"indexing\" items would be listed.\n> \n> > You have given me very good detail, so the new text is:\n> >\n> > Improve speed of btree index insertions (Alexander Korotkov, Peter\n> > Geoghegan)\n> >\n> > The new code improves the space-efficiency of page splits, reduces\n> > locking\n> > overhead, and gives better performance for <command>UPDATE</command>s\n> > and <command>DELETE</command>s on indexes with many duplicates.\n> \n> I can live with that.\n> \n> I'm not trying to be difficult -- reasonable people can disagree on\n> the level of detail that is appropriate (they can even have radically\n> different ideas about where to draw the line). And, I would expect it\n> to be a little arbitrary under the best of circumstances, no matter\n> who was tasked with writing the release notes. However, I think the\n> process would be easier and more effective if you took more time to\n> understand the concerns behind the feedback you get. There are\n> sometimes important nuances.\n\nI think I have understood the nuances, as listed above --- I just don't\nagree with the solution.\n\n> As things stand, I feel like the question of authorship and credit\n> complicates the question of making the release notes useful, which is\n> unfortunate. (Not sure what can be done about that.)\n\nThat part I always need big help with, particularly with multiple\ncommits being lumped into a single release note entry. I just can't\ntell which commit is more important when knowing what order to list the\nnames. I have that problem big-time with the partition commits.\n\n-- \n Bruce Momjian <bruce@momjian.us> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n+ As you are, so once was I. As I am, so you will be. +\n+ Ancient Roman grave inscription +\n\n\n",
"msg_date": "Fri, 10 May 2019 22:11:01 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: pg12 release notes"
},
{
"msg_contents": "On Fri, May 10, 2019 at 7:11 PM Bruce Momjian <bruce@momjian.us> wrote:\n> > Whether or not you include more details is not what I care about here\n> > -- I *agree* that this is insignificant.\n\n> Well, we can move the entire item up to the incompatibility section, but\n> that seems unbalanced since the incompatibility is so small relative to\n> the entire item, and it is rare, as you mentioned. It also seems odd to\n> create a stand-alone incompatibility item that really is part of a later\n> item already in the release notes.\n\nThat is what we've always done. The list has always been very long,\nwith individual items that are on average totally insignificant.\nBreaking with that pattern in this instance will be confusing to\nusers.\n\n> I think I have understood the nuances, as listed above --- I just don't\n> agree with the solution.\n\nTo be clear, I don't expect you to agree with the solution.\n\nAnother thing that you missed from my patch is that bugfix commit\n9b10926263d831fac5758f1493c929a49b55669b shouldn't be listed.\n\n> > As things stand, I feel like the question of authorship and credit\n> > complicates the question of making the release notes useful, which is\n> > unfortunate. (Not sure what can be done about that.)\n>\n> That part I always need big help with, particularly with multiple\n> commits being lumped into a single release note entry. I just can't\n> tell which commit is more important when knowing what order to list the\n> names. I have that problem big-time with the partition commits.\n\nI understand that it's a difficult job. It's inevitable that there\nwill need to be corrections. You don't appear to be particularly\nreceptive to feedback, which makes the process harder for everyone --\neven in instances where you make the right call. I don't think that I\nam alone in seeing it this way.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Fri, 10 May 2019 19:31:21 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: pg12 release notes"
},
{
"msg_contents": "On Sat, May 11, 2019 at 03:06:40AM +0100, Andrew Gierth wrote:\n> >>>>> \"Bruce\" == Bruce Momjian <bruce@momjian.us> writes:\n> Bruce> Do I need more?\n> \n> That isn't quite how I'd have worded it, but I'm not sure what the best\n> wording is. Something like:\n> \n> * Output REAL and DOUBLE PRECISION values in shortest-exact format by\n> default, and change the behavior of extra_float_digits\n> \n> Previously, float values were output rounded to 6 or 15 decimals by\n> default, with the number of decimals adjusted by extra_float_digits.\n> The previous rounding behavior is no longer the default, and is now\n> done only if extra_float_digits is set to zero or less; if the value\n> is greater than zero (which it is by default), a shortest-precise\n> representation is output (for a substantial performance improvement).\n> This representation preserves the exact binary value when correctly\n> read back in, even though the trailing digits will usually differ\n> from the output generated by previous versions when\n> extra_float_digits=3.\n> \n> But I'm not 100% happy with this wording and am entirely open to\n> suggestions for improvement.\n\nI went with this paragraph:\n\n\tThis dramatically speeds up processing of floating-point values but\n\tcauses additional trailing digits to potentially be displayed. Users\n\twishing to have output that is rounded to match the previous behavior\n\tcan set <literal>extra_float_digits=0</literal>, which is no longer the\n\tdefault.\n\nImprovements?\n\n-- \n Bruce Momjian <bruce@momjian.us> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n+ As you are, so once was I. As I am, so you will be. +\n+ Ancient Roman grave inscription +\n\n\n",
"msg_date": "Sat, 11 May 2019 10:29:25 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: pg12 release notes"
},
{
"msg_contents": "On Fri, May 10, 2019 at 07:31:21PM -0700, Peter Geoghegan wrote:\n> On Fri, May 10, 2019 at 7:11 PM Bruce Momjian <bruce@momjian.us> wrote:\n> > > Whether or not you include more details is not what I care about here\n> > > -- I *agree* that this is insignificant.\n> \n> > Well, we can move the entire item up to the incompatibility section, but\n> > that seems unbalanced since the incompatibility is so small relative to\n> > the entire item, and it is rare, as you mentioned. It also seems odd to\n> > create a stand-alone incompatibility item that really is part of a later\n> > item already in the release notes.\n> \n> That is what we've always done. The list has always been very long,\n> with individual items that are on average totally insignificant.\n> Breaking with that pattern in this instance will be confusing to\n> users.\n\nI have no idea what you are suggesting above.\n\n> > I think I have understood the nuances, as listed above --- I just don't\n> > agree with the solution.\n> \n> To be clear, I don't expect you to agree with the solution.\n> \n> Another thing that you missed from my patch is that bugfix commit\n> 9b10926263d831fac5758f1493c929a49b55669b shouldn't be listed.\n\nWhy should it not be listed?\n\n-- \n Bruce Momjian <bruce@momjian.us> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n+ As you are, so once was I. As I am, so you will be. +\n+ Ancient Roman grave inscription +\n\n\n",
"msg_date": "Sat, 11 May 2019 10:36:13 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: pg12 release notes"
},
{
"msg_contents": "On Sat, May 11, 2019 at 10:36:13AM -0400, Bruce Momjian wrote:\n> On Fri, May 10, 2019 at 07:31:21PM -0700, Peter Geoghegan wrote:\n> > > I think I have understood the nuances, as listed above --- I just don't\n> > > agree with the solution.\n> > \n> > To be clear, I don't expect you to agree with the solution.\n> > \n> > Another thing that you missed from my patch is that bugfix commit\n> > 9b10926263d831fac5758f1493c929a49b55669b shouldn't be listed.\n> \n> Why should it not be listed?\n\nThinking some more, I try to aggregate all the feature addition commits\ntogether, but often skip \"cleanups\" of previous feature additions. Are\nyou saying that 9b10926263d831fac5758f1493c929a49b55669b is a cleanup,\nand not part of the feature addition? It was not clear to me of\n9b10926263d831fac5758f1493c929a49b55669b was a further enhancement\nmade possible by previous commits, or a \"fix\" for a previous commit.\n\n-- \n Bruce Momjian <bruce@momjian.us> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n+ As you are, so once was I. As I am, so you will be. +\n+ Ancient Roman grave inscription +\n\n\n",
"msg_date": "Sat, 11 May 2019 10:40:48 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: pg12 release notes"
},
{
"msg_contents": "On Sat, May 11, 2019 at 7:40 AM Bruce Momjian <bruce@momjian.us> wrote:\n> > > > I think I have understood the nuances, as listed above --- I just don't\n> > > > agree with the solution.\n> > >\n> > > To be clear, I don't expect you to agree with the solution.\n> > >\n> > > Another thing that you missed from my patch is that bugfix commit\n> > > 9b10926263d831fac5758f1493c929a49b55669b shouldn't be listed.\n> >\n> > Why should it not be listed?\n>\n> Thinking some more, I try to aggregate all the feature addition commits\n> together, but often skip \"cleanups\" of previous feature additions. Are\n> you saying that 9b10926263d831fac5758f1493c929a49b55669b is a cleanup,\n> and not part of the feature addition? It was not clear to me of\n> 9b10926263d831fac5758f1493c929a49b55669b was a further enhancement\n> made possible by previous commits, or a \"fix\" for a previous commit.\n\nYes. It's a bug fix that went in after feature freeze.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Sat, 11 May 2019 10:28:08 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: pg12 release notes"
},
{
"msg_contents": "On Sat, May 11, 2019 at 10:28:08AM -0700, Peter Geoghegan wrote:\n> On Sat, May 11, 2019 at 7:40 AM Bruce Momjian <bruce@momjian.us> wrote:\n> > > > > I think I have understood the nuances, as listed above --- I just don't\n> > > > > agree with the solution.\n> > > >\n> > > > To be clear, I don't expect you to agree with the solution.\n> > > >\n> > > > Another thing that you missed from my patch is that bugfix commit\n> > > > 9b10926263d831fac5758f1493c929a49b55669b shouldn't be listed.\n> > >\n> > > Why should it not be listed?\n> >\n> > Thinking some more, I try to aggregate all the feature addition commits\n> > together, but often skip \"cleanups\" of previous feature additions. Are\n> > you saying that 9b10926263d831fac5758f1493c929a49b55669b is a cleanup,\n> > and not part of the feature addition? It was not clear to me of\n> > 9b10926263d831fac5758f1493c929a49b55669b was a further enhancement\n> > made possible by previous commits, or a \"fix\" for a previous commit.\n> \n> Yes. It's a bug fix that went in after feature freeze.\n\nOK, commit removed.\n\n-- \n Bruce Momjian <bruce@momjian.us> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n+ As you are, so once was I. As I am, so you will be. +\n+ Ancient Roman grave inscription +\n\n\n",
"msg_date": "Sat, 11 May 2019 14:02:50 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: pg12 release notes"
},
{
"msg_contents": "On Sat, May 11, 2019 at 11:02 AM Bruce Momjian <bruce@momjian.us> wrote:\n> OK, commit removed.\n\nYou're mistaken -- nothing has been pushed to master in the last 3 hours.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Sat, 11 May 2019 11:47:42 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: pg12 release notes"
},
{
"msg_contents": "On Sat, May 11, 2019 at 11:47:42AM -0700, Peter Geoghegan wrote:\n> On Sat, May 11, 2019 at 11:02 AM Bruce Momjian <bruce@momjian.us> wrote:\n> > OK, commit removed.\n> \n> You're mistaken -- nothing has been pushed to master in the last 3 hours.\n\nI am not mistaken. I have removed it from my local copy, but have not\npushed it yet since I am adding links to the docs. It will be done\ntoday.\n\n-- \n Bruce Momjian <bruce@momjian.us> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n+ As you are, so once was I. As I am, so you will be. +\n+ Ancient Roman grave inscription +\n\n\n",
"msg_date": "Sat, 11 May 2019 15:26:52 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: pg12 release notes"
},
{
"msg_contents": "On Fri, May 10, 2019 at 05:10:06PM +1200, David Rowley wrote:\n> On Fri, 10 May 2019 at 16:52, Justin Pryzby <pryzby@telsasoft.com> wrote:\n> > I'm rechecking my list from last month. What about these ?\n> >\n> > > c076f3d Remove pg_constraint.conincluding\n> > > bd09503 Increase the default vacuum_cost_limit from 200 to 2000\n> \n> bd09503 was reverted in 52985e4fea7 and replaced by cbccac371, which\n> is documented already by:\n> \n> <para> Reduce the default value of <varname>autovacuum_vacuum_cost_delay</varname> to 2ms (Tom Lane) </para>\n\nRight, thanks.\n\nI suspect c076f3d should be included, though.\n\nAlso,\n\n|The many system tables with such columns will now display those columns with SELECT * by default. \nI think could be better:\n|SELECT * will now output those columns for the many system tables which have them.\n|(previously, the columns weren't shown unless explicitly selected).\n\nJustin\n\n\n",
"msg_date": "Mon, 13 May 2019 12:48:00 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: pg12 release notes"
},
{
"msg_contents": "On Mon, May 13, 2019 at 12:48:00PM -0500, Justin Pryzby wrote:\n> I suspect c076f3d should be included, though.\n\nbd47c4a9 has removed pg_constraint.conincluding from REL_11_STABLE as\nwell.\n--\nMichael",
"msg_date": "Tue, 14 May 2019 10:06:05 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: pg12 release notes"
},
{
"msg_contents": "On Mon, May 13, 2019 at 12:48:00PM -0500, Justin Pryzby wrote:\n> On Fri, May 10, 2019 at 05:10:06PM +1200, David Rowley wrote:\n> > On Fri, 10 May 2019 at 16:52, Justin Pryzby <pryzby@telsasoft.com> wrote:\n> > > I'm rechecking my list from last month. What about these ?\n> > >\n> > > > c076f3d Remove pg_constraint.conincluding\n> > > > bd09503 Increase the default vacuum_cost_limit from 200 to 2000\n> > \n> > bd09503 was reverted in 52985e4fea7 and replaced by cbccac371, which\n> > is documented already by:\n> > \n> > <para> Reduce the default value of <varname>autovacuum_vacuum_cost_delay</varname> to 2ms (Tom Lane) </para>\n> \n> Right, thanks.\n> \n> I suspect c076f3d should be included, though.\n\nThis commit was part of a set of patches that forced a catalog version\nchanges in PG 11 in early September, 2018.\n\n> |The many system tables with such columns will now display those columns with SELECT * by default. \n> I think could be better:\n> |SELECT * will now output those columns for the many system tables which have them.\n> |(previously, the columns weren't shown unless explicitly selected).\n\nGood idea. The new text is:\n\n\t<command>SELECT *</command> will now output those columns for the many\n\tsystem tables which have them. Previously, the columns had to be selected\n\texplicitly.\n\n-- \n Bruce Momjian <bruce@momjian.us> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n+ As you are, so once was I. As I am, so you will be. +\n+ Ancient Roman grave inscription +\n\n\n",
"msg_date": "Mon, 13 May 2019 22:55:01 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: pg12 release notes"
}
]
[
{
"msg_contents": "pgTap has a view that references pg_proc; to support introspection of functions and aggregates. That view references proisagg in versions < 11, and prokind in 11+. pgtap's make process understands how to handle this; modifying the creation scripts as necessary. It actually has to do this for several functions as well.\r\n\r\nThe problem is that pg_dump --binary-upgrade intentionally does not simply issue a `CREATE EXTENSION` command the way a normal dump does, so that it can control the OIDs that are assigned to objects[1]. That means that attempting to pg_upgrade a database with pgtap installed to version 11+ fails trying to create the view that references pg_proc.proisagg[2].\r\n\r\nFor pgtap, we should be able to work around this by removing the offending column from the view and embedding the knowledge in a function. This would be more difficult in other types of extensions though, especially any that are attempting to provide more user-friendly views of catalog tables.\r\n\r\nI don’t recall why pg_upgrade wants to control OIDs… don’t we re-create all catalog entries for user objects from scratch?\r\n\r\n1: https://www.postgresql.org/message-id/AANLkTimm1c64=xkdpz5ji7Q-rH69zd3cMewmRpkH0WSf@mail.gmail.com\r\n2: https://github.com/theory/pgtap/issues/201",
"msg_date": "Wed, 8 May 2019 22:07:23 +0000",
"msg_from": "\"Nasby, Jim\" <nasbyj@amazon.com>",
"msg_from_op": true,
"msg_subject": "Problems with pg_upgrade and extensions referencing catalog\n tables/views"
},
{
"msg_contents": "\"Nasby, Jim\" <nasbyj@amazon.com> writes:\n> The problem is that pg_dump --binary-upgrade intentionally does not\n> simply issue a `CREATE EXTENSION` command the way a normal dump does, so\n> that it can control the OIDs that are assigned to objects[1].\n\nThat's not the only reason. The original concerns were about not\nbreaking the extension, in case the destination server had a different\nversion of the extension available. CREATE EXTENSION doesn't normally\nguarantee that you get an exactly compatible extension version, which\nis a good thing for regular pg_dump and restore but a bad thing\nfor binary upgrade.\n\nI'm not really sure how to improve the situation you describe, but\n\"issue CREATE EXTENSION and pray\" doesn't sound like a solution.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 08 May 2019 19:33:25 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Problems with pg_upgrade and extensions referencing catalog\n tables/views"
},
{
"msg_contents": "On Wed, May 8, 2019 at 10:07:23PM +0000, Nasby, Jim wrote:\n> I don’t recall why pg_upgrade wants to control OIDs… don’t we\n> re-create all catalog entries for user objects from scratch?\n\nThe C comment at top of pg_upgrade.c explains why some oids must be preserved:\n\n\thttps://git.postgresql.org/gitweb/?p=postgresql.git;a=blob;f=src/bin/pg_upgrade/pg_upgrade.c;h=0b304bbd56ab0204396838618e86dfad757c2812;hb=HEAD\n\nIt doesn't mention extensions.\n\n-- \n Bruce Momjian <bruce@momjian.us> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n+ As you are, so once was I. As I am, so you will be. +\n+ Ancient Roman grave inscription +\n\n\n",
"msg_date": "Thu, 9 May 2019 20:14:07 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: Problems with pg_upgrade and extensions referencing catalog\n tables/views"
},
{
"msg_contents": "\r\n> On May 9, 2019, at 7:14 PM, Bruce Momjian <bruce@momjian.us> wrote:\r\n> \r\n> On Wed, May 8, 2019 at 10:07:23PM +0000, Nasby, Jim wrote:\r\n>> I don’t recall why pg_upgrade wants to control OIDs… don’t we\r\n>> re-create all catalog entries for user objects from scratch?\r\n> \r\n> The C comment at top of pg_upgrade.c explains why some oids must be preserved:\r\n> \r\n> \thttps://git.postgresql.org/gitweb/?p=postgresql.git;a=blob;f=src/bin/pg_upgrade/pg_upgrade.c;h=0b304bbd56ab0204396838618e86dfad757c2812;hb=HEAD\r\n> \r\n> It doesn't mention extensions.\r\n\r\nRight, but it does mention tables, types and enums, all of which can be created by extensions. So no matter what, we’d need to deal with those somehow.\r\n\r\n> \r\n> On May 8, 2019, at 6:33 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:\r\n> \r\n> \"Nasby, Jim\" <nasbyj@amazon.com> writes:\r\n>> The problem is that pg_dump --binary-upgrade intentionally does not\r\n>> simply issue a `CREATE EXTENSION` command the way a normal dump does, so\r\n>> that it can control the OIDs that are assigned to objects[1].\r\n> \r\n> That's not the only reason. The original concerns were about not\r\n> breaking the extension, in case the destination server had a different\r\n> version of the extension available. CREATE EXTENSION doesn't normally\r\n> guarantee that you get an exactly compatible extension version, which\r\n> is a good thing for regular pg_dump and restore but a bad thing\r\n> for binary upgrade.\r\n> \r\n> I'm not really sure how to improve the situation you describe, but\r\n> \"issue CREATE EXTENSION and pray\" doesn't sound like a solution.\r\n\r\nI think it’s reasonable to expect that users have the same version of the extension already installed in the new version’s cluster, and that extension authors need to support at least 2 major versions per extension version so that users can upgrade. But that’s kind of moot unless we can solve the OID issues. 
I think that’s possible with a special mode for CREATE EXTENSION where we specify the OIDs to use for specific objects.\r\n\r\nThat does leave the question of whether all of this is worth it; AFAICT the only place this is really a problem is views that reference catalog tables. Right now, extension authors could work around that by defining the view on top of a plpgsql (or maybe plsql) SRF. The function won’t be checked when it’s created, so the upgrade would succeed. The extension would have to have an upgrade available that did whatever was necessary in the new version, and users would need to ALTER EXTENSION UPDATE after the upgrade. This is rather ugly, but potentially workable. Presumably it won’t perform as well as a native view would.\r\n\r\nAnother option is to allow for views to exist in an invalid state. This is something Oracle allows, and comes up from time to time. It still has the upgrade problems that using SRFs does.\r\n\r\nHowever, since this is really just a problem for referencing system catalogs, perhaps the best solution is to offer a set of views on the catalogs that have a defined policy for deprecation of old columns.",
"msg_date": "Fri, 10 May 2019 21:07:05 +0000",
"msg_from": "\"Nasby, Jim\" <nasbyj@amazon.com>",
"msg_from_op": true,
"msg_subject": "Re: Problems with pg_upgrade and extensions referencing catalog\n tables/views"
}
] |
[
{
"msg_contents": "Hi all,\n\nWe would need to integrate Postgres Users Authentication with our own LDAP Server.\n\nBasically as of now we are able to login to Postgress DB with a user/password credential.\n[cid:image001.png@01D50650.D807AE30]\n\nThese user objects are the part of Postgres DB server. Now we want that these users should be authenticated by LDAP server.\nWe would want the authentication to be done with LDAP, so basically the user credentials should be store in LDAP server\n\nCan you mention the prescribed steps in Postgres needed for this integration with LDAP Server?\n\nRegards\nTarkeshwar",
"msg_date": "Thu, 9 May 2019 04:51:02 +0000",
"msg_from": "M Tarkeshwar Rao <m.tarkeshwar.rao@ericsson.com>",
"msg_from_op": true,
"msg_subject": "integrate Postgres Users Authentication with our own LDAP Server"
},
{
"msg_contents": "On 9/5/19 7:51 π.μ., M Tarkeshwar Rao wrote:\n>\n> Hi all,\n>\n> We would need to integrate Postgres Users Authentication with our own LDAP Server.\n>\n> Basically as of now we are able to login to Postgress DB with a user/password credential.\n>\n> These user objects are the part of Postgres DB server. Now we want that these users should be authenticated by LDAP server.\n>\n> We would want the authentication to be done with LDAP, so basically the user credentials should be store in LDAP server\n>\n> Can you mention the prescribed steps in Postgres needed for this integration with LDAP Server?\n>\nThe users must be existent as postgresql users. Authorization : roles, privileges etc also will be taken by postgresql definitions, grants, etc. But the authentication will be done in LDAP.\nIt is done in pg_hba.conf. There are two ways to do this (with 1 or 2 phases). We have successfully used both Lotus Notes LDAP and FreeIPA LDAP with our production PostgreSQL servers, I have tested \nwith openldap as well, so I guess chances are that it will work with yours.\n>\n> Regards\n>\n> Tarkeshwar\n>\n\n\n-- \nAchilleas Mantzios\nIT DEV Lead\nIT DEPT\nDynacom Tankers Mgmt",
"msg_date": "Thu, 9 May 2019 09:17:37 +0300",
"msg_from": "Achilleas Mantzios <achill@matrix.gatewaynet.com>",
"msg_from_op": false,
"msg_subject": "Re: integrate Postgres Users Authentication with our own LDAP Server"
},
{
"msg_contents": "On Thu, 2019-05-09 at 04:51 +0000, M Tarkeshwar Rao wrote:\n> We would need to integrate Postgres Users Authentication with our own LDAP Server. \n> \n> Basically as of now we are able to login to Postgress DB with a user/password credential.\n>\n> [roles \"pg_signal_backend\" and \"postgres\"]\n> \n> These user objects are the part of Postgres DB server. Now we want that these users should be authenticated by LDAP server.\n> We would want the authentication to be done with LDAP, so basically the user credentials should be store in LDAP server\n> \n> Can you mention the prescribed steps in Postgres needed for this integration with LDAP Server?\n\nLDAP authentication is well documented:\nhttps://www.postgresql.org/docs/current/auth-ldap.html\n\nBut I don't think you are on the right track.\n\n\"pg_signal_backend\" cannot login, it is a role to which you add a login user\nto give it certain privileges. So you don't need to authenticate the role.\n\n\"postgres\" is the installation superuser. If security is important for you,\nyou won't set a password for that user and you won't allow remote logins\nwith that user.\n\nBut for your application users LDAP authentication is a fine thing, and not\nhard to set up if you know a little bit about LDAP.\n\nYours,\nLaurenz Albe\n-- \nCybertec | https://www.cybertec-postgresql.com\n\n\n\n",
"msg_date": "Thu, 09 May 2019 08:42:28 +0200",
"msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>",
"msg_from_op": false,
"msg_subject": "Re: integrate Postgres Users Authentication with our own LDAP Server"
},
{
"msg_contents": "We want to setup ldap authentication in pg_hba.conf, for Postgresql users(other than postgres super user).\r\n\r\nWe are getting issue with special characters by following steps given in postgres documentation. \r\nIt is not accepting any special characters as special characters are mandatory in our use case.\r\n\r\nCan you please help us or have you any steps by which we can configure any postgres with LDAP?\r\n-----Original Message-----\r\nFrom: Laurenz Albe <laurenz.albe@cybertec.at> \r\nSent: Thursday, May 9, 2019 12:12 PM\r\nTo: M Tarkeshwar Rao <m.tarkeshwar.rao@ericsson.com>; pgsql-general <pgsql-general@lists.postgresql.org>; 'postgres-discuss@mailman.lmera.ericsson.se' <postgres-discuss@mailman.lmera.ericsson.se>; 'pgsql-general@postgresql.org' <pgsql-general@postgresql.org>; pgsql-performance@postgresql.org; pgsql-hackers@postgresql.org; 'pgsql-hackers-owner@postgresql.org' <pgsql-hackers-owner@postgresql.org>; Aashish Nagpaul <aashish.nagpaul@ericsson.com>\r\nSubject: Re: integrate Postgres Users Authentication with our own LDAP Server\r\n\r\nOn Thu, 2019-05-09 at 04:51 +0000, M Tarkeshwar Rao wrote:\r\n> We would need to integrate Postgres Users Authentication with our own LDAP Server. \r\n> \r\n> Basically as of now we are able to login to Postgress DB with a user/password credential.\r\n>\r\n> [roles \"pg_signal_backend\" and \"postgres\"]\r\n> \r\n> These user objects are the part of Postgres DB server. 
Now we want that these users should be authenticated by LDAP server.\r\n> We would want the authentication to be done with LDAP, so basically \r\n> the user credentials should be store in LDAP server\r\n> \r\n> Can you mention the prescribed steps in Postgres needed for this integration with LDAP Server?\r\n\r\nLDAP authentication is well documented:\r\nhttps://www.postgresql.org/docs/current/auth-ldap.html\r\n\r\nBut I don't think you are on the right track.\r\n\r\n\"pg_signal_backend\" cannot login, it is a role to which you add a login user to give it certain privileges. So you don't need to authenticate the role.\r\n\r\n\"postgres\" is the installation superuser. If security is important for you, you won't set a password for that user and you won't allow remote logins with that user.\r\n\r\nBut for your application users LDAP authentication is a fine thing, and not hard to set up if you know a little bit about LDAP.\r\n\r\nYours,\r\nLaurenz Albe\r\n--\r\nCybertec | https://protect2.fireeye.com/url?k=4f372c5d-13a52101-4f376cc6-0cc47ad93d46-aed009fdc0b3e18f&u=https://www.cybertec-postgresql.com/\r\n\r\n",
"msg_date": "Thu, 9 May 2019 07:11:24 +0000",
"msg_from": "M Tarkeshwar Rao <m.tarkeshwar.rao@ericsson.com>",
"msg_from_op": true,
"msg_subject": "RE: integrate Postgres Users Authentication with our own LDAP Server"
},
{
"msg_contents": "On Thu, 2019-05-09 at 07:11 +0000, M Tarkeshwar Rao wrote:\n> We want to setup ldap authentication in pg_hba.conf, for Postgresql users(other than postgres super user).\n> \n> We are getting issue with special characters by following steps given in postgres documentation. \n> It is not accepting any special characters as special characters are mandatory in our use case.\n> \n> Can you please help us or have you any steps by which we can configure any postgres with LDAP?\n\nIt was very inconsiderate of you to write to 100 PostgreSQL lists at once (and I was stupid\nenough not to notice right away).\n\nThen, please don't top-post on these lists. Write your reply *below* what you quote.\n\nWhat exactly is your problem? \"We are getting issues\" is not detailed enough.\nYou probably just have to get the encoding right.\n\nYours,\nLaurenz Albe\n-- \nCybertec | https://www.cybertec-postgresql.com\n\n\n\n",
"msg_date": "Thu, 09 May 2019 09:23:50 +0200",
"msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>",
"msg_from_op": false,
"msg_subject": "Re: integrate Postgres Users Authentication with our own LDAP Server"
},
{
"msg_contents": "On Thu, May 09, 2019 at 07:11:24AM +0000, M Tarkeshwar Rao wrote:\n>We want to setup ldap authentication in pg_hba.conf, for Postgresql\n>users(other than postgres super user).\n>\n>We are getting issue with special characters by following steps given in\n>postgres documentation. It is not accepting any special characters as\n>special characters are mandatory in our use case.\n>\n>Can you please help us or have you any steps by which we can configure\n>any postgres with LDAP?\n\nPlease don't cross-post - this is a fairly generic question, it has\nnothing to do with performance or development, so the right thing is to\nsend it to pgsql-general. Likewise, it makes little sense to send\nquestions to the \"owner\". I've removed the other lists from CC.\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n\n",
"msg_date": "Thu, 9 May 2019 14:43:20 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: integrate Postgres Users Authentication with our own LDAP Server"
},
{
"msg_contents": "Greetings,\n\n(Dropping all the extra mailing lists and such, please do *not*\ncross-post like that)\n\n* M Tarkeshwar Rao (m.tarkeshwar.rao@ericsson.com) wrote:\n> We want to setup ldap authentication in pg_hba.conf, for Postgresql users(other than postgres super user).\n> \n> We are getting issue with special characters by following steps given in postgres documentation. \n> It is not accepting any special characters as special characters are mandatory in our use case.\n> \n> Can you please help us or have you any steps by which we can configure any postgres with LDAP?\n\nIs this an active directory environment? If so, you should probably be\nusing GSSAPI anyway and not LDAP for the actual authentication.\n\nAs for the \"special characters\", you really need to provide specifics\nand be able to show us the actual errors that you're getting.\n\nThanks,\n\nStephen",
"msg_date": "Thu, 9 May 2019 15:24:44 -0400",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: integrate Postgres Users Authentication with our own LDAP Server"
}
] |
[
{
"msg_contents": "Hi,\n\nI'm not quite clear what the goal of allow_system_table_mods\nis. Obviously, it's extremely dangerous to target catalogs with DDL. But\nat the same time we allow DML to catalog tables without any sort of\nrestriction.\n\nI also don't understand what's achieved by having\nallow_system_table_mods be PGC_POSTMASTER. If anything it seems to make\nit more likely to resort to a) leaving it enabled all the time b) use\nDML to modify catalogs.\n\nWouldn't it be more sensible to disallow all catalog modifications\nunless allow_system_table_mods was enabled, and make\nallow_system_table_mods PGC_SUSET and GUC_DISALLOW_IN_FILE?\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 9 May 2019 07:50:54 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "What's the point of allow_system_table_mods?"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> I'm not quite clear what the goal of allow_system_table_mods\n> is. Obviously, it's extremely dangerous to target catalogs with DDL. But\n> at the same time we allow DML to catalog tables without any sort of\n> restriction.\n\nThe last is not true, see pg_class_aclmask().\n\n> I also don't understand what's achieved by having\n> allow_system_table_mods be PGC_POSTMASTER.\n\nTrue. Possibly there was some confusion with ignore_system_indexes,\nwhich probably *should* be PGC_POSTMASTER: if you think the system\nindexes are corrupt then they're corrupt for everybody.\n\n> Wouldn't it be more sensible to disallow all catalog modifications\n> unless allow_system_table_mods was enabled, and make\n> allow_system_table_mods PGC_SUSET and GUC_DISALLOW_IN_FILE?\n\nI'm on board with the second part of that but not the first.\nDDL on the system catalogs is significantly more dangerous\nthan DML, so I think that having an extra layer of protection\nfor it is a good idea.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 09 May 2019 12:22:39 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: What's the point of allow_system_table_mods?"
},
{
"msg_contents": "Hi,\n\nOn 2019-05-09 12:22:39 -0400, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > I'm not quite clear what the goal of allow_system_table_mods\n> > is. Obviously, it's extremely dangerous to target catalogs with DDL. But\n> > at the same time we allow DML to catalog tables without any sort of\n> > restriction.\n> \n> The last is not true, see pg_class_aclmask().\n\nYou mean because it's restricted to superusers?\n\n\n> > I also don't understand what's achieved by having\n> > allow_system_table_mods be PGC_POSTMASTER.\n> \n> True. Possibly there was some confusion with ignore_system_indexes,\n> which probably *should* be PGC_POSTMASTER: if you think the system\n> indexes are corrupt then they're corrupt for everybody.\n\nHm, but it's pretty useful to be able to verify if system index\ncorruption is to blame, by enabling ignore_system_indexes in one\nsession. Don't really see us gaining anything by forcing it to be done\nsystem-wide.\n\n\n> > Wouldn't it be more sensible to disallow all catalog modifications\n> > unless allow_system_table_mods was enabled, and make\n> > allow_system_table_mods PGC_SUSET and GUC_DISALLOW_IN_FILE?\n> \n> I'm on board with the second part of that but not the first.\n> DDL on the system catalogs is significantly more dangerous\n> than DML, so I think that having an extra layer of protection\n> for it is a good idea.\n\nWhy is it so much more dangerous? I've seen plenty of corrupted clusters\ndue to people doing DML against the catalogs. I'm OK with adding\nseparate GUCs for both, if we want to do that, but I do think we\nshouldn't allow updating the catalogs wthout having having the superuser\nexplicitly opt into that. We need superusers permissions for a lot of\npretty routine tasks (say creating an extension with C functions) -\nso that also being the permission to do dangerous things like UPDATE to\npg_class etc, isn't great.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Fri, 10 May 2019 11:06:12 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: What's the point of allow_system_table_mods?"
},
{
"msg_contents": ">>>>> \"Andres\" == Andres Freund <andres@anarazel.de> writes:\n\n Andres> Why is it so much more dangerous? I've seen plenty of corrupted\n Andres> clusters due to people doing DML against the catalogs. I'm OK\n Andres> with adding separate GUCs for both, if we want to do that, but\n Andres> I do think we shouldn't allow updating the catalogs wthout\n Andres> having having the superuser explicitly opt into that.\n\nBe aware that a nonzero number of extensions (postgis especially) do\ncatalog DML in their install or update scripts. While you might well\nthink they shouldn't do that, in practice there is usually no viable\nalternative.\n\n-- \nAndrew (irc:RhodiumToad)\n\n\n",
"msg_date": "Fri, 10 May 2019 19:51:10 +0100",
"msg_from": "Andrew Gierth <andrew@tao11.riddles.org.uk>",
"msg_from_op": false,
"msg_subject": "Re: What's the point of allow_system_table_mods?"
},
{
"msg_contents": "Andrew Gierth <andrew@tao11.riddles.org.uk> writes:\n> \"Andres\" == Andres Freund <andres@anarazel.de> writes:\n> Andres> Why is it so much more dangerous? I've seen plenty of corrupted\n> Andres> clusters due to people doing DML against the catalogs. I'm OK\n> Andres> with adding separate GUCs for both, if we want to do that, but\n> Andres> I do think we shouldn't allow updating the catalogs wthout\n> Andres> having having the superuser explicitly opt into that.\n\n> Be aware that a nonzero number of extensions (postgis especially) do\n> catalog DML in their install or update scripts.\n\nI believe we've done that in some contrib update scripts, as well.\n\n> While you might well\n> think they shouldn't do that, in practice there is usually no viable\n> alternative.\n\nIn principle, if the thing is SUSET, we could have such extension scripts\nset it temporarily. But it would be a compatibility hazard -- a script\nwith such a SET command in it would fail in older branches.\n\nWhat exactly is the motivation for changing this now, after 20 years?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 10 May 2019 15:00:18 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: What's the point of allow_system_table_mods?"
},
{
"msg_contents": "Hi,\n\nOn 2019-05-10 19:51:10 +0100, Andrew Gierth wrote:\n> >>>>> \"Andres\" == Andres Freund <andres@anarazel.de> writes:\n> \n> Andres> Why is it so much more dangerous? I've seen plenty of corrupted\n> Andres> clusters due to people doing DML against the catalogs. I'm OK\n> Andres> with adding separate GUCs for both, if we want to do that, but\n> Andres> I do think we shouldn't allow updating the catalogs wthout\n> Andres> having having the superuser explicitly opt into that.\n> \n> Be aware that a nonzero number of extensions (postgis especially) do\n> catalog DML in their install or update scripts. While you might well\n> think they shouldn't do that, in practice there is usually no viable\n> alternative.\n\nSure, but if it's a SUSET GUC that'd not be a huge problem, would it?\nThey'd need to locally set it, which, sure. But it'd also be a good way\nto signal such things to readers.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Fri, 10 May 2019 12:16:13 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: What's the point of allow_system_table_mods?"
},
{
"msg_contents": "Hi,\n\nOn 2019-05-10 15:00:18 -0400, Tom Lane wrote:\n> What exactly is the motivation for changing this now, after 20 years?\n\nThat I've seen enough corruption and other hard to investigate issues\nrelated to manual catalog modifications to make me complain. Note that\nother have complained about this before, too.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Fri, 10 May 2019 12:19:26 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: What's the point of allow_system_table_mods?"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2019-05-10 15:00:18 -0400, Tom Lane wrote:\n>> What exactly is the motivation for changing this now, after 20 years?\n\n> That I've seen enough corruption and other hard to investigate issues\n> related to manual catalog modifications to make me complain. Note that\n> other have complained about this before, too.\n\nSo, if the problem is that cowboy DBAs are making ill-advised manual\nchanges, how is a SUSET GUC going to stop them from doing that?\nThey'll just turn it on and make the same ill-advised change, especially\nafter they see us and other people doing exactly that in extensions.\n\nIf you're arguing that the changes were accidental, it seems like the real\nanswer to that is \"stop using superuser unnecessarily\". I don't think\nthat adding training wheels to superuser is really a great idea in the\nlong run. I remember wars back in the last century about whether rm\nshould be hacked to disallow \"rm -rf /\" even to superusers. The eventual\nconsensus was \"no\", and this seems about the same.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 10 May 2019 15:48:49 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: What's the point of allow_system_table_mods?"
},
{
"msg_contents": "Hi,\n\nOn 2019-05-10 15:48:49 -0400, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > On 2019-05-10 15:00:18 -0400, Tom Lane wrote:\n> >> What exactly is the motivation for changing this now, after 20 years?\n> \n> > That I've seen enough corruption and other hard to investigate issues\n> > related to manual catalog modifications to make me complain. Note that\n> > other have complained about this before, too.\n> \n> So, if the problem is that cowboy DBAs are making ill-advised manual\n> changes, how is a SUSET GUC going to stop them from doing that?\n> They'll just turn it on and make the same ill-advised change, especially\n> after they see us and other people doing exactly that in extensions.\n\nHaving to figure out what that GUC is called, looking in the\ndocumentation to do so, seing that there's a large WARNING about it does\nreduce risk.\n\n\n> If you're arguing that the changes were accidental, it seems like the real\n> answer to that is \"stop using superuser unnecessarily\". I don't think\n> that adding training wheels to superuser is really a great idea in the\n> long run.\n\nWell, we require superuser for a lot of operations with a vastly lower\nrisk. Like CREATE EXTENSION postgres_fdw etc.. So that's not really a\nfeasible answer. Nor do we provide a setup, where non-superuser admin\nroles exist by default.\n\n\n> I remember wars back in the last century about whether rm\n> should be hacked to disallow \"rm -rf /\" even to superusers. The eventual\n> consensus was \"no\", and this seems about the same.\n\nNote that rm has prohibited this for a *long* time now (and not just\nlinux, solaris too). And you've argued for disallowing most DDL against\ncatalogs...\n\nI don't think it's always obvious to users how dangerous such operations\ncan be.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Fri, 10 May 2019 13:09:06 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: What's the point of allow_system_table_mods?"
}
] |
[
{
"msg_contents": "Hi,\n\nI just noticed that reindexdb could report an extraneous message\nsaying an error happened while reindexing a database if it failed\nreindexing a table or an index.\n\nTrivial fix attached.",
"msg_date": "Fri, 10 May 2019 11:02:52 +0200",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": true,
"msg_subject": "Bug in reindexdb's error reporting"
},
{
"msg_contents": "On Fri, May 10, 2019 at 11:02:52AM +0200, Julien Rouhaud wrote:\n> I just noticed that reindexdb could report an extraneous message\n> saying an error happened while reindexing a database if it failed\n> reindexing a table or an index.\n> \n> Trivial fix attached.\n\nOops. That's true, nice catch. This is older than 9.4, so it needs\nto go all the way down. Let's fix this. Do others have any comments?\n--\nMichael",
"msg_date": "Fri, 10 May 2019 19:24:47 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Bug in reindexdb's error reporting"
},
{
"msg_contents": "> On 10 May 2019, at 12:24, Michael Paquier <michael@paquier.xyz> wrote:\n> \n> On Fri, May 10, 2019 at 11:02:52AM +0200, Julien Rouhaud wrote:\n>> I just noticed that reindexdb could report an extraneous message\n>> saying an error happened while reindexing a database if it failed\n>> reindexing a table or an index.\n>> \n>> Trivial fix attached.\n> \n> Oops. That's true, nice catch. This is older than 9.4, so it needs\n> to go all the way down. Let's fix this. Do others have any comments?\n\nNice catch indeed, LGTM.\n\ncheers ./daniel\n\n\n",
"msg_date": "Fri, 10 May 2019 12:38:06 +0200",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: Bug in reindexdb's error reporting"
},
{
"msg_contents": "On 2019-May-10, Julien Rouhaud wrote:\n\n> I just noticed that reindexdb could report an extraneous message\n> saying an error happened while reindexing a database if it failed\n> reindexing a table or an index.\n\nKudos, good find -- that's a 14 years old bug, introduced in this commit:\n\nAuthor: Bruce Momjian <bruce@momjian.us>\nBranch: master Release: REL8_1_BR [85e9a5a01] 2005-07-29 15:13:11 +0000\n\n Move reindexdb from /contrib to /bin.\n \n Euler Taveira de Oliveira\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Fri, 10 May 2019 10:18:09 -0400",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Bug in reindexdb's error reporting"
},
{
"msg_contents": "Alvaro Herrera <alvherre@2ndquadrant.com> writes:\n> On 2019-May-10, Julien Rouhaud wrote:\n>> I just noticed that reindexdb could report an extraneous message\n>> saying an error happened while reindexing a database if it failed\n>> reindexing a table or an index.\n\n> Kudos, good find -- that's a 14 years old bug, introduced in this commit:\n\nYeah :-(.\n\nPatch is good as far as it goes, but I wonder if it'd be smarter to\nconvert the function's \"type\" argument from a string to an enum,\nand then replace the if/else chains with switches?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 10 May 2019 10:43:50 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Bug in reindexdb's error reporting"
},
{
"msg_contents": "On Fri, May 10, 2019 at 4:43 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Alvaro Herrera <alvherre@2ndquadrant.com> writes:\n> > On 2019-May-10, Julien Rouhaud wrote:\n> >> I just noticed that reindexdb could report an extraneous message\n> >> saying an error happened while reindexing a database if it failed\n> >> reindexing a table or an index.\n>\n> > Kudos, good find -- that's a 14 years old bug, introduced in this commit:\n>\n> Yeah :-(.\n>\n> Patch is good as far as it goes, but I wonder if it'd be smarter to\n> convert the function's \"type\" argument from a string to an enum,\n> and then replace the if/else chains with switches?\n\nI've also thought about it. I think the reason why type argument was\nkept as a string is that reindex_one_database is doing:\n\n appendPQExpBufferStr(&sql, type);\n\nto avoid an extra switch to append the textual reindex type. I don't\nhave a strong opinion on whether to change that on master or not.\n\n\n",
"msg_date": "Fri, 10 May 2019 17:18:11 +0200",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Bug in reindexdb's error reporting"
},
{
"msg_contents": "On 2019-May-10, Julien Rouhaud wrote:\n\n> On Fri, May 10, 2019 at 4:43 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> > Patch is good as far as it goes, but I wonder if it'd be smarter to\n> > convert the function's \"type\" argument from a string to an enum,\n> > and then replace the if/else chains with switches?\n> \n> I've also thought about it. I think the reason why type argument was\n> kept as a string is that reindex_one_database is doing:\n> \n> appendPQExpBufferStr(&sql, type);\n> \n> to avoid an extra switch to append the textual reindex type. I don't\n> have a strong opinion on whether to change that on master or not.\n\nI did have the same thought. It seem clear now that we should do it :-)\nISTM that the way to fix that problem is to use the proposed enum\neverywhere and turn it into a string when generating the SQL command,\nnot before.\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Fri, 10 May 2019 11:33:23 -0400",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Bug in reindexdb's error reporting"
},
{
"msg_contents": "On Fri, May 10, 2019 at 5:33 PM Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\n>\n> On 2019-May-10, Julien Rouhaud wrote:\n>\n> > On Fri, May 10, 2019 at 4:43 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> > > Patch is good as far as it goes, but I wonder if it'd be smarter to\n> > > convert the function's \"type\" argument from a string to an enum,\n> > > and then replace the if/else chains with switches?\n> >\n> > I've also thought about it. I think the reason why type argument was\n> > kept as a string is that reindex_one_database is doing:\n> >\n> > appendPQExpBufferStr(&sql, type);\n> >\n> > to avoid an extra switch to append the textual reindex type. I don't\n> > have a strong opinion on whether to change that on master or not.\n>\n> I did have the same thought. It seem clear now that we should do it :-)\n> ISTM that the way to fix that problem is to use the proposed enum\n> everywhere and turn it into a string when generating the SQL command,\n> not before.\n\nok :) Patch v2 attached.",
"msg_date": "Fri, 10 May 2019 17:58:03 +0200",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Bug in reindexdb's error reporting"
},
{
"msg_contents": "On Fri, May 10, 2019 at 05:58:03PM +0200, Julien Rouhaud wrote:\n> On Fri, May 10, 2019 at 5:33 PM Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\n>> I did have the same thought. It seem clear now that we should do it :-)\n>> ISTM that the way to fix that problem is to use the proposed enum\n>> everywhere and turn it into a string when generating the SQL command,\n>> not before.\n> \n> ok :) Patch v2 attached.\n\nThe refactoring bits are fine for HEAD. For back-branches I would\nsuggest using the simplest patch of upthread.\n\n> +typedef enum ReindexType {\n> +\tDATABASE,\n> +\tSCHEMA,\n> +\tTABLE,\n> +\tINDEX\n> +} ReindexType;\n\nThat's perhaps too much generic when it comes to grep in the source\ncode, why not appending REINDEX_ to each element?\n\n> +\tswitch(type)\n> +\t{\n> +\t\tcase DATABASE:\n> +\t\t\tappendPQExpBufferStr(&sql, \"DATABASE\");\n> +\t\t\tbreak;\n> +\t\tcase SCHEMA:\n> +\t\t\tappendPQExpBufferStr(&sql, \"SCHEMA\");\n> +\t\t\tbreak;\n> +\t\tcase TABLE:\n> +\t\t\tappendPQExpBufferStr(&sql, \"TABLE\");\n> +\t\t\tbreak;\n> +\t\tcase INDEX:\n> +\t\t\tappendPQExpBufferStr(&sql, \"INDEX\");\n> +\t\t\tbreak;\n> +\t\tdefault:\n> +\t\t\tpg_log_error(\"Unrecognized reindex type %d\", type);\n> +\t\t\texit(1);\n> +\t\t\tbreak;\n> +\t}\n\nWe could actually remove this default part, so as we get compiler\nwarning when introducing a new element.\n--\nMichael",
"msg_date": "Sat, 11 May 2019 09:42:00 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Bug in reindexdb's error reporting"
},
{
"msg_contents": "Michael Paquier <michael@paquier.xyz> writes:\n> The refactoring bits are fine for HEAD. For back-branches I would\n> suggest using the simplest patch of upthread.\n\nMakes sense to me too. The refactoring is mostly to make future\nadditions easier, so there's not much point for back branches.\n\n> That's perhaps too much generic when it comes to grep in the source\n> code, why not appending REINDEX_ to each element?\n\n+1\n\n> We could actually remove this default part, so as we get compiler\n> warning when introducing a new element.\n\nRight. Also, I was imagining folding the steps together while\nbuilding the commands so that there's just one switch() for that,\nalong the lines of\n\n const char *verbose_option = verbose ? \" (VERBOSE)\" : \"\";\n const char *concurrent_option = concurrently ? \" CONCURRENTLY\" : \"\";\n\n switch (type)\n {\n case REINDEX_DATABASE:\n appendPQExpBufferStr(&sql, \"REINDEX%s DATABASE%s %s\",\n verbose_option, concurrent_option,\n fmtId(PQdb(conn)));\n break;\n case REINDEX_TABLE:\n appendPQExpBufferStr(&sql, \"REINDEX%s TABLE%s \",\n verbose_option, concurrent_option);\n appendQualifiedRelation(&sql, name, conn, progname, echo);\n break;\n ....\n\nIt seemed to me that this would be more understandable and flexible\nthan the way it's being done now, though of course others might see\nthat differently. I'm not dead set on that, just suggesting it.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 10 May 2019 21:25:58 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Bug in reindexdb's error reporting"
},
{
"msg_contents": "On Fri, May 10, 2019 at 09:25:58PM -0400, Tom Lane wrote:\n> Michael Paquier <michael@paquier.xyz> writes:\n> > The refactoring bits are fine for HEAD. For back-branches I would\n> > suggest using the simplest patch of upthread.\n> \n> Makes sense to me too. The refactoring is mostly to make future\n> additions easier, so there's not much point for back branches.\n\nFor now, I have committed and back-patched all the way down the bug\nfix. The refactoring is also kind of nice so I'll be happy to look at\nan updated patch. At the same time, let's get rid of\nreindex_system_catalogs() and integrate it with reindex_one_database()\nwith a REINDEX_SYSTEM option in the enum. Julien, could you send a\nnew version?\n\n> Right. Also, I was imagining folding the steps together while\n> building the commands so that there's just one switch() for that,\n> along the lines of\n\nYes, that makes sense.\n--\nMichael",
"msg_date": "Sat, 11 May 2019 13:04:50 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Bug in reindexdb's error reporting"
},
{
"msg_contents": "On Sat, May 11, 2019 at 6:04 AM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Fri, May 10, 2019 at 09:25:58PM -0400, Tom Lane wrote:\n> > Michael Paquier <michael@paquier.xyz> writes:\n> > > The refactoring bits are fine for HEAD. For back-branches I would\n> > > suggest using the simplest patch of upthread.\n> >\n> > Makes sense to me too. The refactoring is mostly to make future\n> > additions easier, so there's not much point for back branches.\n>\n> For now, I have committed and back-patched all the way down the bug\n> fix.\n\nThanks!\n\n> The refactoring is also kind of nice so I'll be happy to look at\n> an updated patch. At the same time, let's get rid of\n> reindex_system_catalogs() and integrate it with reindex_one_database()\n> with a REINDEX_SYSTEM option in the enum. Julien, could you send a\n> new version?\n\nYes, I had further refactoring in mind including this one (there are\nalso quite some parameters passed to the functions, passing a struct\ninstead could be worthwhile), but I thought this should be better done\nafter branching.\n\n> > Right. Also, I was imagining folding the steps together while\n> > building the commands so that there's just one switch() for that,\n> > along the lines of\n>\n> Yes, that makes sense.\n\nIndeed.\n\nI attach the switch refactoring that applies on top of current HEAD,\nand the reindex_system_catalogs() removal in a different patch in case\nthat's too much during feature freeze.",
"msg_date": "Sat, 11 May 2019 10:28:43 +0200",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Bug in reindexdb's error reporting"
},
{
"msg_contents": "On Sat, May 11, 2019 at 10:28:43AM +0200, Julien Rouhaud wrote:\n> I attach the switch refactoring that applies on top of current HEAD,\n> and the reindex_system_catalogs() removal in a different patch in case\n> that's too much during feature freeze.\n\nBoth Look fine to me at quick glance, but I have not tested them. I\nam not sure about refactoring all the options into a structure,\nperhaps it depends on what kind of patch it gives. Regarding a merge\ninto the tree, I think that this refactoring should wait until\nREL_12_STABLE has been created. It is no time to take risks in\ndestabilizing the code.\n\nAlso, as this thread's problem has been solved, perhaps it would be\nbetter to spawn a new thread, and to add a new entry in the CF app for\nthe refactoring set so as it attracts the correct audience? The\ncurrent thread topic is unfortunately misleading based on the latest\nmessages exchanged.\n--\nMichael",
"msg_date": "Sat, 11 May 2019 21:09:02 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Bug in reindexdb's error reporting"
},
{
"msg_contents": "On Sat, May 11, 2019 at 2:09 PM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Sat, May 11, 2019 at 10:28:43AM +0200, Julien Rouhaud wrote:\n> > I attach the switch refactoring that applies on top of current HEAD,\n> > and the reindex_system_catalogs() removal in a different patch in case\n> > that's too much during feature freeze.\n>\n> Both Look fine to me at quick glance, but I have not tested them. I\n> am not sure about refactoring all the options into a structure,\n> perhaps it depends on what kind of patch it gives. Regarding a merge\n> into the tree, I think that this refactoring should wait until\n> REL_12_STABLE has been created. It is no time to take risks in\n> destabilizing the code.\n\nI've run the TAP tests and it's running fine, but this should\ndefinitely wait for branching.\n\n> Also, as this thread's problem has been solved, perhaps it would be\n> better to spawn a new thread, and to add a new entry in the CF app for\n> the refactoring set so as it attracts the correct audience? The\n> current thread topic is unfortunately misleading based on the latest\n> messages exchanged.\n\nUnless someone argue it should be applied in v12, I'll do that soon.\n\n\n",
"msg_date": "Sat, 11 May 2019 20:03:20 +0200",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Bug in reindexdb's error reporting"
},
{
"msg_contents": "On 2019-May-11, Julien Rouhaud wrote:\n\n> On Sat, May 11, 2019 at 2:09 PM Michael Paquier <michael@paquier.xyz> wrote:\n\n> > Also, as this thread's problem has been solved, perhaps it would be\n> > better to spawn a new thread, and to add a new entry in the CF app for\n> > the refactoring set so as it attracts the correct audience? The\n> > current thread topic is unfortunately misleading based on the latest\n> > messages exchanged.\n> \n> Unless someone argue it should be applied in v12, I'll do that soon.\n\nCertainly not.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Sat, 11 May 2019 18:28:00 -0400",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Bug in reindexdb's error reporting"
}
]
[
{
"msg_contents": "While working on 1aebfbea83c, I noticed that the new multivariate MCV\nstats feature suffers from the same problem, and also the original\nproblems that were fixed in e2d4ef8de8 and earlier --- namely that a\nuser can see values in the MCV lists that they shouldn't see (values\nfrom tables that they don't have privileges on).\n\nI think there are 2 separate issues here:\n\n1). The table pg_statistic_ext is accessible to anyone, so any user\ncan see the MCV lists of any table. I think we should give this the\nsame treatment as pg_statistic, and hide it behind a security barrier\nview, revoking public access from the table.\n\n2). The multivariate MCV stats planner code can be made to invoke\nuser-defined operators, so a user can create a leaky operator and use\nit to reveal data values from the MCV lists even if they have no\npermissions on the table.\n\nAttached is a draft patch to fix (2), which hooks into\nstatext_is_compatible_clause().\n\nI haven't thought much about (1). There are some questions about what\nexactly the view should look like. Probably it should translate table\noids to names, like pg_stats does, but should it also translate column\nindexes to names? That could get fiddly. Should it unpack MCV items?\n\nI'll raise this as an open item for PG12.\n\nRegards,\nDean",
"msg_date": "Fri, 10 May 2019 10:19:44 +0100",
"msg_from": "Dean Rasheed <dean.a.rasheed@gmail.com>",
"msg_from_op": true,
"msg_subject": "Multivariate MCV stats can leak data to unprivileged users"
},
{
"msg_contents": "On Fri, May 10, 2019 at 10:19:44AM +0100, Dean Rasheed wrote:\n>While working on 1aebfbea83c, I noticed that the new multivariate MCV\n>stats feature suffers from the same problem, and also the original\n>problems that were fixed in e2d4ef8de8 and earlier --- namely that a\n>user can see values in the MCV lists that they shouldn't see (values\n>from tables that they don't have privileges on).\n>\n>I think there are 2 separate issues here:\n>\n>1). The table pg_statistic_ext is accessible to anyone, so any user\n>can see the MCV lists of any table. I think we should give this the\n>same treatment as pg_statistic, and hide it behind a security barrier\n>view, revoking public access from the table.\n>\n>2). The multivariate MCV stats planner code can be made to invoke\n>user-defined operators, so a user can create a leaky operator and use\n>it to reveal data values from the MCV lists even if they have no\n>permissions on the table.\n>\n>Attached is a draft patch to fix (2), which hooks into\n>statext_is_compatible_clause().\n>\n\nI think that patch is good.\n\n>I haven't thought much about (1). There are some questions about what\n>exactly the view should look like. Probably it should translate table\n>oids to names, like pg_stats does, but should it also translate column\n>indexes to names? That could get fiddly. Should it unpack MCV items?\n>\n\nYeah. I suggest we add a simple pg_stats_ext view, similar to pg_stats.\nIt would:\n\n(1) translate the schema / relation / attribute names\n\n I don't see why translating column indexes to names would be fiddly.\n It's a matter of simple unnest + join, no? Or what issues you see?\n\n(2) include values for ndistinct / dependencies, if built\n\n Those don't include any actual values, so this should be OK. You might\n make the argument that even this does leak a bit of information (e.g.\n when you can see values in one column, and you know there's a strong\n functional dependence, it tells you something about the other column\n which you may not see). But I think we kinda already leak information\n about that through estimates, so maybe that's not an issue.\n\n(3) include MCV list only when user has access to all columns\n\n Essentially, if the user is missing 'select' privilege on at least one\n of the columns, there'll be NULL. Otherwise the MCV value.\n\nThe attached patch adds pg_stats_ext doing this. I don't claim it's the\nbest possible query backing the view, so any improvements are welcome.\n\n\nI've been thinking we might somehow filter the statistics values, e.g. by\nnot showing values for attributes the user has no 'select' privilege on,\nbut that seems like a can of worms. It'd lead to MCV items that can't be\ndistinguished because the only difference was the removed attribute, and\nso on. So I propose we simply show/hide the whole MCV list.\n\nLikewise, I don't think it makes sense to expand the MCV list in this\nview - that works for the single-dimensional case, because then the\nlist is expanded into two arrays (values + frequencies), which are easy\nto process further. But for multivariate MCV lists that's much more\ncomplex - we don't know how many attributes are there, for example.\n\nSo I suggest we just show the pg_mcv_list value as is, and leave it up\nto the user to call the pg_mcv_list_items() function if needed.\n\nThis will also work for histograms, where expanding the value in the\npg_stats_ext would be even trickier.\n\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services",
"msg_date": "Tue, 14 May 2019 00:36:05 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Multivariate MCV stats can leak data to unprivileged users"
},
{
"msg_contents": "On Mon, 13 May 2019 at 23:36, Tomas Vondra <tomas.vondra@2ndquadrant.com> wrote:\n>\n> Yeah. I suggest we add a simple pg_stats_ext view, similar to pg_stats.\n> It would:\n>\n> (1) translate the schema / relation / attribute names\n>\n> I don't see why translating column indexes to names would be fiddly.\n> It's a matter of simple unnest + join, no? Or what issues you see?\n>\n\nYeah, you're right. I thought it would be harder than that. One minor\nthing -- I think we should have an explicit ORDER BY where we collect\nthe column names, to guarantee that they're listed in the right order.\n\n> (2) include values for ndistinct / dependencies, if built\n>\n> Those don't include any actual values, so this should be OK. You might\n> make the argument that even this does leak a bit of information (e.g.\n> when you can see values in one column, and you know there's a strong\n> functional dependence, it tells you something about the other column\n> which you may not see). But I think we kinda already leak information\n> about that through estimates, so maybe that's not an issue.\n>\n\nHmm. For normal statistics, if the user has no permissions on the\ntable, they don't get to see any of these kinds of statistics, not\neven things like n_distinct. I think we should do the same here --\ni.e., if the user has no permissions on the table, don't let them see\nanything. Such a user will not be able to run EXPLAIN on queries\nagainst the table, so they won't get to see any estimates, and I don't\nthink they should get to see any extended statistics either.\n\n> (3) include MCV list only when user has access to all columns\n>\n> Essentially, if the user is missing 'select' privilege on at least one\n> of the columns, there'll be NULL. Otherwise the MCV value.\n>\n\nOK, that seems reasonable, except as I said above, I think that should\napply to all statistics, not just the MCV lists.\n\n> The attached patch adds pg_stats_ext doing this. I don't claim it's the\n> best possible query backing the view, so any improvements are welcome.\n>\n>\n> I've been thinking we might somehow filter the statistics values, e.g. by\n> not showing values for attributes the user has no 'select' privilege on,\n> but that seems like a can of worms. It'd lead to MCV items that can't be\n> distinguished because the only difference was the removed attribute, and\n> so on. So I propose we simply show/hide the whole MCV list.\n>\n\nAgreed.\n\n> Likewise, I don't think it makes sense to expand the MCV list in this\n> view - that works for the single-dimensional case, because then the\n> list is expanded into two arrays (values + frequencies), which are easy\n> to process further. But for multivariate MCV lists that's much more\n> complex - we don't know how many attributes are there, for example.\n>\n> So I suggest we just show the pg_mcv_list value as is, and leave it up\n> to the user to call the pg_mcv_list_items() function if needed.\n>\n\nI think expanding the MCV lists is actually quite useful because then\nyou can see arrays of values, nulls, frequencies and base frequencies\nin a reasonably readable form (it certainly looks better than a binary\ndump), without needing to join to a function call, which is a bit\nugly, and unmemorable.\n\nThe results from the attached look quite reasonable at first glance.\nIt contains a few other changes as well:\n\n1). It exposes the schema, name and owner of the statistics object as\nwell via the view, for completeness.\n\n2). It changes a few column names -- traditionally these views strip\noff the column name prefix from the underlying table, so I've\nattempted to be consistent with other similar views.\n\n3). I added array-valued columns for each of the MCV list components,\nwhich makes it more like pg_stats.\n\n4). I moved all the permission checks to the top-level WHERE clause,\nso a user needs to have select permissions on all the columns\nmentioned by the statistics, and the table mustn't have RLS in effect,\notherwise the user won't see the row for that statistics object.\n\n5). Some columns from pg_statistic_ext have to be made visible for\npsql \\d to work. Basically, it needs to be able to query for the\nexistence of extended statistics, but it doesn't need to see the\nactual statistical data. Of course, we could change psql to use the\nview, but this way gives us better backwards compatibility with older\nclients.\n\nThis is still going to break compatibility of any user code looking at\nstxndistinct or stxdependencies from pg_statistic_ext, but at least it\ndoesn't break old versions of psql.\n\nNote: doc and test updates still to do.\n\nRegards,\nDean",
"msg_date": "Thu, 16 May 2019 14:28:03 +0100",
"msg_from": "Dean Rasheed <dean.a.rasheed@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Multivariate MCV stats can leak data to unprivileged users"
},
{
"msg_contents": "On Thu, May 16, 2019 at 02:28:03PM +0100, Dean Rasheed wrote:\n>On Mon, 13 May 2019 at 23:36, Tomas Vondra <tomas.vondra@2ndquadrant.com> wrote:\n>>\n>> Yeah. I suggest we add a simple pg_stats_ext view, similar to pg_stats.\n>> It would:\n>>\n>> (1) translate the schema / relation / attribute names\n>>\n>> I don't see why translating column indexes to names would be fiddly.\n>> It's a matter of simple unnest + join, no? Or what issues you see?\n>>\n>\n>Yeah, you're right. I thought it would be harder than that. One minor\n>thing -- I think we should have an explicit ORDER BY where we collect\n>the column names, to guarantee that they're listed in the right order.\n>\n\nGood point.\n\n>> (2) include values for ndistinct / dependencies, if built\n>>\n>> Those don't include any actual values, so this should be OK. You might\n>> make the argument that even this does leak a bit of information (e.g.\n>> when you can see values in one column, and you know there's a strong\n>> functional dependence, it tells you something about the other column\n>> which you may not see). But I think we kinda already leak information\n>> about that through estimates, so maybe that's not an issue.\n>>\n>\n>Hmm. For normal statistics, if the user has no permissions on the\n>table, they don't get to see any of these kinds of statistics, not\n>even things like n_distinct. I think we should do the same here --\n>i.e., if the user has no permissions on the table, don't let them see\n>anything. Such a user will not be able to run EXPLAIN on queries\n>against the table, so they won't get to see any estimates, and I don't\n>think they should get to see any extended statistics either.\n>\n\nOK, I haven't realized we don't show that even for normal stats.\n\n>> (3) include MCV list only when user has access to all columns\n>>\n>> Essentially, if the user is missing 'select' privilege on at least one\n>> of the columns, there'll be NULL. Otherwise the MCV value.\n>>\n>\n>OK, that seems reasonable, except as I said above, I think that should\n>apply to all statistics, not just the MCV lists.\n>\n>> The attached patch adds pg_stats_ext doing this. I don't claim it's the\n>> best possible query backing the view, so any improvements are welcome.\n>>\n>>\n>> I've been thinking we might somehow filter the statistics values, e.g. by\n>> not showing values for attributes the user has no 'select' privilege on,\n>> but that seems like a can of worms. It'd lead to MCV items that can't be\n>> distinguished because the only difference was the removed attribute, and\n>> so on. So I propose we simply show/hide the whole MCV list.\n>>\n>\n>Agreed.\n>\n>> Likewise, I don't think it makes sense to expand the MCV list in this\n>> view - that works for the single-dimensional case, because then the\n>> list is expanded into two arrays (values + frequencies), which are easy\n>> to process further. But for multivariate MCV lists that's much more\n>> complex - we don't know how many attributes are there, for example.\n>>\n>> So I suggest we just show the pg_mcv_list value as is, and leave it up\n>> to the user to call the pg_mcv_list_items() function if needed.\n>>\n>\n>I think expanding the MCV lists is actually quite useful because then\n>you can see arrays of values, nulls, frequencies and base frequencies\n>in a reasonably readable form (it certainly looks better than a binary\n>dump), without needing to join to a function call, which is a bit\n>ugly, and unmemorable.\n>\n\nHmmm, ok. I think my main worry here is that it may or may not work for\nmore complex types of extended stats that are likely to come in the\nfuture. Although, maybe it can be made work even for that.\n\n>The results from the attached look quite reasonable at first glance.\n>It contains a few other changes as well:\n>\n>1). It exposes the schema, name and owner of the statistics object as\n>well via the view, for completeness.\n>\n>2). It changes a few column names -- traditionally these views strip\n>off the column name prefix from the underlying table, so I've\n>attempted to be consistent with other similar views.\n>\n>3). I added array-valued columns for each of the MCV list components,\n>which makes it more like pg_stats.\n>\n>4). I moved all the permission checks to the top-level WHERE clause,\n>so a user needs to have select permissions on all the columns\n>mentioned by the statistics, and the table mustn't have RLS in effect,\n>otherwise the user won't see the row for that statistics object.\n>\n>5). Some columns from pg_statistic_ext have to be made visible for\n>psql \\d to work. Basically, it needs to be able to query for the\n>existence of extended statistics, but it doesn't need to see the\n>actual statistical data. Of course, we could change psql to use the\n>view, but this way gives us better backwards compatibility with older\n>clients.\n>\n>This is still going to break compatibility of any user code looking at\n>stxndistinct or stxdependencies from pg_statistic_ext, but at least it\n>doesn't break old versions of psql.\n>\n>Note: doc and test updates still to do.\n>\n\nThanks. I'm travelling today/tomorrow, but I'll do my best to fill in the\nmissing bits ASAP.\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n\n",
"msg_date": "Thu, 16 May 2019 16:41:50 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Multivariate MCV stats can leak data to unprivileged users"
},
{
"msg_contents": "Hi,\n\nOn 2019-05-16 14:28:03 +0100, Dean Rasheed wrote:\n> 5). Some columns from pg_statistic_ext have to be made visible for\n> psql \\d to work. Basically, it needs to be able to query for the\n> existence of extended statistics, but it doesn't need to see the\n> actual statistical data. Of course, we could change psql to use the\n> view, but this way gives us better backwards compatibility with older\n> clients.\n> \n> This is still going to break compatibility of any user code looking at\n> stxndistinct or stxdependencies from pg_statistic_ext, but at least it\n> doesn't break old versions of psql.\n\nHm, it's not normally a goal to keep old psql working against new\npostgres versions. And there's plenty other issues preventing a v11 psql\nto work against 12. I'd not let this guide any design decisions.\n\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Fri, 17 May 2019 13:29:38 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Multivariate MCV stats can leak data to unprivileged users"
},
{
"msg_contents": "On Fri, 17 May 2019 at 21:29, Andres Freund <andres@anarazel.de> wrote:\n>\n> On 2019-05-16 14:28:03 +0100, Dean Rasheed wrote:\n> > 5). Some columns from pg_statistic_ext have to be made visible for\n> > psql \\d to work. Basically, it needs to be able to query for the\n> > existence of extended statistics, but it doesn't need to see the\n> > actual statistical data. Of course, we could change psql to use the\n> > view, but this way gives us better backwards compatibility with older\n> > clients.\n> >\n> > This is still going to break compatibility of any user code looking at\n> > stxndistinct or stxdependencies from pg_statistic_ext, but at least it\n> > doesn't break old versions of psql.\n>\n> Hm, it's not normally a goal to keep old psql working against new\n> postgres versions. And there's plenty other issues preventing a v11 psql\n> to work against 12. I'd not let this guide any design decisions.\n>\n\nAh good point. In fact running \"\\d some_table\" from v11's psql against\na v12 database immediately falls over because of the removal of\nrelhasoids from pg_class, so this isn't a valid reason for retaining\naccess to any columns from pg_statistic_ext.\n\nRegards,\nDean\n\n\n",
"msg_date": "Sat, 18 May 2019 10:11:58 +0100",
"msg_from": "Dean Rasheed <dean.a.rasheed@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Multivariate MCV stats can leak data to unprivileged users"
},
{
"msg_contents": "On Sat, 18 May 2019 at 10:11, Dean Rasheed <dean.a.rasheed@gmail.com> wrote:\n>\n> On Fri, 17 May 2019 at 21:29, Andres Freund <andres@anarazel.de> wrote:\n> >\n> > On 2019-05-16 14:28:03 +0100, Dean Rasheed wrote:\n> > > 5). Some columns from pg_statistic_ext have to be made visible for\n> > > psql \\d to work. Basically, it needs to be able to query for the\n> > > existence of extended statistics, but it doesn't need to see the\n> > > actual statistical data. Of course, we could change psql to use the\n> > > view, but this way gives us better backwards compatibility with older\n> > > clients.\n> > >\n> > > This is still going to break compatibility of any user code looking at\n> > > stxndistinct or stxdependencies from pg_statistic_ext, but at least it\n> > > doesn't break old versions of psql.\n> >\n> > Hm, it's not normally a goal to keep old psql working against new\n> > postgres versions. And there's plenty other issues preventing a v11 psql\n> > to work against 12. I'd not let this guide any design decisions.\n> >\n>\n> Ah good point. In fact running \"\\d some_table\" from v11's psql against\n> a v12 database immediately falls over because of the removal of\n> relhasoids from pg_class, so this isn't a valid reason for retaining\n> access to any columns from pg_statistic_ext.\n>\n\nOn the other hand, pg_dump relies on pg_statistic_ext to work out\nwhich extended statistics objects to dump. If we were to change that\nto use pg_stats_ext, then a user dumping a table with RLS using the\n--enable-row-security flag wouldn't get any extended statistics\nobjects, which would be a somewhat surprising result.\n\nThat could be fixed by changing the view to return rows for every\nextended statistics object, nulling out values in columns that the\nuser doesn't have permission to see, in a similar way to Tomas'\noriginal patch. It would have to be modified to do the RLS check in\nthe same place as the privilege checks, rather than in the top-level\nWHERE clause, and we'd probably also have to expose OIDs in addition\nto names, because that's what clients like psql and pg_dump want. To\nme, that feels quite messy though, so I think I'd still vote for\nleaving the first few columns of pg_statistic_ext accessible to\npublic, and not have to change the clients to work differently from\nv12 onwards.\n\nRegards,\nDean",
"msg_date": "Sat, 18 May 2019 11:48:35 +0100",
"msg_from": "Dean Rasheed <dean.a.rasheed@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Multivariate MCV stats can leak data to unprivileged users"
},
{
"msg_contents": "Dean Rasheed <dean.a.rasheed@gmail.com> writes:\n> On the other hand, pg_dump relies on pg_statistic_ext to work out\n> which extended statistics objects to dump. If we were to change that\n> to use pg_stats_ext, then a user dumping a table with RLS using the\n> --enable-row-security flag wouldn't get any extended statistics\n> objects, which would be a somewhat surprising result.\n\nIt seems like what we need here is to have a separation between the\n*definition* of a stats object (which is what pg_dump needs access\nto) and the current actual *data* in it. I'd have expected that\nkeeping those in separate catalogs would be the thing to do, though\nperhaps it's too late for that.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 18 May 2019 11:13:08 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Multivariate MCV stats can leak data to unprivileged users"
},
{
"msg_contents": "On Sat, 18 May 2019 at 16:13, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Dean Rasheed <dean.a.rasheed@gmail.com> writes:\n> > On the other hand, pg_dump relies on pg_statistic_ext to work out\n> > which extended statistics objects to dump. If we were to change that\n> > to use pg_stats_ext, then a user dumping a table with RLS using the\n> > --enable-row-security flag wouldn't get any extended statistics\n> > objects, which would be a somewhat surprising result.\n>\n> It seems like what we need here is to have a separation between the\n> *definition* of a stats object (which is what pg_dump needs access\n> to) and the current actual *data* in it. I'd have expected that\n> keeping those in separate catalogs would be the thing to do, though\n> perhaps it's too late for that.\n>\n\nYeah, with the benefit of hindsight, that would have made sense, but\nthat seems like a pretty big change to be attempting at this stage.\n\nRegards,\nDean\n\n\n",
"msg_date": "Sat, 18 May 2019 16:43:29 +0100",
"msg_from": "Dean Rasheed <dean.a.rasheed@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Multivariate MCV stats can leak data to unprivileged users"
},
{
"msg_contents": "Hi,\n\nOn May 18, 2019 8:43:29 AM PDT, Dean Rasheed <dean.a.rasheed@gmail.com> wrote:\n>On Sat, 18 May 2019 at 16:13, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>>\n>> Dean Rasheed <dean.a.rasheed@gmail.com> writes:\n>> > On the other hand, pg_dump relies on pg_statistic_ext to work out\n>> > which extended statistics objects to dump. If we were to change\n>that\n>> > to use pg_stats_ext, then a user dumping a table with RLS using the\n>> > --enable-row-security flag wouldn't get any extended statistics\n>> > objects, which would be a somewhat surprising result.\n>>\n>> It seems like what we need here is to have a separation between the\n>> *definition* of a stats object (which is what pg_dump needs access\n>> to) and the current actual *data* in it. I'd have expected that\n>> keeping those in separate catalogs would be the thing to do, though\n>> perhaps it's too late for that.\n>>\n>\n>Yeah, with the benefit of hindsight, that would have made sense, but\n>that seems like a pretty big change to be attempting at this stage.\n\nOtoh, having a suboptimal catalog representation that we'll likely have to change in one of the next releases also isn't great. Seems likely that we'll need post beta1 catversion bumps anyway?\n\nAndres\n-- \nSent from my Android device with K-9 Mail. Please excuse my brevity.\n\n\n",
"msg_date": "Sat, 18 May 2019 11:49:06 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Multivariate MCV stats can leak data to unprivileged users"
},
{
"msg_contents": "On Sat, May 18, 2019 at 11:49:06AM -0700, Andres Freund wrote:\n>Hi,\n>\n>On May 18, 2019 8:43:29 AM PDT, Dean Rasheed <dean.a.rasheed@gmail.com> wrote:\n>>On Sat, 18 May 2019 at 16:13, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>>>\n>>> Dean Rasheed <dean.a.rasheed@gmail.com> writes:\n>>> > On the other hand, pg_dump relies on pg_statistic_ext to work out\n>>> > which extended statistics objects to dump. If we were to change\n>>that\n>>> > to use pg_stats_ext, then a user dumping a table with RLS using the\n>>> > --enable-row-security flag wouldn't get any extended statistics\n>>> > objects, which would be a somewhat surprising result.\n>>>\n>>> It seems like what we need here is to have a separation between the\n>>> *definition* of a stats object (which is what pg_dump needs access\n>>> to) and the current actual *data* in it. I'd have expected that\n>>> keeping those in separate catalogs would be the thing to do, though\n>>> perhaps it's too late for that.\n>>>\n>>\n>>Yeah, with the benefit of hindsight, that would have made sense, but\n>>that seems like a pretty big change to be attempting at this stage.\n\n>\n>Otoh, having a suboptimal catalog representation that we'll likely have\n>to change in one of the next releases also isn't great. Seems likely\n>that we'll need post beta1 catversion bumps anyway?\n>\n\nBut that's not an issue introduced by PG12, it works like that even for\nthe extended statistics introduced in PG10.\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services \n\n\n",
"msg_date": "Sat, 18 May 2019 21:00:42 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Multivariate MCV stats can leak data to unprivileged users"
},
{
"msg_contents": "Tomas Vondra <tomas.vondra@2ndquadrant.com> writes:\n> On Sat, May 18, 2019 at 11:49:06AM -0700, Andres Freund wrote:\n>>> On Sat, 18 May 2019 at 16:13, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>>>> It seems like what we need here is to have a separation between the\n>>>> *definition* of a stats object (which is what pg_dump needs access\n>>>> to) and the current actual *data* in it.\n\n>> Otoh, having a suboptimal catalog representation that we'll likely have\n>> to change in one of the next releases also isn't great. Seems likely\n>> that we'll need post beta1 catversion bumps anyway?\n\n> But that's not an issue intruduced by PG12, it works like that even for\n> the extended statistics introduced in PG10.\n\nYeah, but no time like the present to fix it if it's wrong ...\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 18 May 2019 15:45:11 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Multivariate MCV stats can leak data to unprivileged users"
},
{
"msg_contents": "On Sat, May 18, 2019 at 03:45:11PM -0400, Tom Lane wrote:\n>Tomas Vondra <tomas.vondra@2ndquadrant.com> writes:\n>> On Sat, May 18, 2019 at 11:49:06AM -0700, Andres Freund wrote:\n>>>> On Sat, 18 May 2019 at 16:13, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>>>>> It seems like what we need here is to have a separation between the\n>>>>> *definition* of a stats object (which is what pg_dump needs access\n>>>>> to) and the current actual *data* in it.\n>\n>>> Otoh, having a suboptimal catalog representation that we'll likely have\n>>> to change in one of the next releases also isn't great. Seems likely\n>>> that we'll need post beta1 catversion bumps anyway?\n>\n>> But that's not an issue intruduced by PG12, it works like that even for\n>> the extended statistics introduced in PG10.\n>\n>Yeah, but no time like the present to fix it if it's wrong ...\n>\n\nSorry, not sure I understand. Are you saying we should try to rework\nthis before the beta1 release, or that we don't have time to do that?\n\nI think we have four options - rework it before beta1, rework it after\nbeta1, rework it in PG13 and leave it as it is now.\n\nIf the pg_dump thing is the only issue, maybe there's a simple solution\nthat avoids reworking all the catalogs. Not sure. Are there any other reasons\nwhy the current catalog representation would be suboptimal, or do we\nhave some precedent of a catalog split this way? I can't think of any.\n\nregards \n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services \n\n\n",
"msg_date": "Sun, 19 May 2019 01:28:12 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Multivariate MCV stats can leak data to unprivileged users"
},
{
"msg_contents": "Tomas Vondra <tomas.vondra@2ndquadrant.com> writes:\n> On Sat, May 18, 2019 at 03:45:11PM -0400, Tom Lane wrote:\n>> Tomas Vondra <tomas.vondra@2ndquadrant.com> writes:\n>>> But that's not an issue intruduced by PG12, it works like that even for\n>>> the extended statistics introduced in PG10.\n\n>> Yeah, but no time like the present to fix it if it's wrong ...\n\n> Sorry, not sure I understand. Are you saying we should try to rework\n> this before the beta1 release, or that we don't have time to do that?\n\n> I think we have four options - rework it before beta1, rework it after\n> beta1, rework it in PG13 and leave it as it is now.\n\nYup, that's about what the options are. I'm just voting against\n\"change it in v13\". If we're going to change it, then the fewer\nmajor versions that have the bogus definition the better --- and\nsince we're changing that catalog in v12 anyway, users will see\nfewer distinct behaviors if we do this change too.\n\nIt's very possibly too late to get it done before beta1,\nunfortunately. But as Andres noted, post-beta1 catversion\nbumps are hardly unusual, so I do not think \"rework after\nbeta1\" is unacceptable.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 18 May 2019 19:44:20 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Multivariate MCV stats can leak data to unprivileged users"
},
{
"msg_contents": "Greetings,\n\n* Tom Lane (tgl@sss.pgh.pa.us) wrote:\n> Tomas Vondra <tomas.vondra@2ndquadrant.com> writes:\n> > On Sat, May 18, 2019 at 03:45:11PM -0400, Tom Lane wrote:\n> >> Tomas Vondra <tomas.vondra@2ndquadrant.com> writes:\n> >>> But that's not an issue intruduced by PG12, it works like that even for\n> >>> the extended statistics introduced in PG10.\n> \n> >> Yeah, but no time like the present to fix it if it's wrong ...\n> \n> > Sorry, not sure I understand. Are you saying we should try to rework\n> > this before the beta1 release, or that we don't have time to do that?\n> \n> > I think we have four options - rework it before beta1, rework it after\n> > beta1, rework it in PG13 and leave it as it is now.\n> \n> Yup, that's about what the options are. I'm just voting against\n> \"change it in v13\". If we're going to change it, then the fewer\n> major versions that have the bogus definition the better --- and\n> since we're changing that catalog in v12 anyway, users will see\n> fewer distinct behaviors if we do this change too.\n> \n> It's very possibly too late to get it done before beta1,\n> unfortunately. But as Andres noted, post-beta1 catversion\n> bumps are hardly unusual, so I do not think \"rework after\n> beta1\" is unacceptable.\n\nAgreed.\n\nThanks,\n\nStephen",
"msg_date": "Sat, 18 May 2019 19:48:41 -0400",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: Multivariate MCV stats can leak data to unprivileged users"
},
{
"msg_contents": "On Sun, 19 May 2019 at 00:48, Stephen Frost <sfrost@snowman.net> wrote:\n>\n> * Tom Lane (tgl@sss.pgh.pa.us) wrote:\n> > Tomas Vondra <tomas.vondra@2ndquadrant.com> writes:\n> >\n> > > I think we have four options - rework it before beta1, rework it after\n> > > beta1, rework it in PG13 and leave it as it is now.\n> >\n> > Yup, that's about what the options are. I'm just voting against\n> > \"change it in v13\". If we're going to change it, then the fewer\n> > major versions that have the bogus definition the better --- and\n> > since we're changing that catalog in v12 anyway, users will see\n> > fewer distinct behaviors if we do this change too.\n> >\n> > It's very possibly too late to get it done before beta1,\n> > unfortunately. But as Andres noted, post-beta1 catversion\n> > bumps are hardly unusual, so I do not think \"rework after\n> > beta1\" is unacceptable.\n>\n> Agreed.\n>\n\nYes, that makes sense.\n\nI think we shouldn't risk trying to get this into beta1, but let's try\nto get it done as soon as possible after that.\n\nActually, it doesn't appear to be as big a change as I had feared. 
As\na starter for ten, here's a patch doing the basic split, moving the\nextended stats data into a new catalog pg_statistic_ext_data (I'm not\nparticularly wedded to that name, it's just the first name that came\nto mind).\n\nWith this patch the catalogs look like this:\n\n\n\\d pg_statistic_ext\n Table \"pg_catalog.pg_statistic_ext\"\n Column | Type | Collation | Nullable | Default\n--------------+------------+-----------+----------+---------\n oid | oid | | not null |\n stxrelid | oid | | not null |\n stxname | name | | not null |\n stxnamespace | oid | | not null |\n stxowner | oid | | not null |\n stxkeys | int2vector | | not null |\n stxkind | \"char\"[] | | not null |\nIndexes:\n \"pg_statistic_ext_name_index\" UNIQUE, btree (stxname, stxnamespace)\n \"pg_statistic_ext_oid_index\" UNIQUE, btree (oid)\n \"pg_statistic_ext_relid_index\" btree (stxrelid)\n\n\n\\d pg_statistic_ext_data\n Table \"pg_catalog.pg_statistic_ext_data\"\n Column | Type | Collation | Nullable | Default\n-----------------+-----------------+-----------+----------+---------\n stxoid | oid | | not null |\n stxndistinct | pg_ndistinct | C | |\n stxdependencies | pg_dependencies | C | |\n stxmcv | pg_mcv_list | C | |\nIndexes:\n \"pg_statistic_ext_data_stxoid_index\" UNIQUE, btree (stxoid)\n\n\nI opted to create/remove pg_statistic_ext_data tuples at the same time\nas the pg_statistic_ext tuples, in CreateStatistics() /\nRemoveStatisticsById(), so then it's easier to see that they're in a\none-to-one relationship, and other code doesn't need to worry about\nthe data tuple not existing. The other option would be to defer\ninserting the data tuple to ANALYZE.\n\nI couldn't resist moving the code block that declares\npg_statistic_ext's indexes in indexing.h to the right place, assuming\nthat file is (mostly) sorted alphabetically by catalog name. 
This puts\nthe extended stats entries just after the normal stats entries which\nseems preferable.\n\nThis is only a very rough first draft (e.g., no doc updates), but it\npasses all the regression tests.\n\nRegards,\nDean",
"msg_date": "Sun, 19 May 2019 10:49:03 +0100",
"msg_from": "Dean Rasheed <dean.a.rasheed@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Multivariate MCV stats can leak data to unprivileged users"
},
{
"msg_contents": "Dean Rasheed <dean.a.rasheed@gmail.com> writes:\n> I think we shouldn't risk trying to get this into beta1, but let's try\n> to get it done as soon as possible after that.\n\nAgreed.\n\n> \\d pg_statistic_ext\n> Table \"pg_catalog.pg_statistic_ext\"\n> Column | Type | Collation | Nullable | Default\n> --------------+------------+-----------+----------+---------\n> oid | oid | | not null |\n> stxrelid | oid | | not null |\n> stxname | name | | not null |\n> stxnamespace | oid | | not null |\n> stxowner | oid | | not null |\n> stxkeys | int2vector | | not null |\n> stxkind | \"char\"[] | | not null |\n> Indexes:\n> \"pg_statistic_ext_name_index\" UNIQUE, btree (stxname, stxnamespace)\n> \"pg_statistic_ext_oid_index\" UNIQUE, btree (oid)\n> \"pg_statistic_ext_relid_index\" btree (stxrelid)\n\nCheck.\n\n> \\d pg_statistic_ext_data\n> Table \"pg_catalog.pg_statistic_ext_data\"\n> Column | Type | Collation | Nullable | Default\n> -----------------+-----------------+-----------+----------+---------\n> stxoid | oid | | not null |\n> stxndistinct | pg_ndistinct | C | |\n> stxdependencies | pg_dependencies | C | |\n> stxmcv | pg_mcv_list | C | |\n> Indexes:\n> \"pg_statistic_ext_data_stxoid_index\" UNIQUE, btree (stxoid)\n\nI wonder ... another way we could potentially do this is\n\ncreate table pg_statistic_ext_data(\n stxoid oid, -- OID of owning pg_statistic_ext entry\n stxkind char, -- what kind of data\n stxdata bytea -- the data, in some format or other\n);\n\nThe advantage of this way is that we'd not have to rejigger the\ncatalog's rowtype every time we think of a new kind of extended\nstats. The disadvantage is that manual inspection of the contents\nof an entry would become much harder, for lack of any convenient\noutput function. 
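For concreteness, manual inspection under that layout would be limited to\nsomething like this (hypothetical query against the sketch above, not an\nactual catalog):\n\n```sql\n-- Hypothetical: each stats kind is a separate row, and stxdata is an\n-- opaque bytea, so there is no readable per-kind representation.\nSELECT s.stxname, d.stxkind, d.stxdata\nFROM pg_statistic_ext s\nJOIN pg_statistic_ext_data d ON d.stxoid = s.oid\nORDER BY s.stxname, d.stxkind;\n```\n\n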
However, this whole exercise is mostly to prevent\ncasual manual inspection anyway :-(, so I wonder how much we care\nabout that.\n\nAlso, I assume there's going to be a user-accessible view that shows\na join of these tables, but only those rows that correspond to columns\nthe current user can read all of. Should we give that view the name\npg_statistic_ext for maximum backward compatibility? I'm not sure.\npg_dump would probably prefer it if the view is what has a new name,\nbut other clients might like the other way.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 19 May 2019 10:12:28 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Multivariate MCV stats can leak data to unprivileged users"
},
{
"msg_contents": "I wrote:\n> I wonder ... another way we could potentially do this is\n\n> create table pg_statistic_ext_data(\n> stxoid oid, -- OID of owning pg_statistic_ext entry\n> stxkind char, -- what kind of data\n> stxdata bytea -- the data, in some format or other\n> );\n\n> The advantage of this way is that we'd not have to rejigger the\n> catalog's rowtype every time we think of a new kind of extended\n> stats. The disadvantage is that manual inspection of the contents\n> of an entry would become much harder, for lack of any convenient\n> output function.\n\nNo, wait, scratch that. We could fold the three existing types\npg_ndistinct, pg_dependencies, pg_mcv_list into one new type, say\n\"pg_stats_ext_data\", where the actual storage would need to have an\nID field (so we'd waste a byte or two duplicating the externally\nvisible stxkind field inside stxdata). The output function for this\ntype is just a switch over the existing code. The big advantage of\nthis way compared to the current approach is that adding a new\next-stats type requires *zero* work with adding new catalog entries.\nJust add another switch case in pg_stats_ext_data_out() and you're\ndone.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 19 May 2019 10:28:43 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Multivariate MCV stats can leak data to unprivileged users"
},
{
"msg_contents": "On Sun, May 19, 2019 at 10:28:43AM -0400, Tom Lane wrote:\n>I wrote:\n>> I wonder ... another way we could potentially do this is\n>\n>> create table pg_statistic_ext_data(\n>> stxoid oid, -- OID of owning pg_statistic_ext entry\n>> stxkind char, -- what kind of data\n>> stxdata bytea -- the data, in some format or other\n>> );\n>\n>> The advantage of this way is that we'd not have to rejigger the\n>> catalog's rowtype every time we think of a new kind of extended\n>> stats. The disadvantage is that manual inspection of the contents\n>> of an entry would become much harder, for lack of any convenient\n>> output function.\n>\n>No, wait, scratch that. We could fold the three existing types\n>pg_ndistinct, pg_dependencies, pg_mcv_list into one new type, say\n>\"pg_stats_ext_data\", where the actual storage would need to have an\n>ID field (so we'd waste a byte or two duplicating the externally\n>visible stxkind field inside stxdata). The output function for this\n>type is just a switch over the existing code. The big advantage of\n>this way compared to the current approach is that adding a new\n>ext-stats type requires *zero* work with adding new catalog entries.\n>Just add another switch case in pg_stats_ext_data_out() and you're\n>done.\n>\n\nThe annoying thing is that this undoes the protections provided by special\ndata types generated only internally. It's not possible to generate\ne.g. pg_mcv_list values in user code (except for C code, at which point\nall bets are off anyway). By abandoning this and reverting to bytea anyone\ncould craft a bytea and claim it's a statistic value.\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n\n",
"msg_date": "Sun, 19 May 2019 19:38:45 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Multivariate MCV stats can leak data to unprivileged users"
},
{
"msg_contents": "On Sun, 19 May 2019 at 15:28, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> > I wonder ... another way we could potentially do this is\n>\n> > create table pg_statistic_ext_data(\n> > stxoid oid, -- OID of owning pg_statistic_ext entry\n> > stxkind char, -- what kind of data\n> > stxdata bytea -- the data, in some format or other\n> > );\n>\n> > The advantage of this way is that we'd not have to rejigger the\n> > catalog's rowtype every time we think of a new kind of extended\n> > stats. The disadvantage is that manual inspection of the contents\n> > of an entry would become much harder, for lack of any convenient\n> > output function.\n>\n> No, wait, scratch that. We could fold the three existing types\n> pg_ndistinct, pg_dependencies, pg_mcv_list into one new type, say\n> \"pg_stats_ext_data\", where the actual storage would need to have an\n> ID field (so we'd waste a byte or two duplicating the externally\n> visible stxkind field inside stxdata). The output function for this\n> type is just a switch over the existing code. The big advantage of\n> this way compared to the current approach is that adding a new\n> ext-stats type requires *zero* work with adding new catalog entries.\n> Just add another switch case in pg_stats_ext_data_out() and you're\n> done.\n>\n\nThis feels a little over-engineered to me. Presumably there'd be a\ncompound key on (stxoid, stxkind) and we'd have to scan multiple rows\nto get all the applicable stats, whereas currently they're all in one\nrow. And then the user-accessible view would probably need separate\nsub-queries for each stats kind.\n\nIf the point is just to avoid adding columns to the catalog in future\nreleases, I'm not sure it's worth the added complexity. We know that\nwe will probably add histogram stats in a future release. I'm not sure\nhow many more kinds we'll end up adding, but it doesn't seem likely to\nbe a huge number. 
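To illustrate the sub-query point, a user-facing view over that layout\nmight need something like the following (hypothetical sketch; the stxkind\nletters 'd', 'f' and 'm' match the existing ndistinct, dependencies and\nMCV kinds):\n\n```sql\n-- Hypothetical: one sub-query per stats kind to rebuild the\n-- one-row-per-object shape that a multi-column catalog gets for free.\nSELECT s.stxname,\n       (SELECT d.stxdata FROM pg_statistic_ext_data d\n         WHERE d.stxoid = s.oid AND d.stxkind = 'd') AS n_distinct,\n       (SELECT d.stxdata FROM pg_statistic_ext_data d\n         WHERE d.stxoid = s.oid AND d.stxkind = 'f') AS dependencies,\n       (SELECT d.stxdata FROM pg_statistic_ext_data d\n         WHERE d.stxoid = s.oid AND d.stxkind = 'm') AS most_common_vals\nFROM pg_statistic_ext s;\n```\n\n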
I think we'll add far more columns to other catalog\ntables as we add new features to each release.\n\nRegards,\nDean\n\n\n",
"msg_date": "Sun, 19 May 2019 18:39:56 +0100",
"msg_from": "Dean Rasheed <dean.a.rasheed@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Multivariate MCV stats can leak data to unprivileged users"
},
{
"msg_contents": "Tomas Vondra <tomas.vondra@2ndquadrant.com> writes:\n> On Sun, May 19, 2019 at 10:28:43AM -0400, Tom Lane wrote:\n>> No, wait, scratch that. We could fold the three existing types\n>> pg_ndistinct, pg_dependencies, pg_mcv_list into one new type, say\n>> \"pg_stats_ext_data\", where the actual storage would need to have an\n>> ID field (so we'd waste a byte or two duplicating the externally\n>> visible stxkind field inside stxdata). The output function for this\n>> type is just a switch over the existing code. The big advantage of\n>> this way compared to the current approach is that adding a new\n>> ext-stats type requires *zero* work with adding new catalog entries.\n>> Just add another switch case in pg_stats_ext_data_out() and you're\n>> done.\n\n> The annoying thing is that this undoes the protections provided by special\n> data types generated only in internally. It's not possible to generate\n> e.g. pg_mcv_list values in user code (except for C code, at which points\n> all bets are off anyway). By abandoning this and reverting to bytea anyone\n> could craft a bytea and claim it's a statistic value.\n\nThat would have been true of the original proposal, but not of this\nmodified one.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 19 May 2019 14:14:54 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Multivariate MCV stats can leak data to unprivileged users"
},
{
"msg_contents": "On Sun, May 19, 2019 at 02:14:54PM -0400, Tom Lane wrote:\n>Tomas Vondra <tomas.vondra@2ndquadrant.com> writes:\n>> On Sun, May 19, 2019 at 10:28:43AM -0400, Tom Lane wrote:\n>>> No, wait, scratch that. We could fold the three existing types\n>>> pg_ndistinct, pg_dependencies, pg_mcv_list into one new type, say\n>>> \"pg_stats_ext_data\", where the actual storage would need to have an\n>>> ID field (so we'd waste a byte or two duplicating the externally\n>>> visible stxkind field inside stxdata). The output function for this\n>>> type is just a switch over the existing code. The big advantage of\n>>> this way compared to the current approach is that adding a new\n>>> ext-stats type requires *zero* work with adding new catalog entries.\n>>> Just add another switch case in pg_stats_ext_data_out() and you're\n>>> done.\n>\n>> The annoying thing is that this undoes the protections provided by special\n>> data types generated only in internally. It's not possible to generate\n>> e.g. pg_mcv_list values in user code (except for C code, at which points\n>> all bets are off anyway). By abandoning this and reverting to bytea anyone\n>> could craft a bytea and claim it's a statistic value.\n>\n>That would have been true of the original proposal, but not of this\n>modified one.\n>\n\nOh, right. It still has the disadvantage that it obfuscates the actual\ndata stored in the pg_stats_ext_data (or whatever it would be called),\nso e.g. functions would have to do additional checks to make sure it\nactually is the right statistic type. For example pg_mcv_list_items()\ncould not rely on receiving pg_mcv_list values, as per the signature,\nbut would have to check the value.\n\nOf course, I don't expect to have too many such functions, but overall\nthis approach with a single type feels a bit too much like EAV for my taste.\n\nI think Dean is right we should not expect many more statistic types\nthan what we already have - a histogram, and perhaps one or two more. 
So\nI agree with Dean that the current design with separate statistic types\nis not such a big issue.\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services \n\n\n",
"msg_date": "Mon, 20 May 2019 00:44:59 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Multivariate MCV stats can leak data to unprivileged users"
},
{
"msg_contents": "On Sun, 19 May 2019 at 23:45, Tomas Vondra <tomas.vondra@2ndquadrant.com> wrote:\n>\n> Oh, right. It still has the disadvantage that it obfuscates the actual\n> data stored in the pg_stats_ext_data (or whatever would it be called),\n> so e.g. functions would have to do additional checks to make sure it\n> actually is the right statistic type. For example pg_mcv_list_items()\n> could not rely on receiving pg_mcv_list values, as per the signature,\n> but would have to check the value.\n>\n\nYes. In fact, since the user-accessible view would want to expose\ndatatypes specific to the stats kinds rather than bytea or cstring\nvalues, we would need SQL-callable conversion functions for each kind:\n\n* to_pg_ndistinct(pg_extended_stats_ext_data) returns pg_ndistinct\n* to_pg_dependencies(pg_extended_stats_ext_data) returns pg_dependencies\n* to_pg_mcv(pg_extended_stats_ext_data) returns pg_mcv\n* ...\n\nand each of these would throw an error if it weren't given an extended\nstats object of the right kind. Then to extract MCV data, you'd have\nto do pg_mcv_list_items(to_pg_mcv(ext_data)), and presumably there'd\nbe something similar for histograms.\n\nIMO, that's not a nice model, compared to just having columns of the\nright types in the first place.\n\nAlso this model presupposes that all future stats kinds are most\nconveniently represented in a single column, but maybe that won't be\nthe case. It's conceivable that a future stats kind would benefit from\nsplitting its data across multiple columns.\n\n\n> Of course, I don't expect to have too many such functions, but overall\n> this approach with a single type feels a bit too like EAV to my taste.\n>\n\nYes, I think it is an EAV model. I think EAV models do have their\nplace, but I think that's largely where adding new columns is a common\noperation and involves adding little to no extra code. I don't think\neither of those is true for extended stats. 
What we've seen over the\nlast couple of years is that adding each new stats kind is a large\nundertaking, involving lots of new code. That alone is going to limit\njust how many ever get added, and compared to that effort, adding new\ncolumns to the catalog is small fry.\n\nRegards,\nDean\n\n\n",
"msg_date": "Mon, 20 May 2019 08:33:49 +0100",
"msg_from": "Dean Rasheed <dean.a.rasheed@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Multivariate MCV stats can leak data to unprivileged users"
},
{
"msg_contents": "Dean Rasheed <dean.a.rasheed@gmail.com> writes:\n> On Sun, 19 May 2019 at 23:45, Tomas Vondra <tomas.vondra@2ndquadrant.com> wrote:\n>> Oh, right. It still has the disadvantage that it obfuscates the actual\n>> data stored in the pg_stats_ext_data (or whatever would it be called),\n>> so e.g. functions would have to do additional checks to make sure it\n>> actually is the right statistic type. For example pg_mcv_list_items()\n>> could not rely on receiving pg_mcv_list values, as per the signature,\n>> but would have to check the value.\n\n> Yes. In fact, since the user-accessible view would want to expose\n> datatypes specific to the stats kinds rather than bytea or cstring\n> values, we would need SQL-callable conversion functions for each kind:\n\nIt seems like people are willfully misunderstanding my suggestion.\nYou'd only need *one* conversion function, which would look at the\nembedded ID field and then emit the appropriate text representation.\nI don't see a reason why we'd have the separate pg_ndistinct etc. types\nany more at all.\n\n> Also this model presupposes that all future stats kinds are most\n> conveniently represented in a single column, but maybe that won't be\n> the case. It's conceivable that a future stats kind would benefit from\n> splitting its data across multiple columns.\n\nHm, that's possible I suppose, but it seems a little far-fetched.\nYou could equally well argue that pg_ndistinct etc. should have been\nbroken down into smaller types, but we didn't.\n\n> Yes, I think it is an EAV model. I think EAV models do have their\n> place, but I think that's largely where adding new columns is a common\n> operation and involves adding little to no extra code. I don't think\n> either of those is true for extended stats. What we've seen over the\n> last couple of years is that adding each new stats kind is a large\n> undertaking, involving lots of new code. 
That alone is going to limit\n> just how many ever get added, and compared to that effort, adding new\n> columns to the catalog is small fry.\n\nI can't argue with that --- the make-work is just a small part of the\ntotal. But it's still make-work.\n\nAnyway, it was just a suggestion, and if people don't like it that's\nfine. But I don't want it to be rejected on the basis of false\narguments.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 20 May 2019 09:32:24 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Multivariate MCV stats can leak data to unprivileged users"
},
{
"msg_contents": "On Mon, May 20, 2019 at 09:32:24AM -0400, Tom Lane wrote:\n>Dean Rasheed <dean.a.rasheed@gmail.com> writes:\n>> On Sun, 19 May 2019 at 23:45, Tomas Vondra <tomas.vondra@2ndquadrant.com> wrote:\n>>> Oh, right. It still has the disadvantage that it obfuscates the actual\n>>> data stored in the pg_stats_ext_data (or whatever would it be called),\n>>> so e.g. functions would have to do additional checks to make sure it\n>>> actually is the right statistic type. For example pg_mcv_list_items()\n>>> could not rely on receiving pg_mcv_list values, as per the signature,\n>>> but would have to check the value.\n>\n>> Yes. In fact, since the user-accessible view would want to expose\n>> datatypes specific to the stats kinds rather than bytea or cstring\n>> values, we would need SQL-callable conversion functions for each kind:\n>\n>It seems like people are willfully misunderstanding my suggestion.\n>You'd only need *one* conversion function, which would look at the\n>embedded ID field and then emit the appropriate text representation.\n>I don't see a reason why we'd have the separate pg_ndistinct etc. types\n>any more at all.\n>\n\nThat would however require having input functions, which we currently\ndon't have. Otherwise people could not process the statistic values using\nfunctions like pg_mcv_list_items(). Which I think is useful.\n\nOf course, we could add input functions, but there was a reason for not\nhaving them (similarly to pg_node_tree). \n\n>> Also this model presupposes that all future stats kinds are most\n>> conveniently represented in a single column, but maybe that won't be\n>> the case. It's conceivable that a future stats kind would benefit from\n>> splitting its data across multiple columns.\n>\n>Hm, that's possible I suppose, but it seems a little far-fetched.\n>You could equally well argue that pg_ndistinct etc. should have been\n>broken down into smaller types, but we didn't.\n>\n\nTrue. 
I can't rule out adding such a \"split\" statistic type, but don't think\nit's very likely. The extended statistic values tend to be complex and\neasier to represent in a single value.\n\n>> Yes, I think it is an EAV model. I think EAV models do have their\n>> place, but I think that's largely where adding new columns is a common\n>> operation and involves adding little to no extra code. I don't think\n>> either of those is true for extended stats. What we've seen over the\n>> last couple of years is that adding each new stats kind is a large\n>> undertaking, involving lots of new code. That alone is going to limit\n>> just how many ever get added, and compared to that effort, adding new\n>> columns to the catalog is small fry.\n>\n>I can't argue with that --- the make-work is just a small part of the\n>total. But it's still make-work.\n>\n>Anyway, it was just a suggestion, and if people don't like it that's\n>fine. But I don't want it to be rejected on the basis of false\n>arguments.\n>\n\nSure.\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n\n",
"msg_date": "Mon, 20 May 2019 16:45:17 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Multivariate MCV stats can leak data to unprivileged users"
},
{
"msg_contents": "On Mon, 20 May 2019 at 14:32, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Dean Rasheed <dean.a.rasheed@gmail.com> writes:\n> > On Sun, 19 May 2019 at 23:45, Tomas Vondra <tomas.vondra@2ndquadrant.com> wrote:\n> >> Oh, right. It still has the disadvantage that it obfuscates the actual\n> >> data stored in the pg_stats_ext_data (or whatever would it be called),\n> >> so e.g. functions would have to do additional checks to make sure it\n> >> actually is the right statistic type. For example pg_mcv_list_items()\n> >> could not rely on receiving pg_mcv_list values, as per the signature,\n> >> but would have to check the value.\n>\n> > Yes. In fact, since the user-accessible view would want to expose\n> > datatypes specific to the stats kinds rather than bytea or cstring\n> > values, we would need SQL-callable conversion functions for each kind:\n>\n> It seems like people are willfully misunderstanding my suggestion.\n\nI'm more than capable of inadvertently misunderstanding, without the\nneed to willfully do so :-)\n\n> You'd only need *one* conversion function, which would look at the\n> embedded ID field and then emit the appropriate text representation.\n> I don't see a reason why we'd have the separate pg_ndistinct etc. types\n> any more at all.\n\nHmm, OK. So then would you also make the user-accessible view agnostic\nabout the kinds of stats supported in the same way, returning zero or\nmore rows per STATISTICS object, depending on how many kinds of stats\nhave been built? 
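Something like this, say (hypothetical sketch only; assumes a row-per-kind\nlayout plus the single conversion function you describe, here spelled as a\ncast to text):\n\n```sql\n-- Hypothetical kind-agnostic view: zero or more rows per statistics\n-- object, one per stats kind actually built; stxdata::text relies on\n-- the single output function dispatching on the embedded kind ID.\nSELECT s.stxnamespace::regnamespace AS schemaname,\n       s.stxname                    AS statistics_name,\n       d.stxkind                    AS kind,\n       d.stxdata::text              AS stats_value\nFROM pg_statistic_ext s\nJOIN pg_statistic_ext_data d ON d.stxoid = s.oid;\n```\n\n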
That would have the advantage of never needing to\nchange the view definition again, as more stats kinds are supported.\n\nWe'd need to change pg_mcv_list_items() to accept a pg_stats_ext_data\nvalue rather than a pg_mcv value, and it would be the user's\nresponsibility to call it if they wanted to see the contents of the\nMCV list (I was originally thinking that we'd include a call to\npg_mcv_list_items() in the view definition, so that it produced\nfriendlier looking output, since the default textual representation of\nan MCV list is completely opaque, unlike the other stats kinds).\nActually, I can see another advantage to not including\npg_mcv_list_items() in the view definition -- in the future, we may\ndream up a better version of pg_mcv_list_items(), like say one that\nproduced JSON, and then we'd regret using the current function.\n\n> Anyway, it was just a suggestion, and if people don't like it that's\n> fine. But I don't want it to be rejected on the basis of false\n> arguments.\n\nTo be clear, I'm not intentionally rejecting your idea. I'm merely\ntrying to fully understand the implications.\n\nAt this stage, perhaps it would be helpful to prototype something for\ncomparison.\n\nRegards,\nDean\n\n\n",
"msg_date": "Mon, 20 May 2019 16:09:24 +0100",
"msg_from": "Dean Rasheed <dean.a.rasheed@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Multivariate MCV stats can leak data to unprivileged users"
},
{
"msg_contents": "On Mon, May 20, 2019 at 04:09:24PM +0100, Dean Rasheed wrote:\n>On Mon, 20 May 2019 at 14:32, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>>\n>> Dean Rasheed <dean.a.rasheed@gmail.com> writes:\n>> > On Sun, 19 May 2019 at 23:45, Tomas Vondra <tomas.vondra@2ndquadrant.com> wrote:\n>> >> Oh, right. It still has the disadvantage that it obfuscates the actual\n>> >> data stored in the pg_stats_ext_data (or whatever would it be called),\n>> >> so e.g. functions would have to do additional checks to make sure it\n>> >> actually is the right statistic type. For example pg_mcv_list_items()\n>> >> could not rely on receiving pg_mcv_list values, as per the signature,\n>> >> but would have to check the value.\n>>\n>> > Yes. In fact, since the user-accessible view would want to expose\n>> > datatypes specific to the stats kinds rather than bytea or cstring\n>> > values, we would need SQL-callable conversion functions for each kind:\n>>\n>> It seems like people are willfully misunderstanding my suggestion.\n>\n>I'm more than capable of inadvertently misunderstanding, without the\n>need to willfully do so :-)\n>\n>> You'd only need *one* conversion function, which would look at the\n>> embedded ID field and then emit the appropriate text representation.\n>> I don't see a reason why we'd have the separate pg_ndistinct etc. types\n>> any more at all.\n>\n>Hmm, OK. So then would you also make the user-accessible view agnostic\n>about the kinds of stats supported in the same way, returning zero or\n>more rows per STATISTICS object, depending on how many kinds of stats\n>have been built? That would have the advantage of never needing to\n>change the view definition again, as more stats kinds are supported.\n>\n\nIf I got Tom's proposal right, there'd be only one statistic value in\neach pg_stats_ext_data value. It'd be a very thin wrapper, essentially\njust the value itself + type flag. 
So for example if you did\n\n CREATE STATISTICS s (ndistinct, mcv, dependencies) ...\n\nyou'd get three rows in pg_statistic_ext_data (assuming all the stats\nget actually built).\n\n>We'd need to change pg_mcv_list_items() to accept a pg_stats_ext_data\n>value rather than a pg_mcv value, and it would be the user's\n>responsibility to call it if they wanted to see the contents of the\n>MCV list (I was originally thinking that we'd include a call to\n>pg_mcv_list_items() in the view definition, so that it produced\n>friendlier looking output, since the default textual representation of\n>an MCV list is completely opaque, unlike the other stats kinds).\n>Actually, I can see another advantage to not including\n>pg_mcv_list_items() in the view definition -- in the future, we may\n>dream up a better version of pg_mcv_list_items(), like say one that\n>produced JSON, and then we'd regret using the current function.\n>\n\nYeah. As I said, it obfuscates the \"actual\" type of the stats value, so\nwe can no longer rely on the function machinery to verify the type. All\nfunctions dealing with the \"wrapper\" type would have to verify it\nactually contains the right statistic type.\n\n>> Anyway, it was just a suggestion, and if people don't like it that's\n>> fine. But I don't want it to be rejected on the basis of false\n>> arguments.\n>\n>To be clear, I'm not intentionally rejecting your idea. I'm merely\n>trying to fully understand the implications.\n>\n>At this stage, perhaps it would be helpful to prototype something for\n>comparison.\n>\n\nI'll look into that. I'll try to whip something up before pgcon, but I\ncan't guarantee that :-(\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services \n\n\n",
"msg_date": "Mon, 20 May 2019 20:17:14 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Multivariate MCV stats can leak data to unprivileged users"
},
{
"msg_contents": "Hi,\n\nAttached are three patches tweaking the stats - two were already posted\nin this thread, the third one is just updating docs.\n\n1) 0001 - split pg_statistic_ext into definition + data\n\nThis is pretty much the patch Dean posted some time ago, rebased to\ncurrent master (fixing just minor pgindent bitrot).\n\n2) 0002 - update sgml docs to reflect changes from 0001\n\n3) 0003 - define pg_stats_ext view, similar to pg_stats\n\n\nThe question is whether we want to also redesign pg_statistic_ext_data\nper Tom's proposal (more about that later), but I think we can treat\nthat as an additional step on top of 0001. So I propose we get those\nchanges committed, and then perhaps also switch the data table to the\nEAV model.\n\nBarring objections, I'll do that early next week, after cleaning up\nthose patches a bit more.\n\nOne thing I think we should fix is naming of the attributes in the 0001\npatch. At the moment both catalogs use \"stx\" prefix - e.g. \"stxkind\" is\nin pg_statistic_ext, and \"stxmcv\" is in pg_statistic_ext_data. We should\nprobably switch to \"stxd\" in the _data catalog. Opinions?\n\nNow, back to the proposal to split the _data catalog rows to EAV form,\nwith a new data type replacing the multiple types we have at the moment.\nI've started hacking on it today, but the more I work on it the less\nuseful it seems to me.\n\nMy understanding is that with that approach we'd replace the _data\ncatalog (which currently has one column per statistic type, with a\nseparate data type) with 1:M generic rows, with a generic data type.\nThat is, we'd replace this\n\n CREATE TABLE pg_statistic_ext_data (\n stxoid OID,\n stxdependencies pg_dependencies,\n stxndistinct pg_ndistinct,\n stxmcv pg_mcv_list,\n ... histograms ...\n );\n\nwith something like this:\n\n CREATE TABLE pg_statistiex_ext_data (\n stxoid OID,\n stxkind CHAR,\n stxdata pg_statistic_ext_type\n );\n\nwhere pg_statistic_ext would store all existing statistic types. 
along\nwith a \"flag\" saying which value it actually stored (essentially a copy\nof the stxkind column, which we however need to lookup a statistic of a\ncertain type, without having to detoast the statistic itself).\n\nAs I mentioned before, I kinda dislike the fact that this obfuscates the\nactual statistic type by hiding it behing the \"wrapper\" type.\n\nThe other thing is that we have to deal with 1:M relationship every time\nwe (re)build the statistics, or when we need to access them. Now, it may\nnot be a huge amount of code, but it just seems unnecessary. It would\nmake sense if we planned to add large number of additional statistic\ntypes, but that seems unlikely - I personally can think of maybe one new\nstatistic type, but that's about it.\n\nI'll continue working on it and I'll share the results early next week,\nafter playing with it a bit, but I think we should get the existing\npatches committed and then continue discussing this as an additional\nimprovement.\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services",
"msg_date": "Thu, 6 Jun 2019 22:33:08 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Multivariate MCV stats can leak data to unprivileged users"
},
{
"msg_contents": "On Thu, 6 Jun 2019 at 21:33, Tomas Vondra <tomas.vondra@2ndquadrant.com> wrote:\n>\n> Hi,\n>\n> Attached are three patches tweaking the stats - two were already posted\n> in this thread, the third one is just updating docs.\n>\n> 1) 0001 - split pg_statistic_ext into definition + data\n>\n> This is pretty much the patch Dean posted some time ago, rebased to\n> current master (fixing just minor pgindent bitrot).\n>\n> 2) 0002 - update sgml docs to reflect changes from 0001\n>\n> 3) 0003 - define pg_stats_ext view, similar to pg_stats\n>\n\nSeems reasonable on a quick read-through, except I spotted a bug in\nthe view (my fault) -- the statistics_owner column should come from\ns.stxowner rather than c.relowner.\n\n\n> The question is whether we want to also redesign pg_statistic_ext_data\n> per Tom's proposal (more about that later), but I think we can treat\n> that as an additional step on top of 0001. So I propose we get those\n> changes committed, and then perhaps also switch the data table to the\n> EAV model.\n>\n> Barring objections, I'll do that early next week, after cleaning up\n> those patches a bit more.\n>\n> One thing I think we should fix is naming of the attributes in the 0001\n> patch. At the moment both catalogs use \"stx\" prefix - e.g. \"stxkind\" is\n> in pg_statistic_ext, and \"stxmcv\" is in pg_statistic_ext_data. We should\n> probably switch to \"stxd\" in the _data catalog. Opinions?\n>\n\nYes, that makes sense. 
Especially when joining the 2 tables, since it\nmakes it more obvious which table a given column is coming from in a\njoin clause.\n\n\n> Now, back to the proposal to split the _data catalog rows to EAV form,\n> with a new data type replacing the multiple types we have at the moment.\n> I've started hacking on it today, but the more I work on it the less\n> useful it seems to me.\n>\n> My understanding is that with that approach we'd replace the _data\n> catalog (which currently has one column per statistic type, with a\n> separate data type) with 1:M generic rows, with a generic data type.\n> That is, we'd replace this\n>\n> CREATE TABLE pg_statistic_ext_data (\n> stxoid OID,\n> stxdependencies pg_dependencies,\n> stxndistinct pg_ndistinct,\n> stxmcv pg_mcv_list,\n> ... histograms ...\n> );\n>\n> with something like this:\n>\n> CREATE TABLE pg_statistiex_ext_data (\n> stxoid OID,\n> stxkind CHAR,\n> stxdata pg_statistic_ext_type\n> );\n>\n> where pg_statistic_ext would store all existing statistic types. along\n> with a \"flag\" saying which value it actually stored (essentially a copy\n> of the stxkind column, which we however need to lookup a statistic of a\n> certain type, without having to detoast the statistic itself).\n>\n> As I mentioned before, I kinda dislike the fact that this obfuscates the\n> actual statistic type by hiding it behing the \"wrapper\" type.\n>\n> The other thing is that we have to deal with 1:M relationship every time\n> we (re)build the statistics, or when we need to access them. Now, it may\n> not be a huge amount of code, but it just seems unnecessary. 
It would\n> make sense if we planned to add large number of additional statistic\n> types, but that seems unlikely - I personally can think of maybe one new\n> statistic type, but that's about it.\n>\n> I'll continue working on it and I'll share the results early next week,\n> after playing with it a bit, but I think we should get the existing\n> patches committed and then continue discussing this as an additional\n> improvement.\n>\n\nI wonder ... would it be completely crazy to just use a JSON column to\nstore the extended stats data?\n\nIt wouldn't be as compact as your representation, but it would allow\nfor future stats kinds without changing the catalog definitions, and\nit wouldn't obfuscate the stats types. You could keep the 1:1\nrelationship, and have top-level JSON keys for each stats kind built,\nand you wouldn't need the pg_mcv_list_items() function because you\ncould just put the MCV data in JSON arrays, which would be much more\ntransparent, and would make the user-accessible view much simpler. One\ncould also imagine writing regression tests that checked for specific\nexpected MCV values like \"stxdata->'mcv'->'frequency'->0\".\n\nJust a thought.\n\nRegards,\nDean\n\n\n",
"msg_date": "Mon, 10 Jun 2019 14:32:04 +0100",
"msg_from": "Dean Rasheed <dean.a.rasheed@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Multivariate MCV stats can leak data to unprivileged users"
},
{
"msg_contents": "On Fri, Jun 7, 2019 at 4:33 AM Tomas Vondra\n<tomas.vondra@2ndquadrant.com> wrote:\n> 2) 0002 - update sgml docs to reflect changes from 0001\n\nThere is some copypasta here in the new section referring to the old catalog:\n\n+ <sect1 id=\"catalog-pg-statistic-ext-data\">\n+ <title><structname>pg_statistic_ext_data</structname></title>\n+\n+ <indexterm zone=\"catalog-pg-statistic-ext\">\n+ <primary>pg_statistic_ext</primary>\n+ </indexterm>\n+\n+ <para>\n+ The catalog <structname>pg_statistic_ext</structname>\n+ holds extended planner statistics.\n+ Each row in this catalog corresponds to a <firstterm>statistics\nobject</firstterm>\n+ created with <xref linkend=\"sql-createstatistics\"/>.\n+ </para>\n\nAnd a minor stylistic nit -- it might be good to capitalize \"JOIN\" and\n\"ON\" in the queries in the docs and tests.\n\n> One thing I think we should fix is naming of the attributes in the 0001\n> patch. At the moment both catalogs use \"stx\" prefix - e.g. \"stxkind\" is\n> in pg_statistic_ext, and \"stxmcv\" is in pg_statistic_ext_data. We should\n> probably switch to \"stxd\" in the _data catalog. Opinions?\n\nThat's probably a good idea.\n\n-- \nJohn Naylor https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Tue, 11 Jun 2019 14:04:34 +0800",
"msg_from": "John Naylor <john.naylor@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Multivariate MCV stats can leak data to unprivileged users"
},
{
"msg_contents": "On Mon, Jun 10, 2019 at 02:32:04PM +0100, Dean Rasheed wrote:\n>On Thu, 6 Jun 2019 at 21:33, Tomas Vondra <tomas.vondra@2ndquadrant.com> wrote:\n>>\n>> Hi,\n>>\n>> Attached are three patches tweaking the stats - two were already posted\n>> in this thread, the third one is just updating docs.\n>>\n>> 1) 0001 - split pg_statistic_ext into definition + data\n>>\n>> This is pretty much the patch Dean posted some time ago, rebased to\n>> current master (fixing just minor pgindent bitrot).\n>>\n>> 2) 0002 - update sgml docs to reflect changes from 0001\n>>\n>> 3) 0003 - define pg_stats_ext view, similar to pg_stats\n>>\n>\n>Seems reasonable on a quick read-through, except I spotted a bug in\n>the view (my fault) -- the statistics_owner column should come from\n>s.stxowner rather than c.relowner.\n>\n>\n>> The question is whether we want to also redesign pg_statistic_ext_data\n>> per Tom's proposal (more about that later), but I think we can treat\n>> that as an additional step on top of 0001. So I propose we get those\n>> changes committed, and then perhaps also switch the data table to the\n>> EAV model.\n>>\n>> Barring objections, I'll do that early next week, after cleaning up\n>> those patches a bit more.\n>>\n>> One thing I think we should fix is naming of the attributes in the 0001\n>> patch. At the moment both catalogs use \"stx\" prefix - e.g. \"stxkind\" is\n>> in pg_statistic_ext, and \"stxmcv\" is in pg_statistic_ext_data. We should\n>> probably switch to \"stxd\" in the _data catalog. Opinions?\n>>\n>\n>Yes, that makes sense. Especially when joining the 2 tables, since it\n>makes it more obvious which table a given column is coming from in a\n>join clause.\n>\n\nOK, attached are patches fixing the issues reported by you and John\nNaylor, and squashing the parts into just two patches (catalog split and\npg_stats_ext). 
Barring objections, I'll push those tomorrow.\n\nI've renamed columns in the _data catalog from 'stx' to 'stxd', which I\nthink is appropriate given the \"data\" in catalog name.\n\nI'm wondering if we should change the examples in SGML docs (say, in\nplanstats.sgml) to use the new pg_stats_ext view, instead of querying the\ncatalogs directly. I've tried doing that, but I found the results less\nreadable than what we currently have (especially for the MCV list, where\nit'd require matching elements in multiple arrays). So I've left this\nunchanged for now.\n\n>\n>> Now, back to the proposal to split the _data catalog rows to EAV form,\n>> with a new data type replacing the multiple types we have at the moment.\n>> I've started hacking on it today, but the more I work on it the less\n>> useful it seems to me.\n>>\n>> My understanding is that with that approach we'd replace the _data\n>> catalog (which currently has one column per statistic type, with a\n>> separate data type) with 1:M generic rows, with a generic data type.\n>> That is, we'd replace this\n>>\n>> CREATE TABLE pg_statistic_ext_data (\n>> stxoid OID,\n>> stxdependencies pg_dependencies,\n>> stxndistinct pg_ndistinct,\n>> stxmcv pg_mcv_list,\n>> ... histograms ...\n>> );\n>>\n>> with something like this:\n>>\n>> CREATE TABLE pg_statistiex_ext_data (\n>> stxoid OID,\n>> stxkind CHAR,\n>> stxdata pg_statistic_ext_type\n>> );\n>>\n>> where pg_statistic_ext would store all existing statistic types. 
along\n>> with a \"flag\" saying which value it actually stored (essentially a copy\n>> of the stxkind column, which we however need to lookup a statistic of a\n>> certain type, without having to detoast the statistic itself).\n>>\n>> As I mentioned before, I kinda dislike the fact that this obfuscates the\n>> actual statistic type by hiding it behing the \"wrapper\" type.\n>>\n>> The other thing is that we have to deal with 1:M relationship every time\n>> we (re)build the statistics, or when we need to access them. Now, it may\n>> not be a huge amount of code, but it just seems unnecessary. It would\n>> make sense if we planned to add large number of additional statistic\n>> types, but that seems unlikely - I personally can think of maybe one new\n>> statistic type, but that's about it.\n>>\n>> I'll continue working on it and I'll share the results early next week,\n>> after playing with it a bit, but I think we should get the existing\n>> patches committed and then continue discussing this as an additional\n>> improvement.\n>>\n>\n>I wonder ... would it be completely crazy to just use a JSON column to\n>store the extended stats data?\n>\n>It wouldn't be as compact as your representation, but it would allow\n>for future stats kinds without changing the catalog definitions, and\n>it wouldn't obfuscate the stats types. You could keep the 1:1\n>relationship, and have top-level JSON keys for each stats kind built,\n>and you wouldn't need the pg_mcv_list_items() function because you\n>could just put the MCV data in JSON arrays, which would be much more\n>transparent, and would make the user-accessible view much simpler. One\n>could also imagine writing regression tests that checked for specific\n>expected MCV values like \"stxdata->'mcv'->'frequency'->0\".\n>\n\nYou mean storing it as JSONB, I presume?\n\nI've actually considered that at some point, but eventually concluded it's\nnot a good match. 
I mean, JSON(B) is pretty versatile and can be whacked\nto store pretty much anything, but it has various limitations - e.g. it\ndoes not support arbitrary data types, so we'd have to store a lot of\nstuff as text (through input/output functions). That doesn't seem very\nnice, I guess.\n\nIf we want JSONB output, that should not be difficult to generate. But I\nguess your point was about generic storage format.\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services",
"msg_date": "Thu, 13 Jun 2019 19:37:45 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Multivariate MCV stats can leak data to unprivileged users"
},
{
"msg_contents": "On Thu, Jun 13, 2019 at 07:37:45PM +0200, Tomas Vondra wrote:\n> ...\n>\n>OK, attached are patches fixing the issues reported by you and John\n>Naylor, and squashing the parts into just two patches (catalog split and\n>pg_stats_ext). Barring objections, I'll push those tomorrow.\n>\n>I've renamed columns in the _data catalog from 'stx' to 'stxd', which I\n>think is appropriate given the \"data\" in catalog name.\n>\n>I'm wondering if we should change the examples in SGML docs (say, in\n>planstats.sgml) to use the new pg_stats_ext view, instead of querying the\n>catalogs directly. I've tried doing that, but I found the results less\n>readable than what we currently have (especially for the MCV list, where\n>it'd require matching elements in multiple arrays). So I've left this\n>unchanged for now.\n>\n\nI've pushed those changes, after adding docs for the pg_stats_ext view.\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services \n\n\n",
"msg_date": "Sun, 16 Jun 2019 01:24:28 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Multivariate MCV stats can leak data to unprivileged users"
},
{
"msg_contents": "On Mon, 13 May 2019 at 23:36, Tomas Vondra <tomas.vondra@2ndquadrant.com> wrote:\n>\n> On Fri, May 10, 2019 at 10:19:44AM +0100, Dean Rasheed wrote:\n> >While working on 1aebfbea83c, I noticed that the new multivariate MCV\n> >stats feature suffers from the same problem, and also the original\n> >problems that were fixed in e2d4ef8de8 and earlier --- namely that a\n> >user can see values in the MCV lists that they shouldn't see (values\n> >from tables that they don't have privileges on).\n> >\n> >I think there are 2 separate issues here:\n> >\n> >1). The table pg_statistic_ext is accessible to anyone, so any user\n> >can see the MCV lists of any table. I think we should give this the\n> >same treatment as pg_statistic, and hide it behind a security barrier\n> >view, revoking public access from the table.\n> >\n> >2). The multivariate MCV stats planner code can be made to invoke\n> >user-defined operators, so a user can create a leaky operator and use\n> >it to reveal data values from the MCV lists even if they have no\n> >permissions on the table.\n> >\n> >Attached is a draft patch to fix (2), which hooks into\n> >statext_is_compatible_clause().\n> >\n>\n> I think that patch is good.\n>\n\nI realised that we forgot to push this second part, so I've just done so.\n\nRegards,\nDean\n\n\n",
"msg_date": "Sun, 23 Jun 2019 18:56:53 +0100",
"msg_from": "Dean Rasheed <dean.a.rasheed@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Multivariate MCV stats can leak data to unprivileged users"
},
{
"msg_contents": "On Sun, Jun 23, 2019 at 06:56:53PM +0100, Dean Rasheed wrote:\n>On Mon, 13 May 2019 at 23:36, Tomas Vondra <tomas.vondra@2ndquadrant.com> wrote:\n>>\n>> On Fri, May 10, 2019 at 10:19:44AM +0100, Dean Rasheed wrote:\n>> >While working on 1aebfbea83c, I noticed that the new multivariate MCV\n>> >stats feature suffers from the same problem, and also the original\n>> >problems that were fixed in e2d4ef8de8 and earlier --- namely that a\n>> >user can see values in the MCV lists that they shouldn't see (values\n>> >from tables that they don't have privileges on).\n>> >\n>> >I think there are 2 separate issues here:\n>> >\n>> >1). The table pg_statistic_ext is accessible to anyone, so any user\n>> >can see the MCV lists of any table. I think we should give this the\n>> >same treatment as pg_statistic, and hide it behind a security barrier\n>> >view, revoking public access from the table.\n>> >\n>> >2). The multivariate MCV stats planner code can be made to invoke\n>> >user-defined operators, so a user can create a leaky operator and use\n>> >it to reveal data values from the MCV lists even if they have no\n>> >permissions on the table.\n>> >\n>> >Attached is a draft patch to fix (2), which hooks into\n>> >statext_is_compatible_clause().\n>> >\n>>\n>> I think that patch is good.\n>>\n>\n>I realised that we forgot to push this second part, so I've just done so.\n>\n\nWhoops! Too many patches in this thread. Thanks for noticing.\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services \n\n\n",
"msg_date": "Sun, 23 Jun 2019 22:04:20 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Multivariate MCV stats can leak data to unprivileged users"
}
] |
[
{
"msg_contents": "Hi!\nMy name is Dhruvi Vadalia.\nI'm a 3rd year Computer Engineer and I'm applying for GSoD.\nI looked through your project ideas for GSoD and I'm interested in applying\nfor your organisation. I've worked on PostgreSQL for college projects and\nhence I already have a fair idea about the available documentation and how\nit works.\n\nI've looked through all the ideas you guys have put up for the project\nideas, and I'm interested in working on:\n- Compose cheat sheets for common tools/operations in PostgreSQL\n- Write a PostgreSQL technical mumbo-jumbo dictionary\n- Add documentation for JDBC driver CopyManager API (pgJDBC)\nBut I actually don't mind working on any of the project ideas put up in the\nlist.\n\nAttached herewith is my resume for your reference.\n\nMy *experience* in *technical writing* includes *project reports for\ncollege projects* and technical documentation such as *SRS*, etc. It was\nalso one of my duties to write the* event reports* for the technical\ncouncil, KJSCE-CodeCell <https://www.kjscecodecell.com/>, that I was a part\nof.\n\nIs there any sort of task that you would like me to do for applying?\nOr any sort of guidelines I have to follow?\n\nLink to my GitHub profile- diggy-19 <https://github.com/diggy-19>\n\nPlease let me know.\n\nRegards,\nDhruvi Vadalia.\n\n-- \n\n <https://www.somaiya.edu> <http://www.somaiya-ayurvihar.org> \n<http://nareshwadi.org> <http://somaiya.com> <http://www.helpachild.in> \n<http://nareshwadi.org>",
"msg_date": "Fri, 10 May 2019 17:12:35 +0530",
"msg_from": "DHRUVI VADALIA <dhruvi.vadalia@somaiya.edu>",
"msg_from_op": true,
"msg_subject": "Regarding GSoD"
},
{
"msg_contents": "Greetings,\n\n* DHRUVI VADALIA (dhruvi.vadalia@somaiya.edu) wrote:\n> I'm a 3rd year Computer Engineer and I'm applying for GSoD.\n\n[...]\n\n> My *experience* in *technical writing* includes *project reports for\n> college projects* and technical documentation such as *SRS*, etc. It was\n> also one of my duties to write the* event reports* for the technical\n> council, KJSCE-CodeCell <https://www.kjscecodecell.com/>, that I was a part\n> of.\n\nBased on my understanding, GSoD is not intended as an internship or\nsimilar program (in other words, it's not like GSoC), but is for\nexperienced technical writers. I'd suggest you discuss with Google if\nit would be appropriate for you.\n\n> Is there any sort of task that you would like me to do for applying?\n> Or any sort of guidelines I have to follow?\n\nThe PG GSoD wiki page that's linked to from GSoD asks for proposals to\nbe sent to the pgsql-docs mailing list, not here.\n\nThanks,\n\nStephen",
"msg_date": "Mon, 20 May 2019 09:44:56 -0400",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: Regarding GSoD"
}
] |
[
{
"msg_contents": "Obviously, this macro does not do what it claims to do:\n\n/*\n * check to see if the ATT'th bit of an array of 8-bit bytes is set.\n */\n#define att_isnull(ATT, BITS) (!((BITS)[(ATT) >> 3] & (1 << ((ATT) & 0x07))))\n\nOK, I lied. It's not at all obvious, or at least it wasn't to me.\nThe macro actually tests whether the bit is *unset*, because there's\nan exclamation point in there. I think the comment should be updated\nto something like \"Check a tuple's null bitmap to determine whether\nthe attribute is null. Note that a 0 in the null bitmap indicates a\nnull, while 1 indicates non-null.\"\n\nThere is some kind of broader confusion here, I think, because we\nrefer in many places to the \"null bitmap\" but it's actually not a\nbitmap of which attributes are null but rather of which attributes are\nnot null. That is confusing in and of itself, and it's also not very\nintuitive that it uses exactly the opposite convention from what we do\nwith datum/isnull arrays.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Fri, 10 May 2019 10:20:05 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "att_isnull"
},
{
"msg_contents": "On 2019-May-10, Robert Haas wrote:\n\n> Obviously, this macro does not do what it claims to do:\n> \n> /*\n> * check to see if the ATT'th bit of an array of 8-bit bytes is set.\n> */\n> #define att_isnull(ATT, BITS) (!((BITS)[(ATT) >> 3] & (1 << ((ATT) & 0x07))))\n> \n> OK, I lied. It's not at all obvious, or at least it wasn't to me.\n> The macro actually tests whether the bit is *unset*, because there's\n> an exclamation point in there. I think the comment should be updated\n> to something like \"Check a tuple's null bitmap to determine whether\n> the attribute is null. Note that a 0 in the null bitmap indicates a\n> null, while 1 indicates non-null.\"\n\nYeah, I've noticed this inconsistency too. I doubt we want to change\nthe macro definition or its name, but +1 for expanding the comment.\nYour proposed wording seems sufficient.\n\n> There is some kind of broader confusion here, I think, because we\n> refer in many places to the \"null bitmap\" but it's actually not a\n> bitmap of which attributes are null but rather of which attributes are\n> not null. That is confusing in and of itself, and it's also not very\n> intuitive that it uses exactly the opposite convention from what we do\n> with datum/isnull arrays.\n\nI remember being bit by this inconsistency while fixing data corruption\nproblems, but I'm not sure what, if anything, should we do about it.\nMaybe there's a perfect spot where to add some further documentation\nabout it (a code comment somewhere?) but I don't know where would that\nbe.\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Fri, 10 May 2019 10:34:35 -0400",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: att_isnull"
},
{
"msg_contents": "Alvaro Herrera <alvherre@2ndquadrant.com> writes:\n> Yeah, I've noticed this inconsistency too. I doubt we want to change\n> the macro definition or its name, but +1 for expanding the comment.\n> Your proposed wording seems sufficient.\n\n+1\n\n>> There is some kind of broader confusion here, I think, because we\n>> refer in many places to the \"null bitmap\" but it's actually not a\n>> bitmap of which attributes are null but rather of which attributes are\n>> not null. That is confusing in and of itself, and it's also not very\n>> intuitive that it uses exactly the opposite convention from what we do\n>> with datum/isnull arrays.\n\n> I remember being bit by this inconsistency while fixing data corruption\n> problems, but I'm not sure what, if anything, should we do about it.\n> Maybe there's a perfect spot where to add some further documentation\n> about it (a code comment somewhere?) but I don't know where would that\n> be.\n\nIt is documented in the \"Database Physical Storage\" part of the docs,\nbut no particular emphasis is laid on the 1-vs-0 convention. Maybe\na few more words there are worthwhile?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 10 May 2019 10:46:06 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: att_isnull"
},
{
"msg_contents": "On Fri, May 10, 2019 at 10:46 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Alvaro Herrera <alvherre@2ndquadrant.com> writes:\n> > Yeah, I've noticed this inconsistency too. I doubt we want to change\n> > the macro definition or its name, but +1 for expanding the comment.\n> > Your proposed wording seems sufficient.\n>\n> +1\n\nOK, committed. I assume nobody is going to complain that such changes\nare off-limits during feature freeze, but maybe I'll be unpleasantly\nsurprised.\n\n> > I remember being bit by this inconsistency while fixing data corruption\n> > problems, but I'm not sure what, if anything, should we do about it.\n> > Maybe there's a perfect spot where to add some further documentation\n> > about it (a code comment somewhere?) but I don't know where would that\n> > be.\n>\n> It is documented in the \"Database Physical Storage\" part of the docs,\n> but no particular emphasis is laid on the 1-vs-0 convention. Maybe\n> a few more words there are worthwhile?\n\nTo me it seems like we more need to emphasize it in the code comments,\nbut I have no concrete proposal. I don't think this is an urgent\nproblem that needs to consume a lot of cycles right now, but I thought\nit was worth mentioning for the archives and just to get the idea out\nthere that maybe we could do better someday.\n\n(My first idea was to deadpan a proposal that we reverse the\nconvention, but then I realized that trolling the list might not be my\nbest strategy.)\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Mon, 13 May 2019 13:20:26 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: att_isnull"
}
] |
[
{
"msg_contents": "Hello!\n\nGreetings for the day!\n\nI am an undergraduate student at the Indian Institute of Technology,\nKanpur. I am very much interested in contributing to your organization in\nthe Season of Docs.\n\nPlease let me know how we can begin discussing the project.\n\nI will be delighted to be a part of your team.\n\nRegards,\nBhuvan Singla\nContact: Mail <bhuvansingla2000@gmail.com>, LinkedIn\n<https://www.linkedin.com/in/bhuvansingla>\n\nHello!Greetings for the day!I am an undergraduate student at the Indian Institute of Technology, Kanpur. I am very much interested in contributing to your organization in the Season of Docs.Please let me know how we can begin discussing the project.I will be delighted to be a part of your team.Regards,Bhuvan SinglaContact: Mail, LinkedIn",
"msg_date": "Sat, 11 May 2019 20:22:22 +0530",
"msg_from": "Bhuvan Singla <bhuvansingla2000@gmail.com>",
"msg_from_op": true,
"msg_subject": "Hello"
}
] |
[
{
"msg_contents": "I have posted a draft copy of the PG 12 release notes here:\n\n\thttp://momjian.us/pgsql_docs/release-12.html\n\nThey are committed to git. It includes links to the main docs, where\nappropriate. Our official developer docs will rebuild in a few hours.\n\n-- \n Bruce Momjian <bruce@momjian.us> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n+ As you are, so once was I. As I am, so you will be. +\n+ Ancient Roman grave inscription +\n\n\n",
"msg_date": "Sat, 11 May 2019 16:33:24 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": true,
"msg_subject": "PG 12 draft release notes"
},
{
"msg_contents": "Bruce,\n\nI noticed that jsonpath in your version is mentioned only in functions\nchapter, but commit\n72b6460336e86ad5cafd3426af6013c7d8457367 is about implementation of\nSQL-2016 standard. We implemented JSON Path language as a jsonpath\ndatatype with a bunch of support functions, our implementation\nsupports 14 out of 15 features and it's the most complete\nimplementation (we compared oracle, mysql and ms sql).\n\nOn Sat, May 11, 2019 at 11:33 PM Bruce Momjian <bruce@momjian.us> wrote:\n>\n> I have posted a draft copy of the PG 12 release notes here:\n>\n> http://momjian.us/pgsql_docs/release-12.html\n>\n> They are committed to git. It includes links to the main docs, where\n> appropriate. Our official developer docs will rebuild in a few hours.\n>\n> --\n> Bruce Momjian <bruce@momjian.us> http://momjian.us\n> EnterpriseDB http://enterprisedb.com\n>\n> + As you are, so once was I. As I am, so you will be. +\n> + Ancient Roman grave inscription +\n>\n>\n\n\n-- \nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company\n\n\n",
"msg_date": "Sun, 12 May 2019 10:00:38 +0300",
"msg_from": "Oleg Bartunov <obartunov@postgrespro.ru>",
"msg_from_op": false,
"msg_subject": "Re: PG 12 draft release notes"
},
{
"msg_contents": "On Sun, May 12, 2019 at 10:00:38AM +0300, Oleg Bartunov wrote:\n> Bruce,\n> \n> I noticed that jsonpath in your version is mentioned only in functions\n> chapter, but commit\n> 72b6460336e86ad5cafd3426af6013c7d8457367 is about implementation of\n> SQL-2016 standard. We implemented JSON Path language as a jsonpath\n> datatype with a bunch of support functions, our implementation\n> supports 14 out of 15 features and it's the most complete\n> implementation (we compared oracle, mysql and ms sql).\n\nGlad you asked. I was very confused about why a data type was added for\na new path syntax. Is it a new storage format for JSON, or something\nelse? I need help on this.\n\n-- \n Bruce Momjian <bruce@momjian.us> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n+ As you are, so once was I. As I am, so you will be. +\n+ Ancient Roman grave inscription +\n\n\n",
"msg_date": "Sun, 12 May 2019 09:49:40 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": true,
"msg_subject": "Re: PG 12 draft release notes"
},
{
"msg_contents": "Hi Bruce,\n\nOn 5/11/19 4:33 PM, Bruce Momjian wrote:\n> I have posted a draft copy of the PG 12 release notes here:\n> \n> \thttp://momjian.us/pgsql_docs/release-12.html\n> \n> They are committed to git. It includes links to the main docs, where\n> appropriate. Our official developer docs will rebuild in a few hours.\n\nThank you for working on this, I know it's a gargantuan task.\n\nI have a small modification for a section entitled \"Source Code\" which\nis a repeat of the previous section. Based on the bullet points in that\npart, I thought \"Documentation\" might be a more appropriate name; please\nsee attached.\n\nThanks,\n\nJonathan",
"msg_date": "Sun, 12 May 2019 10:49:07 -0400",
"msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>",
"msg_from_op": false,
"msg_subject": "Re: PG 12 draft release notes"
},
{
"msg_contents": "On Sun, 12 May 2019 at 08:33, Bruce Momjian <bruce@momjian.us> wrote:\n>\n> I have posted a draft copy of the PG 12 release notes here:\n>\n> http://momjian.us/pgsql_docs/release-12.html\n\nI noticed a couple of different spellings of Álvaro's name. Loading\nthe file line by line into a table and crudely performing:\n\nselect distinct name from (select\ntrim(regexp_split_to_table(substring(a, '\\((.*?)\\)'),',')) as name\nfrom r where a like '%(%)%')a order by name;\n\nturned up variations in Michaël and Pavel's names\n\nThe attached fixes.\n\n-- \n David Rowley http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services",
"msg_date": "Mon, 13 May 2019 13:37:25 +1200",
"msg_from": "David Rowley <david.rowley@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: PG 12 draft release notes"
},
{
"msg_contents": "On Sun, May 12, 2019 at 10:49:07AM -0400, Jonathan Katz wrote:\n> Hi Bruce,\n> \n> On 5/11/19 4:33 PM, Bruce Momjian wrote:\n> > I have posted a draft copy of the PG 12 release notes here:\n> > \n> > \thttp://momjian.us/pgsql_docs/release-12.html\n> > \n> > They are committed to git. It includes links to the main docs, where\n> > appropriate. Our official developer docs will rebuild in a few hours.\n> \n> Thank you for working on this, I know it's a gargantuan task.\n> \n> I have a small modification for a section entitled \"Source Code\" which\n> is a repeat of the previous section. Based on the bullet points in that\n> part, I thought \"Documentation\" might be a more appropriate name; please\n> see attached.\n\nYes, I saw that myself and just updated it. Thanks.\n\n---------------------------------------------------------------------------\n\n\n> \n> Thanks,\n> \n> Jonathan\n\n> diff --git a/doc/src/sgml/release-12.sgml b/doc/src/sgml/release-12.sgml\n> index 5f5d1da33d..1bbd91a02e 100644\n> --- a/doc/src/sgml/release-12.sgml\n> +++ b/doc/src/sgml/release-12.sgml\n> @@ -2617,7 +2617,7 @@ Require a C99-supported compiler, and <acronym>MSCV</acronym> 2013 or later on <\n> </sect3>\n> \n> <sect3>\n> - <title>Source Code</title>\n> + <title>Documentation</title>\n> \n> <itemizedlist>\n> \n\n\n\n\n\n-- \n Bruce Momjian <bruce@momjian.us> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n+ As you are, so once was I. As I am, so you will be. +\n+ Ancient Roman grave inscription +\n\n\n",
"msg_date": "Sun, 12 May 2019 23:42:53 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": true,
"msg_subject": "Re: PG 12 draft release notes"
},
{
"msg_contents": "On Sun, May 12, 2019 at 09:49:40AM -0400, Bruce Momjian wrote:\n> On Sun, May 12, 2019 at 10:00:38AM +0300, Oleg Bartunov wrote:\n> > Bruce,\n> > \n> > I noticed that jsonpath in your version is mentioned only in functions\n> > chapter, but commit\n> > 72b6460336e86ad5cafd3426af6013c7d8457367 is about implementation of\n> > SQL-2016 standard. We implemented JSON Path language as a jsonpath\n> > datatype with a bunch of support functions, our implementation\n> > supports 14 out of 15 features and it's the most complete\n> > implementation (we compared oracle, mysql and ms sql).\n> \n> Glad you asked. I was very confused about why a data type was added for\n> a new path syntax. Is it a new storage format for JSON, or something\n> else? I need help on this.\n\nI talked to Alexander Korotkov on chat about this. The data types are\nused as arguments to the functions, similar to how tsquery and tsvector\nare used for full text search.\n\nTherefore, the data types are not really useful on their own, but as\nsupport for path functions. However, path functions are more like JSON\nqueries, rather than traditional functions, so it odd to list them under\nfunctions, but there isn't a more reasonable place to put them.\n\nAlexander researched how we listed full text search in the release notes\nthat added the feature, but we had \"General\" category at that time that\nwe don't have now.\n\n-- \n Bruce Momjian <bruce@momjian.us> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n+ As you are, so once was I. As I am, so you will be. +\n+ Ancient Roman grave inscription +\n\n\n",
"msg_date": "Sun, 12 May 2019 23:52:51 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": true,
"msg_subject": "Re: PG 12 draft release notes"
},
{
"msg_contents": "On Mon, May 13, 2019 at 01:37:25PM +1200, David Rowley wrote:\n> On Sun, 12 May 2019 at 08:33, Bruce Momjian <bruce@momjian.us> wrote:\n> >\n> > I have posted a draft copy of the PG 12 release notes here:\n> >\n> > http://momjian.us/pgsql_docs/release-12.html\n> \n> I noticed a couple of different spellings of �lvaro's name. Loading\n> the file line by line into a table and crudely performing:\n> \n> select distinct name from (select\n> trim(regexp_split_to_table(substring(a, '\\((.*?)\\)'),',')) as name\n> from r where a like '%(%)%')a order by name;\n> \n> turned up variations in Micha�l and Pavel's names\n\nThat is a big help, thanks, applied.\n\n-- \n Bruce Momjian <bruce@momjian.us> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n+ As you are, so once was I. As I am, so you will be. +\n+ Ancient Roman grave inscription +\n\n\n",
"msg_date": "Sun, 12 May 2019 23:53:45 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": true,
"msg_subject": "Re: PG 12 draft release notes"
},
{
"msg_contents": "\nHello Bruce,\n\n> I have posted a draft copy of the PG 12 release notes here:\n>\n> \thttp://momjian.us/pgsql_docs/release-12.html\n>\n> They are committed to git. It includes links to the main docs, where\n> appropriate. Our official developer docs will rebuild in a few hours.\n\nPgbench entry \"Improve pgbench error reporting with clearer messages and \nreturn codes\" is by Peter Eisentraut, not me. I just reviewed it.\n\n-- \nFabien.\n\n\n",
"msg_date": "Mon, 13 May 2019 08:41:25 +0200 (CEST)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": false,
"msg_subject": "Re: PG 12 draft release notes"
},
{
"msg_contents": "On Mon, May 13, 2019 at 6:52 AM Bruce Momjian <bruce@momjian.us> wrote:\n>\n> On Sun, May 12, 2019 at 09:49:40AM -0400, Bruce Momjian wrote:\n> > On Sun, May 12, 2019 at 10:00:38AM +0300, Oleg Bartunov wrote:\n> > > Bruce,\n> > >\n> > > I noticed that jsonpath in your version is mentioned only in functions\n> > > chapter, but commit\n> > > 72b6460336e86ad5cafd3426af6013c7d8457367 is about implementation of\n> > > SQL-2016 standard. We implemented JSON Path language as a jsonpath\n> > > datatype with a bunch of support functions, our implementation\n> > > supports 14 out of 15 features and it's the most complete\n> > > implementation (we compared oracle, mysql and ms sql).\n> >\n> > Glad you asked. I was very confused about why a data type was added for\n> > a new path syntax. Is it a new storage format for JSON, or something\n> > else? I need help on this.\n>\n> I talked to Alexander Korotkov on chat about this. The data types are\n> used as arguments to the functions, similar to how tsquery and tsvector\n> are used for full text search.\n>\n> Therefore, the data types are not really useful on their own, but as\n> support for path functions. However, path functions are more like JSON\n> queries, rather than traditional functions, so it odd to list them under\n> functions, but there isn't a more reasonable place to put them.\n>\n> Alexander researched how we listed full text search in the release notes\n> that added the feature, but we had \"General\" category at that time that\n> we don't have now.\n\nI attached slide about our Jsonpath implementation in Postgres, it\nsummarizes the reasons to have jsonpath data type. But my point was:\nJSON Path is a part of SQL-2016 standard and I think it's worth to\nmention it, not just a set of jsonb functions.\n\n>\n> --\n> Bruce Momjian <bruce@momjian.us> http://momjian.us\n> EnterpriseDB http://enterprisedb.com\n>\n> + As you are, so once was I. As I am, so you will be. 
+\n> + Ancient Roman grave inscription +\n\n\n\n-- \nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company",
"msg_date": "Mon, 13 May 2019 10:08:57 +0300",
"msg_from": "Oleg Bartunov <obartunov@postgrespro.ru>",
"msg_from_op": false,
"msg_subject": "Re: PG 12 draft release notes"
},
{
"msg_contents": "On Mon, May 13, 2019 at 08:41:25AM +0200, Fabien COELHO wrote:\n> \n> Hello Bruce,\n> \n> > I have posted a draft copy of the PG 12 release notes here:\n> > \n> > \thttp://momjian.us/pgsql_docs/release-12.html\n> > \n> > They are committed to git. It includes links to the main docs, where\n> > appropriate. Our official developer docs will rebuild in a few hours.\n> \n> Pgbench entry \"Improve pgbench error reporting with clearer messages and\n> return codes\" is by Peter Eisentraut, not me. I just reviewed it.\n\nThanks, fixed.\n\n-- \n Bruce Momjian <bruce@momjian.us> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n+ As you are, so once was I. As I am, so you will be. +\n+ Ancient Roman grave inscription +\n\n\n",
"msg_date": "Mon, 13 May 2019 22:21:44 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": true,
"msg_subject": "Re: PG 12 draft release notes"
},
{
"msg_contents": "On Mon, May 13, 2019 at 10:08:57AM +0300, Oleg Bartunov wrote:\n> On Mon, May 13, 2019 at 6:52 AM Bruce Momjian <bruce@momjian.us> wrote:\n> >\n> > On Sun, May 12, 2019 at 09:49:40AM -0400, Bruce Momjian wrote:\n> > > On Sun, May 12, 2019 at 10:00:38AM +0300, Oleg Bartunov wrote:\n> > > > Bruce,\n> > > >\n> > > > I noticed that jsonpath in your version is mentioned only in functions\n> > > > chapter, but commit\n> > > > 72b6460336e86ad5cafd3426af6013c7d8457367 is about implementation of\n> > > > SQL-2016 standard. We implemented JSON Path language as a jsonpath\n> > > > datatype with a bunch of support functions, our implementation\n> > > > supports 14 out of 15 features and it's the most complete\n> > > > implementation (we compared oracle, mysql and ms sql).\n> > >\n> > > Glad you asked. I was very confused about why a data type was added for\n> > > a new path syntax. Is it a new storage format for JSON, or something\n> > > else? I need help on this.\n> >\n> > I talked to Alexander Korotkov on chat about this. The data types are\n> > used as arguments to the functions, similar to how tsquery and tsvector\n> > are used for full text search.\n> >\n> > Therefore, the data types are not really useful on their own, but as\n> > support for path functions. However, path functions are more like JSON\n> > queries, rather than traditional functions, so it odd to list them under\n> > functions, but there isn't a more reasonable place to put them.\n> >\n> > Alexander researched how we listed full text search in the release notes\n> > that added the feature, but we had \"General\" category at that time that\n> > we don't have now.\n> \n> I attached slide about our Jsonpath implementation in Postgres, it\n> summarizes the reasons to have jsonpath data type. 
But my point was:\n> JSON Path is a part of SQL-2016 standard and I think it's worth to\n> mention it, not just a set of jsonb functions.\n\nSo, are you suggesting we mention the jsonpath data type in the Data\nType section, even though it is useless without jsonpath, which is in\nanother section, or are you suggesting to move the jsonpath item to the\nData Type section?\n\n-- \n Bruce Momjian <bruce@momjian.us> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n+ As you are, so once was I. As I am, so you will be. +\n+ Ancient Roman grave inscription +\n\n\n",
"msg_date": "Mon, 13 May 2019 22:23:05 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": true,
"msg_subject": "Re: PG 12 draft release notes"
},
{
"msg_contents": "On Sat, May 11, 2019, at 22:33, Bruce Momjian wrote:\n> http://momjian.us/pgsql_docs/release-12.html\n\nThere is a typo in E.1.3.1.1.:\n> Expressions are evaluated at table partitioned table creation time.\nFirst \"table\" seems to be excessive.\n\nRegards,\nNick.\n\n\n",
"msg_date": "Tue, 14 May 2019 11:53:23 +0200",
"msg_from": "nickb <nickb@imap.cc>",
"msg_from_op": false,
"msg_subject": "Re: PG 12 draft release notes"
},
{
"msg_contents": "On Tue, May 14, 2019 at 11:53:23AM +0200, nickb wrote:\n> On Sat, May 11, 2019, at 22:33, Bruce Momjian wrote:\n> > http://momjian.us/pgsql_docs/release-12.html\n> \n> There is a typo in E.1.3.1.1.:\n> > Expressions are evaluated at table partitioned table creation time.\n> First \"table\" seems to be excessive.\n\nYep, fixed, thanks.\n\n-- \n Bruce Momjian <bruce@momjian.us> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n+ As you are, so once was I. As I am, so you will be. +\n+ Ancient Roman grave inscription +\n\n\n",
"msg_date": "Tue, 14 May 2019 09:05:52 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": true,
"msg_subject": "Re: PG 12 draft release notes"
},
{
"msg_contents": "Hi,\n\nNote that I've added a few questions to individuals involved with\nspecific points. If you're in the To: list, please search for your name.\n\n\nOn 2019-05-11 16:33:24 -0400, Bruce Momjian wrote:\n> I have posted a draft copy of the PG 12 release notes here:\n>\n> \thttp://momjian.us/pgsql_docs/release-12.html\n> They are committed to git.\n\nThanks!\n\n <title>Migration to Version 12</title>\n\nThere's a number of features in the compat section that are more general\nimprovements with a side of incompatibility. Won't it be confusing to\ne.g. have have the ryu floating point conversion speedups in the compat\nsection, but not in the \"General Performance\" section?\n\n\n <para>\n Remove the special behavior of <link\n linkend=\"datatype-oid\">OID</link> columns (Andres Freund,\n John Naylor)\n </para>\n\nShould we mention that tables with OIDs have to have their oids removed\nbefore they can be upgraded?\n\n\n <para>\n Refactor <link linkend=\"functions-geometry\">geometric\n functions</link> and operators (Emre Hasegeli)\n </para>\n\n <para>\n This could lead to more accurate, but slightly different, results\n from previous releases.\n </para>\n </listitem>\n <listitem>\n<!--\nAuthor: Tomas Vondra <tomas.vondra@postgresql.org>\n2018-08-16 [c4c340088] Use the built-in float datatypes to implement geometric \n-->\n\n <para>\n Restructure <link linkend=\"datatype-geometric\">geometric\n types</link> to handle NaN, underflow, overflow and division by\n zero more consistently (Emre Hasegeli)\n </para>\n </listitem>\n\n <listitem>\n<!--\nAuthor: Tomas Vondra <tomas.vondra@postgresql.org>\n2018-09-26 [2e2a392de] Fix problems in handling the line data type\n-->\n\n <para>\n Improve behavior and error reporting for the <link\n linkend=\"datatype-geometric\">line data type</link> (Emre Hasegeli)\n </para>\n </listitem>\n\nIs that sufficient explanation? Feels like we need to expand a bit\nmore. 
In particular, is it possible that a subset of the changes here\nrequire reindexing?\n\nAlso, aren't three different entries a bit too much?\n\n\n <para>\n Avoid performing unnecessary rounding of <link\n linkend=\"datatype-float\"><type>REAL</type></link> and <type>DOUBLE\n PRECISION</type> values (Andrew Gierth)\n </para>\n\n <para>\n This dramatically speeds up processing of floating-point\n values but causes additional trailing digits to\n potentially be displayed. Users wishing to have output\n that is rounded to match the previous behavior can set <link\n linkend=\"guc-extra-float-digits\"><literal>extra_float_digits=0</literal></link>,\n which is no longer the default.\n </para>\n </listitem>\n\nIsn't it exactly the *other* way round? *Previously* we'd output\nadditional trailing digits. The new algorithm instead will instead have\n*exactly* the required number of digits?\n\n\n <listitem>\n<!--\nAuthor: Tom Lane <tgl@sss.pgh.pa.us>\n2019-02-11 [1d92a0c9f] Redesign the partition dependency mechanism.\n-->\n\n <para>\n Improve handling of partition dependency (Tom Lane)\n </para>\n\n <para>\n This prevents the creation of inconsistent partition hierarchies\n in rare cases.\n </para>\n </listitem>\n\nThat seems not very informative for users?\n\n\n <listitem>\n<!--\nAuthor: Alexander Korotkov <akorotkov@postgresql.org>\n2018-07-28 [d2086b08b] Reduce path length for locking leaf B-tree pages during \nAuthor: Peter Geoghegan <pg@bowt.ie>\n2019-03-25 [f21668f32] Add \"split after new tuple\" nbtree optimization.\n-->\n\n <para>\n Improve speed of btree index insertions (Peter Geoghegan,\n Alexander Korotkov)\n </para>\n\n <para>\n The new code improves the space-efficiency of page splits,\n reduces locking overhead, and gives better performance for\n <command>UPDATE</command>s and <command>DELETE</command>s on\n indexes with many duplicates.\n </para>\n </listitem>\n\n <listitem>\n<!--\nAuthor: Peter Geoghegan <pg@bowt.ie>\n2019-03-20 [dd299df81] Make heap TID a 
tiebreaker nbtree index column.\nAuthor: Peter Geoghegan <pg@bowt.ie>\n2019-03-20 [fab250243] Consider secondary factors during nbtree splits.\n-->\n\n <para>\n Have new btree indexes sort duplicate index entries in heap-storage\n order (Peter Geoghegan, Heikki Linnakangas)\n </para>\n\n <para>\n Indexes <application>pg_upgraded</application> from previous\n releases will not have this ordering.\n </para>\n </listitem>\n\nI'm not sure that the grouping here is quite right. And the second entry\nprobably should have some explanation about the benefits?\n\n\n <listitem>\n<!--\nAuthor: Peter Eisentraut <peter_e@gmx.net>\n2018-11-14 [1b5d797cd] Lower lock level for renaming indexes\n-->\n\n <para>\n Reduce locking requirements for index renaming (Peter Eisentraut)\n </para>\n </listitem>\n\nShould we specify the newly required lock level? Because it's quire\nrelevant for users what exactly they're now able to do concurrently in\noperation?\n\n\n <para>\n Allow <link linkend=\"queries-with\">common table expressions</link>\n (<acronym>CTE</acronym>) to be inlined in later parts of the query\n (Andreas Karlsson, Andrew Gierth, David Fetter, Tom Lane)\n </para>\n\n <para>\n Specifically, <acronym>CTE</acronym>s are inlined\n if they are not recursive and are referenced only\n once later in the query. Inlining can be prevented by\n specifying <literal>MATERIALIZED</literal>, and forced by\n specifying <literal>NOT MATERIALIZED</literal>. Previously,\n <acronym>CTE</acronym>s were never inlined and were always\n evaluated before the rest of the query.\n </para>\n\nHm. Is it actually correct to say that \"were always evaluated before the\nrest of the query.\"? My understanding is that that's not actually how\nthey behaved. Materialization for CTE scans was on-demand (i.e. 
when\nneeded by a CTE scan), and even for DML CTEs we'd only force the\nunderlying query to completion at the end of the query?\n\n\n\n <listitem>\n<!--\nAuthor: Tom Lane <tgl@sss.pgh.pa.us>\n2019-02-09 [1fb57af92] Create the infrastructure for planner support functions.\n-->\n\n <para>\n Add support for <link linkend=\"sql-createfunction\">function\n selectivity</link> (Tom Lane)\n </para>\n </listitem>\n\nHm, that message doesn't seem like an accurate description of that\ncommit (if anything it's a391ff3c?). Given that it all requires C\nhackery, perhaps we ought to move it to the source code section? And\nisn't the most important part of this set of changes\n\ncommit 74dfe58a5927b22c744b29534e67bfdd203ac028\nAuthor: Tom Lane <tgl@sss.pgh.pa.us>\nDate: 2019-02-11 21:26:08 -0500\n\n Allow extensions to generate lossy index conditions.\n\n\n <listitem>\n<!--\nAuthor: Tomas Vondra <tomas.vondra@postgresql.org>\n2019-01-29 [36a1281f8] Separate per-batch and per-tuple memory contexts in COPY\nAuthor: Heikki Linnakangas <heikki.linnakangas@iki.fi>\n2019-01-25 [9556aa01c] Use single-byte Boyer-Moore-Horspool search even with mu\nAuthor: Andres Freund <andres@anarazel.de>\n2019-01-26 [a9c35cf85] Change function call information to be variable length.\n-->\n\n <para>\n Greatly reduce memory consumption of <xref linkend=\"sql-copy\"/>\n and function calls (Andres Freund, Tomas Vondra, Tom Lane)\n </para>\n </listitem>\n\nGrouping these three changes together makes no sense to me.\n\nI think the first commit just ought not to be mentioned separately, it's\njust a fix for a memory leak in 31f3817402, essentially a 12 only bugfix?\n\nThe second commit is about position() etc, which seems not to match that\ndescription either?\n\nThe third is probably more appropriate to be in the source code\nsection. While it does speed up function calls a bit (in particular\nplpgsql which is very function call heavy), it also is a breaking change\nfor some external code? 
Not sure why Tom is listed with this entry?\n\n\n <listitem>\n<!--\nAuthor: Heikki Linnakangas <heikki.linnakangas@iki.fi>\n2019-01-25 [9556aa01c] Use single-byte Boyer-Moore-Horspool search even with mu\n-->\n\n <para>\n Improve search performance for multi-byte characters (Heikki\n Linnakangas)\n </para>\n </listitem>\n\nThat's the second reference to the commit. I suspect this is much better\nseparate, so I'd just remove it from above.\n\n\n <listitem>\n<!--\nAuthor: Stephen Frost <sfrost@snowman.net>\n2019-04-02 [4d0e994ee] Add support for partial TOAST decompression\n-->\n\n <para>\n Allow <link linkend=\"storage-toast\"><literal>TOAST</literal></link>\n values to be minimally decompressed (Paul Ramsey)\n </para>\n\nI'd s/minimal/partial/ - I don't think the code guarantees anything\nabout it being minimal? And \"minimally decompressed\" also is somewhat\nconfusing, because it sounds like it's about the compression quality\nrather than only decompressing part of the data.\n\n\n <listitem>\n<!--\nAuthor: Michael Paquier <michael@paquier.xyz>\n2018-08-10 [f841ceb26] Improve TRUNCATE by avoiding early lock queue\n-->\n\n <para>\n Prevent <xref linkend=\"sql-truncate\"/> from requesting a lock on\n tables for which it lacks permission (Micha�l Paquier)\n </para>\n\n <para>\n This prevents unauthorized locking delays.\n </para>\n </listitem>\n\n <listitem>\n<!--\nAuthor: Michael Paquier <michael@paquier.xyz>\n2018-08-27 [a556549d7] Improve VACUUM and ANALYZE by avoiding early lock queue\n-->\n\n <para>\n Prevent <command>VACUUM</command> and <command>ANALYZE</command>\n from requesting a lock on tables for which it lacks permission\n (Micha�l Paquier)\n </para>\n\n <para>\n This prevents unauthorized locking delays.\n </para>\n </listitem>\n\n\nI don't think this should be in the <title><acronym>Authentication</acronym></title>\nsection.\n\nAlso perhaps, s/it/the user/, or \"the caller\"?\n\n\n <listitem>\n<!--\nAuthor: Tom Lane <tgl@sss.pgh.pa.us>\n2019-03-10 
[cbccac371] Reduce the default value of autovacuum_vacuum_cost_delay\n-->\n\n <para>\n Reduce the default value of <xref\n linkend=\"guc-autovacuum-vacuum-cost-delay\"/> to 2ms (Tom Lane)\n </para>\n </listitem>\n\nI think this needs to explain that this can increase autovacuum's IO\nthroughput considerably.\n\n <listitem>\n<!--\nAuthor: Tom Lane <tgl@sss.pgh.pa.us>\n2019-03-10 [caf626b2c] Convert [autovacuum_]vacuum_cost_delay into floating-poi\n-->\n\n <para>\n Allow <xref linkend=\"guc-vacuum-cost-delay\"/> to specify\n sub-millisecond delays (Tom Lane)\n </para>\n\n <para>\n Floating-point values can also now be specified.\n </para>\n </listitem>\n\nAnd this should be merged with the previous entry?\n\n\n <listitem>\n<!--\nAuthor: Tom Lane <tgl@sss.pgh.pa.us>\n2019-03-10 [caf626b2c] Convert [autovacuum_]vacuum_cost_delay into floating-poi\n-->\n\n <para>\n Allow time-based server variables to use <link\n linkend=\"config-setting\">micro-seconds</link> (us) (Tom Lane)\n </para>\n </listitem>\n\n <listitem>\n<!--\nAuthor: Tom Lane <tgl@sss.pgh.pa.us>\n2019-03-11 [1a83a80a2] Allow fractional input values for integer GUCs, and impr\n-->\n\n <para>\n Allow fractional input for integer server variables (Tom Lane)\n </para>\n\n <para>\n For example, <command>SET work_mem = '30.1GB'</command>.\n </para>\n </listitem>\n\n <listitem>\n<!--\nAuthor: Tom Lane <tgl@sss.pgh.pa.us>\n2019-03-10 [caf626b2c] Convert [autovacuum_]vacuum_cost_delay into floating-poi\n-->\n\n <para>\n Allow units to be specified for floating-point server variables\n (Tom Lane)\n </para>\n </listitem>\n\nCan't we combine these? 
Seems excessively detailed in comparison to the\nrest of the entries.\n\n\n <listitem>\n<!--\nAuthor: Peter Eisentraut <peter@eisentraut.org>\n2019-01-11 [ff8530605] Add value 'current' for recovery_target_timeline\n-->\n\n <para>\n Add an explicit value of <literal>current</literal> for <xref\n linkend=\"guc-recovery-target-time\"/> (Peter Eisentraut)\n </para>\n </listitem>\n\nSeems like this should be combined with the earlier \"Cause recovery to\nadvance to the latest timeline by default\" entry.\n\n\n <listitem>\n<!--\nAuthor: Peter Eisentraut <peter@eisentraut.org>\n2019-03-30 [fc22b6623] Generated columns\n-->\n\n <para>\n Add support for <link linkend=\"sql-createtable\">generated\n columns</link> (Peter Eisentraut)\n </para>\n\n <para>\n Rather than storing a value only at row creation time, generated\n columns are also modified during updates, and can reference other\n table columns.\n </para>\n </listitem>\n\nI find this description confusing. How about cribbing from the commit?\nRoughly like\n\n This allows creating columns that are computed from expressions,\n including references to other columns in the same table, rather than\n having to be specified by the inserter/updater.\n\nThink we also ought to mention that this is only stored generated\ncolumns, given that the SQL feature also includes virtual columns?\n\n\n <listitem>\n<!--\nAuthor: Fujii Masao <fujii@postgresql.org>\n2019-04-08 [119dcfad9] Add vacuum_truncate reloption.\nAuthor: Fujii Masao <fujii@postgresql.org>\n2019-05-08 [b84dbc8eb] Add TRUNCATE parameter to VACUUM.\n-->\n\n <para>\n Add <xref linkend=\"sql-vacuum\"/> and <command>CREATE\n TABLE</command> options to prevent <command>VACUUM</command>\n from truncating trailing empty pages (Tsunakawa Takayuki)\n </para>\n\n <para>\n The options are <varname>vacuum_truncate</varname> and\n <varname>toast.vacuum_truncate</varname>. 
This reduces vacuum\n locking requirements.\n </para>\n </listitem>\n\nMaybe add something like: \"This can be helpful to avoid query\ncancellations on standby that are not avoided by hot_standby_feedback.\"?\n\n\n <listitem>\n<!--\nAuthor: Robert Haas <rhaas@postgresql.org>\n2019-04-04 [a96c41fee] Allow VACUUM to be run with index cleanup disabled.\n-->\n\n <para>\n Allow vacuum to avoid index cleanup with the\n <literal>INDEX_CLEANUP</literal> option (Masahiko Sawada)\n </para>\n </listitem>\n\nI think we ought to expand a bit more on why one would do that,\nincluding perhaps some caveat?\n\n\n <listitem>\n<!--\nAuthor: Peter Eisentraut <peter@eisentraut.org>\n2019-03-19 [590a87025] Ignore attempts to add TOAST table to shared or catalog \n-->\n\n <para>\n Allow modifications of system tables using <xref\n linkend=\"sql-altertable\"/> (Peter Eisentraut)\n </para>\n\n <para>\n This allows modifications of <literal>reloptions</literal> and\n autovacuum settings.\n </para>\n </listitem>\n\nI think the first paragraph is a bit dangerous. This does *not*\ngenerally allow modifications of system tables using ALTER TABLE.\n\n\n <listitem>\n<!--\nAuthor: Tom Lane <tgl@sss.pgh.pa.us>\n2019-01-30 [5f5c01459] Allow RECORD and RECORD[] to be specified in function co\n-->\n\n <para>\n Allow <type>RECORD</type> and <type>RECORD[]</type> to be specified\n as a function <link linkend=\"sql-createfunction\">return-value\n record</link> (Elvis Pranskevichus)\n </para>\n\n <para>\n DETAIL?\n </para>\n </listitem>\n\nThis description doesn't sound accurate to me. Tom?\n\n\n <listitem>\n<!--\nAuthor: Tom Lane <tgl@sss.pgh.pa.us>\n2018-09-25 [5b7e03670] Avoid unnecessary precision loss for pgbench's - -rate ta\n-->\n\n <para>\n Compute behavior based on pgbench's <option>--rate</option>\n value more precisely (Tom Lane)\n </para>\n </listitem>\n\n\"Computing behavior\" sounds a bit odd. 
Maybe \"Improve precision of\npgbench's <option>--rate</option>\" option?\n\n\n <listitem>\n<!--\nAuthor: Thomas Munro <tmunro@postgresql.org>\n2018-07-13 [387a5cfb9] Add pg_dump - -on-conflict-do-nothing option.\n-->\n\n <para>\n Allow restoration of an <command>INSERT</command>-statement dump\n to skip rows which would cause conflicts (Surafel Temesgen)\n </para>\n\n <para>\n The <application>pg_dump</application> option is\n <option>--on-conflict-do-nothing</option>.\n </para>\n </listitem>\n\nHm, this doesn't seem that clear. It's not really a restoration time\noption, and it sounds a bit like that in the above. How about instead saying something\nlike:\nAllow pg_dump to emit INSERT ... ON CONFLICT DO NOTHING (Surafel).\n\n\n <listitem>\n<!--\nAuthor: Andrew Dunstan <andrew@dunslane.net>\n2019-02-18 [af25bc03e] Provide an extra-float-digits setting for pg_dump / pg_d\n-->\n\n <para>\n Allow the number of float digits to be specified\n for <application>pg_dump</application> and\n <application>pg_dumpall</application> (Andrew Dunstan)\n </para>\n\n <para>\n This allows the float digit output to match previous dumps.\n </para>\n\nHm, feels like that should be combined with the ryu compat entry?\n\n\n <para>\n Add <xref linkend=\"sql-create-access-method\"/> command to create\n new table types (Haribabu Kommi, Andres Freund, �lvaro Herrera,\n Dimitri Dolgov)\n </para>\n\nA few points:\n\n1) Is this really source code, given that CREATE ACCESS METHOD TYPE\n TABLE is a DDL command, and USING (...) for CREATE TABLE etc is an\n option to DDL commands?\n\n2) I think the description sounds a bit too much like it's about new\n forms of tables, rather than their storage. How about something\n roughly like:\n\n Allow different <link linkend=\"tableam\">table access methods</> to be\n <link linkend=\"sql-create-access-method>created</> and <link\n linkend=\"sql-createtable-method\">used</>. 
This allows to develop and\n use new ways of storing and accessing table data, optimized for\n different use-cases, without having to modify\n PostgreSQL. The existing <literal>heap</literal> access method\n remains the default.\n\n3) This misses a large set of commits around making tableam possible, in\n particular the commits around\n\ncommit 4da597edf1bae0cf0453b5ed6fc4347b6334dfe1\nAuthor: Andres Freund <andres@anarazel.de>\nDate: 2018-11-16 16:35:11 -0800\n\n Make TupleTableSlots extensible, finish split of existing slot type.\n\n Given that those commits entail an API break relevant for extensions,\n should we have them as a separate \"source code\" note?\n\n4) I think the attribution isn't quite right. For one, a few names with\n substantial work are missing (Amit Khandekar, Ashutosh Bapat,\n Alexander Korotkov), and the order doesn't quite seem right. On the\n latter part I might be somewhat petty, but I spend *many* months of\n my life on this.\n\n How about:\n Andres Freund, Haribabu Kommi, Alvaro Herrera, Alexander Korotkov, David Rowley, Dimitri Golgov\n if we keep 3) separate and\n Andres Freund, Haribabu Kommi, Alvaro Herrera, Ashutosh Bapat, Alexander Korotkov, Amit Khandekar, David Rowley, Dimitri Golgov\n otherwise?\n\n I think it might actually make sense to take David off this list,\n because his tableam work is essentially part of it's own entry, as\n<!--\nAuthor: Peter Eisentraut <peter_e@gmx.net>\n2018-08-01 [0d5f05cde] Allow multi-inserts during COPY into a partitioned table\n-->\n\n <para>\n Improve speed of <command>COPY</command> into partitioned tables\n (David Rowley)\n </para>\n\n since his copy.c portions of 86b85044e823a largely are a rewrite of\n the above commit.\n\n\n <listitem>\n<!--\nAuthor: Greg Stark <stark@mit.edu>\n2018-10-09 [36e9d413a] Add \"B\" suffix for bytes to docs\n-->\n\n <para>\n Document that the <literal>B</literal>/bytes units can be specified\n for <link linkend=\"config-setting\">server variables</link>\n (Greg 
Stark)\n </para>\n </listitem>\n\nGiven how large changes we skip over in the release notes, I don't\nreally see a point in including changes like this. Feels like we'd at\nthe very least also have to include larger changes with typo/grammar\nfixes etc?\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 20 May 2019 15:17:19 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: PG 12 draft release notes"
},
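A rough back-of-the-envelope model of the throughput effect mentioned in the review above (lowering autovacuum_vacuum_cost_delay from 20ms to 2ms). The cost constants below are the PostgreSQL defaults, but treating every page as a cost-10 miss and assuming the worker always exhausts its cost budget are simplifying assumptions, so the numbers are illustrative only:

```python
# Rough autovacuum I/O throughput model (illustrative only).
# Assumes every page read costs vacuum_cost_page_miss (10) and that the
# worker always spends the full vacuum_cost_limit (200) before each sleep.
def autovacuum_read_mb_per_sec(cost_delay_ms, cost_limit=200,
                               page_cost=10, block_kb=8):
    pages_per_cycle = cost_limit / page_cost   # pages processed before sleeping
    cycles_per_sec = 1000 / cost_delay_ms      # sleep cycles per second
    return pages_per_cycle * cycles_per_sec * block_kb / 1024

print(autovacuum_read_mb_per_sec(20))  # pre-v12 default delay: 7.8125 MB/s
print(autovacuum_read_mb_per_sec(2))   # v12 default delay: 78.125 MB/s
```

Under this simplified model the 10x smaller delay translates directly into roughly 10x more autovacuum I/O per second, which is why the suggestion above to spell out the throughput impact in the release notes seems warranted.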
{
"msg_contents": "[ To: header pruned ]\n\n>>>>> \"Andres\" == Andres Freund <andres@anarazel.de> writes:\n\n Andres> <para>\n Andres> Avoid performing unnecessary rounding of <link\n Andres> linkend=\"datatype-float\"><type>REAL</type></link> and <type>DOUBLE\n Andres> PRECISION</type> values (Andrew Gierth)\n Andres> </para>\n\n Andres> <para>\n Andres> This dramatically speeds up processing of floating-point\n Andres> values but causes additional trailing digits to\n Andres> potentially be displayed. Users wishing to have output\n Andres> that is rounded to match the previous behavior can set <link\n Andres> linkend=\"guc-extra-float-digits\"><literal>extra_float_digits=0</literal></link>,\n Andres> which is no longer the default.\n Andres> </para>\n Andres> </listitem>\n\n Andres> Isn't it exactly the *other* way round? *Previously* we'd\n Andres> output additional trailing digits. The new algorithm instead\n Andres> will instead have *exactly* the required number of digits?\n\nYeah, this wording is not right. But your description is also wrong.\n\nPreviously we output values rounded to 6+d or 15+d digits where\nd=extra_float_digits, with d=0 being the default. Only clients that\nwanted exact results would set that to 3 instead.\n\nNow we output the minimum digits to get an exact result, which is\nusually 8 or 17 digits (sometimes less depending on the value, or 9 for\nthe relatively rare float4 values that need it) for any\nextra_float_digits value > 0. Clients that set d=3 will therefore\nusually get one less digit than before, and the value they get will\nusually be slightly different (i.e. not the same value that they would\nhave seen with d=2), but it should give them the same binary value after\ngoing through strtod() or strtof().\n\n-- \nAndrew (irc:RhodiumToad)\n\n\n",
"msg_date": "Mon, 20 May 2019 23:56:33 +0100",
"msg_from": "Andrew Gierth <andrew@tao11.riddles.org.uk>",
"msg_from_op": false,
"msg_subject": "Re: PG 12 draft release notes"
},
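The rounding-versus-shortest distinction Andrew describes can be illustrated outside PostgreSQL: Python's repr() has used a shortest round-trip algorithm since 3.1, while fixed-precision formatting behaves like the old 15+d rounding. This is only an analogy to the server-side change, not PostgreSQL code:

```python
# Fixed-precision rounding vs shortest round-trip float output.
x = 0.1 + 0.2                 # the binary value is not exactly 0.3

print(f"{x:.15g}")            # rounded to 15 significant digits: 0.3
print(repr(x))                # shortest exact form: 0.30000000000000004

# The rounded form does not survive a round trip...
assert float(f"{x:.15g}") != x
# ...while the shortest form reproduces the exact binary value.
assert float(repr(x)) == x
```

This mirrors the point above: clients that previously set extra_float_digits=3 to get round-trippable output now get it by default, usually with one fewer digit than before.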
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> Note that I've added a few questions to individuals involved with\n> specific points. If you're in the To: list, please search for your name.\n\nI'm not sure which of my commits you want me to opine on, other than\n\n> <listitem>\n> <!--\n> Author: Tom Lane <tgl@sss.pgh.pa.us>\n> 2019-01-30 [5f5c01459] Allow RECORD and RECORD[] to be specified in function co\n> -->\n> <para>\n> Allow <type>RECORD</type> and <type>RECORD[]</type> to be specified\n> as a function <link linkend=\"sql-createfunction\">return-value\n> record</link> (Elvis Pranskevichus)\n> </para>\n> <para>\n> DETAIL?\n> </para>\n> </listitem>\n\n> This description doesn't sound accurate to me. Tom?\n\nYeah, maybe better\n\n Allow <type>RECORD</type> and <type>RECORD[]</type> to be used\n as column types in a query's column definition list for a\n table function that is declared to return <type>RECORD</type>\n (Elvis Pranskevichus)\n\nYou could link to \"queries-tablefunctions\" which describes the column\ndefinition business; it's much more specific than \"sql-createfunction\".\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 20 May 2019 18:56:50 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: PG 12 draft release notes"
},
{
"msg_contents": "Hi,\n\nOn 2019-05-20 23:56:33 +0100, Andrew Gierth wrote:\n> [ To: header pruned ]\n> \n> >>>>> \"Andres\" == Andres Freund <andres@anarazel.de> writes:\n> \n> Andres> <para>\n> Andres> Avoid performing unnecessary rounding of <link\n> Andres> linkend=\"datatype-float\"><type>REAL</type></link> and <type>DOUBLE\n> Andres> PRECISION</type> values (Andrew Gierth)\n> Andres> </para>\n> \n> Andres> <para>\n> Andres> This dramatically speeds up processing of floating-point\n> Andres> values but causes additional trailing digits to\n> Andres> potentially be displayed. Users wishing to have output\n> Andres> that is rounded to match the previous behavior can set <link\n> Andres> linkend=\"guc-extra-float-digits\"><literal>extra_float_digits=0</literal></link>,\n> Andres> which is no longer the default.\n> Andres> </para>\n> Andres> </listitem>\n> \n> Andres> Isn't it exactly the *other* way round? *Previously* we'd\n> Andres> output additional trailing digits. The new algorithm instead\n> Andres> will instead have *exactly* the required number of digits?\n> \n> Yeah, this wording is not right. But your description is also wrong.\n> \n> Previously we output values rounded to 6+d or 15+d digits where\n> d=extra_float_digits, with d=0 being the default. Only clients that\n> wanted exact results would set that to 3 instead.\n> \n> Now we output the minimum digits to get an exact result, which is\n> usually 8 or 17 digits (sometimes less depending on the value, or 9 for\n> the relatively rare float4 values that need it) for any\n> extra_float_digits value > 0. Clients that set d=3 will therefore\n> usually get one less digit than before, and the value they get will\n> usually be slightly different (i.e. not the same value that they would\n> have seen with d=2), but it should give them the same binary value after\n> going through strtod() or strtof().\n\nAny chance for you to propose a text?\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 20 May 2019 15:59:03 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: PG 12 draft release notes"
},
{
"msg_contents": ">>>>> \"Andres\" == Andres Freund <andres@anarazel.de> writes:\n\n Andres> Any chance for you to propose a text?\n\nThis is what I posted before; I'm not 100% happy with it but it's still\nbetter than any of the other versions:\n\n * Output REAL and DOUBLE PRECISION values in shortest-exact format by\n default, and change the behavior of extra_float_digits\n\n Previously, float values were output rounded to 6 or 15 decimals by\n default, with the number of decimals adjusted by extra_float_digits.\n The previous rounding behavior is no longer the default, and is now\n done only if extra_float_digits is set to zero or less; if the value\n is greater than zero (which it is by default), a shortest-precise\n representation is output (for a substantial performance improvement).\n This representation preserves the exact binary value when correctly\n read back in, even though the trailing digits will usually differ\n from the output generated by previous versions when\n extra_float_digits=3.\n\n-- \nAndrew (irc:RhodiumToad)\n\n\n",
"msg_date": "Tue, 21 May 2019 00:08:25 +0100",
"msg_from": "Andrew Gierth <andrew@tao11.riddles.org.uk>",
"msg_from_op": false,
"msg_subject": "Re: PG 12 draft release notes"
},
{
"msg_contents": "On 2019-05-20 18:56:50 -0400, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > Note that I've added a few questions to individuals involved with\n> > specific points. If you're in the To: list, please search for your name.\n> \n> I'm not sure which of my commits you want me to opine on, other than\n\nThat was one of the main ones. I'm also specifically wondering about:\n\n> <listitem>\n> <!--\n> Author: Tom Lane <tgl@sss.pgh.pa.us>\n> 2019-02-09 [1fb57af92] Create the infrastructure for planner support functions.\n> -->\n> \n> <para>\n> Add support for <link linkend=\"sql-createfunction\">function\n> selectivity</link> (Tom Lane)\n> </para>\n> </listitem>\n> \n> Hm, that message doesn't seem like an accurate description of that\n> commit (if anything it's a391ff3c?). Given that it all requires C\n> hackery, perhaps we ought to move it to the source code section? And\n> isn't the most important part of this set of changes\n> \n> commit 74dfe58a5927b22c744b29534e67bfdd203ac028\n> Author: Tom Lane <tgl@sss.pgh.pa.us>\n> Date: 2019-02-11 21:26:08 -0500\n> \n> Allow extensions to generate lossy index conditions.\n\n\nand perhaps you could opine on whether we ought to include\n\n> <listitem>\n> <!--\n> Author: Tom Lane <tgl@sss.pgh.pa.us>\n> 2019-02-11 [1d92a0c9f] Redesign the partition dependency mechanism.\n> -->\n> \n> <para>\n> Improve handling of partition dependency (Tom Lane)\n> </para>\n> \n> <para>\n> This prevents the creation of inconsistent partition hierarchies\n> in rare cases.\n> </para>\n> </listitem>\n> \n> That seems not very informative for users?\n\nand if so provide a better description? 
Because no user is going to make\nsense of that.\n\nAnd lastly, opine on the int GUC fractions, microsecond, and cost_delay\nitems?\n\n\n> Yeah, maybe better\n> \n> Allow <type>RECORD</type> and <type>RECORD[]</type> to be used\n> as column types in a query's column definition list for a\n> table function that is declared to return <type>RECORD</type>\n> (Elvis Pranskevichus)\n> \n> You could link to \"queries-tablefunctions\" which describes the column\n> definition business; it's much more specific than \"sql-createfunction\".\n\nYea, that's much better.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 20 May 2019 16:09:34 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: PG 12 draft release notes"
},
{
"msg_contents": "Hi,\n\nOn 2019-05-21 00:08:25 +0100, Andrew Gierth wrote:\n> >>>>> \"Andres\" == Andres Freund <andres@anarazel.de> writes:\n> \n> Andres> Any chance for you to propose a text?\n> \n> This is what I posted before; I'm not 100% happy with it but it's still\n> better than any of the other versions:\n\n> * Output REAL and DOUBLE PRECISION values in shortest-exact format by\n> default, and change the behavior of extra_float_digits\n> \n> Previously, float values were output rounded to 6 or 15 decimals by\n> default, with the number of decimals adjusted by extra_float_digits.\n> The previous rounding behavior is no longer the default, and is now\n> done only if extra_float_digits is set to zero or less; if the value\n> is greater than zero (which it is by default), a shortest-precise\n> representation is output (for a substantial performance improvement).\n> This representation preserves the exact binary value when correctly\n> read back in, even though the trailing digits will usually differ\n> from the output generated by previous versions when\n> extra_float_digits=3.\n\nDefinitely better from what's there in my opinion. Shortening it if\nreasonable wouldn't hurt. Perhaps\n\nOutput REAL and DOUBLE PRECISION values in shortest-exact format by\ndefault, and change the behavior of extra_float_digits (...)\n\nWhen extra_float_digits is set to a value greater than zero (the\ndefault), a shortest-precise representation is output (for a substantial\nperformance improvement). This representation preserves the exact binary\nvalue when correctly read back in, even though the trailing digits will\nusually differ from the output generated by previous versions when\nextra_float_digits=3.\n\nPreviously, float values were output rounded to 6 or 15 decimals by\ndefault, with the number of decimals adjusted by\nextra_float_digits. 
This behaviour can be restored by setting\nextra_float_digits to zero or less.\n\nOr something in that vein?\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 20 May 2019 16:16:22 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: PG 12 draft release notes"
},
{
"msg_contents": "On Tue, 21 May 2019 at 10:17, Andres Freund <andres@anarazel.de> wrote:\n> commit 4da597edf1bae0cf0453b5ed6fc4347b6334dfe1\n> Author: Andres Freund <andres@anarazel.de>\n> Date: 2018-11-16 16:35:11 -0800\n>\n> Make TupleTableSlots extensible, finish split of existing slot type.\n>\n> Given that those commits entail an API break relevant for extensions,\n> should we have them as a separate \"source code\" note?\n>\n> 4) I think the attribution isn't quite right. For one, a few names with\n> substantial work are missing (Amit Khandekar, Ashutosh Bapat,\n> Alexander Korotkov), and the order doesn't quite seem right. On the\n> latter part I might be somewhat petty, but I spend *many* months of\n> my life on this.\n>\n> How about:\n> Andres Freund, Haribabu Kommi, Alvaro Herrera, Alexander Korotkov, David Rowley, Dimitri Golgov\n> if we keep 3) separate and\n> Andres Freund, Haribabu Kommi, Alvaro Herrera, Ashutosh Bapat, Alexander Korotkov, Amit Khandekar, David Rowley, Dimitri Golgov\n> otherwise?\n>\n> I think it might actually make sense to take David off this list,\n> because his tableam work is essentially part of it's own entry, as\n\nYeah, please take me off that one. My focus there was mostly on\nkeeping COPY fast with partitioned tables, to which, as Andres\nmentioned is listed somewhere else.\n\n-- \n David Rowley http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n\n",
"msg_date": "Tue, 21 May 2019 12:04:50 +1200",
"msg_from": "David Rowley <david.rowley@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: PG 12 draft release notes"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2019-05-20 18:56:50 -0400, Tom Lane wrote:\n>> I'm not sure which of my commits you want me to opine on, other than\n\n> That was one of the main ones. I'm also specifically wondering about:\n\n>> Author: Tom Lane <tgl@sss.pgh.pa.us>\n>> 2019-02-09 [1fb57af92] Create the infrastructure for planner support functions.\n>> <para>\n>> Add support for <link linkend=\"sql-createfunction\">function\n>> selectivity</link> (Tom Lane)\n>> </para>\n>> </listitem>\n>> \n>> Hm, that message doesn't seem like an accurate description of that\n>> commit (if anything it's a391ff3c?). Given that it all requires C\n>> hackery, perhaps we ought to move it to the source code section?\n\nYes, this should be in \"source code\". I think it should be merged\nwith a391ff3c and 74dfe58a into something like\n\n\tAllow extensions to create planner support functions that\n\tcan provide function-specific selectivity, cost, and\n row-count estimates that can depend on the function arguments.\n Support functions can also transform WHERE clauses involving\n an extension's functions and operators into indexable clauses\n in ways that the core code cannot for lack of detailed semantic\n\tknowledge of those functions/operators.\n\n> and perhaps you could opine on whether we ought to include\n\n>> <listitem>\n>> <!--\n>> Author: Tom Lane <tgl@sss.pgh.pa.us>\n>> 2019-02-11 [1d92a0c9f] Redesign the partition dependency mechanism.\n>> -->\n>> \n>> <para>\n>> Improve handling of partition dependency (Tom Lane)\n>> </para>\n>> \n>> <para>\n>> This prevents the creation of inconsistent partition hierarchies\n>> in rare cases.\n>> </para>\n>> </listitem>\n\nIt's probably worth mentioning, but I'd say something like\n\n Fix bugs that could cause ALTER TABLE DETACH PARTITION\n to not drop objects that should be dropped, such as\n automatically-created child indexes.\n\nThe rest of it is not terribly interesting from a user's standpoint,\nI 
think.\n\n> And lastly, opine on the int GUC fractions, microsecond, and cost_delay\n> items?\n\nI agree with your comments on those.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 20 May 2019 20:48:15 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: PG 12 draft release notes"
},
{
"msg_contents": "On Mon, May 20, 2019 at 3:17 PM Andres Freund <andres@anarazel.de> wrote:\n> <!--\n> Author: Alexander Korotkov <akorotkov@postgresql.org>\n> 2018-07-28 [d2086b08b] Reduce path length for locking leaf B-tree pages during\n> Author: Peter Geoghegan <pg@bowt.ie>\n> 2019-03-25 [f21668f32] Add \"split after new tuple\" nbtree optimization.\n> -->\n>\n> <para>\n> Improve speed of btree index insertions (Peter Geoghegan,\n> Alexander Korotkov)\n> </para>\n\nMy concern here (which I believe Alexander shares) is that it doesn't\nmake sense to group these two items together. They're two totally\nunrelated pieces of work. Alexander's work does more or less help with\nlock contention with writes, whereas the feature that that was merged\nwith is about preventing index bloat, which is mostly helpful for\nreads (it helps writes to the extent that writes are also reads).\n\nThe release notes go on to say that this item \"gives better\nperformance for UPDATEs and DELETEs on indexes with many duplicates\",\nwhich is wrong. That is something that should have been listed below,\nunder the \"duplicate index entries in heap-storage order\" item.\n\n> Author: Peter Geoghegan <pg@bowt.ie>\n> 2019-03-20 [dd299df81] Make heap TID a tiebreaker nbtree index column.\n> Author: Peter Geoghegan <pg@bowt.ie>\n> 2019-03-20 [fab250243] Consider secondary factors during nbtree splits.\n> -->\n>\n> <para>\n> Have new btree indexes sort duplicate index entries in heap-storage\n> order (Peter Geoghegan, Heikki Linnakangas)\n> </para>\n\n> I'm not sure that the grouping here is quite right. And the second entry\n> probably should have some explanation about the benefits?\n\nIt could stand to say something about the benefits. As I said, there\nis already a little bit about the benefits, but that ended up being\ntied to the \"Improve speed of btree index insertions\" item. Moving\nthat snippet to the correct item would be a good start.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Mon, 20 May 2019 17:48:50 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: PG 12 draft release notes"
},
{
"msg_contents": "On Tue, May 21, 2019 at 8:17 AM Andres Freund <andres@anarazel.de> wrote:\n\n>\n> <para>\n> Add <xref linkend=\"sql-create-access-method\"/> command to create\n> new table types (Haribabu Kommi, Andres Freund, Álvaro Herrera,\n> Dimitri Dolgov)\n> </para>\n>\n> A few points:\n>\n> 1) Is this really source code, given that CREATE ACCESS METHOD TYPE\n> TABLE is a DDL command, and USING (...) for CREATE TABLE etc is an\n> option to DDL commands?\n>\n\n+1\n\nIt would be better to provide a description of the newly added syntax.\nDo we need to provide any 'Note' explaining that currently there are no\nother\nalternatives to the heap?\n\n\n2) I think the description sounds a bit too much like it's about new\n> forms of tables, rather than their storage. How about something\n> roughly like:\n>\n> Allow different <link linkend=\"tableam\">table access methods</> to be\n> <link linkend=\"sql-create-access-method>created</> and <link\n> linkend=\"sql-createtable-method\">used</>. This allows to develop and\n> use new ways of storing and accessing table data, optimized for\n> different use-cases, without having to modify\n> PostgreSQL. The existing <literal>heap</literal> access method\n> remains the default.\n>\n> 3) This misses a large set of commits around making tableam possible, in\n> particular the commits around\n>\n> commit 4da597edf1bae0cf0453b5ed6fc4347b6334dfe1\n> Author: Andres Freund <andres@anarazel.de>\n> Date: 2018-11-16 16:35:11 -0800\n>\n> Make TupleTableSlots extensible, finish split of existing slot type.\n>\n> Given that those commits entail an API break relevant for extensions,\n> should we have them as a separate \"source code\" note?\n>\n\n+1 to add, but I am not sure whether we need to list all the breakage that\nhas introduced by Tableam needs to be described separately or with some\ncombined note to explain it to extension developers is fine?\n\n\n\n> 4) I think the attribution isn't quite right. 
For one, a few names with\n> substantial work are missing (Amit Khandekar, Ashutosh Bapat,\n> Alexander Korotkov), and the order doesn't quite seem right. On the\n> latter part I might be somewhat petty, but I spend *many* months of\n> my life on this.\n>\n> How about:\n> Andres Freund, Haribabu Kommi, Alvaro Herrera, Alexander Korotkov,\n> David Rowley, Dimitri Golgov\n> if we keep 3) separate and\n> Andres Freund, Haribabu Kommi, Alvaro Herrera, Ashutosh Bapat,\n> Alexander Korotkov, Amit Khandekar, David Rowley, Dimitri Golgov\n> otherwise?\n\n\n+1 to either of the above.\nWithout Andres enormous efforts, Tableam couldn't have been possible into\nv12.\n\n\nRegards,\nHaribabu Kommi\nFujitsu Australia",
"msg_date": "Tue, 21 May 2019 17:10:15 +1000",
"msg_from": "Haribabu Kommi <kommi.haribabu@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: PG 12 draft release notes"
},
{
"msg_contents": "On 2019-05-21 00:17, Andres Freund wrote:\n> <listitem>\n> <!--\n> Author: Peter Eisentraut <peter_e@gmx.net>\n> 2018-11-14 [1b5d797cd] Lower lock level for renaming indexes\n> -->\n> \n> <para>\n> Reduce locking requirements for index renaming (Peter Eisentraut)\n> </para>\n> </listitem>\n> \n> Should we specify the newly required lock level? Because it's quire\n> relevant for users what exactly they're now able to do concurrently in\n> operation?\n\nYes, more information is in the commit message. We could expand the\nrelease note item with:\n\n\"\"\"\nRenaming an index now requires only a ShareUpdateExclusiveLock instead\nof a AccessExclusiveLock. This allows index renaming without blocking\naccess to the table.\n\"\"\"\n\nNote also that this functionality later became part of REINDEX\nCONCURRENTLY, which is presumably where most people will make use of it.\n\n\n> <listitem>\n> <!--\n> Author: Peter Eisentraut <peter@eisentraut.org>\n> 2019-01-11 [ff8530605] Add value 'current' for recovery_target_timeline\n> -->\n> \n> <para>\n> Add an explicit value of <literal>current</literal> for <xref\n> linkend=\"guc-recovery-target-time\"/> (Peter Eisentraut)\n> </para>\n> </listitem>\n> \n> Seems like this should be combined with the earlier \"Cause recovery to\n> advance to the latest timeline by default\" entry.\n\nIt could be combined or kept separate or not mentioned at all. Either\nway is fine.\n\n\n> <listitem>\n> <!--\n> Author: Peter Eisentraut <peter@eisentraut.org>\n> 2019-03-30 [fc22b6623] Generated columns\n> -->\n> \n> <para>\n> Add support for <link linkend=\"sql-createtable\">generated\n> columns</link> (Peter Eisentraut)\n> </para>\n> \n> <para>\n> Rather than storing a value only at row creation time, generated\n> columns are also modified during updates, and can reference other\n> table columns.\n> </para>\n> </listitem>\n> \n> I find this description confusing. 
How about cribbing from the commit?\n> Roughly like\n> \n> This allows creating columns that are computed from expressions,\n> including references to other columns in the same table, rather than\n> having to be specified by the inserter/updater.\n\nYeah, that's better.\n\n> Think we also ought to mention that this is only stored generated\n> columns, given that the SQL feature also includes virtual columns?\n\nThe SQL standard doesn't specify whether generated columns are stored,\nbut reading between the lines suggest that they expect them to be. So\nwe don't need to get into more detail there in the release notes. The\nmain documentation does address this point.\n\n\n> <listitem>\n> <!--\n> Author: Peter Eisentraut <peter@eisentraut.org>\n> 2019-03-19 [590a87025] Ignore attempts to add TOAST table to shared or catalog \n> -->\n> \n> <para>\n> Allow modifications of system tables using <xref\n> linkend=\"sql-altertable\"/> (Peter Eisentraut)\n> </para>\n> \n> <para>\n> This allows modifications of <literal>reloptions</literal> and\n> autovacuum settings.\n> </para>\n> </listitem>\n> \n> I think the first paragraph is a bit dangerous. This does *not*\n> generally allow modifications of system tables using ALTER TABLE.\n\nYes, it's overly broad. The second paragraph is really the gist of the\nchange, so we could write\n\n Allow modifications of reloptions of system tables\n\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Tue, 21 May 2019 15:49:18 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: PG 12 draft release notes"
},
{
"msg_contents": "On Sat, May 11, 2019 at 04:33:24PM -0400, Bruce Momjian wrote:\n> I have posted a draft copy of the PG 12 release notes here:\n> \n> \thttp://momjian.us/pgsql_docs/release-12.html\n> \n> They are committed to git. It includes links to the main docs, where\n> appropriate. Our official developer docs will rebuild in a few hours.\n> \n\nThank you for doing this. I didn't see [1] in the release notes, should \nit be included in the \"Source Code\" section?\n\n[1] \nhttps://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=3eb77eba5a51780d5cf52cd66a9844cd4d26feb0\n\n-- \nShawn Debnath\nAmazon Web Services (AWS)\n\n\n",
"msg_date": "Tue, 21 May 2019 09:09:10 -0700",
"msg_from": "Shawn Debnath <sdn@amazon.com>",
"msg_from_op": false,
"msg_subject": "Re: PG 12 draft release notes"
},
{
"msg_contents": "On Tue, May 21, 2019 at 09:09:10AM -0700, Shawn Debnath wrote:\n> On Sat, May 11, 2019 at 04:33:24PM -0400, Bruce Momjian wrote:\n> > I have posted a draft copy of the PG 12 release notes here:\n> > \n> > \thttp://momjian.us/pgsql_docs/release-12.html\n> > \n> > They are committed to git. It includes links to the main docs, where\n> > appropriate. Our official developer docs will rebuild in a few hours.\n> > \n> \n> Thank you for doing this. I didn't see [1] in the release notes, should \n> it be included in the \"Source Code\" section?\n> \n> [1] \n> https://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=3eb77eba5a51780d5cf52cd66a9844cd4d26feb0\n\nUh, this is an internals change that is usually not listed in the\nrelease notes since it mostly affects internals developers.\n\n-- \n Bruce Momjian <bruce@momjian.us> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n+ As you are, so once was I. As I am, so you will be. +\n+ Ancient Roman grave inscription +\n\n\n",
"msg_date": "Tue, 21 May 2019 15:11:57 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": true,
"msg_subject": "Re: PG 12 draft release notes"
},
{
"msg_contents": "On Mon, May 20, 2019 at 03:17:19PM -0700, Andres Freund wrote:\n> Hi,\n> \n> Note that I've added a few questions to individuals involved with\n> specific points. If you're in the To: list, please search for your name.\n> \n> \n> On 2019-05-11 16:33:24 -0400, Bruce Momjian wrote:\n> > I have posted a draft copy of the PG 12 release notes here:\n> >\n> > \thttp://momjian.us/pgsql_docs/release-12.html\n> > They are committed to git.\n> \n> Thanks!\n> \n> <title>Migration to Version 12</title>\n> \n> There's a number of features in the compat section that are more general\n> improvements with a side of incompatibility. Won't it be confusing to\n> e.g. have have the ryu floating point conversion speedups in the compat\n> section, but not in the \"General Performance\" section?\n\nYes, it can be. What I did with the btree item was to split out the max\nlength change with the larger changes. We can do the same for other\nitems. As you rightly stated, it is for cases where the incompatibility\nis minor compared to the change. Do you have a list of the ones that\nneed this treatment?\n\n> <para>\n> Remove the special behavior of <link\n> linkend=\"datatype-oid\">OID</link> columns (Andres Freund,\n> John Naylor)\n> </para>\n> \n> Should we mention that tables with OIDs have to have their oids removed\n> before they can be upgraded?\n\nUh, is that true? pg_upgrade? 
pg_dump?\n\n> <para>\n> Refactor <link linkend=\"functions-geometry\">geometric\n> functions</link> and operators (Emre Hasegeli)\n> </para>\n> \n> <para>\n> This could lead to more accurate, but slightly different, results\n> from previous releases.\n> </para>\n> </listitem>\n> <listitem>\n> <!--\n> Author: Tomas Vondra <tomas.vondra@postgresql.org>\n> 2018-08-16 [c4c340088] Use the built-in float datatypes to implement geometric \n> -->\n> \n> <para>\n> Restructure <link linkend=\"datatype-geometric\">geometric\n> types</link> to handle NaN, underflow, overflow and division by\n> zero more consistently (Emre Hasegeli)\n> </para>\n> </listitem>\n> \n> <listitem>\n> <!--\n> Author: Tomas Vondra <tomas.vondra@postgresql.org>\n> 2018-09-26 [2e2a392de] Fix problems in handling the line data type\n> -->\n> \n> <para>\n> Improve behavior and error reporting for the <link\n> linkend=\"datatype-geometric\">line data type</link> (Emre Hasegeli)\n> </para>\n> </listitem>\n> \n> Is that sufficient explanation? Feels like we need to expand a bit\n> more. In particular, is it possible that a subset of the changes here\n> require reindexing?\n> \n> Also, aren't three different entries a bit too much?\n\nThe 'line' item related to more errors than just the ones listed for the\ngeometric data types, so I was not clear how to do that as a single\nentry. 
I think there is a much larger compatibility breakage\npossibility with 'line'.\n\n> <listitem>\n> <!--\n> Author: Alexander Korotkov <akorotkov@postgresql.org>\n> 2018-07-28 [d2086b08b] Reduce path length for locking leaf B-tree pages during \n> Author: Peter Geoghegan <pg@bowt.ie>\n> 2019-03-25 [f21668f32] Add \"split after new tuple\" nbtree optimization.\n> -->\n> \n> <para>\n> Improve speed of btree index insertions (Peter Geoghegan,\n> Alexander Korotkov)\n> </para>\n> \n> <para>\n> The new code improves the space-efficiency of page splits,\n> reduces locking overhead, and gives better performance for\n> <command>UPDATE</command>s and <command>DELETE</command>s on\n> indexes with many duplicates.\n> </para>\n> </listitem>\n> \n> <listitem>\n> <!--\n> Author: Peter Geoghegan <pg@bowt.ie>\n> 2019-03-20 [dd299df81] Make heap TID a tiebreaker nbtree index column.\n> Author: Peter Geoghegan <pg@bowt.ie>\n> 2019-03-20 [fab250243] Consider secondary factors during nbtree splits.\n> -->\n> \n> <para>\n> Have new btree indexes sort duplicate index entries in heap-storage\n> order (Peter Geoghegan, Heikki Linnakangas)\n> </para>\n> \n> <para>\n> Indexes <application>pg_upgraded</application> from previous\n> releases will not have this ordering.\n> </para>\n> </listitem>\n> \n> I'm not sure that the grouping here is quite right. And the second entry\n> probably should have some explanation about the benefits?\n\nAgreed.\n\n> <listitem>\n> <!--\n> Author: Peter Eisentraut <peter_e@gmx.net>\n> 2018-11-14 [1b5d797cd] Lower lock level for renaming indexes\n> -->\n> \n> <para>\n> Reduce locking requirements for index renaming (Peter Eisentraut)\n> </para>\n> </listitem>\n> \n> Should we specify the newly required lock level? 
Because it's quire\n> relevant for users what exactly they're now able to do concurrently in\n> operation?\n\nSure.\n\n> <listitem>\n> <!--\n> Author: Tom Lane <tgl@sss.pgh.pa.us>\n> 2019-02-09 [1fb57af92] Create the infrastructure for planner support functions.\n> -->\n> \n> <para>\n> Add support for <link linkend=\"sql-createfunction\">function\n> selectivity</link> (Tom Lane)\n> </para>\n> </listitem>\n> \n> Hm, that message doesn't seem like an accurate description of that\n> commit (if anything it's a391ff3c?). Given that it all requires C\n> hackery, perhaps we ought to move it to the source code section? And\n> isn't the most important part of this set of changes\n> \n> commit 74dfe58a5927b22c744b29534e67bfdd203ac028\n> Author: Tom Lane <tgl@sss.pgh.pa.us>\n> Date: 2019-02-11 21:26:08 -0500\n> \n> Allow extensions to generate lossy index conditions.\n\nUh, I missed that as an important item. Can someone give me some text?\n\n> <listitem>\n> <!--\n> Author: Tomas Vondra <tomas.vondra@postgresql.org>\n> 2019-01-29 [36a1281f8] Separate per-batch and per-tuple memory contexts in COPY\n> Author: Heikki Linnakangas <heikki.linnakangas@iki.fi>\n> 2019-01-25 [9556aa01c] Use single-byte Boyer-Moore-Horspool search even with mu\n> Author: Andres Freund <andres@anarazel.de>\n> 2019-01-26 [a9c35cf85] Change function call information to be variable length.\n> -->\n> \n> <para>\n> Greatly reduce memory consumption of <xref linkend=\"sql-copy\"/>\n> and function calls (Andres Freund, Tomas Vondra, Tom Lane)\n> </para>\n> </listitem>\n> \n> Grouping these three changes together makes no sense to me.\n> \n> I think the first commit just ought not to be mentioned separately, it's\n> just a fix for a memory leak in 31f3817402, essentially a 12 only bugfix?\n\nOh, I was not aware of that.\n \n> The second commit is about position() etc, which seems not to match that\n> description either?\n\nUgh.\n\n> The third is probably more appropriate to be in the source code\n> section. 
While it does speed up function calls a bit (in particular\n> plpgsql which is very function call heavy), it also is a breaking change\n> for some external code? Not sure why Tom is listed with this entry?\n\nThe order of names is just a guess when multiple commits are merged ---\nthis needs help.\n\n> <listitem>\n> <!--\n> Author: Heikki Linnakangas <heikki.linnakangas@iki.fi>\n> 2019-01-25 [9556aa01c] Use single-byte Boyer-Moore-Horspool search even with mu\n> -->\n> \n> <para>\n> Improve search performance for multi-byte characters (Heikki\n> Linnakangas)\n> </para>\n> </listitem>\n> \n> That's the second reference to the commit. I suspect this is much better\n> separate, so I'd just remove it from above.\n\nDone.\n\n> <listitem>\n> <!--\n> Author: Stephen Frost <sfrost@snowman.net>\n> 2019-04-02 [4d0e994ee] Add support for partial TOAST decompression\n> -->\n> \n> <para>\n> Allow <link linkend=\"storage-toast\"><literal>TOAST</literal></link>\n> values to be minimally decompressed (Paul Ramsey)\n> </para>\n> \n> I'd s/minimal/partial/ - I don't think the code guarantees anything\n> about it being minimal? And \"minimally decompressed\" also is somewhat\n> confusing, because it sounds like it's about the compression quality\n> rather than only decompressing part of the data.\n\nIt is confusing. 
Is \"partially decompressed\" better?\n\n> <listitem>\n> <!--\n> Author: Michael Paquier <michael@paquier.xyz>\n> 2018-08-10 [f841ceb26] Improve TRUNCATE by avoiding early lock queue\n> -->\n> \n> <para>\n> Prevent <xref linkend=\"sql-truncate\"/> from requesting a lock on\n> tables for which it lacks permission (Michaël Paquier)\n> </para>\n> \n> <para>\n> This prevents unauthorized locking delays.\n> </para>\n> </listitem>\n> \n> <listitem>\n> <!--\n> Author: Michael Paquier <michael@paquier.xyz>\n> 2018-08-27 [a556549d7] Improve VACUUM and ANALYZE by avoiding early lock queue\n> -->\n> \n> <para>\n> Prevent <command>VACUUM</command> and <command>ANALYZE</command>\n> from requesting a lock on tables for which it lacks permission\n> (Michaël Paquier)\n> </para>\n> \n> <para>\n> This prevents unauthorized locking delays.\n> </para>\n> </listitem>\n> \n> \n> I don't think this should be in the <title><acronym>Authentication</acronym></title>\n> section.\n\nI put it in that section since I thought the motivation was to prevent\npeople from locking up connecting to the database if someone has a\npending VACUUM/ANALYZE. No?\n\n> Also perhaps, s/it/the user/, or \"the caller\"?\n\nAgreed, \"the user\".\n\n> <listitem>\n> <!--\n> Author: Tom Lane <tgl@sss.pgh.pa.us>\n> 2019-03-10 [cbccac371] Reduce the default value of autovacuum_vacuum_cost_delay\n> -->\n> \n> <para>\n> Reduce the default value of <xref\n> linkend=\"guc-autovacuum-vacuum-cost-delay\"/> to 2ms (Tom Lane)\n> </para>\n> </listitem>\n> \n> I think this needs to explain that this can increase autovacuum's IO\n> throughput considerably.\n\nUh, well, do we normally document the effect of a change like this? It\nwill cause vacuum to be more aggressive, and increase I/O? 
Do we want to\nre-educate on what this parameter does?\n\n> <listitem>\n> <!--\n> Author: Tom Lane <tgl@sss.pgh.pa.us>\n> 2019-03-10 [caf626b2c] Convert [autovacuum_]vacuum_cost_delay into floating-poi\n> -->\n> \n> <para>\n> Allow <xref linkend=\"guc-vacuum-cost-delay\"/> to specify\n> sub-millisecond delays (Tom Lane)\n> </para>\n> \n> <para>\n> Floating-point values can also now be specified.\n> </para>\n> </listitem>\n> \n> And this should be merged with the previous entry?\n\nUh, I thought the change of default and its range were different enough\nthat combining them would add confusion.\n\n> <listitem>\n> <!--\n> Author: Tom Lane <tgl@sss.pgh.pa.us>\n> 2019-03-10 [caf626b2c] Convert [autovacuum_]vacuum_cost_delay into floating-poi\n> -->\n> \n> <para>\n> Allow time-based server variables to use <link\n> linkend=\"config-setting\">micro-seconds</link> (us) (Tom Lane)\n> </para>\n> </listitem>\n> \n> <listitem>\n> <!--\n> Author: Tom Lane <tgl@sss.pgh.pa.us>\n> 2019-03-11 [1a83a80a2] Allow fractional input values for integer GUCs, and impr\n> -->\n> \n> <para>\n> Allow fractional input for integer server variables (Tom Lane)\n> </para>\n> \n> <para>\n> For example, <command>SET work_mem = '30.1GB'</command>.\n> </para>\n> </listitem>\n> \n> <listitem>\n> <!--\n> Author: Tom Lane <tgl@sss.pgh.pa.us>\n> 2019-03-10 [caf626b2c] Convert [autovacuum_]vacuum_cost_delay into floating-poi\n> -->\n> \n> <para>\n> Allow units to be specified for floating-point server variables\n> (Tom Lane)\n> </para>\n> </listitem>\n> \n> Can't we combine these? Seems excessively detailed in comparison to the\n> rest of the entries.\n\nSee above. 
It seems confusing to combine them but please propose text\nif you think it is possible.\n\n> <listitem>\n> <!--\n> Author: Peter Eisentraut <peter@eisentraut.org>\n> 2019-01-11 [ff8530605] Add value 'current' for recovery_target_timeline\n> -->\n> \n> <para>\n> Add an explicit value of <literal>current</literal> for <xref\n> linkend=\"guc-recovery-target-time\"/> (Peter Eisentraut)\n> </para>\n> </listitem>\n> \n> Seems like this should be combined with the earlier \"Cause recovery to\n> advance to the latest timeline by default\" entry.\n\nThe odd part is that the old default was 'current' but there was no way\nto specify current --- you just specified nothing. That seemed\nconfusing enough that having them combined would add confusion, but if\nyou have some suggested text?\n\n> <listitem>\n> <!--\n> Author: Peter Eisentraut <peter@eisentraut.org>\n> 2019-03-30 [fc22b6623] Generated columns\n> -->\n> \n> <para>\n> Add support for <link linkend=\"sql-createtable\">generated\n> columns</link> (Peter Eisentraut)\n> </para>\n> \n> <para>\n> Rather than storing a value only at row creation time, generated\n> columns are also modified during updates, and can reference other\n> table columns.\n> </para>\n> </listitem>\n> \n> I find this description confusing. 
How about cribbing from the commit?\n> Roughly like\n> \n> This allows creating columns that are computed from expressions,\n> including references to other columns in the same table, rather than\n> having to be specified by the inserter/updater.\n> \n> Think we also ought to mention that this is only stored generated\n> columns, given that the SQL feature also includes virtual columns?\n\nOK, new text is:\n\n The content of generated columns are computed from expressions\n (including references to other columns in the same table)\n rather than being specified by <command>INSERT</command> or\n <command>UPDATE</command> commands.\n> \n> <listitem>\n> <!--\n> Author: Fujii Masao <fujii@postgresql.org>\n> 2019-04-08 [119dcfad9] Add vacuum_truncate reloption.\n> Author: Fujii Masao <fujii@postgresql.org>\n> 2019-05-08 [b84dbc8eb] Add TRUNCATE parameter to VACUUM.\n> -->\n> \n> <para>\n> Add <xref linkend=\"sql-vacuum\"/> and <command>CREATE\n> TABLE</command> options to prevent <command>VACUUM</command>\n> from truncating trailing empty pages (Tsunakawa Takayuki)\n> </para>\n> \n> <para>\n> The options are <varname>vacuum_truncate</varname> and\n> <varname>toast.vacuum_truncate</varname>. This reduces vacuum\n> locking requirements.\n> </para>\n> </listitem>\n> \n> Maybe add something like: \"This can be helpful to avoid query\n> cancellations on standby that are not avoided by hot_standby_feedback.\"?\n\nSo you turn off truncate on the primary because the replay of the\ntruncate on the standby might cause a cancelation? 
I was not aware that\nwas a common problem.\n\n> <listitem>\n> <!--\n> Author: Robert Haas <rhaas@postgresql.org>\n> 2019-04-04 [a96c41fee] Allow VACUUM to be run with index cleanup disabled.\n> -->\n> \n> <para>\n> Allow vacuum to avoid index cleanup with the\n> <literal>INDEX_CLEANUP</literal> option (Masahiko Sawada)\n> </para>\n> </listitem>\n> \n> I think we ought to expand a bit more on why one would do that,\n> including perhaps some caveat?\n\nI actually have no idea why someone would want to do that.\n\n> <listitem>\n> <!--\n> Author: Peter Eisentraut <peter@eisentraut.org>\n> 2019-03-19 [590a87025] Ignore attempts to add TOAST table to shared or catalog \n> -->\n> \n> <para>\n> Allow modifications of system tables using <xref\n> linkend=\"sql-altertable\"/> (Peter Eisentraut)\n> </para>\n> \n> <para>\n> This allows modifications of <literal>reloptions</literal> and\n> autovacuum settings.\n> </para>\n> </listitem>\n> \n> I think the first paragraph is a bit dangerous. This does *not*\n> generally allow modifications of system tables using ALTER TABLE.\n\nOK, new text added \"options\":\n\n Allow modifications of system table options using <xref\n linkend=\"sql-altertable\"/> (Peter Eisentraut)\n\n> <listitem>\n> <!--\n> Author: Tom Lane <tgl@sss.pgh.pa.us>\n> 2018-09-25 [5b7e03670] Avoid unnecessary precision loss for pgbench's - -rate ta\n> -->\n> \n> <para>\n> Compute behavior based on pgbench's <option>--rate</option>\n> value more precisely (Tom Lane)\n> </para>\n> </listitem>\n> \n> \"Computing behavior\" sounds a bit odd. 
Maybe \"Improve precision of\n> pgbench's <option>--rate</option>\" option?\n\nDone.\n\n> <listitem>\n> <!--\n> Author: Thomas Munro <tmunro@postgresql.org>\n> 2018-07-13 [387a5cfb9] Add pg_dump - -on-conflict-do-nothing option.\n> -->\n> \n> <para>\n> Allow restoration of an <command>INSERT</command>-statement dump\n> to skip rows which would cause conflicts (Surafel Temesgen)\n> </para>\n> \n> <para>\n> The <application>pg_dump</application> option is\n> <option>--on-conflict-do-nothing</option>.\n> </para>\n> </listitem>\n> \n> Hm, this doesn't seem that clear. It's not really a restoration time\n> option, and it sounds a bit like that in the above. How about instead saying something\n> like:\n> Allow pg_dump to emit INSERT ... ON CONFLICT DO NOTHING (Surafel).\n\nDone.\n\n> <listitem>\n> <!--\n> Author: Andrew Dunstan <andrew@dunslane.net>\n> 2019-02-18 [af25bc03e] Provide an extra-float-digits setting for pg_dump / pg_d\n> -->\n> \n> <para>\n> Allow the number of float digits to be specified\n> for <application>pg_dump</application> and\n> <application>pg_dumpall</application> (Andrew Dunstan)\n> </para>\n> \n> <para>\n> This allows the float digit output to match previous dumps.\n> </para>\n> \n> Hm, feels like that should be combined with the ryu compat entry?\n\nUh, but it relates to this specific command, and it is a new feature\nrather than a compatibility.\n\n> <para>\n> Add <xref linkend=\"sql-create-access-method\"/> command to create\n> new table types (Haribabu Kommi, Andres Freund, Álvaro Herrera,\n> Dimitri Dolgov)\n> </para>\n> \n> A few points:\n> \n> 1) Is this really source code, given that CREATE ACCESS METHOD TYPE\n> TABLE is a DDL command, and USING (...) for CREATE TABLE etc is an\n> option to DDL commands?\n\nI struggled with this. It is a new command, but it has no use yet to\nusers, so if we move it out of \"source code\" we need to be clear it has\nno useful purpose yet. 
Can we do that clearly?\n\n\n> 2) I think the description sounds a bit too much like it's about new\n> forms of tables, rather than their storage. How about something\n> roughly like:\n> \n> Allow different <link linkend=\"tableam\">table access methods</> to be\n> <link linkend=\"sql-create-access-method>created</> and <link\n> linkend=\"sql-createtable-method\">used</>. This allows to develop and\n> use new ways of storing and accessing table data, optimized for\n> different use-cases, without having to modify\n> PostgreSQL. The existing <literal>heap</literal> access method\n> remains the default.\n\nI added a new detail paragraph:\n\n This enables the development of new <link linkend=\"tableam\">table\n access methods</>, which can optimize storage for different\n use-cases. The existing <literal>heap</literal> access method\n remains the default.\n\n> 3) This misses a large set of commits around making tableam possible, in\n> particular the commits around\n> \n> commit 4da597edf1bae0cf0453b5ed6fc4347b6334dfe1\n> Author: Andres Freund <andres@anarazel.de>\n> Date: 2018-11-16 16:35:11 -0800\n> \n> Make TupleTableSlots extensible, finish split of existing slot type.\n> \n> Given that those commits entail an API break relevant for extensions,\n> should we have them as a separate \"source code\" note?\n\nI have added this commit to the table-am item. I don't know if this is\nsomething that extension people care about, but if so, we should\ncertainly add it.\n\n> 4) I think the attribution isn't quite right. For one, a few names with\n> substantial work are missing (Amit Khandekar, Ashutosh Bapat,\n> Alexander Korotkov), and the order doesn't quite seem right. 
On the\n> latter part I might be somewhat petty, but I spend *many* months of\n> my life on this.\n> \n> How about:\n> Andres Freund, Haribabu Kommi, Alvaro Herrera, Alexander Korotkov, David Rowley, Dimitri Golgov\n> if we keep 3) separate and\n\nI used the above list since I combined 3 so far.\n\n> Andres Freund, Haribabu Kommi, Alvaro Herrera, Ashutosh Bapat, Alexander Korotkov, Amit Khandekar, David Rowley, Dimitri Golgov\n> otherwise?\n> \n> I think it might actually make sense to take David off this list,\n> because his tableam work is essentially part of it's own entry, as\n\n> <!--\n> Author: Peter Eisentraut <peter_e@gmx.net>\n> 2018-08-01 [0d5f05cde] Allow multi-inserts during COPY into a partitioned table\n> -->\n> \n> <para>\n> Improve speed of <command>COPY</command> into partitioned tables\n> (David Rowley)\n> </para>\n> \n> since his copy.c portions of 86b85044e823a largely are a rewrite of\n> the above commit.\n> \n\nOK, David removed.\n\n\n\n> <!--\n> Author: Greg Stark <stark@mit.edu>\n> 2018-10-09 [36e9d413a] Add \"B\" suffix for bytes to docs\n> -->\n> \n> <para>\n> Document that the <literal>B</literal>/bytes units can be specified\n> for <link linkend=\"config-setting\">server variables</link>\n> (Greg Stark)\n> </para>\n> </listitem>\n> \n> Given how large changes we skip over in the release notes, I don't\n> really see a point in including changes like this. Feels like we'd at\n> the very least also have to include larger changes with typo/grammar\n> fixes etc?\n\nI mentioned it since it was added in a prior release, but was not\ndocumented, so effectively there was no way for someone to know it was\npossible before, so I thought it made sense to mention it.\n\nI have only corrected a small number of issues above and look for\nguidance to finish the rest. I will reply to the other emails in this\nthread now.\n\n-- \n Bruce Momjian <bruce@momjian.us> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n+ As you are, so once was I. 
As I am, so you will be. +\n+ Ancient Roman grave inscription +\n\n\n",
"msg_date": "Tue, 21 May 2019 15:47:34 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": true,
"msg_subject": "Re: PG 12 draft release notes"
},
{
"msg_contents": "Hi,\n\nOn 2019-05-21 15:47:34 -0400, Bruce Momjian wrote:\n> On Mon, May 20, 2019 at 03:17:19PM -0700, Andres Freund wrote:\n> > Hi,\n> > \n> > Note that I've added a few questions to individuals involved with\n> > specific points. If you're in the To: list, please search for your name.\n> > \n> > \n> > On 2019-05-11 16:33:24 -0400, Bruce Momjian wrote:\n> > > I have posted a draft copy of the PG 12 release notes here:\n> > >\n> > > \thttp://momjian.us/pgsql_docs/release-12.html\n> > > They are committed to git.\n> > \n> > Thanks!\n> > \n> > <title>Migration to Version 12</title>\n> > \n> > There's a number of features in the compat section that are more general\n> > improvements with a side of incompatibility. Won't it be confusing to\n> > e.g. have have the ryu floating point conversion speedups in the compat\n> > section, but not in the \"General Performance\" section?\n> \n> Yes, it can be. What I did with the btree item was to split out the max\n> length change with the larger changes. We can do the same for other\n> items. As you rightly stated, it is for cases where the incompatibility\n> is minor compared to the change. Do you have a list of the ones that\n> need this treatment?\n\nI was concretely thinking of:\n- floating point output changes, which are primarily about performance\n- recovery.conf changes where I'd merge:\n - Do not allow multiple different recovery_target specifications\n - Allow some recovery parameters to be changed with reload\n - Cause recovery to advance to the latest timeline by default\n - Add an explicit value of current for guc-recovery-target-time\n into one entry on the feature side.\n\nAfter having to move recovery settings to a different file, disallowing\nmultiple targets isn't really a separate config break imo. 
And all the\nother changes are also fallout from the recovery.conf GUCification.\n\n\n> > <para>\n> > Remove the special behavior of <link\n> > linkend=\"datatype-oid\">OID</link> columns (Andres Freund,\n> > John Naylor)\n> > </para>\n> > \n> > Should we mention that tables with OIDs have to have their oids removed\n> > before they can be upgraded?\n> \n> Uh, is that true? pg_upgrade? pg_dump?\n\nYes.\n\n\n> > <para>\n> > Refactor <link linkend=\"functions-geometry\">geometric\n> > functions</link> and operators (Emre Hasegeli)\n> > </para>\n> > \n> > <para>\n> > This could lead to more accurate, but slightly different, results\n> > from previous releases.\n> > </para>\n> > </listitem>\n> > <listitem>\n> > <!--\n> > Author: Tomas Vondra <tomas.vondra@postgresql.org>\n> > 2018-08-16 [c4c340088] Use the built-in float datatypes to implement geometric \n> > -->\n> > \n> > <para>\n> > Restructure <link linkend=\"datatype-geometric\">geometric\n> > types</link> to handle NaN, underflow, overflow and division by\n> > zero more consistently (Emre Hasegeli)\n> > </para>\n> > </listitem>\n> > \n> > <listitem>\n> > <!--\n> > Author: Tomas Vondra <tomas.vondra@postgresql.org>\n> > 2018-09-26 [2e2a392de] Fix problems in handling the line data type\n> > -->\n> > \n> > <para>\n> > Improve behavior and error reporting for the <link\n> > linkend=\"datatype-geometric\">line data type</link> (Emre Hasegeli)\n> > </para>\n> > </listitem>\n> > \n> > Is that sufficient explanation? Feels like we need to expand a bit\n> > more. In particular, is it possible that a subset of the changes here\n> > require reindexing?\n> > \n> > Also, aren't three different entries a bit too much?\n> \n> The 'line' item related to more errors than just the ones listed for the\n> geometric data types, so I was not clear how to do that as a single\n> entry. 
I think there is a much larger compatibility breakage\n> possibility with 'line'.\n> \n> > <listitem>\n> > <!--\n> > Author: Alexander Korotkov <akorotkov@postgresql.org>\n> > 2018-07-28 [d2086b08b] Reduce path length for locking leaf B-tree pages during \n> > Author: Peter Geoghegan <pg@bowt.ie>\n> > 2019-03-25 [f21668f32] Add \"split after new tuple\" nbtree optimization.\n> > -->\n> > \n> > <para>\n> > Improve speed of btree index insertions (Peter Geoghegan,\n> > Alexander Korotkov)\n> > </para>\n> > \n> > <para>\n> > The new code improves the space-efficiency of page splits,\n> > reduces locking overhead, and gives better performance for\n> > <command>UPDATE</command>s and <command>DELETE</command>s on\n> > indexes with many duplicates.\n> > </para>\n> > </listitem>\n> > \n> > <listitem>\n> > <!--\n> > Author: Peter Geoghegan <pg@bowt.ie>\n> > 2019-03-20 [dd299df81] Make heap TID a tiebreaker nbtree index column.\n> > Author: Peter Geoghegan <pg@bowt.ie>\n> > 2019-03-20 [fab250243] Consider secondary factors during nbtree splits.\n> > -->\n> > \n> > <para>\n> > Have new btree indexes sort duplicate index entries in heap-storage\n> > order (Peter Geoghegan, Heikki Linnakangas)\n> > </para>\n> > \n> > <para>\n> > Indexes <application>pg_upgraded</application> from previous\n> > releases will not have this ordering.\n> > </para>\n> > </listitem>\n> > \n> > I'm not sure that the grouping here is quite right. And the second entry\n> > probably should have some explanation about the benefits?\n> \n> Agreed.\n> \n> > <listitem>\n> > <!--\n> > Author: Peter Eisentraut <peter_e@gmx.net>\n> > 2018-11-14 [1b5d797cd] Lower lock level for renaming indexes\n> > -->\n> > \n> > <para>\n> > Reduce locking requirements for index renaming (Peter Eisentraut)\n> > </para>\n> > </listitem>\n> > \n> > Should we specify the newly required lock level? 
Because it's quire\n> > relevant for users what exactly they're now able to do concurrently in\n> > operation?\n> \n> Sure.\n> \n> > <listitem>\n> > <!--\n> > Author: Tom Lane <tgl@sss.pgh.pa.us>\n> > 2019-02-09 [1fb57af92] Create the infrastructure for planner support functions.\n> > -->\n> > \n> > <para>\n> > Add support for <link linkend=\"sql-createfunction\">function\n> > selectivity</link> (Tom Lane)\n> > </para>\n> > </listitem>\n> > \n> > Hm, that message doesn't seem like an accurate description of that\n> > commit (if anything it's a391ff3c?). Given that it all requires C\n> > hackery, perhaps we ought to move it to the source code section? And\n> > isn't the most important part of this set of changes\n> > \n> > commit 74dfe58a5927b22c744b29534e67bfdd203ac028\n> > Author: Tom Lane <tgl@sss.pgh.pa.us>\n> > Date: 2019-02-11 21:26:08 -0500\n> > \n> > Allow extensions to generate lossy index conditions.\n> \n> Uh, I missed that as an important item. Can someone give me some text?\n> \n> > <listitem>\n> > <!--\n> > Author: Tomas Vondra <tomas.vondra@postgresql.org>\n> > 2019-01-29 [36a1281f8] Separate per-batch and per-tuple memory contexts in COPY\n> > Author: Heikki Linnakangas <heikki.linnakangas@iki.fi>\n> > 2019-01-25 [9556aa01c] Use single-byte Boyer-Moore-Horspool search even with mu\n> > Author: Andres Freund <andres@anarazel.de>\n> > 2019-01-26 [a9c35cf85] Change function call information to be variable length.\n> > -->\n> > \n> > <para>\n> > Greatly reduce memory consumption of <xref linkend=\"sql-copy\"/>\n> > and function calls (Andres Freund, Tomas Vondra, Tom Lane)\n> > </para>\n> > </listitem>\n> > \n> > Grouping these three changes together makes no sense to me.\n> > \n> > I think the first commit just ought not to be mentioned separately, it's\n> > just a fix for a memory leak in 31f3817402, essentially a 12 only bugfix?\n> \n> Oh, I was not aware of that.\n> \n> > The second commit is about position() etc, which seems not to match that\n> > 
description either?\n> \n> Ugh.\n> \n> > The third is probably more appropriate to be in the source code\n> > section. While it does speed up function calls a bit (in particular\n> > plpgsql which is very function call heavy), it also is a breaking change\n> > for some external code? Not sure why Tom is listed with this entry?\n> \n> The order of names is just a guess when multiple commits are merged ---\n> this needs help.\n> \n> > <listitem>\n> > <!--\n> > Author: Heikki Linnakangas <heikki.linnakangas@iki.fi>\n> > 2019-01-25 [9556aa01c] Use single-byte Boyer-Moore-Horspool search even with mu\n> > -->\n> > \n> > <para>\n> > Improve search performance for multi-byte characters (Heikki\n> > Linnakangas)\n> > </para>\n> > </listitem>\n> > \n> > That's the second reference to the commit. I suspect this is much better\n> > separate, so I'd just remove it from above.\n> \n> Done.\n> \n> > <listitem>\n> > <!--\n> > Author: Stephen Frost <sfrost@snowman.net>\n> > 2019-04-02 [4d0e994ee] Add support for partial TOAST decompression\n> > -->\n> > \n> > <para>\n> > Allow <link linkend=\"storage-toast\"><literal>TOAST</literal></link>\n> > values to be minimally decompressed (Paul Ramsey)\n> > </para>\n> > \n> > I'd s/minimal/partial/ - I don't think the code guarantees anything\n> > about it being minimal? And \"minimally decompressed\" also is somewhat\n> > confusing, because it sounds like it's about the compression quality\n> > rather than only decompressing part of the data.\n> \n> It is confusing. 
Is \"partially decompressed\" better?\n> \n> > <listitem>\n> > <!--\n> > Author: Michael Paquier <michael@paquier.xyz>\n> > 2018-08-10 [f841ceb26] Improve TRUNCATE by avoiding early lock queue\n> > -->\n> > \n> > <para>\n> > Prevent <xref linkend=\"sql-truncate\"/> from requesting a lock on\n> > tables for which it lacks permission (Michaël Paquier)\n> > </para>\n> > \n> > <para>\n> > This prevents unauthorized locking delays.\n> > </para>\n> > </listitem>\n> > \n> > <listitem>\n> > <!--\n> > Author: Michael Paquier <michael@paquier.xyz>\n> > 2018-08-27 [a556549d7] Improve VACUUM and ANALYZE by avoiding early lock queue\n> > -->\n> > \n> > <para>\n> > Prevent <command>VACUUM</command> and <command>ANALYZE</command>\n> > from requesting a lock on tables for which it lacks permission\n> > (Michaël Paquier)\n> > </para>\n> > \n> > <para>\n> > This prevents unauthorized locking delays.\n> > </para>\n> > </listitem>\n> > \n> > \n> > I don't think this should be in the <title><acronym>Authentication</acronym></title>\n> > section.\n> \n> I put it in that section since I thought the motivation was to prevent\n> people from locking up connecting to the database if someone has a\n> pending VACUUM/ANALYZE. No?\n> \n> > Also perhaps, s/it/the user/, or \"the caller\"?\n> \n> Agreed, \"the user\".\n> \n> > <listitem>\n> > <!--\n> > Author: Tom Lane <tgl@sss.pgh.pa.us>\n> > 2019-03-10 [cbccac371] Reduce the default value of autovacuum_vacuum_cost_delay\n> > -->\n> > \n> > <para>\n> > Reduce the default value of <xref\n> > linkend=\"guc-autovacuum-vacuum-cost-delay\"/> to 2ms (Tom Lane)\n> > </para>\n> > </listitem>\n> > \n> > I think this needs to explain that this can increase autovacuum's IO\n> > throughput considerably.\n> \n> Uh, well, do we normally document the effect of a change like this? It\n> will cause vacuum to be more agressive, and increase I/O? 
Do we want to\n> re-educate on what this paramater does?\n> \n> > <listitem>\n> > <!--\n> > Author: Tom Lane <tgl@sss.pgh.pa.us>\n> > 2019-03-10 [caf626b2c] Convert [autovacuum_]vacuum_cost_delay into floating-poi\n> > -->\n> > \n> > <para>\n> > Allow <xref linkend=\"guc-vacuum-cost-delay\"/> to specify\n> > sub-millisecond delays (Tom Lane)\n> > </para>\n> > \n> > <para>\n> > Floating-point values can also now be specified.\n> > </para>\n> > </listitem>\n> > \n> > And this should be merged with the previous entry?\n> \n> Uh, I thought the change of default and its range were different enough\n> that combining them would add confusion.\n> \n> > <listitem>\n> > <!--\n> > Author: Tom Lane <tgl@sss.pgh.pa.us>\n> > 2019-03-10 [caf626b2c] Convert [autovacuum_]vacuum_cost_delay into floating-poi\n> > -->\n> > \n> > <para>\n> > Allow time-based server variables to use <link\n> > linkend=\"config-setting\">micro-seconds</link> (us) (Tom Lane)\n> > </para>\n> > </listitem>\n> > \n> > <listitem>\n> > <!--\n> > Author: Tom Lane <tgl@sss.pgh.pa.us>\n> > 2019-03-11 [1a83a80a2] Allow fractional input values for integer GUCs, and impr\n> > -->\n> > \n> > <para>\n> > Allow fractional input for integer server variables (Tom Lane)\n> > </para>\n> > \n> > <para>\n> > For example, <command>SET work_mem = '30.1GB'</command>.\n> > </para>\n> > </listitem>\n> > \n> > <listitem>\n> > <!--\n> > Author: Tom Lane <tgl@sss.pgh.pa.us>\n> > 2019-03-10 [caf626b2c] Convert [autovacuum_]vacuum_cost_delay into floating-poi\n> > -->\n> > \n> > <para>\n> > Allow units to be specified for floating-point server variables\n> > (Tom Lane)\n> > </para>\n> > </listitem>\n> > \n> > Can't we combine these? Seems excessively detailed in comparison to the\n> > rest of the entries.\n> \n> See above. 
It seems confusing to combine them but please propose text\n> if you think it is possible.\n> \n> > <listitem>\n> > <!--\n> > Author: Peter Eisentraut <peter@eisentraut.org>\n> > 2019-01-11 [ff8530605] Add value 'current' for recovery_target_timeline\n> > -->\n> > \n> > <para>\n> > Add an explicit value of <literal>current</literal> for <xref\n> > linkend=\"guc-recovery-target-time\"/> (Peter Eisentraut)\n> > </para>\n> > </listitem>\n> > \n> > Seems like this should be combined with the earlier \"Cause recovery to\n> > advance to the latest timeline by default\" entry.\n> \n> The odd part is that the old default was 'current' but there was no way\n> to specify current --- you just specified nothing. That seemed\n> confusing enough that having them combined would add confusion, but if\n> you have some suggested text?\n> \n> > <listitem>\n> > <!--\n> > Author: Peter Eisentraut <peter@eisentraut.org>\n> > 2019-03-30 [fc22b6623] Generated columns\n> > -->\n> > \n> > <para>\n> > Add support for <link linkend=\"sql-createtable\">generated\n> > columns</link> (Peter Eisentraut)\n> > </para>\n> > \n> > <para>\n> > Rather than storing a value only at row creation time, generated\n> > columns are also modified during updates, and can reference other\n> > table columns.\n> > </para>\n> > </listitem>\n> > \n> > I find this description confusing. 
How about cribbing from the commit?\n> > Roughly like\n> > \n> > This allows creating columns that are computed from expressions,\n> > including references to other columns in the same table, rather than\n> > having to be specified by the inserter/updater.\n> > \n> > Think we also ought to mention that this is only stored generated\n> > columns, given that the SQL feature also includes virtual columns?\n> \n> OK, new text is:\n> \n> The content of generated columns are computed from expressions\n> (including references to other columns in the same table)\n> rather than being specified by <command>INSERT</command> or\n> <command>UPDATE</command> commands.\n> > \n> > <listitem>\n> > <!--\n> > Author: Fujii Masao <fujii@postgresql.org>\n> > 2019-04-08 [119dcfad9] Add vacuum_truncate reloption.\n> > Author: Fujii Masao <fujii@postgresql.org>\n> > 2019-05-08 [b84dbc8eb] Add TRUNCATE parameter to VACUUM.\n> > -->\n> > \n> > <para>\n> > Add <xref linkend=\"sql-vacuum\"/> and <command>CREATE\n> > TABLE</command> options to prevent <command>VACUUM</command>\n> > from truncating trailing empty pages (Tsunakawa Takayuki)\n> > </para>\n> > \n> > <para>\n> > The options are <varname>vacuum_truncate</varname> and\n> > <varname>toast.vacuum_truncate</varname>. This reduces vacuum\n> > locking requirements.\n> > </para>\n> > </listitem>\n> > \n> > Maybe add something like: \"This can be helpful to avoid query\n> > cancellations on standby that are not avoided by hot_standby_feedback.\"?\n> \n> So you turn off truncate on the primary becaues the replay of the\n> truncate on the standby might cause a cancelation? 
I was not aware that\n> was a common problem.\n> \n> > <listitem>\n> > <!--\n> > Author: Robert Haas <rhaas@postgresql.org>\n> > 2019-04-04 [a96c41fee] Allow VACUUM to be run with index cleanup disabled.\n> > -->\n> > \n> > <para>\n> > Allow vacuum to avoid index cleanup with the\n> > <literal>INDEX_CLEANUP</literal> option (Masahiko Sawada)\n> > </para>\n> > </listitem>\n> > \n> > I think we ought to expand a bit more on why one would do that,\n> > including perhaps some caveat?\n> \n> I actually have no idea why someone would want to do that.\n> \n> > <listitem>\n> > <!--\n> > Author: Peter Eisentraut <peter@eisentraut.org>\n> > 2019-03-19 [590a87025] Ignore attempts to add TOAST table to shared or catalog \n> > -->\n> > \n> > <para>\n> > Allow modifications of system tables using <xref\n> > linkend=\"sql-altertable\"/> (Peter Eisentraut)\n> > </para>\n> > \n> > <para>\n> > This allows modifications of <literal>reloptions</literal> and\n> > autovacuum settings.\n> > </para>\n> > </listitem>\n> > \n> > I think the first paragraph is a bit dangerous. This does *not*\n> > generally allow modifications of system tables using ALTER TABLE.\n> \n> OK, new text added \"options\":\n> \n> Allow modifications of system table options using <xref\n> linkend=\"sql-altertable\"/> (Peter Eisentraut)\n> \n> > <listitem>\n> > <!--\n> > Author: Tom Lane <tgl@sss.pgh.pa.us>\n> > 2018-09-25 [5b7e03670] Avoid unnecessary precision loss for pgbench's - -rate ta\n> > -->\n> > \n> > <para>\n> > Compute behavior based on pgbench's <option>--rate</option>\n> > value more precisely (Tom Lane)\n> > </para>\n> > </listitem>\n> > \n> > \"Computing behavior\" sounds a bit odd. 
Maybe \"Improve precision of\n> > pgbench's <option>--rate</option>\" option?\n> \n> Done.\n> \n> > <listitem>\n> > <!--\n> > Author: Thomas Munro <tmunro@postgresql.org>\n> > 2018-07-13 [387a5cfb9] Add pg_dump - -on-conflict-do-nothing option.\n> > -->\n> > \n> > <para>\n> > Allow restoration of an <command>INSERT</command>-statement dump\n> > to skip rows which would cause conflicts (Surafel Temesgen)\n> > </para>\n> > \n> > <para>\n> > The <application>pg_dump</application> option is\n> > <option>--on-conflict-do-nothing</option>.\n> > </para>\n> > </listitem>\n> > \n> > Hm, this doesn't seem that clear. It's not really a restoration time\n> > option, and it sounds a bit like that in the above. How about instead saying something\n> > like:\n> > Allow pg_dump to emit INSERT ... ON CONFLICT DO NOTHING (Surafel).\n> \n> Done.\n> \n> > <listitem>\n> > <!--\n> > Author: Andrew Dunstan <andrew@dunslane.net>\n> > 2019-02-18 [af25bc03e] Provide an extra-float-digits setting for pg_dump / pg_d\n> > -->\n> > \n> > <para>\n> > Allow the number of float digits to be specified\n> > for <application>pg_dump</application> and\n> > <application>pg_dumpall</application> (Andrew Dunstan)\n> > </para>\n> > \n> > <para>\n> > This allows the float digit output to match previous dumps.\n> > </para>\n> > \n> > Hm, feels like that should be combined with the ryu compat entry?\n> \n> Uh, but it relates to this specific command, and it is a new feature\n> rather than a compatibility.\n> \n> > <para>\n> > Add <xref linkend=\"sql-create-access-method\"/> command to create\n> > new table types (Haribabu Kommi, Andres Freund, Álvaro Herrera,\n> > Dimitri Dolgov)\n> > </para>\n> > \n> > A few points:\n> > \n> > 1) Is this really source code, given that CREATE ACCESS METHOD TYPE\n> > TABLE is a DDL command, and USING (...) for CREATE TABLE etc is an\n> > option to DDL commands?\n> \n> I struggled with this. 
It is a new command, but it has no use yet to\n> users, so if we move it out of \"source code\" we need to be clear it has\n> no useful purpose yet. Can we do that clearly?\n> \n> \n> > 2) I think the description sounds a bit too much like it's about new\n> > forms of tables, rather than their storage. How about something\n> > roughly like:\n> > \n> > Allow different <link linkend=\"tableam\">table access methods</> to be\n> > <link linkend=\"sql-create-access-method>created</> and <link\n> > linkend=\"sql-createtable-method\">used</>. This allows to develop and\n> > use new ways of storing and accessing table data, optimized for\n> > different use-cases, without having to modify\n> > PostgreSQL. The existing <literal>heap</literal> access method\n> > remains the default.\n> \n> I added a new detail paragraph:\n> \n> This enables the development of new <link linkend=\"tableam\">table\n> access methods</>, which can optimize storage for different\n> use-cases. The existing <literal>heap</literal> access method\n> remains the default.\n> \n> > 3) This misses a large set of commits around making tableam possible, in\n> > particular the commits around\n> > \n> > commit 4da597edf1bae0cf0453b5ed6fc4347b6334dfe1\n> > Author: Andres Freund <andres@anarazel.de>\n> > Date: 2018-11-16 16:35:11 -0800\n> > \n> > Make TupleTableSlots extensible, finish split of existing slot type.\n> > \n> > Given that those commits entail an API break relevant for extensions,\n> > should we have them as a separate \"source code\" note?\n> \n> I have added this commit to the table-am item. I don't know if this is\n> something that extension people care about, but if so, we should\n> certainly add it.\n> \n> > 4) I think the attribution isn't quite right. For one, a few names with\n> > substantial work are missing (Amit Khandekar, Ashutosh Bapat,\n> > Alexander Korotkov), and the order doesn't quite seem right. 
On the\n> > latter part I might be somewhat petty, but I spend *many* months of\n> > my life on this.\n> > \n> > How about:\n> > Andres Freund, Haribabu Kommi, Alvaro Herrera, Alexander Korotkov, David Rowley, Dimitri Golgov\n> > if we keep 3) separate and\n> \n> I used the above list since I combined 3 so far.\n> \n> > Andres Freund, Haribabu Kommi, Alvaro Herrera, Ashutosh Bapat, Alexander Korotkov, Amit Khandekar, David Rowley, Dimitri Golgov\n> > otherwise?\n> > \n> > I think it might actually make sense to take David off this list,\n> > because his tableam work is essentially part of it's own entry, as\n> \n> > <!--\n> > Author: Peter Eisentraut <peter_e@gmx.net>\n> > 2018-08-01 [0d5f05cde] Allow multi-inserts during COPY into a partitioned table\n> > -->\n> > \n> > <para>\n> > Improve speed of <command>COPY</command> into partitioned tables\n> > (David Rowley)\n> > </para>\n> > \n> > since his copy.c portions of 86b85044e823a largely are a rewrite of\n> > the above commit.\n> > \n> \n> OK, David removed.\n> \n> \n> \n> > <!--\n> > Author: Greg Stark <stark@mit.edu>\n> > 2018-10-09 [36e9d413a] Add \"B\" suffix for bytes to docs\n> > -->\n> > \n> > <para>\n> > Document that the <literal>B</literal>/bytes units can be specified\n> > for <link linkend=\"config-setting\">server variables</link>\n> > (Greg Stark)\n> > </para>\n> > </listitem>\n> > \n> > Given how large changes we skip over in the release notes, I don't\n> > really see a point in including changes like this. Feels like we'd at\n> > the very least also have to include larger changes with typo/grammar\n> > fixes etc?\n> \n> I mentioned it since it was added in a prior release, but was not\n> documented, so effectively there was no way for someone to know it was\n> possible before, so I thought it made sense to mention it.\n> \n> I have only corrected a small number of issues above and look for\n> guidance to finish the rest. 
I will reply to the other emails in this\n> thread now.\n> \n> -- \n> Bruce Momjian <bruce@momjian.us> http://momjian.us\n> EnterpriseDB http://enterprisedb.com\n> \n> + As you are, so once was I. As I am, so you will be. +\n> + Ancient Roman grave inscription +\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 21 May 2019 12:57:56 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: PG 12 draft release notes"
},
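[Editor's note] The generated-columns wording debated in the message above corresponds to DDL along these lines — a minimal sketch with hypothetical table and column names; note PG 12 implements only the STORED form of the SQL feature, not virtual columns:

```sql
-- Stored generated column: its value is computed from another column of
-- the same row at INSERT time and recomputed on every UPDATE; it cannot
-- be assigned to directly.
CREATE TABLE people (
    height_cm numeric,
    height_in numeric GENERATED ALWAYS AS (height_cm / 2.54) STORED
);

INSERT INTO people (height_cm) VALUES (180);
UPDATE people SET height_cm = 190;   -- height_in is recomputed automatically
```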
{
"msg_contents": "On Tue, May 21, 2019 at 12:08:25AM +0100, Andrew Gierth wrote:\n> >>>>> \"Andres\" == Andres Freund <andres@anarazel.de> writes:\n> \n> Andres> Any chance for you to propose a text?\n> \n> This is what I posted before; I'm not 100% happy with it but it's still\n> better than any of the other versions:\n> \n> * Output REAL and DOUBLE PRECISION values in shortest-exact format by\n> default, and change the behavior of extra_float_digits\n> \n> Previously, float values were output rounded to 6 or 15 decimals by\n> default, with the number of decimals adjusted by extra_float_digits.\n> The previous rounding behavior is no longer the default, and is now\n> done only if extra_float_digits is set to zero or less; if the value\n> is greater than zero (which it is by default), a shortest-precise\n> representation is output (for a substantial performance improvement).\n> This representation preserves the exact binary value when correctly\n> read back in, even though the trailing digits will usually differ\n> from the output generated by previous versions when\n> extra_float_digits=3.\n\nHow is this?\n\n <para>\n Improve performance by changing the default number of trailing digits\n output for <link linkend=\"datatype-float\"><type>REAL</type></link>\n and <type>DOUBLE PRECISION</type> values (Andrew Gierth)\n </para>\n\n <para>\n Previously, float values were output rounded to 6 or 15 decimals\n by default. Now, only the number of digits required to preserve\n the exact binary value is output. The previous behavior can be\n restored by setting <xref linkend=\"guc-extra-float-digits\"> to zero.\n </para>\n\nAm I missing something?\n\n-- \n Bruce Momjian <bruce@momjian.us> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n+ As you are, so once was I. As I am, so you will be. +\n+ Ancient Roman grave inscription +\n\n\n",
"msg_date": "Tue, 21 May 2019 16:28:02 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": true,
"msg_subject": "Re: PG 12 draft release notes"
},
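[Editor's note] The extra_float_digits behavior change described above can be condensed into a session sketch (assuming a PG 12 psql session; the exact digits shown follow from the shortest-round-trip rule the message describes):

```sql
-- PG 12 default: shortest decimal representation that reads back to the
-- exact binary value.
SELECT pi();                   -- 3.141592653589793

-- Restore the pre-12 rounded output (15 significant digits for float8):
SET extra_float_digits = 0;
SELECT pi();                   -- 3.14159265358979
```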
{
"msg_contents": "On Mon, May 20, 2019 at 06:56:50PM -0400, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > Note that I've added a few questions to individuals involved with\n> > specific points. If you're in the To: list, please search for your name.\n> \n> I'm not sure which of my commits you want me to opine on, other than\n> \n> > <listitem>\n> > <!--\n> > Author: Tom Lane <tgl@sss.pgh.pa.us>\n> > 2019-01-30 [5f5c01459] Allow RECORD and RECORD[] to be specified in function co\n> > -->\n> > <para>\n> > Allow <type>RECORD</type> and <type>RECORD[]</type> to be specified\n> > as a function <link linkend=\"sql-createfunction\">return-value\n> > record</link> (Elvis Pranskevichus)\n> > </para>\n> > <para>\n> > DETAIL?\n> > </para>\n> > </listitem>\n> \n> > This description doesn't sound accurate to me. Tom?\n> \n> Yeah, maybe better\n> \n> Allow <type>RECORD</type> and <type>RECORD[]</type> to be used\n> as column types in a query's column definition list for a\n> table function that is declared to return <type>RECORD</type>\n> (Elvis Pranskevichus)\n> \n> You could link to \"queries-tablefunctions\" which describes the column\n> definition business; it's much more specific than \"sql-createfunction\".\n\nDone.\n\n-- \n Bruce Momjian <bruce@momjian.us> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n+ As you are, so once was I. As I am, so you will be. +\n+ Ancient Roman grave inscription +\n\n\n",
"msg_date": "Tue, 21 May 2019 16:35:48 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": true,
"msg_subject": "Re: PG 12 draft release notes"
},
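[Editor's note] Tom's corrected wording — RECORD and RECORD[] as column types in a column definition list for a function declared to return RECORD — describes usage roughly like this sketch (function and column names hypothetical; exact accepted forms are per commit 5f5c01459):

```sql
-- A set-returning function declared as SETOF record, whose column
-- definition list may now itself include a column of type record.
CREATE FUNCTION pairs() RETURNS SETOF record
LANGUAGE plpgsql AS $$
BEGIN
    RETURN QUERY SELECT 1, ROW(2, 3);
END $$;

SELECT * FROM pairs() AS t(a int, r record);
```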
{
"msg_contents": "On Tue, May 21, 2019 at 12:04:50PM +1200, David Rowley wrote:\n> On Tue, 21 May 2019 at 10:17, Andres Freund <andres@anarazel.de> wrote:\n> > commit 4da597edf1bae0cf0453b5ed6fc4347b6334dfe1\n> > Author: Andres Freund <andres@anarazel.de>\n> > Date: 2018-11-16 16:35:11 -0800\n> >\n> > Make TupleTableSlots extensible, finish split of existing slot type.\n> >\n> > Given that those commits entail an API break relevant for extensions,\n> > should we have them as a separate \"source code\" note?\n> >\n> > 4) I think the attribution isn't quite right. For one, a few names with\n> > substantial work are missing (Amit Khandekar, Ashutosh Bapat,\n> > Alexander Korotkov), and the order doesn't quite seem right. On the\n> > latter part I might be somewhat petty, but I spend *many* months of\n> > my life on this.\n> >\n> > How about:\n> > Andres Freund, Haribabu Kommi, Alvaro Herrera, Alexander Korotkov, David Rowley, Dimitri Golgov\n> > if we keep 3) separate and\n> > Andres Freund, Haribabu Kommi, Alvaro Herrera, Ashutosh Bapat, Alexander Korotkov, Amit Khandekar, David Rowley, Dimitri Golgov\n> > otherwise?\n> >\n> > I think it might actually make sense to take David off this list,\n> > because his tableam work is essentially part of it's own entry, as\n> \n> Yeah, please take me off that one. My focus there was mostly on\n> keeping COPY fast with partitioned tables, to which, as Andres\n> mentioned is listed somewhere else.\n\nDone.\n\n-- \n Bruce Momjian <bruce@momjian.us> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n+ As you are, so once was I. As I am, so you will be. +\n+ Ancient Roman grave inscription +\n\n\n",
"msg_date": "Tue, 21 May 2019 16:36:28 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": true,
"msg_subject": "Re: PG 12 draft release notes"
},
{
"msg_contents": "On Mon, May 20, 2019 at 08:48:15PM -0400, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > On 2019-05-20 18:56:50 -0400, Tom Lane wrote:\n> >> I'm not sure which of my commits you want me to opine on, other than\n> \n> > That was one of the main ones. I'm also specifically wondering about:\n> \n> >> Author: Tom Lane <tgl@sss.pgh.pa.us>\n> >> 2019-02-09 [1fb57af92] Create the infrastructure for planner support functions.\n> >> <para>\n> >> Add support for <link linkend=\"sql-createfunction\">function\n> >> selectivity</link> (Tom Lane)\n> >> </para>\n> >> </listitem>\n> >> \n> >> Hm, that message doesn't seem like an accurate description of that\n> >> commit (if anything it's a391ff3c?). Given that it all requires C\n> >> hackery, perhaps we ought to move it to the source code section?\n> \n> Yes, this should be in \"source code\". I think it should be merged\n> with a391ff3c and 74dfe58a into something like\n> \n> \tAllow extensions to create planner support functions that\n> \tcan provide function-specific selectivity, cost, and\n> row-count estimates that can depend on the function arguments.\n> Support functions can also transform WHERE clauses involving\n> an extension's functions and operators into indexable clauses\n> in ways that the core code cannot for lack of detailed semantic\n> \tknowledge of those functions/operators.\n\nThe new text is:\n\n Add support function capability to improve optimizer estimates\n for functions (Tom Lane)\n\n This allows extensions to create planner support functions that\n can provide function-specific selectivity, cost, and row-count\n estimates that can depend on the function arguments. Also, improve\n in-core estimates for <function>generate_series()</function>,\n <function>unnest()</function>, and functions that return boolean\n values.\n\nNotice that there are some improvments in in-core functions. 
Should this\nstill be moved to the source code section?\n\n> > and perhaps you could opine on whether we ought to include\n> \n> >> <listitem>\n> >> <!--\n> >> Author: Tom Lane <tgl@sss.pgh.pa.us>\n> >> 2019-02-11 [1d92a0c9f] Redesign the partition dependency mechanism.\n> >> -->\n> >> \n> >> <para>\n> >> Improve handling of partition dependency (Tom Lane)\n> >> </para>\n> >> \n> >> <para>\n> >> This prevents the creation of inconsistent partition hierarchies\n> >> in rare cases.\n> >> </para>\n> >> </listitem>\n> \n> It's probably worth mentioning, but I'd say something like\n> \n> Fix bugs that could cause ALTER TABLE DETACH PARTITION\n> to not drop objects that should be dropped, such as\n> automatically-created child indexes.\n> \n> The rest of it is not terribly interesting from a user's standpoint,\n> I think.\n\nDone.\n\n-- \n Bruce Momjian <bruce@momjian.us> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n+ As you are, so once was I. As I am, so you will be. +\n+ Ancient Roman grave inscription +\n\n\n",
"msg_date": "Tue, 21 May 2019 16:47:23 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": true,
"msg_subject": "Re: PG 12 draft release notes"
},
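[Editor's note] The planner support function capability discussed above surfaces in DDL via the SUPPORT clause of CREATE FUNCTION. A sketch with hypothetical names — the support function itself must be written in C against the planner-support API the commits add:

```sql
-- The support function receives planner requests (selectivity, cost,
-- row count, index-condition transformation) for the target function.
CREATE FUNCTION my_func_support(internal) RETURNS internal
    AS 'my_extension', 'my_func_support' LANGUAGE C STRICT;

CREATE FUNCTION my_func(integer) RETURNS integer
    AS 'my_extension', 'my_func' LANGUAGE C STRICT
    SUPPORT my_func_support;
```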
{
"msg_contents": "On Mon, May 20, 2019 at 05:48:50PM -0700, Peter Geoghegan wrote:\n> On Mon, May 20, 2019 at 3:17 PM Andres Freund <andres@anarazel.de> wrote:\n> > <!--\n> > Author: Alexander Korotkov <akorotkov@postgresql.org>\n> > 2018-07-28 [d2086b08b] Reduce path length for locking leaf B-tree pages during\n> > Author: Peter Geoghegan <pg@bowt.ie>\n> > 2019-03-25 [f21668f32] Add \"split after new tuple\" nbtree optimization.\n> > -->\n> >\n> > <para>\n> > Improve speed of btree index insertions (Peter Geoghegan,\n> > Alexander Korotkov)\n> > </para>\n> \n> My concern here (which I believe Alexander shares) is that it doesn't\n> make sense to group these two items together. They're two totally\n> unrelated pieces of work. Alexander's work does more or less help with\n> lock contention with writes, whereas the feature that that was merged\n> with is about preventing index bloat, which is mostly helpful for\n> reads (it helps writes to the extent that writes are also reads).\n> \n> The release notes go on to say that this item \"gives better\n> performance for UPDATEs and DELETEs on indexes with many duplicates\",\n> which is wrong. That is something that should have been listed below,\n> under the \"duplicate index entries in heap-storage order\" item.\n\nOK, I understand how the lock stuff improves things, but I have\nforgotten how indexes are made smaller. Is it because of better page\nsplit logic?\n\n> > Author: Peter Geoghegan <pg@bowt.ie>\n> > 2019-03-20 [dd299df81] Make heap TID a tiebreaker nbtree index column.\n> > Author: Peter Geoghegan <pg@bowt.ie>\n> > 2019-03-20 [fab250243] Consider secondary factors during nbtree splits.\n> > -->\n> >\n> > <para>\n> > Have new btree indexes sort duplicate index entries in heap-storage\n> > order (Peter Geoghegan, Heikki Linnakangas)\n> > </para>\n> \n> > I'm not sure that the grouping here is quite right. 
And the second entry\n> > probably should have some explanation about the benefits?\n> \n> It could stand to say something about the benefits. As I said, there\n> is already a little bit about the benefits, but that ended up being\n> tied to the \"Improve speed of btree index insertions\" item. Moving\n> that snippet to the correct item would be a good start.\n\nAs I remember the benefit currently is that you can find update and\ndeleted rows faster, right?\n\n-- \n Bruce Momjian <bruce@momjian.us> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n+ As you are, so once was I. As I am, so you will be. +\n+ Ancient Roman grave inscription +\n\n\n",
"msg_date": "Tue, 21 May 2019 16:51:45 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": true,
"msg_subject": "Re: PG 12 draft release notes"
},
{
"msg_contents": "Bruce Momjian <bruce@momjian.us> writes:\n> On Mon, May 20, 2019 at 08:48:15PM -0400, Tom Lane wrote:\n>> Yes, this should be in \"source code\". I think it should be merged\n>> with a391ff3c and 74dfe58a into something like\n>> \n>> Allow extensions to create planner support functions that\n>> can provide function-specific selectivity, cost, and\n>> row-count estimates that can depend on the function arguments.\n>> Support functions can also transform WHERE clauses involving\n>> an extension's functions and operators into indexable clauses\n>> in ways that the core code cannot for lack of detailed semantic\n>> knowledge of those functions/operators.\n\n> The new text is:\n\n> Add support function capability to improve optimizer estimates\n> for functions (Tom Lane)\n\n> This allows extensions to create planner support functions that\n> can provide function-specific selectivity, cost, and row-count\n> estimates that can depend on the function arguments. Also, improve\n> in-core estimates for <function>generate_series()</function>,\n> <function>unnest()</function>, and functions that return boolean\n> values.\n\nUh ... you completely lost the business about custom indexable clauses.\nI agree with Andres that that's the most important aspect of this.\n\n> Notice that there are some improvments in in-core functions. Should this\n> still be moved to the source code section?\n\nI doubt that that's worth mentioning at all. It certainly isn't a\nreason not to move this to the source-code section, because that's\nwhere we generally put things that are of interest for improving\nextensions, which is what this mainly is.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 21 May 2019 17:00:31 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: PG 12 draft release notes"
},
{
"msg_contents": "On Tue, May 21, 2019 at 1:51 PM Bruce Momjian <bruce@momjian.us> wrote:\n> > My concern here (which I believe Alexander shares) is that it doesn't\n> > make sense to group these two items together. They're two totally\n> > unrelated pieces of work. Alexander's work does more or less help with\n> > lock contention with writes, whereas the feature that that was merged\n> > with is about preventing index bloat, which is mostly helpful for\n> > reads (it helps writes to the extent that writes are also reads).\n> >\n> > The release notes go on to say that this item \"gives better\n> > performance for UPDATEs and DELETEs on indexes with many duplicates\",\n> > which is wrong. That is something that should have been listed below,\n> > under the \"duplicate index entries in heap-storage order\" item.\n>\n> OK, I understand how the lock stuff improves things, but I have\n> forgotten how indexes are made smaller. Is it because of better page\n> split logic?\n\nThat is clearly the main reason, though suffix truncation (which\nrepresents that trailing/suffix columns in index tuples from branch\npages have \"negative infinity\" sentinel values) also contributes to\nmaking indexes smaller.\n\nThe page split stuff was mostly added by commit fab250243 (\"Consider\nsecondary factors during nbtree splits\"), but commit f21668f32 (\"Add\n\"split after new tuple\" nbtree optimization\") added to that in a way\nthat really helped the TPC-C indexes. The TPC-C indexes are about 40%\nsmaller now.\n\n> > > Author: Peter Geoghegan <pg@bowt.ie>\n> > > 2019-03-20 [dd299df81] Make heap TID a tiebreaker nbtree index column.\n\n> As I remember the benefit currently is that you can find update and\n> deleted rows faster, right?\n\nYes, that's true when writing to the index. But more importantly, it\nreally helps VACUUM when there are lots of duplicates, which is fairly\ncommon in the real world (imagine an index where 20% of the rows are\nNULL, for example). 
In effect, there are no duplicates anymore,\nbecause all index tuples are unique internally.\n\nIndexes with lots of duplicates group older rows together, and new\nrows together, because treating heap TID as a tiebreaker naturally has\nthat effect. VACUUM will generally dirty far fewer pages, because bulk\ndeletions tend to be correlated with heap TID. And, VACUUM has a much\nbetter chance of deleting entire leaf pages, because dead tuples end\nup getting grouped together.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Tue, 21 May 2019 14:22:53 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: PG 12 draft release notes"
},
{
"msg_contents": "On 5/12/19 5:33 AM, Bruce Momjian wrote:\n> I have posted a draft copy of the PG 12 release notes here:\n> \n> \thttp://momjian.us/pgsql_docs/release-12.html\n> \n> They are committed to git. It includes links to the main docs, where\n> appropriate. Our official developer docs will rebuild in a few hours.\n\nIn section \"Authentication\":\n\n https://www.postgresql.org/docs/devel/release-12.html#id-1.11.6.5.5.3.7\n\nthe last two items are performance improvements not related to authentication;\npresumably the VACUUM item would be better off in the \"Utility Commands\"\nsection and the TRUNCATE item in \"General Performance\"?\n\nIn section \"Source code\":\n\n https://www.postgresql.org/docs/devel/release-12.html#id-1.11.6.5.5.12\n\nthe item \"Add CREATE ACCESS METHOD command\" doesn't seem related to the\nsource code itself, though I'm not sure where it should go.\n\n\nRegards\n\nIan Barwick\n\n\n\n-- \n Ian Barwick https://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n\n",
"msg_date": "Wed, 22 May 2019 09:19:53 +0900",
"msg_from": "Ian Barwick <ian.barwick@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: PG 12 draft release notes"
},
{
"msg_contents": "On Tue, May 21, 2019 at 3:49 AM Peter Geoghegan <pg@bowt.ie> wrote:\n> On Mon, May 20, 2019 at 3:17 PM Andres Freund <andres@anarazel.de> wrote:\n> > <!--\n> > Author: Alexander Korotkov <akorotkov@postgresql.org>\n> > 2018-07-28 [d2086b08b] Reduce path length for locking leaf B-tree pages during\n> > Author: Peter Geoghegan <pg@bowt.ie>\n> > 2019-03-25 [f21668f32] Add \"split after new tuple\" nbtree optimization.\n> > -->\n> >\n> > <para>\n> > Improve speed of btree index insertions (Peter Geoghegan,\n> > Alexander Korotkov)\n> > </para>\n>\n> My concern here (which I believe Alexander shares) is that it doesn't\n> make sense to group these two items together. They're two totally\n> unrelated pieces of work. Alexander's work does more or less help with\n> lock contention with writes, whereas the feature that that was merged\n> with is about preventing index bloat, which is mostly helpful for\n> reads (it helps writes to the extent that writes are also reads).\n>\n> The release notes go on to say that this item \"gives better\n> performance for UPDATEs and DELETEs on indexes with many duplicates\",\n> which is wrong. That is something that should have been listed below,\n> under the \"duplicate index entries in heap-storage order\" item.\n\n+1\n\n------\nAlexander Korotkov\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company\n\n\n",
"msg_date": "Wed, 22 May 2019 06:11:51 +0300",
"msg_from": "Alexander Korotkov <aekorotkov@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: PG 12 draft release notes"
},
{
"msg_contents": "On Wed, May 22, 2019 at 09:19:53AM +0900, Ian Barwick wrote:\n> the last two items are performance improvements not related to authentication;\n> presumably the VACUUM item would be better off in the \"Utility Commands\"\n> section and the TRUNCATE item in \"General Performance\"?\n\nI agree with removing them from authentication, but these are not\nperformance-related items. Instead I think that \"Utility commands\" is\na place where they can live better.\n\nI am wondering if we should insist on the DOS attacks on a server, as\nnon-authorized users are basically able to block any tables, and\nauthorization is only a part of it, one of the worst parts\nactually... Anyway, I think that \"This prevents unauthorized locking\ndelays.\" does not provide enough details. What about reusing the\nfirst paragraph of the commits? Here is an idea:\n\"A caller of TRUNCATE/VACUUM/ANALYZE could previously queue for an\naccess exclusive lock on a relation it may not have permission to\ntruncate/vacuum/analyze, potentially interfering with users authorized\nto work on it. This could prevent users from accessing some relations\nthey have access to, in some cases preventing authentication if a\ncritical catalog relation was blocked.\"\n--\nMichael",
"msg_date": "Wed, 22 May 2019 16:26:51 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: PG 12 draft release notes"
},
{
"msg_contents": "On 5/22/19 4:26 PM, Michael Paquier wrote:\n> On Wed, May 22, 2019 at 09:19:53AM +0900, Ian Barwick wrote:\n>> the last two items are performance improvements not related to authentication;\n>> presumably the VACUUM item would be better off in the \"Utility Commands\"\n>> section and the TRUNCATE item in \"General Performance\"?\n> \n> I agree with removing them from authentication, but these are not\n> performance-related items. Instead I think that \"Utility commands\" is\n> a place where they can live better.\n> \n> I am wondering if we should insist on the DOS attacks on a server, as\n> non-authorized users are basically able to block any tables, and\n> authorization is only a part of it, one of the worst parts\n> actually... Anyway, I think that \"This prevents unauthorized locking\n> delays.\" does not provide enough details. What about reusing the\n> first paragraph of the commits? Here is an idea:\n> \"A caller of TRUNCATE/VACUUM/ANALYZE could previously queue for an\n> access exclusive lock on a relation it may not have permission to\n> truncate/vacuum/analyze, potentially interfering with users authorized\n> to work on it. This could prevent users from accessing some relations\n> they have access to, in some cases preventing authentication if a\n> critical catalog relation was blocked.\"\n\nAh, if that's the intent behind/use for those changes (I haven't looked at them\nin any detail, was just scanning the release notes) then it certainly needs some\nexplanation along those lines.\n\n\nRegards\n\nIan Barwick\n\n\n-- \n Ian Barwick https://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n\n",
"msg_date": "Wed, 22 May 2019 16:50:14 +0900",
"msg_from": "Ian Barwick <ian.barwick@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: PG 12 draft release notes"
},
{
"msg_contents": "On Tue, May 21, 2019 at 05:00:31PM -0400, Tom Lane wrote:\n> Bruce Momjian <bruce@momjian.us> writes:\n> > On Mon, May 20, 2019 at 08:48:15PM -0400, Tom Lane wrote:\n> >> Yes, this should be in \"source code\". I think it should be merged\n> >> with a391ff3c and 74dfe58a into something like\n> >> \n> >> Allow extensions to create planner support functions that\n> >> can provide function-specific selectivity, cost, and\n> >> row-count estimates that can depend on the function arguments.\n> >> Support functions can also transform WHERE clauses involving\n> >> an extension's functions and operators into indexable clauses\n> >> in ways that the core code cannot for lack of detailed semantic\n> >> knowledge of those functions/operators.\n> \n> > The new text is:\n> \n> > Add support function capability to improve optimizer estimates\n> > for functions (Tom Lane)\n> \n> > This allows extensions to create planner support functions that\n> > can provide function-specific selectivity, cost, and row-count\n> > estimates that can depend on the function arguments. Also, improve\n> > in-core estimates for <function>generate_series()</function>,\n> > <function>unnest()</function>, and functions that return boolean\n> > values.\n> \n> Uh ... you completely lost the business about custom indexable clauses.\n> I agree with Andres that that's the most important aspect of this.\n\nOh, I see what you mean now. I have updated the docs and moved the item\nto Source Code:\n\n <para>\n Add support function capability to improve optimizer estimates,\n inlining, and indexing for functions (Tom Lane)\n </para>\n\n <para>\n This allows extensions to create planner support functions that\n can provide function-specific selectivity, cost, and row-count\n estimates that can depend on the function's arguments. 
Support\n functions can also supply simplified representations and index\n conditions, greatly expanding optimization possibilities.\n </para>\n\n> > Notice that there are some improvments in in-core functions. Should this\n> > still be moved to the source code section?\n> \n> I doubt that that's worth mentioning at all. It certainly isn't a\n> reason not to move this to the source-code section, because that's\n> where we generally put things that are of interest for improving\n> extensions, which is what this mainly is.\n\nIn-core function mention removed.\n\n-- \n Bruce Momjian <bruce@momjian.us> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n+ As you are, so once was I. As I am, so you will be. +\n+ Ancient Roman grave inscription +\n\n\n",
"msg_date": "Wed, 22 May 2019 11:21:21 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": true,
"msg_subject": "Re: PG 12 draft release notes"
},
{
"msg_contents": "Hi,\n\n <listitem>\n<!--\nAuthor: Michael Paquier <michael@paquier.xyz>\n2019-02-08 [3677a0b26] Add pg_partition_root to display top-most parent of a pa\n-->\n\n <para>\n Add function <link\n linkend=\"functions-info-partition\"><function>pg_partition_root()</function></link>\n to return top-most parent of a partition tree (Michaël Paquier)\n </para>\n </listitem>\n\n <listitem>\n<!--\nAuthor: Alvaro Herrera <alvherre@alvh.no-ip.org>\n2019-03-04 [b96f6b194] pg_partition_ancestors\n-->\n\n <para>\n Add function <link\n linkend=\"functions-info-partition\"><function>pg_partition_ancestors()</function></link>\n to report all ancestors of a partition (Álvaro Herrera)\n </para>\n </listitem>\n\n <listitem>\n<!--\nAuthor: Michael Paquier <michael@paquier.xyz>\n2018-10-30 [d5eec4eef] Add pg_partition_tree to display information about parti\n-->\n\n <para>\n Add function <link\n linkend=\"functions-info-partition\"><function>pg_partition_tree()</function></link>\n to display information about partitions (Amit Langote)\n </para>\n </listitem>\n\nCan we combine these three?\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 28 May 2019 08:58:23 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: PG 12 draft release notes"
},
{
"msg_contents": "On 2019/05/29 0:58, Andres Freund wrote:\n> Hi,\n> \n> <listitem>\n> <!--\n> Author: Michael Paquier <michael@paquier.xyz>\n> 2019-02-08 [3677a0b26] Add pg_partition_root to display top-most parent of a pa\n> -->\n> \n> <para>\n> Add function <link\n> linkend=\"functions-info-partition\"><function>pg_partition_root()</function></link>\n> to return top-most parent of a partition tree (Michaël Paquier)\n> </para>\n> </listitem>\n> \n> <listitem>\n> <!--\n> Author: Alvaro Herrera <alvherre@alvh.no-ip.org>\n> 2019-03-04 [b96f6b194] pg_partition_ancestors\n> -->\n> \n> <para>\n> Add function <link\n> linkend=\"functions-info-partition\"><function>pg_partition_ancestors()</function></link>\n> to report all ancestors of a partition (Álvaro Herrera)\n> </para>\n> </listitem>\n> \n> <listitem>\n> <!--\n> Author: Michael Paquier <michael@paquier.xyz>\n> 2018-10-30 [d5eec4eef] Add pg_partition_tree to display information about parti\n> -->\n> \n> <para>\n> Add function <link\n> linkend=\"functions-info-partition\"><function>pg_partition_tree()</function></link>\n> to display information about partitions (Amit Langote)\n> </para>\n> </listitem>\n> \n> Can we combine these three?\n\n+1\n\nThanks,\nAmit\n\n\n\n",
"msg_date": "Wed, 29 May 2019 09:51:05 +0900",
"msg_from": "Amit Langote <Langote_Amit_f8@lab.ntt.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: PG 12 draft release notes"
},
{
"msg_contents": "On 5/11/19 10:33 PM, Bruce Momjian wrote:\n> I have posted a draft copy of the PG 12 release notes here:\n> \n> \thttp://momjian.us/pgsql_docs/release-12.html\n> \n> They are committed to git. It includes links to the main docs, where\n> appropriate. Our official developer docs will rebuild in a few hours.\n> \n\nHello,\n\nBy looking to a user request to add greek in our FTS [1], I suggest to mention\nwhich languages has been added in fd582317e.\n\nPatch attached.\n\nI hesitate to also mention these changes?\n\n> These all work in UTF8, and the indonesian and irish ones also work in LATIN1.\n\n> The non-UTF8 version of the hungarian stemmer now works in LATIN2 not LATIN1.\n\n\n1:\nhttps://www.postgresql.org/message-id/trinity-f09793a1-8c13-4b56-94fe-10779e96c87e-1559896268438%403c-app-mailcom-bs16\n\nCheers,\n\n-- \nAdrien",
"msg_date": "Fri, 7 Jun 2019 12:04:33 +0200",
"msg_from": "Adrien Nayrat <adrien.nayrat@anayrat.info>",
"msg_from_op": false,
"msg_subject": "Re: PG 12 draft release notes"
},
{
"msg_contents": "On Tue, May 21, 2019 at 02:22:53PM -0700, Peter Geoghegan wrote:\n> On Tue, May 21, 2019 at 1:51 PM Bruce Momjian <bruce@momjian.us> wrote:\n> > > My concern here (which I believe Alexander shares) is that it doesn't\n> > > make sense to group these two items together. They're two totally\n> > > unrelated pieces of work. Alexander's work does more or less help with\n> > > lock contention with writes, whereas the feature that that was merged\n> > > with is about preventing index bloat, which is mostly helpful for\n> > > reads (it helps writes to the extent that writes are also reads).\n> > >\n> > > The release notes go on to say that this item \"gives better\n> > > performance for UPDATEs and DELETEs on indexes with many duplicates\",\n> > > which is wrong. That is something that should have been listed below,\n> > > under the \"duplicate index entries in heap-storage order\" item.\n> >\n> > OK, I understand how the lock stuff improves things, but I have\n> > forgotten how indexes are made smaller. Is it because of better page\n> > split logic?\n> \n> That is clearly the main reason, though suffix truncation (which\n> represents that trailing/suffix columns in index tuples from branch\n> pages have \"negative infinity\" sentinel values) also contributes to\n> making indexes smaller.\n> \n> The page split stuff was mostly added by commit fab250243 (\"Consider\n> secondary factors during nbtree splits\"), but commit f21668f32 (\"Add\n> \"split after new tuple\" nbtree optimization\") added to that in a way\n> that really helped the TPC-C indexes. The TPC-C indexes are about 40%\n> smaller now.\n\nFirst, my apologies in getting to this so late. Peter Geoghegan\nsupplied me with slides and a video to study, and I now understand how\ncomplex the btree improvements are. 
Here is a video of Peter's\npresentation at PGCon:\n\n\thttps://www.youtube.com/watch?v=p5RaATILoiE\n\nWhat I would like to do is to type them out here, and if I got it right,\nI can then add these details to the release notes.\n\nThe over-arching improvement to btree in PG 12 is the ordering of index\nentries by tid so all entries are unique. As Peter has stated, many\noptimizations come out of that:\n\n1. Since all index tuples are ordered, you can move from one leaf page\nto the next without keeping a lock on the internal page that references\nit, increasing concurrency.\n\n2. When inserting a duplicate value in the index, we used to try a few\npages to see if there was space, then \"get tired\" and just split a page\ncontaining duplicates. Now that there are no duplicates (because\nduplicate key fields are sorted by tid) the system knows exactly what\npage the index tuple belongs on, and inserts or splits the page as\nnecessary.\n\n3. Pivot tuples are used on internal pages and as min/max indicators on\nleaf pages. These pivot tuples are now trimmed if their trailing key\nfields are not significant. For example, if an index is\nfield1/field2/field3, and the page contains values for which field1==5\nand none that field1==6, there is no need to include field2 and field3\nin the pivot tuple --- it can just list the pivot as field1==5,\nfield2=infinity. This is called suffix truncation.\n\nPage splits used to just split the page in half, which minimizes the\nnumber of page splits, but sometimes causes lots of wasted space. The\nnew code tries to split to reduce the length of pivot tuples, which ends\nup being more efficient in space savings because the pivot tuples are\nshorter, and the leaf pages end up being more tightly packed. This is\nparticularly true for ever-increasing keys because we often end up\ncreating a new empty page, rather than splitting an existing page.\n\n4. 
Vacuum's removal of index tuples in indexes with many duplicates is\nfaster since it can more quickly find desired tuples.\n\nDid I cover everything?\n\n-- \n Bruce Momjian <bruce@momjian.us> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n+ As you are, so once was I. As I am, so you will be. +\n+ Ancient Roman grave inscription +\n\n\n",
"msg_date": "Wed, 12 Jun 2019 16:16:11 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": true,
"msg_subject": "Re: PG 12 draft release notes"
},
{
"msg_contents": "On Tue, May 21, 2019 at 12:57:56PM -0700, Andres Freund wrote:\n> Hi,\n> \n> On 2019-05-21 15:47:34 -0400, Bruce Momjian wrote:\n> > On Mon, May 20, 2019 at 03:17:19PM -0700, Andres Freund wrote:\n> > > Hi,\n> > > \n> > > Note that I've added a few questions to individuals involved with\n> > > specific points. If you're in the To: list, please search for your name.\n> > > \n> > > \n> > > On 2019-05-11 16:33:24 -0400, Bruce Momjian wrote:\n> > > > I have posted a draft copy of the PG 12 release notes here:\n> > > >\n> > > > \thttp://momjian.us/pgsql_docs/release-12.html\n> > > > They are committed to git.\n> > > \n> > > Thanks!\n> > > \n> > > <title>Migration to Version 12</title>\n> > > \n> > > There's a number of features in the compat section that are more general\n> > > improvements with a side of incompatibility. Won't it be confusing to\n> > > e.g. have have the ryu floating point conversion speedups in the compat\n> > > section, but not in the \"General Performance\" section?\n> > \n> > Yes, it can be. What I did with the btree item was to split out the max\n> > length change with the larger changes. We can do the same for other\n> > items. As you rightly stated, it is for cases where the incompatibility\n> > is minor compared to the change. 
Do you have a list of the ones that\n> > need this treatment?\n> \n> I was concretely thinking of:\n> - floating point output changes, which are primarily about performance\n\nIf we split out the compatibility change, we don't have much left but\n\"performance\", and that doesn't seem long enough for an entry.\n\n> - recovery.conf changes where I'd merge:\n> - Do not allow multiple different recovery_target specificatios\n> - Allow some recovery parameters to be changed with reload\n> - Cause recovery to advance to the latest timeline by default\n> - Add an explicit value of current for guc-recovery-target-time\n> into on entry on the feature side.\n> \n> After having to move recovery settings to a different file, disallowing\n> multiple targets isn't really a separate config break imo. And all the\n> other changes are also fallout from the recovery.conf GUCification.\n\nEven though we moved the recovery.conf values into postgresql.conf, I\nthink people will assume they just behave the same and copy them into\nthe new file. If their behavior changes, they need to know that\nexplicitly.\n\n-- \n Bruce Momjian <bruce@momjian.us> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n+ As you are, so once was I. As I am, so you will be. +\n+ Ancient Roman grave inscription +\n\n\n",
"msg_date": "Wed, 12 Jun 2019 17:06:40 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": true,
"msg_subject": "Re: PG 12 draft release notes"
},
{
"msg_contents": "On Wed, May 22, 2019 at 04:50:14PM +0900, Ian Barwick wrote:\n> On 5/22/19 4:26 PM, Michael Paquier wrote:\n> > On Wed, May 22, 2019 at 09:19:53AM +0900, Ian Barwick wrote:\n> > > the last two items are performance improvements not related to authentication;\n> > > presumably the VACUUM item would be better off in the \"Utility Commands\"\n> > > section and the TRUNCATE item in \"General Performance\"?\n> > \n> > I agree with removing them from authentication, but these are not\n> > performance-related items. Instead I think that \"Utility commands\" is\n> > a place where they can live better.\n> > \n> > I am wondering if we should insist on the DOS attacks on a server, as\n> > non-authorized users are basically able to block any tables, and\n> > authorization is only a part of it, one of the worst parts\n> > actually... Anyway, I think that \"This prevents unauthorized locking\n> > delays.\" does not provide enough details. What about reusing the\n> > first paragraph of the commits? Here is an idea:\n> > \"A caller of TRUNCATE/VACUUM/ANALYZE could previously queue for an\n> > access exclusive lock on a relation it may not have permission to\n> > truncate/vacuum/analyze, potentially interfering with users authorized\n> > to work on it. This could prevent users from accessing some relations\n> > they have access to, in some cases preventing authentication if a\n> > critical catalog relation was blocked.\"\n> \n> Ah, if that's the intent behind/use for those changes (I haven't looked at them\n> in any detail, was just scanning the release notes) then it certainly needs some\n> explanation along those lines.\n\nSince we did not backpatch this fix, I am hesitant to spell out exactly\nhow to exploit this DOS attack. 
Yes, people can read it in the email\narchives, and commit messages, but I don't see the value in spelling it\nout the release notes too.\n\n-- \n Bruce Momjian <bruce@momjian.us> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n+ As you are, so once was I. As I am, so you will be. +\n+ Ancient Roman grave inscription +\n\n\n",
"msg_date": "Wed, 12 Jun 2019 17:25:37 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": true,
"msg_subject": "Re: PG 12 draft release notes"
},
{
"msg_contents": "On Tue, May 28, 2019 at 08:58:23AM -0700, Andres Freund wrote:\n> <!--\n> Author: Michael Paquier <michael@paquier.xyz>\n> 2019-02-08 [3677a0b26] Add pg_partition_root to display top-most parent of a pa\n> -->\n> \n> <para>\n> Add function <link\n> linkend=\"functions-info-partition\"><function>pg_partition_root()</function></link>\n> to return top-most parent of a partition tree (Michaël Paquier)\n> </para>\n> </listitem>\n> \n> <listitem>\n> <!--\n> Author: Alvaro Herrera <alvherre@alvh.no-ip.org>\n> 2019-03-04 [b96f6b194] pg_partition_ancestors\n> -->\n> \n> <para>\n> Add function <link\n> linkend=\"functions-info-partition\"><function>pg_partition_ancestors()</function></link>\n> to report all ancestors of a partition (Álvaro Herrera)\n> </para>\n> </listitem>\n> \n> <listitem>\n> <!--\n> Author: Michael Paquier <michael@paquier.xyz>\n> 2018-10-30 [d5eec4eef] Add pg_partition_tree to display information about parti\n> -->\n> \n> <para>\n> Add function <link\n> linkend=\"functions-info-partition\"><function>pg_partition_tree()</function></link>\n> to display information about partitions (Amit Langote)\n> </para>\n> </listitem>\n> \n> Can we combine these three?\n\nGood idea, done.\n\n-- \n Bruce Momjian <bruce@momjian.us> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n+ As you are, so once was I. As I am, so you will be. +\n+ Ancient Roman grave inscription +\n\n\n",
"msg_date": "Wed, 12 Jun 2019 17:36:58 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": true,
"msg_subject": "Re: PG 12 draft release notes"
},
{
"msg_contents": "On Fri, Jun 7, 2019 at 12:04:33PM +0200, Adrien Nayrat wrote:\n> On 5/11/19 10:33 PM, Bruce Momjian wrote:\n> > I have posted a draft copy of the PG 12 release notes here:\n> > \n> > \thttp://momjian.us/pgsql_docs/release-12.html\n> > \n> > They are committed to git. It includes links to the main docs, where\n> > appropriate. Our official developer docs will rebuild in a few hours.\n> > \n> \n> Hello,\n> \n> By looking to a user request to add greek in our FTS [1], I suggest to mention\n> which languages has been added in fd582317e.\n> \n> Patch attached.\n> \n> I hesitate to also mention these changes?\n> \n> > These all work in UTF8, and the indonesian and irish ones also work in LATIN1.\n> \n> > The non-UTF8 version of the hungarian stemmer now works in LATIN2 not LATIN1.\n> \n> \n> 1:\n> https://www.postgresql.org/message-id/trinity-f09793a1-8c13-4b56-94fe-10779e96c87e-1559896268438%403c-app-mailcom-bs16\n\nGood idea, done.\n\n-- \n Bruce Momjian <bruce@momjian.us> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n+ As you are, so once was I. As I am, so you will be. +\n+ Ancient Roman grave inscription +\n\n\n",
"msg_date": "Wed, 12 Jun 2019 17:46:53 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": true,
"msg_subject": "Re: PG 12 draft release notes"
},
{
"msg_contents": "On Wed, Jun 12, 2019 at 1:16 PM Bruce Momjian <bruce@momjian.us> wrote:\n> First, my apologies in getting to this so late. Peter Geoghegan\n> supplied me with slides and a video to study, and I now understand how\n> complex the btree improvements are. Here is a video of Peter's\n> presentation at PGCon:\n>\n> https://www.youtube.com/watch?v=p5RaATILoiE\n\nThank you for going to the trouble of researching the B-Tree stuff in\ndetail! I wouldn't ask that of anybody in the position of writing\nrelease notes, so it's appreciated. It is awkward to take the work\nthat I've done and make it into multiple bullet points; I have a hard\ntime with that myself.\n\n> The over-arching improvement to btree in PG 12 is the ordering of index\n> entries by tid so all entries are unique.\n\nRight. Everything good happens as a direct or indirect result of the\nTID-as-column thing. That is really the kernel of the whole thing,\nbecause it means that the implementation now *fully* follows the\nLehman and Yao design.\n\n> 1. Since all index tuples are ordered, you can move from one leaf page\n> to the next without keeping a lock on the internal page that references\n> it, increasing concurrency.\n\nI'm not sure what you mean here. We've never had to keep a lock on an\ninternal page while holding a lock on a leaf page -- such \"lock\ncoupling\" was necessary in earlier B-Tree designs, but Lehman and\nYao's algorithm avoids that. Of course, that isn't new.\n\nI think that you're talking about the way that we now check the high\nkey during index scans, and find that we don't have to move to the\nnext leaf page, per this commit:\n\nhttps://git.postgresql.org/gitweb/?p=postgresql.git;a=commit;h=29b64d1de7c77ffb5cb10696693e6ed8a6fc481c\n\nAll of the suffix truncation stuff is good because it makes separator\nkeys in internal pages smaller, but it's also good because the\nseparator keys are simultaneously more \"discriminating\". 
We tend to\nget a nice \"clean break\" between leaf pages, so checking the high key\nbefore moving right to find additional matches (once we've already\nreturned some tuples to the executor) is surprisingly effective. It\nwould have been possible to add this optimization even without the\nsuffix truncation stuff, but truncation makes it effective.\n\nIf you had to cut one thing from this list, then I would suggest that\nit be this item. It's nice, but it's also very obvious, which makes it\nhard to explain. I mean, why should there be any ambiguity at all?\nUnless we have to return *hundreds* of items to the index scan, then a\nsimple \"select * from foo where bar = ?\" style query should only have\nto visit one leaf page, even when the constant happens to be on the\nboundary of a leaf page (maybe a concurrent page split could make this\nhappen, but that should be rare).\n\n> 2. When inserting a duplicate value in the index, we used to try a few\n> pages to see if there was space, then \"get tired\" and just split a page\n> containing duplicates. Now that there are no duplicates (because\n> duplicate key fields are sorted by tid) the system knows exactly what\n> page the index tuple belongs on, and inserts or splits the page as\n> necessary.\n\nRight -- inserts will descend straight to the correct leaf page, and\nthe \"getting tired\" dance isn't used anymore. This makes insertions\nfaster, but more importantly is a better high level strategy for\nstoring lots of duplicates. We'll dirty far fewer pages, because\ninsertions automatically end up inserting around the same place as we\ninserted to a moment ago. Insertions of duplicates behave like\nserial/auto-incrementing insertions, which was already\nfast/well-optimized in various ways.\n\nIt's easy to measure this by looking at index bloat when inserting\nlots of duplicates -- I came up with the 16% figure in the talk based\non a simple insert-only test.\n\n> 3. 
Pivot tuples are used on internal pages and as min/max indicators on\n> leaf pages. These pivot tuples are now trimmed if their trailing key\n> fields are not significant. For example, if an index is\n> field1/field2/field3, and the page contains values for which field1==5\n> and none that field1==6, there is no need to include field2 and field3\n> in the pivot tuple --- it can just list the pivot as field1==5,\n> field2=infinity. This is called suffix truncation.\n\nRight -- that's exactly how it works. Users may find that indexes with\nlots of extra columns at the end won't get so bloated in Postgres 12.\nIndexing many columns is typically seen when index-only scans are\nimportant. Of course, you could have made those indexes INCLUDE\nindexes on v11, which is actually a closely related idea, but that\nmakes it impossible to use the trailing/suffix columns in an ORDER BY.\nAnd, you have to know about INCLUDE indexes, and remember to use them.\n\n(This must be why Oracle can get away with not having INCLUDE indexes.)\n\n> Page splits used to just split the page in half, which minimizes the\n> number of page splits, but sometimes causes lots of wasted space. The\n> new code tries to split to reduce the length of pivot tuples, which ends\n> up being more efficient in space savings because the pivot tuples are\n> shorter, and the leaf pages end up being more tightly packed. This is\n> particularly true for ever-increasing keys because we often end up\n> creating a new empty page, rather than splitting an existing page.\n\nRight. We need to be somewhat cleverer about precisely where we split\nleaf pages in order to make suffix truncation work well. But, it turns\nout that there is another case where being *very* clever about leaf\npage split points matters a lot, which is targeted by the \"split after\nnew tuple\" optimization. The \"split after new tuple\" optimization was\njust a bonus for me.\n\n> 4. 
Vacuum's removal of index tuples in indexes with many duplicates is\n> faster since it can more quickly find desired tuples.\n\nThis may be true, but the interesting way that the TID-as-column thing\nhelps VACUUM is related to locality (spatial and temporal locality).\nVACUUM can delete/remove whole pages at a time, though only when\nthey're completely empty (just one remaining tuple blocks page\ndeletion. because VACUUM cannot merge non-empty pages together). This\nis much more likely to occur when we group like with like. Heap TID is\noften correlated with primary key value, or with a timestamp, so we\ncan easily imagine VACUUM deleting more pages filled with duplicates\nbecause they're grouped together in a way that's logical (instead of\nmore-or-less random, which is what \"getting tired\" left us with).\n\nEven without page deletion occurring, we can reuse ranges in the index\nwhen there are lots of duplicates. We delete some rows in a table, and\nVACUUM ultimately removes the rows from both table and indexes --\nincluding some interesting index that stores lots of duplicates, like\nan index on a enum field. VACUUM makes it possible to recycle the heap\nTIDs/space *everywhere*, not just in the table, as before. In the\nfairly likely event of a future insert that recycles a heap TID also\nhaving the same value for the index with duplicates (say its an enum\nvalue), the dup index tuple can go in exactly the same place as the\nearlier, now-deleted dup tuple. The \"recycling\" works at the index\nlevel, too. In practice this kind of locality is rather common. It's\nespecially likely with non-HOT updates where the duplicate value isn't\nchanged, though simple inserts and deletes can see the same benefit.\n\nObviously you'll need to boil all that down -- good luck!\n\nThanks\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Wed, 12 Jun 2019 15:06:34 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: PG 12 draft release notes"
},
{
"msg_contents": "On Wed, Jun 12, 2019 at 03:06:34PM -0700, Peter Geoghegan wrote:\n> On Wed, Jun 12, 2019 at 1:16 PM Bruce Momjian <bruce@momjian.us> wrote:\n> > First, my apologies in getting to this so late. Peter Geoghegan\n> > supplied me with slides and a video to study, and I now understand how\n> > complex the btree improvements are. Here is a video of Peter's\n> > presentation at PGCon:\n> >\n> > https://www.youtube.com/watch?v=p5RaATILoiE\n> \n> Thank you for going to the trouble of researching the B-Tree stuff in\n> detail! I wouldn't ask that of anybody in the position of writing\n> release notes, so it's appreciated. It is awkward to take the work\n> that I've done and make it into multiple bullet points; I have a hard\n> time with that myself.\n\nI had become so confused by this item that I needed a few weeks to\nsettle on what was actually going on.\n\n> > The over-arching improvement to btree in PG 12 is the ordering of index\n> > entries by tid so all entries are unique.\n> \n> Right. Everything good happens as a direct or indirect result of the\n> TID-as-column thing. That is really the kernel of the whole thing,\n> because it means that the implementation now *fully* follows the\n> Lehman and Yao design.\n\nRight.\n\n> > 1. Since all index tuples are ordered, you can move from one leaf page\n> > to the next without keeping a lock on the internal page that references\n> > it, increasing concurrency.\n> \n> I'm not sure what you mean here. We've never had to keep a lock on an\n> internal page while holding a lock on a leaf page -- such \"lock\n> coupling\" was necessary in earlier B-Tree designs, but Lehman and\n> Yao's algorithm avoids that. 
Of course, that isn't new.\n> \n> I think that you're talking about the way that we now check the high\n> key during index scans, and find that we don't have to move to the\n> next leaf page, per this commit:\n> \n> https://git.postgresql.org/gitweb/?p=postgresql.git;a=commit;h=29b64d1de7c77ffb5cb10696693e6ed8a6fc481c\n\nI was wrong. I was thinking of this commit:\n\n commit d2086b08b0\n Author: Alexander Korotkov <akorotkov@postgresql.org>\n Date: Sat Jul 28 00:31:40 2018 +0300\n\n Reduce path length for locking leaf B-tree pages during insertion\n\n In our B-tree implementation appropriate leaf page for new tuple\n insertion is acquired using _bt_search() function. This function always\n returns leaf page locked in shared mode. In order to obtain exclusive\n lock, caller have to relock the page.\n\n This commit makes _bt_search() function lock leaf page immediately in\n exclusive mode when needed. That removes unnecessary relock and, in\n turn reduces lock contention for B-tree leaf pages. Our experiments\n on multi-core systems showed acceleration up to 4.5 times in corner\n case.\n\nbut got it confused by an optimization you mentioned in the video where\nyou were talking about the need to perhaps recheck the internal page\nwhen moving right. We certainly don't keep the internal page locked.\n\n> All of the suffix truncation stuff is good because it makes separator\n> keys in internal pages smaller, but it's also good because the\n> separator keys are simultaneously more \"discriminating\". We tend to\n> get a nice \"clean break\" between leaf pages, so checking the high key\n> before moving right to find additional matches (once we've already\n> returned some tuples to the executor) is surprisingly effective. It\n> would have been possible to add this optimization even without the\n> suffix truncation stuff, but truncation makes it effective.\n> \n> If you had to cut one thing from this list, then I would suggest that\n> it be this item. 
It's nice, but it's also very obvious, which makes it\n> hard to explain. I mean, why should there be any ambiguity at all?\n> Unless we have to return *hundreds* of items to the index scan, then a\n> simple \"select * from foo where bar = ?\" style query should only have\n> to visit one leaf page, even when the constant happens to be on the\n> boundary of a leaf page (maybe a concurrent page split could make this\n> happen, but that should be rare).\n\nRight. The commit mentioned a 4.5x speedup in a rare benchmark, so I\nadded it lower on the list.\n\n> > 2. When inserting a duplicate value in the index, we used to try a few\n> > pages to see if there was space, then \"get tired\" and just split a page\n> > containing duplicates. Now that there are no duplicates (because\n> > duplicate key fields are sorted by tid) the system knows exactly what\n> > page the index tuple belongs on, and inserts or splits the page as\n> > necessary.\n> \n> Right -- inserts will descend straight to the correct leaf page, and\n> the \"getting tired\" dance isn't used anymore. This makes insertions\n> faster, but more importantly is a better high level strategy for\n> storing lots of duplicates. We'll dirty far fewer pages, because\n> insertions automatically end up inserting around the same place as we\n> inserted to a moment ago. Insertions of duplicates behave like\n> serial/auto-incrementing insertions, which was already\n> fast/well-optimized in various ways.\n\nYes, locality.\n\n> It's easy to measure this by looking at index bloat when inserting\n> lots of duplicates -- I came up with the 16% figure in the talk based\n> on a simple insert-only test.\n> \n> > 3. Pivot tuples are used on internal pages and as min/max indicators on\n> > leaf pages. These pivot tuples are now trimmed if their trailing key\n> > fields are not significant. 
For example, if an index is\n> > field1/field2/field3, and the page contains values for which field1==5\n> > and none that field1==6, there is no need to include field2 and field3\n> > in the pivot tuple --- it can just list the pivot as field1==5,\n> > field2=infinity. This is called suffix truncation.\n> \n> Right -- that's exactly how it works. Users may find that indexes with\n> lots of extra columns at the end won't get so bloated in Postgres 12.\n> Indexing many columns is typically seen when index-only scans are\n> important. Of course, you could have made those indexes INCLUDE\n> indexes on v11, which is actually a closely related idea, but that\n> makes it impossible to use the trailing/suffix columns in an ORDER BY.\n> And, you have to know about INCLUDE indexes, and remember to use them.\n> \n> (This must be why Oracle can get away with not having INCLUDE indexes.)\n> \n> > Page splits used to just split the page in half, which minimizes the\n> > number of page splits, but sometimes causes lots of wasted space. The\n> > new code tries to split to reduce the length of pivot tuples, which ends\n> > up being more efficient in space savings because the pivot tuples are\n> > shorter, and the leaf pages end up being more tightly packed. This is\n> > particularly true for ever-increasing keys because we often end up\n> > creating a new empty page, rather than splitting an existing page.\n> \n> Right. We need to be somewhat cleverer about precisely where we split\n> leaf pages in order to make suffix truncation work well. But, it turns\n> out that there is another case where being *very* clever about leaf\n> page split points matters a lot, which is targeted by the \"split after\n> new tuple\" optimization. The \"split after new tuple\" optimization was\n> just a bonus for me.\n> \n> > 4. 
Vacuum's removal of index tuples in indexes with many duplicates is\n> > faster since it can more quickly find desired tuples.\n> \n> This may be true, but the interesting way that the TID-as-column thing\n> helps VACUUM is related to locality (spatial and temporal locality).\n> VACUUM can delete/remove whole pages at a time, though only when\n> they're completely empty (just one remaining tuple blocks page\n> deletion. because VACUUM cannot merge non-empty pages together). This\n> is much more likely to occur when we group like with like. Heap TID is\n> often correlated with primary key value, or with a timestamp, so we\n> can easily imagine VACUUM deleting more pages filled with duplicates\n> because they're grouped together in a way that's logical (instead of\n> more-or-less random, which is what \"getting tired\" left us with).\n> \n> Even without page deletion occurring, we can reuse ranges in the index\n> when there are lots of duplicates. We delete some rows in a table, and\n> VACUUM ultimately removes the rows from both table and indexes --\n> including some interesting index that stores lots of duplicates, like\n> an index on a enum field. VACUUM makes it possible to recycle the heap\n> TIDs/space *everywhere*, not just in the table, as before. In the\n> fairly likely event of a future insert that recycles a heap TID also\n> having the same value for the index with duplicates (say its an enum\n> value), the dup index tuple can go in exactly the same place as the\n> earlier, now-deleted dup tuple. The \"recycling\" works at the index\n> level, too. In practice this kind of locality is rather common. It's\n> especially likely with non-HOT updates where the duplicate value isn't\n> changed, though simple inserts and deletes can see the same benefit.\n\nYes, I can see index space reuse as more closely matching the heap.\n\n> Obviously you'll need to boil all that down -- good luck!\n\nAttached is an updated patch. 
I might have missed something, but I\nthink it might be close.\n\n-- \n Bruce Momjian <bruce@momjian.us> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n+ As you are, so once was I. As I am, so you will be. +\n+ Ancient Roman grave inscription +",
"msg_date": "Wed, 12 Jun 2019 20:22:03 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": true,
"msg_subject": "Re: PG 12 draft release notes"
},
{
"msg_contents": "On Wed, Jun 12, 2019 at 5:22 PM Bruce Momjian <bruce@momjian.us> wrote:\n> I had become so confused by this item that I needed a few weeks to\n> settle on what was actually going on.\n\nI put a lot of time into my pgCon talk, especially on the diagrams.\nSeems like that paid off. Even Heikki was confused by my explanations\nat one point.\n\nI should go add a similar diagram to our documentation, under \"Chapter\n63. B-Tree Indexes\", because diagrams are the only sensible way to\nexplain the concepts.\n\n> I was wrong. I was thinking of this commit:\n>\n> commit d2086b08b0\n> Author: Alexander Korotkov <akorotkov@postgresql.org>\n> Date: Sat Jul 28 00:31:40 2018 +0300\n>\n> Reduce path length for locking leaf B-tree pages during insertion\n\n> > If you had to cut one thing from this list, then I would suggest that\n> > it be this item. It's nice, but it's also very obvious, which makes it\n> > hard to explain.\n\n> Right. The commit mentioned a 4.5x speedup in a rare benchmark, so I\n> added it lower on the list.\n\nMy remark about cutting an item referred to a lesser item that I\nworked on (the 'Add nbtree high key \"continuescan\" optimization'\ncommit), not Alexander independent B-Tree work. I think that\nAlexander's optimization is also quite effective. Though FWIW the 4.5x\nimprovement concerned a case involving lots of duplicates...cases with\na lot of duplicates will be far far better in Postgres 12. (I never\ntested my patch without Alexander's commit, since it went in early in\nthe v12 cycle.)\n\n> Yes, locality.\n\n\"Locality\" is one of my favorite words.\n\n> Attached is an updated patch. I might have missed something, but I\n> think it might be close.\n\nThis looks great. I do have a few things:\n\n* I would put \"Improve performance and space utilization of btree\nindexes with many duplicates\" first (before \"Allow multi-column btree\nindexes to be smaller\"). 
I think that this is far more common than we\ntend to assume, and is also where the biggest benefits are.\n\n* The wording of the \"many duplicates\" item itself is almost perfect,\nthough the \"...and inefficiency when VACUUM needed to find a row for\nremoval\" part seems a bit off -- this is really about the\neffectiveness of VACUUM, not the speed at which the VACUUM completes\n(it's a bit faster, but that's not that important). Perhaps that part\nshould read: \"...and often failed to efficiently recycle space made\navailable by VACUUM\". Something like that.\n\n* The \"Allow multi-column btree indexes to be smaller\" item is about\nboth suffix truncation, and about the \"Split after new tuple\"\noptimization. I think that that makes it more complicated than it\nneeds to be. While the improvements that we saw with TPC-C on account\nof the \"Split after new tuple\" optimization were nice, I doubt that\nusers will be looking out for it. I would be okay if you dropped any\nmention of the \"Split after new tuple\" optimization, in the interest\nof making the description more useful to users. We can just lose that.\n\n* Once you simplify the item by making it all about suffix truncation,\nit would make sense to change the single line summary to \"Reduce the\nnumber of branch blocks needed for multi-column indexes\". Then go on\nto talk about how we now only store those columns that are necessary\nto guide index scans in tuples stored in branch pages (we tend to call\nbranch pages internal pages, but branch pages seems friendlier to me).\nNote that the user docs of other database systems reference these\ndetails, even in their introductory material on how B-Tree indexes\nwork. The term \"suffix truncation\" isn't something users have heard\nof, and we shouldn't use it here, but the *idea* of suffix truncation\nis very well established. 
As I mentioned, it matters for things like\ncovering indexes (indexes that are designed to be used by index-only\nscans, which are not necessarily INCLUDE indexes).\n\nThanks!\n--\nPeter Geoghegan\n\n\n",
"msg_date": "Wed, 12 Jun 2019 18:34:27 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: PG 12 draft release notes"
},
{
"msg_contents": "On Wed, Jun 12, 2019 at 06:34:27PM -0700, Peter Geoghegan wrote:\n> > I was wrong. I was thinking of this commit:\n> >\n> > commit d2086b08b0\n> > Author: Alexander Korotkov <akorotkov@postgresql.org>\n> > Date: Sat Jul 28 00:31:40 2018 +0300\n> >\n> > Reduce path length for locking leaf B-tree pages during insertion\n> \n> > > If you had to cut one thing from this list, then I would suggest that\n> > > it be this item. It's nice, but it's also very obvious, which makes it\n> > > hard to explain.\n> \n> > Right. The commit mentioned a 4.5x speedup in a rare benchmark, so I\n> > added it lower on the list.\n> \n> My remark about cutting an item referred to a lesser item that I\n> worked on (the 'Add nbtree high key \"continuescan\" optimization'\n> commit), not Alexander independent B-Tree work. I think that\n> Alexander's optimization is also quite effective. Though FWIW the 4.5x\n> improvement concerned a case involving lots of duplicates...cases with\n> a lot of duplicates will be far far better in Postgres 12. (I never\n> tested my patch without Alexander's commit, since it went in early in\n> the v12 cycle.)\n\nOK, good to know.\n\n> > Attached is an updated patch. I might have missed something, but I\n> > think it might be close.\n> \n> This looks great. I do have a few things:\n> \n> * I would put \"Improve performance and space utilization of btree\n> indexes with many duplicates\" first (before \"Allow multi-column btree\n> indexes to be smaller\"). I think that this is far more common than we\n> tend to assume, and is also where the biggest benefits are.\n\nOK, done, I was wondering about that.\n\n> * The wording of the \"many duplicates\" item itself is almost perfect,\n> though the \"...and inefficiency when VACUUM needed to find a row for\n> removal\" part seems a bit off -- this is really about the\n> effectiveness of VACUUM, not the speed at which the VACUUM completes\n> (it's a bit faster, but that's not that important). 
Perhaps that part\n> should read: \"...and often failed to efficiently recycle space made\n> available by VACUUM\". Something like that.\n\nAh, I see what you mean --- recycle entire pages. I have updated the\npatch.\n\n> * The \"Allow multi-column btree indexes to be smaller\" item is about\n> both suffix truncation, and about the \"Split after new tuple\"\n> optimization. I think that that makes it more complicated than it\n> needs to be. While the improvements that we saw with TPC-C on account\n> of the \"Split after new tuple\" optimization were nice, I doubt that\n> users will be looking out for it. I would be okay if you dropped any\n> mention of the \"Split after new tuple\" optimization, in the interest\n> of making the description more useful to users. We can just lose that.\n\nOK, done.\n\n> * Once you simplify the item by making it all about suffix truncation,\n> it would make sense to change the single line summary to \"Reduce the\n> number of branch blocks needed for multi-column indexes\". Then go on\n> to talk about how we now only store those columns that are necessary\n> to guide index scans in tuples stored in branch pages (we tend to call\n> branch pages internal pages, but branch pages seems friendlier to me).\n> Note that the user docs of other database systems reference these\n> details, even in their introductory material on how B-Tree indexes\n> work. The term \"suffix truncation\" isn't something users have heard\n> of, and we shouldn't use it here, but the *idea* of suffix truncation\n> is very well established. As I mentioned, it matters for things like\n> covering indexes (indexes that are designed to be used by index-only\n> scans, which are not necessarily INCLUDE indexes).\n\nOK, I mentioned something about increased locality now. Patch attached.\n\n-- \n Bruce Momjian <bruce@momjian.us> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n+ As you are, so once was I. As I am, so you will be. 
+\n+ Ancient Roman grave inscription +",
"msg_date": "Wed, 12 Jun 2019 22:29:34 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": true,
"msg_subject": "Re: PG 12 draft release notes"
},
{
"msg_contents": "On Wed, Jun 12, 2019 at 7:29 PM Bruce Momjian <bruce@momjian.us> wrote:\n> OK, I mentioned something about increased locality now. Patch attached.\n\nLooks good -- locality is a good catch-all term.\n\nThanks!\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Wed, 12 Jun 2019 19:42:31 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: PG 12 draft release notes"
},
{
"msg_contents": "On Wed, Jun 12, 2019 at 07:42:31PM -0700, Peter Geoghegan wrote:\n> On Wed, Jun 12, 2019 at 7:29 PM Bruce Momjian <bruce@momjian.us> wrote:\n> > OK, I mentioned something about increased locality now. Patch attached.\n> \n> Looks good -- locality is a good catch-all term.\n\nGreat, patch applied.\n\n-- \n Bruce Momjian <bruce@momjian.us> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n+ As you are, so once was I. As I am, so you will be. +\n+ Ancient Roman grave inscription +\n\n\n",
"msg_date": "Wed, 12 Jun 2019 22:48:19 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": true,
"msg_subject": "Re: PG 12 draft release notes"
},
{
"msg_contents": "On Wed, Jun 12, 2019 at 05:25:37PM -0400, Bruce Momjian wrote:\n> Since we did not backpatch this fix, I am hesitant to spell out exactly\n> how to exploit this DOS attack. Yes, people can read it in the email\n> archives, and commit messages, but I don't see the value in spelling it\n> out the release notes too.\n\nWe could go for a more general version of that, for the reason that it\ninvolves all relations, like:\n\"A caller of TRUNCATE or VACUUM could previously queue for an access\nexclusive lock on a relation it may not have permission to truncate or\nvacuum, leading to relations to be blocked from being accessed.\"\n--\nMichael",
"msg_date": "Thu, 13 Jun 2019 15:33:48 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: PG 12 draft release notes"
},
{
"msg_contents": "On Thu, Jun 13, 2019 at 03:33:48PM +0900, Michael Paquier wrote:\n> On Wed, Jun 12, 2019 at 05:25:37PM -0400, Bruce Momjian wrote:\n> > Since we did not backpatch this fix, I am hesitant to spell out exactly\n> > how to exploit this DOS attack. Yes, people can read it in the email\n> > archives, and commit messages, but I don't see the value in spelling it\n> > out the release notes too.\n> \n> We could go for a more general version of that, for the reason that it\n> involves all relations, like:\n> \"A caller of TRUNCATE or VACUUM could previously queue for an access\n> exclusive lock on a relation it may not have permission to truncate or\n> vacuum, leading to relations to be blocked from being accessed.\"\n\nUh, that still seems to suggest an attack and I am not sure that\ninformation is very useful to users.\n\n-- \n Bruce Momjian <bruce@momjian.us> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n+ As you are, so once was I. As I am, so you will be. +\n+ Ancient Roman grave inscription +\n\n\n",
"msg_date": "Thu, 13 Jun 2019 09:11:08 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": true,
"msg_subject": "Re: PG 12 draft release notes"
},
{
"msg_contents": "On Thu, Jun 13, 2019 at 09:11:08AM -0400, Bruce Momjian wrote:\n> On Thu, Jun 13, 2019 at 03:33:48PM +0900, Michael Paquier wrote:\n> > On Wed, Jun 12, 2019 at 05:25:37PM -0400, Bruce Momjian wrote:\n> > > Since we did not backpatch this fix, I am hesitant to spell out exactly\n> > > how to exploit this DOS attack. Yes, people can read it in the email\n> > > archives, and commit messages, but I don't see the value in spelling it\n> > > out the release notes too.\n> > \n> > We could go for a more general version of that, for the reason that it\n> > involves all relations, like:\n> > \"A caller of TRUNCATE or VACUUM could previously queue for an access\n> > exclusive lock on a relation it may not have permission to truncate or\n> > vacuum, leading to relations to be blocked from being accessed.\"\n> \n> Uh, that still seems to suggest an attack and I am not sure that\n> information is very useful to users.\n\nI went with this wording:\n\n This prevents unauthorized locking, which could interfere with\n user queries.\n\n-- \n Bruce Momjian <bruce@momjian.us> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n+ As you are, so once was I. As I am, so you will be. +\n+ Ancient Roman grave inscription +\n\n\n",
"msg_date": "Thu, 13 Jun 2019 09:12:58 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": true,
"msg_subject": "Re: PG 12 draft release notes"
},
{
"msg_contents": "On Thu, Jun 13, 2019 at 09:12:58AM -0400, Bruce Momjian wrote:\n> I went with this wording:\n> \n> This prevents unauthorized locking, which could interfere with\n> user queries.\n\nOkay, fine for me. Thanks for updating the notes.\n--\nMichael",
"msg_date": "Fri, 14 Jun 2019 10:16:33 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: PG 12 draft release notes"
},
{
"msg_contents": "I noticed a small typo in the release notes in the list of languages\nwith new stemmers (see attached)\n\n-- \nJohn Naylor https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services",
"msg_date": "Mon, 15 Jul 2019 22:18:07 +0700",
"msg_from": "John Naylor <john.naylor@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: PG 12 draft release notes"
},
{
"msg_contents": "On Sat, 2019-05-11 at 16:33 -0400, Bruce Momjian wrote:\n> I have posted a draft copy of the PG 12 release notes here:\n\nI wonder if commits 0ba06e0bf and 40cfe8606 are worth mentioning\nin the release notes. They make \"pg_test_fsync\" work correctly\non Windows for the first time.\n\nYours,\nLaurenz Albe",
"msg_date": "Mon, 15 Jul 2019 20:51:34 +0200",
"msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>",
"msg_from_op": false,
"msg_subject": "Re: PG 12 draft release notes"
},
{
"msg_contents": "On Mon, Jul 15, 2019 at 10:18:07PM +0700, John Naylor wrote:\n> I noticed a small typo in the release notes in the list of languages\n> with new stemmers (see attached)\n\nSorry, fixed, thanks.\n\n\n-- \n Bruce Momjian <bruce@momjian.us> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n+ As you are, so once was I. As I am, so you will be. +\n+ Ancient Roman grave inscription +\n\n\n",
"msg_date": "Mon, 15 Jul 2019 21:19:12 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": true,
"msg_subject": "Re: PG 12 draft release notes"
},
{
"msg_contents": "On Wed, Jun 12, 2019 at 7:48 PM Bruce Momjian <bruce@momjian.us> wrote:\n> Great, patch applied.\n\nI think that it would make sense to have a v12 release note item for\namcheck's new \"rootdescend\" verification option:\n\nhttps://git.postgresql.org/gitweb/?p=postgresql.git;a=commit;h=c1afd175b5b2e5c44f6da34988342e00ecdfb518\n\nIt is a user facing feature, which increments the amcheck extension\nversion number.\n\nThanks\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Mon, 15 Jul 2019 18:21:59 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: PG 12 draft release notes"
},
{
"msg_contents": "On Mon, Jul 15, 2019 at 08:51:34PM +0200, Laurenz Albe wrote:\n> I wonder if commits 0ba06e0bf and 40cfe8606 are worth mentioning\n> in the release notes. They make \"pg_test_fsync\" work correctly\n> on Windows for the first time.\n\nI don't know about this point specifically. Improving support for\npg_test_fsync on Windows is just a side effect of the first commit\nwhich benefits all frontend tools (the second commit is an\nembarrassing bug fix for the first one). And at the same time we\ndon't really add in the release notes low-level improvements like\nthese ones.\n--\nMichael",
"msg_date": "Tue, 16 Jul 2019 15:26:31 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: PG 12 draft release notes"
},
{
"msg_contents": "On Mon, Jul 15, 2019 at 08:51:34PM +0200, Laurenz Albe wrote:\n> On Sat, 2019-05-11 at 16:33 -0400, Bruce Momjian wrote:\n> > I have posted a draft copy of the PG 12 release notes here:\n> \n> I wonder if commits 0ba06e0bf and 40cfe8606 are worth mentioning\n> in the release notes. They make \"pg_test_fsync\" work correctly\n> on Windows for the first time.\n\nOh, I missed adding that. I applied the attached patch, with updated\nwording, and moved it to the Server Applications section. Thanks.\n\n-- \n Bruce Momjian <bruce@momjian.us> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n+ As you are, so once was I. As I am, so you will be. +\n+ Ancient Roman grave inscription +",
"msg_date": "Thu, 25 Jul 2019 21:06:34 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": true,
"msg_subject": "Re: PG 12 draft release notes"
},
{
"msg_contents": "On Tue, Jul 16, 2019 at 03:26:31PM +0900, Michael Paquier wrote:\n> On Mon, Jul 15, 2019 at 08:51:34PM +0200, Laurenz Albe wrote:\n> > I wonder if commits 0ba06e0bf and 40cfe8606 are worth mentioning\n> > in the release notes. They make \"pg_test_fsync\" work correctly\n> > on Windows for the first time.\n> \n> I don't know about this point specifically. Improving support for\n> pg_test_fsync on Windows is just a side effect of the first commit\n> which benefits all frontend tools (the second commit is an\n> embarrassing bug fix for the first one). And at the same time we\n> don't really add in the release notes low-level improvements like\n> these ones.\n\nWell, if we were reporting incorrect results before, that seems like a\nfix, with updated wording, of course, to mention just the fix.\n\n-- \n Bruce Momjian <bruce@momjian.us> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n+ As you are, so once was I. As I am, so you will be. +\n+ Ancient Roman grave inscription +\n\n\n",
"msg_date": "Thu, 25 Jul 2019 21:07:41 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": true,
"msg_subject": "Re: PG 12 draft release notes"
},
{
"msg_contents": "On Mon, Jul 15, 2019 at 06:21:59PM -0700, Peter Geoghegan wrote:\n> On Wed, Jun 12, 2019 at 7:48 PM Bruce Momjian <bruce@momjian.us> wrote:\n> > Great, patch applied.\n> \n> I think that it would make sense to have a v12 release note item for\n> amcheck's new \"rootdescend\" verification option:\n> \n> https://git.postgresql.org/gitweb/?p=postgresql.git;a=commit;h=c1afd175b5b2e5c44f6da34988342e00ecdfb518\n> \n> It is a user facing feature, which increments the amcheck extension\n> version number.\n\nAttached patch applied, thanks.\n\n-- \n Bruce Momjian <bruce@momjian.us> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n+ As you are, so once was I. As I am, so you will be. +\n+ Ancient Roman grave inscription +",
"msg_date": "Thu, 25 Jul 2019 21:37:23 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": true,
"msg_subject": "Re: PG 12 draft release notes"
},
{
"msg_contents": "On Thu, Jul 25, 2019 at 6:37 PM Bruce Momjian <bruce@momjian.us> wrote:\n> Attached patch applied, thanks.\n\nThanks Bruce,\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Thu, 25 Jul 2019 18:39:40 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: PG 12 draft release notes"
},
{
"msg_contents": "On 5/11/19 4:33 PM, Bruce Momjian wrote:\n> I have posted a draft copy of the PG 12 release notes here:\n> \n> \thttp://momjian.us/pgsql_docs/release-12.html\n> \n> They are committed to git. It includes links to the main docs, where\n> appropriate. Our official developer docs will rebuild in a few hours.\n> \n\nThanks again for compiling and writing up the release notes. I know it\nis no small effort.\n\nAttached is a patch proposing items for the major items section. This is\nworking off of the ongoing draft of the press release[1]. Feedback\nwelcome. With respect to the linking, I tried I to give a bunch of\njumping off points for users to explore the features, but visually tried\nto ensure the readability was consistent.\n\nI also attached a patch addressing the \"MENTION ITS AFFECT ON ORDERING?\"\nin which I choose to answer it \"yes\" and added a comment :)\n\nThanks,\n\nJonathan\n\n[1]\nhttps://www.postgresql.org/message-id/c56eeb88-4a8c-2c6c-b5f1-9d46872c247c%40postgresql.org",
"msg_date": "Mon, 2 Sep 2019 13:02:31 -0400",
"msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>",
"msg_from_op": false,
"msg_subject": "Re: PG 12 draft release notes"
},
{
"msg_contents": "\n\n> 2 сент. 2019 г., в 22:02, Jonathan S. Katz <jkatz@postgresql.org> написал(а):\n> \n> \n> Attached is a patch proposing items for the major items section. This is\n> working off of the ongoing draft of the press release[1]. Feedback\n> welcome. With respect to the linking, I tried I to give a bunch of\n> jumping off points for users to explore the features, but visually tried\n> to ensure the readability was consistent.\n\n+ <para>\n+ Reduction of <acronym>WAL</acronym> overhead of\n+ <link linkend=\"gist\">GiST</link>, <link linkend=\"gin\">GIN</link>, and\n+ <link linkend=\"spgist\">SP-GiST</link> indexes and added support\n+ for covering indexes via the <link linkend=\"sql-createindex\">\n+ <literal>INCLUDE</literal></link> clause for\n+ <link linkend=\"spgist\">SP-GiST</link> indexes\n+ </para>\n\nMaybe I'm missing something, but covering indexes are supported in GiST, not in SP-GiST.\n\nBest regards, Andrey Borodin.\n\n",
"msg_date": "Mon, 2 Sep 2019 22:37:23 +0500",
"msg_from": "Andrey Borodin <x4mmm@yandex-team.ru>",
"msg_from_op": false,
"msg_subject": "Re: PG 12 draft release notes"
},
{
"msg_contents": "On 9/2/19 1:37 PM, Andrey Borodin wrote:\n> \n> \n>> 2 сент. 2019 г., в 22:02, Jonathan S. Katz <jkatz@postgresql.org> написал(а):\n>>\n>>\n>> Attached is a patch proposing items for the major items section. This is\n>> working off of the ongoing draft of the press release[1]. Feedback\n>> welcome. With respect to the linking, I tried I to give a bunch of\n>> jumping off points for users to explore the features, but visually tried\n>> to ensure the readability was consistent.\n> \n> + <para>\n> + Reduction of <acronym>WAL</acronym> overhead of\n> + <link linkend=\"gist\">GiST</link>, <link linkend=\"gin\">GIN</link>, and\n> + <link linkend=\"spgist\">SP-GiST</link> indexes and added support\n> + for covering indexes via the <link linkend=\"sql-createindex\">\n> + <literal>INCLUDE</literal></link> clause for\n> + <link linkend=\"spgist\">SP-GiST</link> indexes\n> + </para>\n> \n> Maybe I'm missing something, but covering indexes are supported in GiST, not in SP-GiST.\n\nNope, you're absolutely correct: that was a typo as a result of\ncopying/pasting formatting. Attached is a revision that correctly\nspecifies covering indexes for GiST.\n\nThanks!\n\nJonathan",
"msg_date": "Mon, 2 Sep 2019 14:11:11 -0400",
"msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>",
"msg_from_op": false,
"msg_subject": "Re: PG 12 draft release notes"
},
{
"msg_contents": "On 5/12/19 11:42 PM, Bruce Momjian wrote:\n> On Sun, May 12, 2019 at 10:49:07AM -0400, Jonathan Katz wrote:\n>> Hi Bruce,\n>>\n>> On 5/11/19 4:33 PM, Bruce Momjian wrote:\n>>> I have posted a draft copy of the PG 12 release notes here:\n>>>\n>>> \thttp://momjian.us/pgsql_docs/release-12.html\n>>>\n>>> They are committed to git. It includes links to the main docs, where\n>>> appropriate. Our official developer docs will rebuild in a few hours.\n>>\n>> Thank you for working on this, I know it's a gargantuan task.\n>>\n>> I have a small modification for a section entitled \"Source Code\" which\n>> is a repeat of the previous section. Based on the bullet points in that\n>> part, I thought \"Documentation\" might be a more appropriate name; please\n>> see attached.\n> \n> Yes, I saw that myself and just updated it. Thanks.\n\nGreat!\n\nThere is a placeholder for PG_COLORS:\n\n <para>\n This is enabled with by setting environment variable\n <envar>PG_COLORS</envar>. EXAMPLE?\n </para>\n\nI've attached the following that provides an example of how to use this\nenvironmental variable.\n\nThanks!\n\nJonathan",
"msg_date": "Mon, 2 Sep 2019 15:55:04 -0400",
"msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>",
"msg_from_op": false,
"msg_subject": "Re: PG 12 draft release notes"
}
] |
[
{
"msg_contents": "Returning -1 from a function with bool as return value is the same as\nreturning true. Now, the code is dead (since elog(ERROR, ...) does not\nreturn) so it doesn't matter to the compiler, but changing to false is less\nconfusing for the programmer. Appologies if this is seen as unnecessary\nchurn.\n\nThe same code is present since 9.4, but perhaps it's not really worth\nbackporting since it is more of an aesthetic change?",
"msg_date": "Sun, 12 May 2019 03:18:08 +0200",
"msg_from": "Rikard Falkeborn <rikard.falkeborn@gmail.com>",
"msg_from_op": true,
"msg_subject": "Wrong dead return value in jsonb_utils.c"
},
{
"msg_contents": "On Sun, May 12, 2019 at 03:18:08AM +0200, Rikard Falkeborn wrote:\n> Returning -1 from a function with bool as return value is the same as\n> returning true. Now, the code is dead (since elog(ERROR, ...) does not\n> return) so it doesn't matter to the compiler, but changing to false is less\n> confusing for the programmer. Appologies if this is seen as unnecessary\n> churn.\n> \n> The same code is present since 9.4, but perhaps it's not really worth\n> backporting since it is more of an aesthetic change?\n\nThis is an aesthetic change in the fact that elog(ERROR) would not\ncause -1 to be returned, still I agree that it is cleaner to do things\nthe way your patch does. And the origin of the issue is I think that\nthe code of equalsJsonbScalarValue() has been copy-pasted from\ncompareJsonbScalarValue().\n\nAs that's really cosmetic, I would just change that on HEAD, or\nperhaps others feel differently?\n--\nMichael",
"msg_date": "Sun, 12 May 2019 18:07:38 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Wrong dead return value in jsonb_utils.c"
},
{
"msg_contents": "Michael Paquier <michael@paquier.xyz> writes:\n> As that's really cosmetic, I would just change that on HEAD, or\n> perhaps others feel differently?\n\n+1 for a HEAD-only change. I think the only really good arguments\nfor back-patching would be if this were causing compiler warnings\n(but we've seen none) or if we thought it would likely lead to\nhazards for back-patching future bug fixes (but the adjacent lines\nseem unlikely to change).\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 12 May 2019 10:15:05 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Wrong dead return value in jsonb_utils.c"
},
{
"msg_contents": "On Sun, May 12, 2019 at 10:15:05AM -0400, Tom Lane wrote:\n> +1 for a HEAD-only change. I think the only really good arguments\n> for back-patching would be if this were causing compiler warnings\n> (but we've seen none) or if we thought it would likely lead to\n> hazards for back-patching future bug fixes (but the adjacent lines\n> seem unlikely to change).\n\nIf it were to generate warnings, we would have already caught them as\nthis comes from 1171dbd. Committed to HEAD.\n--\nMichael",
"msg_date": "Mon, 13 May 2019 09:17:45 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Wrong dead return value in jsonb_utils.c"
}
] |
[
{
"msg_contents": "Hello dev,\n\nWhile writing some libpq code, I found it quite irritating that the \ndocumentation is not navigable, so when a function appears in a \ndescription of another function and you are interested, there is no direct \nway to find it, you have to go to the index or to guess in which section \nit is going to appear.\n\nAttached:\n - a first patch to add a few missing \"id\"\n - a script which adds the references\n - a second patch which is the result of applying the script\n on top of the first patch, so that all PQ* functions are\n replaced by links to their documentation.\n\nWhile doing this, I noticed that two functions are not documented: \nPQregisterThreadLock and PQsetResultInstanceData.\n\n-- \nFabien.",
"msg_date": "Sun, 12 May 2019 11:02:16 +0200 (CEST)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": true,
"msg_subject": "make libpq documentation navigable between functions"
},
{
"msg_contents": "On 2019-05-12 11:02, Fabien COELHO wrote:\n> While writing some libpq code, I found it quite irritating that the \n> documentation is not navigable, so when a function appears in a \n> description of another function and you are interested, there is no direct \n> way to find it, you have to go to the index or to guess in which section \n> it is going to appear.\n> \n> Attached:\n> - a first patch to add a few missing \"id\"\n> - a script which adds the references\n> - a second patch which is the result of applying the script\n> on top of the first patch, so that all PQ* functions are\n> replaced by links to their documentation.\n\nI think this is a good idea.\n\nThe rendering ends up a bit inconsistent depending on the context of the\nlink target. Sometimes it's monospaced, sometimes it's not, sometimes\nin the same sentence. I think we should improve that a bit. One\napproach for making the currently non-monospaced ones into monospace\nwould be to make the xref targets point to <function> elements but\n*don't* put xreflabels on those. This will currently produce a warning\n\nDon't know what gentext to create for xref to: \"function\"\n\nbut we can write a template\n\n<xsl:template match=\"function\" mode=\"xref-to\">\n\nand then we can control the output format of that.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Fri, 5 Jul 2019 09:48:47 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: make libpq documentation navigable between functions"
},
{
"msg_contents": "Hello Peter,\n\nThanks for the review. I could use some more input:-)\n\n>> While writing some libpq code, I found it quite irritating that the\n>> documentation is not navigable, so when a function appears in a\n>> description of another function and you are interested, there is no direct\n>> way to find it, you have to go to the index or to guess in which section\n>> it is going to appear.\n>>\n>> Attached:\n>> - a first patch to add a few missing \"id\"\n>> - a script which adds the references\n>> - a second patch which is the result of applying the script\n>> on top of the first patch, so that all PQ* functions are\n>> replaced by links to their documentation.\n>\n> I think this is a good idea.\n>\n> The rendering ends up a bit inconsistent depending on the context of the\n> link target. Sometimes it's monospaced, sometimes it's not, sometimes\n> in the same sentence. I think we should improve that a bit.\n\nYep, I noticed. Why not.\n\n> One approach for making the currently non-monospaced ones into monospace \n> would be to make the xref targets point to <function> elements\n> but *don't* put xreflabels on those.\n\nI understand that you mean turning function usages:\n\n <function>PQbla</function>\n\ninto:\n\n <xref linkend=\"libpq-fun-pqbla\"/>\n\nso that it points to function definitions that would look like:\n\n <function id=\"libpq-fun-pqbla\">PQbla</function>...\n\n(note: \"libpq-pqbla\" ids are already taken).\n\n> This will currently produce a warning Don't know what gentext to create \n> for xref to: \"function\"\n\nIndeed.\n\n> but we can write a template\n>\n> <xsl:template match=\"function\" mode=\"xref-to\">\n>\n> and then we can control the output format of that.\n\nThis step is (well) beyond my current XSLT proficiency, which is null \nbeyond knowing that it transforms XML into whatever. 
Also I'm unsure into \nwhich of the 11 xsl file the definition should be included and what should \nbe written precisely.\n\nThe attached script does the transformation basic, and the patch is the \nresult of applying the script to libpq.sgml in master. As I'm currently \nlost about the required xslt changes, so the html generated sets \"???\" \neverywhere, and there is a warning.\n\nA little more help would be appreciated, eg a pointer to an example to \nfollow, and which file(s) should be changed?\n\nThanks in advance,\n\n-- \nFabien.",
"msg_date": "Wed, 10 Jul 2019 09:51:49 +0200 (CEST)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": true,
"msg_subject": "Re: make libpq documentation navigable between functions"
},
{
"msg_contents": "On 2019-07-10 09:51, Fabien COELHO wrote:\n>> One approach for making the currently non-monospaced ones into monospace \n>> would be to make the xref targets point to <function> elements\n>> but *don't* put xreflabels on those.\n> \n> I understand that you mean turning function usages:\n> \n> <function>PQbla</function>\n> \n> into:\n> \n> <xref linkend=\"libpq-fun-pqbla\"/>\n> \n> so that it points to function definitions that would look like:\n> \n> <function id=\"libpq-fun-pqbla\">PQbla</function>...\n> \n> (note: \"libpq-pqbla\" ids are already taken).\n\nWhat I really meant was that you determine the best link target in each\ncase. If there already is an id on a <varlistentry>, then use that. If\nnot, then make an id on something else, most likely the <function> element.\n\nWhat you have now puts ids on both the <varlistentry> and the\n<function>, which seems unnecessary and confusing.\n\nFor some weird reason this setup with link targets in both\n<varlistentry> and enclosed <function> breaks the PDF build, but if you\nchange it the way I suggest then those errors go away.\n\n>> This will currently produce a warning Don't know what gentext to create \n>> for xref to: \"function\"\n> \n> Indeed.\n> \n>> but we can write a template\n>>\n>> <xsl:template match=\"function\" mode=\"xref-to\">\n>>\n>> and then we can control the output format of that.\n> \n> This step is (well) beyond my current XSLT proficiency, which is null \n> beyond knowing that it transforms XML into whatever. Also I'm unsure into \n> which of the 11 xsl file the definition should be included and what should \n> be written precisely.\n\nSee attached patch.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services",
"msg_date": "Mon, 22 Jul 2019 14:10:22 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: make libpq documentation navigable between functions"
},
{
"msg_contents": "Hello Peter,\n\n> What I really meant was that you determine the best link target in each\n> case. If there already is an id on a <varlistentry>, then use that. If\n> not, then make an id on something else, most likely the <function> element.\n\nOk, sorry I misunderstood.\n\n>> This step is (well) beyond my current XSLT proficiency, which is null\n>> beyond knowing that it transforms XML into whatever. Also I'm unsure into\n>> which of the 11 xsl file the definition should be included and what should\n>> be written precisely.\n>\n> See attached patch.\n\nThanks!\n\nAttached script does, hopefully, the expected transformation. It adds ids \nto <function> occurrences when the id is not defined elsewhere.\n\nAttached v3 is the result of applying your kindly provided xslt patch plus\nthe script on \"libpq.sgml\".\n\nThree functions are ignored because no documentation is found: \nPQerrorField (does not exist anywhere in the sources), \nPQsetResultInstanceData (idem) and PQregisterThreadLock (it exists).\n\nDoc build works for me and looks ok.\n\n-- \nFabien.",
"msg_date": "Mon, 22 Jul 2019 20:56:15 +0000 (GMT)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": true,
"msg_subject": "Re: make libpq documentation navigable between functions"
},
{
"msg_contents": "On 2019-07-22 22:56, Fabien COELHO wrote:\n> Attached script does, hopefully, the expected transformation. It adds ids \n> to <function> occurrences when the id is not defined elsewhere.\n> \n> Attached v3 is the result of applying your kindly provided xslt patch plus\n> the script on \"libpq.sgml\".\n> \n> Three functions are ignored because no documentation is found: \n> PQerrorField (does not exist anywhere in the sources), \n> PQsetResultInstanceData (idem) and PQregisterThreadLock (it exists).\n> \n> Doc build works for me and looks ok.\n\nI have committed this with some additions.\n\nI have changed all the function-related id attributes in libpq.sgml to\nbe mixed case, for easier readability. So if you run your script again,\nyou can omit the lc() call.\n\nI also needed to make some changes to the markup in some places to\nremove extra whitespace that would have appeared in the generated link.\n(This was already happening in some places, but your patch would have\nrepeated it in many places.)\n\nAlso, due to some mysterious problems with the PDF toolchain I had to\nremove some links. Your script would find those, so I won't list them\nhere. If you put those back in, the PDF build breaks. If you want to\nwork out why, feel free to submit more patches. Otherwise I'm happy to\nleave it as is now; it's very useful.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Fri, 26 Jul 2019 11:49:52 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: make libpq documentation navigable between functions"
},
{
"msg_contents": "\nHello Peter,\n\n> I have committed this with some additions.\n\nThanks for the push. It was really a pain to write a small libpq app \nwithout navigation.\n\n> Also, due to some mysterious problems with the PDF toolchain I had to\n> remove some links. Your script would find those, so I won't list them\n> here. If you put those back in, the PDF build breaks. If you want to\n> work out why, feel free to submit more patches. Otherwise I'm happy to\n> leave it as is now; it's very useful.\n\nOk fine with me for now as well. I'm not keen to invest more time on the \ndocumentation tool chain.\n\n-- \nFabien.\n\n\n",
"msg_date": "Fri, 26 Jul 2019 10:28:45 +0000 (GMT)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": true,
"msg_subject": "Re: make libpq documentation navigable between functions"
}
] |
[
{
"msg_contents": "Hi,\n\nAs discussed in\nhttps://www.postgresql.org/message-id/CAOBaU_Yo61RwNO3cW6WVYWwH7EYMPuexhKqufb2nFGOdunbcHw@mail.gmail.com,\ncurrent coding in reindexdb.c is error prone, and\nreindex_system_catalogs() is also not really required.\n\nI attach two patches to fix both (it could be squashed in a single\ncommit as both are straightforward), for upcoming v13.",
"msg_date": "Sun, 12 May 2019 11:16:28 +0200",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": true,
"msg_subject": "cleanup & refactoring on reindexdb.c"
},
{
"msg_contents": "On Sun, May 12, 2019 at 11:16:28AM +0200, Julien Rouhaud wrote:\n> I attach two patches to fix both (it could be squashed in a single\n> commit as both are straightforward), for upcoming v13.\n\nSquashing both patches together makes the most sense in my opinion as\nthe same areas are reworked. I can notice that you have applied\npgindent, but the indentation got a bit messed up because the new enum\nReindexType is missing from typedefs.list.\n\nI have reworked a bit your patch as per the attached, tweaking a\ncouple of places like reordering the elements in ReindexType,\nreviewing the indentation, etc. At the end I can see more reasons to\nuse multiple switch/case points as if we add more options in the\nfuture then we have more code paths to take care of. These would\nunlikely get forgotten, but there is no point to take this risk\neither, and that would simplify future patches. It is also possible\nto group some types together when assigning the object name similarly\nto what's on HEAD.\n--\nMichael",
"msg_date": "Mon, 13 May 2019 12:09:52 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: cleanup & refactoring on reindexdb.c"
},
{
"msg_contents": "On Mon, May 13, 2019 at 5:09 AM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Sun, May 12, 2019 at 11:16:28AM +0200, Julien Rouhaud wrote:\n> > I attach two patches to fix both (it could be squashed in a single\n> > commit as both are straightforward), for upcoming v13.\n>\n> Squashing both patches together makes the most sense in my opinion as\n> the same areas are reworked. I can notice that you have applied\n> pgindent, but the indentation got a bit messed up because the new enum\n> ReindexType is missing from typedefs.list.\n>\n> I have reworked a bit your patch as per the attached, tweaking a\n> couple of places like reordering the elements in ReindexType,\n> reviewing the indentation, etc. At the end I can see more reasons to\n> use multiple switch/case points as if we add more options in the\n> future then we have more code paths to take care of. These would\n> unlikely get forgotten, but there is no point to take this risk\n> either, and that would simplify future patches. It is also possible\n> to group some types together when assigning the object name similarly\n> to what's on HEAD.\n\nThanks! I'm fine with the changes.\n\nThe patch does not apply anymore, so here's a rebased version.",
"msg_date": "Fri, 28 Jun 2019 09:25:00 +0200",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: cleanup & refactoring on reindexdb.c"
},
{
"msg_contents": "On Fri, Jun 28, 2019 at 09:25:00AM +0200, Julien Rouhaud wrote:\n> The patch does not apply anymore, so here's a rebased version.\n\nThanks for the rebase (and the reminder..). I'll look at that once\nv13 opens for business.\n--\nMichael",
"msg_date": "Sat, 29 Jun 2019 11:24:49 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: cleanup & refactoring on reindexdb.c"
},
{
"msg_contents": "On Sat, Jun 29, 2019 at 11:24:49AM +0900, Michael Paquier wrote:\n> Thanks for the rebase (and the reminder..). I'll look at that once\n> v13 opens for business.\n\nAnd applied.\n--\nMichael",
"msg_date": "Tue, 2 Jul 2019 11:44:28 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: cleanup & refactoring on reindexdb.c"
},
{
"msg_contents": "On Tue, Jul 2, 2019 at 4:44 AM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Sat, Jun 29, 2019 at 11:24:49AM +0900, Michael Paquier wrote:\n> > Thanks for the rebase (and the reminder..). I'll look at that once\n> > v13 opens for business.\n>\n> And applied.\n\nThanks!\n\n\n",
"msg_date": "Tue, 2 Jul 2019 07:12:38 +0200",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: cleanup & refactoring on reindexdb.c"
}
] |
[
{
"msg_contents": "Hi,\n\nNow that a draft of the release notes are available[1] this seems like a\ngood time to begin determining what features we want to highlight prior\nto the Beta 1 announcement. First, a small editorial :)\n\nReading through the list a few times, it is quite impressive the breadth\nof features that are available for PostgreSQL 12 and the impact they can\nhave on our user workloads. I think this is very exciting and I think\nour users will be very impressed with this release :) It also presents\nsome challenges for coming up with features to highlight, but I call\nthis a \"good problem.\"\n\n(I am less inclined to \"trim the list\" for the sake of doing so for a\nBeta 1 announcement, as based on an analysis of the data, often what\npeople read are the announcements itself and not the release notes, so\ntrying to get as much info in front of people without making it too\ntedious is the goal.)\n\nKnowing that the target audience of the announcements are users of\nPostgreSQL, and knowing the main goals of the beta announcement is to\nboth make people aware of features and to encourage testing, I think we\nneed to divide things into a few groups:\n\n- Feature Highlights\n- Changes that could affect existing operating environments\n\nAlso note below that the way I am listing them out does not constitute a\nrank order as this list is just an initial compilation.\n\nWith further ado...\n\n# Feature Highlights\n\n1. Partitioning Improvements\n\n- Performance, e.g. enhanced partition pruning, COPY performance, ATTACH\nPARTITION\n- Foreign Keys\n- Partition bounds now support expressions\n\n2. Query parallelism is now supported in SERIALIZABLE transaction mode\n\n3. 
Indexing\n\n- Improvements overall performance to standard (B-tree) indexes with\nwrites as well as with bloat\n- REINDEX CONCURRENTLY\n- GiST indexes now support covering indexes (INCLUDE clause)\n- SP-GiST indexes now support K-NN queries\n- WAL overhead reduced on GiST, GIN, & SP-GiST index creation\n\n4. CREATE STATISTICS now supports most-common value statistics, which\nleads to improved query plans for distributions that are non-uniform\n\n5. WITH queries (CTEs) can now be inlined, subject to certain restrictions\n\n6. Support for JSON path queries per the SQL/JSON standard as well as\nsupport for indexing on equality expressions\n\n7. Introduction of generated columns that compute and store an\nexpression as a value on the table\n\n8. Enable / disable page checksums for an offline cluster\n\n9. Authentication\n\n- GSSAPI client/server encryption support\n- LDAP server discovery\n\n10. Introduction of CREATE ACCESS METHOD that permits the addition of\nnew table storage types\n\n# Changes That Can Affect Existing Operating Environments\n\n1. recovery.conf merged into postgresql.conf;\nrecovery.signal/standby.signal being used for switching into non-primary\nmode\n\n2. JIT enabled by default\n\nAs always, constructive feedback welcome. With the goal in mind that\nthis will be turned into a Beta 1 announcement, please indicate if you\nbelieve something is missing or if something does not belong on this list.\n\nThanks!\n\nJonathan\n\n[1] https://www.postgresql.org/docs/devel/release-12.html",
"msg_date": "Sun, 12 May 2019 11:28:49 -0400",
"msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>",
"msg_from_op": true,
"msg_subject": "PostgreSQL 12: Feature Highlights"
},
{
"msg_contents": "On 2019-05-12 8:28 a.m., Jonathan S. Katz wrote:\n> 7. Introduction of generated columns that compute and store an\n> expression as a value on the table\n\nIt seems to me that one of the things that's valuable about this feature is that \none can make as regular visible data any calculated-from-a-row value that is \nused for indexing purposes rather than that being hidden or not possible. I \nassume or hope that generated columns can be used in all index types like \nregular columns right? -- Darren Duncan\n\n\n",
"msg_date": "Sun, 12 May 2019 14:04:06 -0700",
"msg_from": "Darren Duncan <darren@darrenduncan.net>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL 12: Feature Highlights"
},
{
"msg_contents": "On Mon, 13 May 2019 at 03:28, Jonathan S. Katz <jkatz@postgresql.org> wrote:\n> - Performance, e.g. enhanced partition pruning, COPY performance, ATTACH\n\nI don't think it's very accurate to say that the performance of\npartition pruning has been improved. Really the improvement there is\ndue to the change in the order of operations, where we now perform\npruning before fetching partition meta-data. Pruning itself, I don't\nbelieve became any faster in PG12. There were, however various tweaks\nto improve performance of some operations around run-time partition\npruning both in the planner and during execution, these, however, are\nnot improvements to pruning itself, but more the operations around\nsetting up pruning and handling what happens after pruning takes\nplace. Bruce has now changed the release notes to mention \"Improve\nperformance of many operations on partitioned tables\", which seems\nlike a more accurate generalisation of what was improved, although, I\nstill think it's overly vague.\n\n-- \n David Rowley http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n\n",
"msg_date": "Mon, 13 May 2019 11:47:12 +1200",
"msg_from": "David Rowley <david.rowley@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL 12: Feature Highlights"
},
{
"msg_contents": "On 2019-05-13 09:47, David Rowley wrote:\n> On Mon, 13 May 2019 at 03:28, Jonathan S. Katz <jkatz@postgresql.org> \n> wrote:\n>> - Performance, e.g. enhanced partition pruning, COPY performance, \n>> ATTACH\n> \n> I don't think it's very accurate to say that the performance of\n> partition pruning has been improved. Really the improvement there is\n> due to the change in the order of operations, where we now perform\n> pruning before fetching partition meta-data. Pruning itself, I don't\n> believe became any faster in PG12. There were, however various tweaks\n> to improve performance of some operations around run-time partition\n> pruning both in the planner and during execution, these, however, are\n> not improvements to pruning itself, but more the operations around\n> setting up pruning and handling what happens after pruning takes\n> place. Bruce has now changed the release notes to mention \"Improve\n> performance of many operations on partitioned tables\", which seems\n> like a more accurate generalisation of what was improved, although, I\n> still think it's overly vague.\n\nSounds like \"partition pruning is now more efficient\". eg less memory\nusage (?), with a side effect of better performance leading from that \n(?)\n\n\n",
"msg_date": "Mon, 13 May 2019 11:50:00 +1000",
"msg_from": "Justin Clift <justin@postgresql.org>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL 12: Feature Highlights"
},
{
"msg_contents": "On Mon, 13 May 2019 at 13:50, Justin Clift <justin@postgresql.org> wrote:\n>\n> On 2019-05-13 09:47, David Rowley wrote:\n> > On Mon, 13 May 2019 at 03:28, Jonathan S. Katz <jkatz@postgresql.org>\n> > wrote:\n> >> - Performance, e.g. enhanced partition pruning, COPY performance,\n> >> ATTACH\n> >\n> > I don't think it's very accurate to say that the performance of\n> > partition pruning has been improved. Really the improvement there is\n> > due to the change in the order of operations, where we now perform\n> > pruning before fetching partition meta-data. Pruning itself, I don't\n> > believe became any faster in PG12. There were, however various tweaks\n> > to improve performance of some operations around run-time partition\n> > pruning both in the planner and during execution, these, however, are\n> > not improvements to pruning itself, but more the operations around\n> > setting up pruning and handling what happens after pruning takes\n> > place. Bruce has now changed the release notes to mention \"Improve\n> > performance of many operations on partitioned tables\", which seems\n> > like a more accurate generalisation of what was improved, although, I\n> > still think it's overly vague.\n>\n> Sounds like \"partition pruning is now more efficient\". eg less memory\n> usage (?), with a side effect of better performance leading from that\n> (?)\n\nI think the headline item needs to be about the fact that partitioning\ncan now more easily handle larger numbers of partitions. To say\npruning is more efficient is just a chapter in the story. The big\nusers of partitioning want and need the entire book.\n\nPerhaps something along the lines of:\n\n- Improve optimizer and executor to allow them to more easily handle\nlarger numbers of partitions.\n- Allow ATTACH PARTITION to work without blocking concurrent DML.\n\n-- \n David Rowley http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n\n",
"msg_date": "Mon, 13 May 2019 14:19:45 +1200",
"msg_from": "David Rowley <david.rowley@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL 12: Feature Highlights"
},
{
"msg_contents": "Hi David,\n\nOn 2019/05/13 11:19, David Rowley wrote:\n> On Mon, 13 May 2019 at 13:50, Justin Clift <justin@postgresql.org> wrote:\n>>\n>> On 2019-05-13 09:47, David Rowley wrote:\n>>> On Mon, 13 May 2019 at 03:28, Jonathan S. Katz <jkatz@postgresql.org>\n>>> wrote:\n>>>> - Performance, e.g. enhanced partition pruning, COPY performance,\n>>>> ATTACH\n>>>\n>>> I don't think it's very accurate to say that the performance of\n>>> partition pruning has been improved. Really the improvement there is\n>>> due to the change in the order of operations, where we now perform\n>>> pruning before fetching partition meta-data. Pruning itself, I don't\n>>> believe became any faster in PG12. There were, however various tweaks\n>>> to improve performance of some operations around run-time partition\n>>> pruning both in the planner and during execution, these, however, are\n>>> not improvements to pruning itself, but more the operations around\n>>> setting up pruning and handling what happens after pruning takes\n>>> place. Bruce has now changed the release notes to mention \"Improve\n>>> performance of many operations on partitioned tables\", which seems\n>>> like a more accurate generalisation of what was improved, although, I\n>>> still think it's overly vague.\n>>\n>> Sounds like \"partition pruning is now more efficient\". eg less memory\n>> usage (?), with a side effect of better performance leading from that\n>> (?)\n> \n> I think the headline item needs to be about the fact that partitioning\n> can now more easily handle larger numbers of partitions. To say\n> pruning is more efficient is just a chapter in the story. 
The big\n> users of partitioning want and need the entire book.\n> \n> Perhaps something along the lines of:\n> \n> - Improve optimizer and executor to allow them to more easily handle\n> larger numbers of partitions.\n\nIt's true that optimizer and executor can now handle larger number of\npartitions efficiently, but the improvements in this release will only be\nmeaningful to workloads where partition pruning is crucial, so I don't see\nwhy mentioning \"pruning\" is so misleading. Perhaps, it would be slightly\nmisleading to not mention it, because readers might think that queries\nlike this one:\n\n select count(*) from partitioned_table;\n\nare now faster in v12, whereas AFAIK, they perform perform more or less\nthe same as in v11.\n\nThanks,\nAmit\n\n\n\n",
"msg_date": "Mon, 13 May 2019 15:37:05 +0900",
"msg_from": "Amit Langote <Langote_Amit_f8@lab.ntt.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL 12: Feature Highlights"
},
{
"msg_contents": "On Sun, May 12, 2019 at 6:28 PM Jonathan S. Katz <jkatz@postgresql.org> wrote:\n>\n> Hi,\n>\n> Now that a draft of the release notes are available[1] this seems like a\n> good time to begin determining what features we want to highlight prior\n> to the Beta 1 announcement. First, a small editorial :)\n>\n> Reading through the list a few times, it is quite impressive the breadth\n> of features that are available for PostgreSQL 12 and the impact they can\n> have on our user workloads. I think this is very exciting and I think\n> our users will be very impressed with this release :) It also presents\n> some challenges for coming up with features to highlight, but I call\n> this a \"good problem.\"\n>\n> (I am less inclined to \"trim the list\" for the sake of doing so for a\n> Beta 1 announcement, as based on an analysis of the data, often what\n> people read are the announcements itself and not the release notes, so\n> trying to get as much info in front of people without making it too\n> tedious is the goal.)\n>\n> Knowing that the target audience of the announcements are users of\n> PostgreSQL, and knowing the main goals of the beta announcement is to\n> both make people aware of features and to encourage testing, I think we\n> need to divide things into a few groups:\n>\n> - Feature Highlights\n> - Changes that could affect existing operating environments\n>\n> Also note below that the way I am listing them out does not constitute a\n> rank order as this list is just an initial compilation.\n>\n> With further ado...\n>\n> # Feature Highlights\n>\n> 1. Partitioning Improvements\n>\n> - Performance, e.g. enhanced partition pruning, COPY performance, ATTACH\n> PARTITION\n> - Foreign Keys\n> - Partition bounds now support expressions\n>\n> 2. Query parallelism is now supported in SERIALIZABLE transaction mode\n>\n> 3. 
Indexing\n>\n> - Improvements overall performance to standard (B-tree) indexes with\n> writes as well as with bloat\n> - REINDEX CONCURRENTLY\n> - GiST indexes now support covering indexes (INCLUDE clause)\n> - SP-GiST indexes now support K-NN queries\n> - WAL overhead reduced on GiST, GIN, & SP-GiST index creation\n>\n> 4. CREATE STATISTICS now supports most-common value statistics, which\n> leads to improved query plans for distributions that are non-uniform\n>\n> 5. WITH queries (CTEs) can now be inlined, subject to certain restrictions\n>\n> 6. Support for JSON path queries per the SQL/JSON standard as well as\n> support for indexing on equality expressions\n\nSupport for JSON path queries per the SQL/JSON specification in SQL-2016, which\ncan be accelerated by existing (on disk) indexes.\n\n>\n> 7. Introduction of generated columns that compute and store an\n> expression as a value on the table\n>\n> 8. Enable / disable page checksums for an offline cluster\n>\n> 9. Authentication\n>\n> - GSSAPI client/server encryption support\n> - LDAP server discovery\n>\n> 10. Introduction of CREATE ACCESS METHOD that permits the addition of\n> new table storage types\n>\n> # Changes That Can Affect Existing Operating Environments\n>\n> 1. recovery.conf merged into postgresql.conf;\n> recovery.signal/standby.signal being used for switching into non-primary\n> mode\n>\n> 2. JIT enabled by default\n>\n> As always, constructive feedback welcome. With the goal in mind that\n> this will be turned into a Beta 1 announcement, please indicate if you\n> believe something is missing or if something does not belong on this list.\n>\n> Thanks!\n>\n> Jonathan\n>\n> [1] https://www.postgresql.org/docs/devel/release-12.html\n>\n\n\n-- \nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company\n\n\n",
"msg_date": "Mon, 13 May 2019 12:14:48 +0300",
"msg_from": "Oleg Bartunov <obartunov@postgrespro.ru>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL 12: Feature Highlights"
},
{
"msg_contents": "On Mon, 13 May 2019 at 18:37, Amit Langote\n<Langote_Amit_f8@lab.ntt.co.jp> wrote:\n> It's true that optimizer and executor can now handle larger number of\n> partitions efficiently, but the improvements in this release will only be\n> meaningful to workloads where partition pruning is crucial, so I don't see\n> why mentioning \"pruning\" is so misleading. Perhaps, it would be slightly\n> misleading to not mention it, because readers might think that queries\n> like this one:\n>\n> select count(*) from partitioned_table;\n>\n> are now faster in v12, whereas AFAIK, they perform perform more or less\n> the same as in v11.\n\nThis is true, but whether partitions are pruned or not is only\nrelevant to one of the many items the headline feature is talking\nabout. I'm not sure how you'd briefly enough mention that fact without\ngoing into detail about which features are which and which are\naffected by partition pruning.\n\nI think these are the sorts of details that can be mentioned away from\nthe headline features, which is why I think lumping these all in one\nin the main release notes is a bad idea as it's pretty hard to do that\nwhen they're all lumped in as one item.\n\n-- \n David Rowley http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n\n",
"msg_date": "Mon, 13 May 2019 22:50:59 +1200",
"msg_from": "David Rowley <david.rowley@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL 12: Feature Highlights"
},
{
"msg_contents": "On 2019-05-12 17:28, Jonathan S. Katz wrote:\n> - Partition bounds now support expressions\n\nThis is relatively trivial compared to the rest and probably doesn't\nbelong in a highlights list.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Mon, 13 May 2019 13:39:31 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL 12: Feature Highlights"
},
{
"msg_contents": "On 2019-05-12 23:04, Darren Duncan wrote:\n> On 2019-05-12 8:28 a.m., Jonathan S. Katz wrote:\n>> 7. Introduction of generated columns that compute and store an\n>> expression as a value on the table\n> \n> It seems to me that one of the things that's valuable about this feature is that \n> one can make as regular visible data any calculated-from-a-row value that is \n> used for indexing purposes rather than that being hidden or not possible. I \n> assume or hope that generated columns can be used in all index types like \n> regular columns right? -- Darren Duncan\n\nThe answer to your question is that, yes, they can, but I don't\nunderstand what your first sentence is trying to say.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Mon, 13 May 2019 13:40:28 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL 12: Feature Highlights"
},
{
"msg_contents": "On 5/13/19 7:39 AM, Peter Eisentraut wrote:\n> On 2019-05-12 17:28, Jonathan S. Katz wrote:\n>> - Partition bounds now support expressions\n> \n> This is relatively trivial compared to the rest and probably doesn't\n> belong in a highlights list.\n\nWhy? This is incredibly helpful from a development standpoint; it\ngreatly expands the possibilities of the types of partition bounds that\ncan be utilized.\n\nJonathan",
"msg_date": "Mon, 13 May 2019 08:54:47 -0400",
"msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>",
"msg_from_op": true,
"msg_subject": "Re: PostgreSQL 12: Feature Highlights"
},
{
"msg_contents": "On 2019-05-13 14:54, Jonathan S. Katz wrote:\n> On 5/13/19 7:39 AM, Peter Eisentraut wrote:\n>> On 2019-05-12 17:28, Jonathan S. Katz wrote:\n>>> - Partition bounds now support expressions\n>>\n>> This is relatively trivial compared to the rest and probably doesn't\n>> belong in a highlights list.\n> \n> Why? This is incredibly helpful from a development standpoint; it\n> greatly expands the possibilities of the types of partition bounds that\n> can be utilized.\n\nHow so?\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Mon, 13 May 2019 18:21:22 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL 12: Feature Highlights"
},
{
"msg_contents": "On 2019-05-12 17:28, Jonathan S. Katz wrote:\n> # Feature Highlights\n\nThe ability to create case-insensitive and accent-insensitive collations\nis probably of interest to many users.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Mon, 13 May 2019 18:22:13 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL 12: Feature Highlights"
},
{
"msg_contents": "On Mon, May 13, 2019 at 9:22 AM Peter Eisentraut\n<peter.eisentraut@2ndquadrant.com> wrote:\n> On 2019-05-12 17:28, Jonathan S. Katz wrote:\n> > # Feature Highlights\n>\n> The ability to create case-insensitive and accent-insensitive collations\n> is probably of interest to many users.\n\n+1\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Mon, 13 May 2019 09:23:08 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL 12: Feature Highlights"
},
{
"msg_contents": "El 2019-05-13 12:23, Peter Geoghegan escribió:\n> On Mon, May 13, 2019 at 9:22 AM Peter Eisentraut\n> <peter.eisentraut@2ndquadrant.com> wrote:\n>> On 2019-05-12 17:28, Jonathan S. Katz wrote:\n>> > # Feature Highlights\n>> \n>> The ability to create case-insensitive and accent-insensitive \n>> collations\n>> is probably of interest to many users.\n> \n> +1\n+1\n\nRgds,\nGilberto Castillo\n\n\n",
"msg_date": "Mon, 13 May 2019 12:25:53 -0400",
"msg_from": "gilberto.castillo@etecsa.cu",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL 12: Feature Highlights"
},
{
"msg_contents": "Hi,\n\nOn 2019-05-13 08:54:47 -0400, Jonathan S. Katz wrote:\n> On 5/13/19 7:39 AM, Peter Eisentraut wrote:\n> > On 2019-05-12 17:28, Jonathan S. Katz wrote:\n> >> - Partition bounds now support expressions\n> > \n> > This is relatively trivial compared to the rest and probably doesn't\n> > belong in a highlights list.\n\n+1\n\n\n> Why? This is incredibly helpful from a development standpoint; it\n> greatly expands the possibilities of the types of partition bounds that\n> can be utilized.\n\nYou can say that about a lot of features. But we've limited space in the\ntop items... It doesn't strike me as enabling that many cases.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 13 May 2019 09:37:41 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL 12: Feature Highlights"
},
{
"msg_contents": "On 5/13/19 12:25 PM, gilberto.castillo@etecsa.cu wrote:\n> El 2019-05-13 12:23, Peter Geoghegan escribió:\n>> On Mon, May 13, 2019 at 9:22 AM Peter Eisentraut\n>> <peter.eisentraut@2ndquadrant.com> wrote:\n>>> On 2019-05-12 17:28, Jonathan S. Katz wrote:\n>>> > # Feature Highlights\n>>>\n>>> The ability to create case-insensitive and accent-insensitive collations\n>>> is probably of interest to many users.\n>>\n>> +1\n> +1\n\nThat sounds like a great feature and one we should promote, sorry I\nmissed it on my pass in the release notes. Where I can find more\ninformation on it?\n\nThanks,\n\nJonathan",
"msg_date": "Mon, 13 May 2019 13:19:19 -0400",
"msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>",
"msg_from_op": true,
"msg_subject": "Re: PostgreSQL 12: Feature Highlights"
},
{
"msg_contents": "On 5/13/19 12:37 PM, Andres Freund wrote:\n> Hi,\n> \n> On 2019-05-13 08:54:47 -0400, Jonathan S. Katz wrote:\n>> On 5/13/19 7:39 AM, Peter Eisentraut wrote:\n>>> On 2019-05-12 17:28, Jonathan S. Katz wrote:\n>>>> - Partition bounds now support expressions\n>>>\n>>> This is relatively trivial compared to the rest and probably doesn't\n>>> belong in a highlights list.\n> \n> +1\n> \n> \n>> Why? This is incredibly helpful from a development standpoint; it\n>> greatly expands the possibilities of the types of partition bounds that\n>> can be utilized.\n> \n> You can say that about a lot of features. But we've limited space in the\n> top items... It doesn't strike me as a enabling that many cases.\n\nI've been bit by it when trying to create some partitions on my own, but\nthat's not a good enough reason when there are other things to highlight.\n\nWith grouping things together, it could be something that's mentioned in\npassing -- it's not a bullet point on its own for sure. However, if the\nfeeling is to drop it completely, I can drop it completely -- I don't\nfeel strongly enough to argue for it.\n\nThanks,\n\nJonathan",
"msg_date": "Mon, 13 May 2019 13:21:30 -0400",
"msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>",
"msg_from_op": true,
"msg_subject": "Re: PostgreSQL 12: Feature Highlights"
},
{
"msg_contents": "On 2019-05-13 19:19, Jonathan S. Katz wrote:\n> On 5/13/19 12:25 PM, gilberto.castillo@etecsa.cu wrote:\n>> El 2019-05-13 12:23, Peter Geoghegan escribió:\n>>> On Mon, May 13, 2019 at 9:22 AM Peter Eisentraut\n>>> <peter.eisentraut@2ndquadrant.com> wrote:\n>>>> On 2019-05-12 17:28, Jonathan S. Katz wrote:\n>>>>> # Feature Highlights\n>>>>\n>>>> The ability to create case-insensitive and accent-insensitive collations\n>>>> is probably of interest to many users.\n>>>\n>>> +1\n>> +1\n> \n> That sounds like a great feature and one we should promote, sorry I\n> missed it on my pass in the release notes. Where I can find more\n> information on it?\n\nhttps://www.postgresql.org/docs/devel/collation.html#COLLATION-NONDETERMINISTIC\n\nThat chapter might need to be reorganized a bit, because it's difficult\nto get a direct link to this.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Mon, 13 May 2019 21:50:34 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL 12: Feature Highlights"
},
{
"msg_contents": "On Mon, May 13, 2019 at 10:50:59PM +1200, David Rowley wrote:\n> On Mon, 13 May 2019 at 18:37, Amit Langote\n> <Langote_Amit_f8@lab.ntt.co.jp> wrote:\n> > It's true that optimizer and executor can now handle larger number of\n> > partitions efficiently, but the improvements in this release will only be\n> > meaningful to workloads where partition pruning is crucial, so I don't see\n> > why mentioning \"pruning\" is so misleading. Perhaps, it would be slightly\n> > misleading to not mention it, because readers might think that queries\n> > like this one:\n> >\n> > select count(*) from partitioned_table;\n> >\n> > are now faster in v12, whereas AFAIK, they perform perform more or less\n> > the same as in v11.\n> \n> This is true, but whether partitions are pruned or not is only\n> relevant to one of the many items the headline feature is talking\n> about. I'm not sure how you'd briefly enough mention that fact without\n> going into detail about which features are which and which are\n> affected by partition pruning.\n> \n> I think these are the sorts of details that can be mentioned away from\n> the headline features, which is why I think lumping these all in one\n> in the main release notes is a bad idea as it's pretty hard to do that\n> when they're all lumped in as one item.\n\nI think the point is that partition pruning and tuple _routing_ to the\nright partition is also improved. I updated the release note items to\nsay:\n\n\tTables with thousands of child partitions can now be processed\n\tefficiently.\n\n-- \n Bruce Momjian <bruce@momjian.us> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n+ As you are, so once was I. As I am, so you will be. +\n+ Ancient Roman grave inscription +\n\n\n",
"msg_date": "Mon, 13 May 2019 22:59:18 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL 12: Feature Highlights"
},
{
"msg_contents": "On 2019/05/14 11:59, Bruce Momjian wrote:\n> On Mon, May 13, 2019 at 10:50:59PM +1200, David Rowley wrote:\n>> On Mon, 13 May 2019 at 18:37, Amit Langote\n>> <Langote_Amit_f8@lab.ntt.co.jp> wrote:\n>>> It's true that optimizer and executor can now handle larger number of\n>>> partitions efficiently, but the improvements in this release will only be\n>>> meaningful to workloads where partition pruning is crucial, so I don't see\n>>> why mentioning \"pruning\" is so misleading. Perhaps, it would be slightly\n>>> misleading to not mention it, because readers might think that queries\n>>> like this one:\n>>>\n>>> select count(*) from partitioned_table;\n>>>\n>>> are now faster in v12, whereas AFAIK, they perform perform more or less\n>>> the same as in v11.\n>>\n>> This is true, but whether partitions are pruned or not is only\n>> relevant to one of the many items the headline feature is talking\n>> about. I'm not sure how you'd briefly enough mention that fact without\n>> going into detail about which features are which and which are\n>> affected by partition pruning.\n>>\n>> I think these are the sorts of details that can be mentioned away from\n>> the headline features, which is why I think lumping these all in one\n>> in the main release notes is a bad idea as it's pretty hard to do that\n>> when they're all lumped in as one item.\n> \n> I think the point is that partition pruning and tuple _routing_ to the\n> right partition is also improved. I updated the release note items to\n> say:\n> \n> \tTables with thousands of child partitions can now be processed\n> \tefficiently.\n\nConsidering the quoted discussion here, maybe it's a good idea to note\nthat only the operations that need to touch a small number of partitions\nare now processed efficiently, which covers both SELECT/UPDATE/DELETE that\nbenefit from improved pruning efficiency and INSERT that benefit from\nimproved tuple routing efficiency. 
So, maybe:\n\n\tTables with thousands of child partitions can now be processed\n\tefficiently by operations that only need to touch a small number\n of partitions.\n\nThat is, as I mentioned above, as opposed to queries that need to process\nall partitions (such as, select count(*) from partitioned_table), which\ndon't perform any faster in v12 than in v11. The percentage of users who\nrun such workloads on PostgreSQL may be much smaller today, but perhaps\nit's not a good idea to mislead them into thinking that *everything* with\npartitioned tables is now faster even with thousands of partitions.\n\nThanks,\nAmit\n\n\n\n",
"msg_date": "Tue, 14 May 2019 18:25:43 +0900",
"msg_from": "Amit Langote <Langote_Amit_f8@lab.ntt.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL 12: Feature Highlights"
},
{
"msg_contents": "On Tue, May 14, 2019 at 06:25:43PM +0900, Amit Langote wrote:\n> Considering the quoted discussion here, maybe it's a good idea to note\n> that only the operations that need to touch a small number of partitions\n> are now processed efficiently, which covers both SELECT/UPDATE/DELETE that\n> benefit from improved pruning efficiency and INSERT that benefit from\n> improved tuple routing efficiency. So, maybe:\n> \n> \tTables with thousands of child partitions can now be processed\n> \tefficiently by operations that only need to touch a small number\n> of partitions.\n> \n> That is, as I mentioned above, as opposed to queries that need to process\n> all partitions (such as, select count(*) from partitioned_table), which\n> don't perform any faster in v12 than in v11. The percentage of users who\n> run such workloads on PostgreSQL may be much smaller today, but perhaps\n> it's not a good idea to mislead them into thinking that *everything* with\n> partitioned tables is now faster even with thousands of partitions.\n\nAgreed, I changed it to your wording.\n\n-- \n Bruce Momjian <bruce@momjian.us> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n+ As you are, so once was I. As I am, so you will be. +\n+ Ancient Roman grave inscription +\n\n\n",
"msg_date": "Tue, 14 May 2019 09:19:45 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL 12: Feature Highlights"
},
{
"msg_contents": "On 2019/05/14 22:19, Bruce Momjian wrote:\n> On Tue, May 14, 2019 at 06:25:43PM +0900, Amit Langote wrote:\n>> Considering the quoted discussion here, maybe it's a good idea to note\n>> that only the operations that need to touch a small number of partitions\n>> are now processed efficiently, which covers both SELECT/UPDATE/DELETE that\n>> benefit from improved pruning efficiency and INSERT that benefit from\n>> improved tuple routing efficiency. So, maybe:\n>>\n>> \tTables with thousands of child partitions can now be processed\n>> \tefficiently by operations that only need to touch a small number\n>> of partitions.\n>>\n>> That is, as I mentioned above, as opposed to queries that need to process\n>> all partitions (such as, select count(*) from partitioned_table), which\n>> don't perform any faster in v12 than in v11. The percentage of users who\n>> run such workloads on PostgreSQL may be much smaller today, but perhaps\n>> it's not a good idea to mislead them into thinking that *everything* with\n>> partitioned tables is now faster even with thousands of partitions.\n> \n> Agreed, I changed it to your wording.\n\nThank you.\n\nRegards,\nAmit\n\n\n\n\n",
"msg_date": "Wed, 15 May 2019 09:34:10 +0900",
"msg_from": "Amit Langote <Langote_Amit_f8@lab.ntt.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL 12: Feature Highlights"
},
{
"msg_contents": "Hi,\n\nOn 5/12/19 11:28 AM, Jonathan S. Katz wrote:\n> Hi,\n> \n> Now that a draft of the release notes are available[1] this seems like a\n> good time to begin determining what features we want to highlight prior\n> to the Beta 1 announcement.\n\nThank you everyone for your feedback. Below is v2 of the list for the\nBeta 1 announcement. A few things to note:\n\n- Based on feedback I made several INSERT/UPDATE/DELETE operations on\nthe list ;)\n\n- I know it's listing out a lot of features, but this is the Beta 1\nrelease, which, among several goals, is to make people aware of as many\nimpactful features as possible for purposes of awareness and getting\npeople to help test.\n\n- The press release will be better worded - this is just a list :)\n\nWithout further ado:\n\n# Feature Highlights\n\n1. Indexing\n\n- Overall performance improvements to standard (B-tree) indexes, both\nfor writes and for reducing bloat\n- REINDEX CONCURRENTLY\n- GiST indexes now support covering indexes (INCLUDE clause)\n- SP-GiST indexes now support K-NN queries\n- WAL overhead reduced on GiST, GIN, & SP-GiST index creation\n\n2. Partitioning Improvements\n\n- Improved partition pruning, which improves performance on queries over\ntables with thousands of partitions that only need to use a few partitions\n- Improvements to COPY performance and ATTACH PARTITION\n- Allow foreign keys to reference partitioned tables\n\n3. WITH queries (CTEs) can now be inlined, subject to certain restrictions\n\n4. Support for JSON path queries per the SQL/JSON specification in the\nSQL:2016 standard. A subset of these expressions can be accelerated with\non-disk indexes.\n\n5. Support for case-insensitive and accent-insensitive collations\n\n6. CREATE STATISTICS now supports most-common value statistics, which\nleads to improved query plans for distributions that are non-uniform\n\n7. Introduction of generated columns that compute and store an\nexpression as a value on the table\n\n8. 
Introduction of CREATE ACCESS METHOD that permits the addition of new\ntable storage types\n\n9. Enable / disable page checksums for an offline cluster\n\n10. Authentication\n\n- GSSAPI client/server encryption support\n- LDAP server discovery\n\n# Changes That Can Affect Existing Operating Environments\n\n1. recovery.conf merged into postgresql.conf;\nrecovery.signal/standby.signal being used for switching into non-primary\nmode\n\n2. JIT enabled by default\n\nThanks,\n\nJonathan",
"msg_date": "Tue, 14 May 2019 22:03:16 -0400",
"msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>",
"msg_from_op": true,
"msg_subject": "Re: PostgreSQL 12: Feature Highlights"
},
{
"msg_contents": "Hi Jonathan,\n\nThanks for the updated draft.\n\nOn 2019/05/15 11:03, Jonathan S. Katz wrote:\n> Without further ado:\n> \n> # Feature Highlights\n> \n> 1. Indexing\n> \n> - Improvements overall performance to standard (B-tree) indexes with\n> writes as well as with bloat\n> - REINDEX CONCURRENTLY\n> - GiST indexes now support covering indexes (INCLUDE clause)\n> - SP-GiST indexes now support K-NN queries\n> - WAL overhead reduced on GiST, GIN, & SP-GiST index creation\n> \n> 2. Partitioning Improvements\n> \n> - Improved partition pruning, which improves performance on queries over\n> tables with thousands of partitions that only need to use a few partitions\n> - Improvements to COPY performance and ATTACH PARTITION\n> - Allow foreign keys to reference partitioned tables\n\nAbout the 1st item in \"Partitioning Improvements\", it's not just partition\npruning that's gotten better. How about writing as:\n\n - Improved performance of processing tables with thousands of partitions\n for operations that only need to touch a small number of partitions\n\nPer discussion upthread, that covers improvements to both partition\npruning and tuple routing.\n\nAlso, could the title \"2. Partitioning Improvements\" be trimmed down to\n\"2. Partitioning\", to look like \"1. Indexing\" for consistency?\n\nThanks,\nAmit\n\n\n\n",
"msg_date": "Wed, 15 May 2019 11:31:32 +0900",
"msg_from": "Amit Langote <Langote_Amit_f8@lab.ntt.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL 12: Feature Highlights"
},
{
"msg_contents": "On Wed, 15 May 2019 at 14:31, Amit Langote\n<Langote_Amit_f8@lab.ntt.co.jp> wrote:\n>\n> On 2019/05/15 11:03, Jonathan S. Katz wrote:\n> > 2. Partitioning Improvements\n> >\n> > - Improved partition pruning, which improves performance on queries over\n> > tables with thousands of partitions that only need to use a few partitions\n> > - Improvements to COPY performance and ATTACH PARTITION\n> > - Allow foreign keys to reference partitioned tables\n>\n> About the 1st item in \"Partitioning Improvements\", it's not just partition\n> pruning that's gotten better. How about writing as:\n>\n> - Improved performance of processing tables with thousands of partitions\n> for operations that only need to touch a small number of partitions\n>\n> Per discussion upthread, that covers improvements to both partition\n> pruning and tuple routing.\n\n+1. Amit's words look fine to me. I think if we're including this as\na headline feature then it should be to inform people that\npartitioning is generally more useable with larger numbers of\npartitions, not just that SELECT/UPDATE/DELETE now perform better when\npruning many partitions. The work done there and what's done to speed\nup tuple routing in INSERT/COPY both complement each other, so they're\nlikely good to keep as one in the headline features list. Mentioning\none without the other might leave some people guessing about if we've\naddressed the other deficiencies with partitioning performance.\nSadly, they can't refer to the release notes for more information\nsince they don't detail what's changed in this area :(\n\n-- \n David Rowley http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n\n",
"msg_date": "Wed, 15 May 2019 14:59:48 +1200",
"msg_from": "David Rowley <david.rowley@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL 12: Feature Highlights"
},
{
"msg_contents": "Hi David,\n\nOn 2019/05/15 11:59, David Rowley wrote:\n> Sadly, they can't refer to the release notes for more information\n> since they don't detail what's changed in this area :(\n\nBruce seemed to insist on summarizing all the performance-related\nimprovements into one item. Initial wording mentioned just pruning, which\nafter some back and forth has finally been turned into:\n\n Improve performance of many operations on partitioned tables\n (Amit Langote, David Rowley, Tom Lane, Álvaro Herrera)\n\n Tables with thousands of child partitions can now be processed\n efficiently by operations that only need to touch a small number of\n partitions.\n\nI hear you saying that the description may be too vague *for release\nnotes* even though it now covers *all* the performance improvements made\nto address the use cases where small number of partitions are touched.\nMaybe, you're saying that we should've mentioned individual items\n(partition pruning, tuple routing, etc.) at least in the release notes,\nwhich I did suggest on -committers [1], but maybe Bruce didn't think it\nwas necessary to list up individual items.\n\nThat leaves a few other commits mentioned in the release-12.sgml next to\nthis item that are really not related to this headline description, which\nboth you and I have pointed out in different threads [2][3], but I haven't\nreally caught Bruce's position about them.\n\nThanks,\nAmit\n\n[1]\nhttps://www.postgresql.org/message-id/d5267ae5-bd4a-3e96-c21b-56bfa9fec7e8%40lab.ntt.co.jp\n\n[2]\nhttps://www.postgresql.org/message-id/3f0333be-fd32-55f2-9817-5853a6bbd233%40lab.ntt.co.jp\n\n[3]\nhttps://www.postgresql.org/message-id/CAKJS1f8R6DC45bauzeGF-QMaQ90B_NFSJB9mvVOuhKVDkajehg%40mail.gmail.com\n\n\n\n",
"msg_date": "Wed, 15 May 2019 17:36:32 +0900",
"msg_from": "Amit Langote <Langote_Amit_f8@lab.ntt.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL 12: Feature Highlights"
},
{
"msg_contents": "On Wed, May 15, 2019 at 05:36:32PM +0900, Amit Langote wrote:\n> Hi David,\n> \n> On 2019/05/15 11:59, David Rowley wrote:\n> > Sadly, they can't refer to the release notes for more information\n> > since they don't detail what's changed in this area :(\n> \n> Bruce seemed to insist on summarizing all the performance-related\n> improvements into one item. Initial wording mentioned just pruning, which\n> after some back and forth has finally been turned into:\n> \n> Improve performance of many operations on partitioned tables\n> (Amit Langote, David Rowley, Tom Lane, Álvaro Herrera)\n> \n> Tables with thousands of child partitions can now be processed\n> efficiently by operations that only need to touch a small number of\n> partitions.\n> \n> I hear you saying that the description may be too vague *for release\n> notes* even though it now covers *all* the performance improvements made\n> to address the use cases where small number of partitions are touched.\n> Maybe, you're saying that we should've mentioned individual items\n> (partition pruning, tuple routing, etc.) at least in the release notes,\n> which I did suggest on -committers [1], but maybe Bruce didn't think it\n> was necessary to list up individual items.\n> \n> That leaves a few other commits mentioned in the release-12.sgml next to\n> this item that are really not related to this headline description, which\n> both you and I have pointed out in different threads [2][3], but I haven't\n> really caught Bruce's position about them.\n\nI think the more specific we make the partition description, the more\nlimited it will appear to be. I think almost all partition operations\nwill appear to be faster with PG 12, even if users can't articulate\nexactly why.\n\n-- \n Bruce Momjian <bruce@momjian.us> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n+ As you are, so once was I. As I am, so you will be. +\n+ Ancient Roman grave inscription +\n\n\n",
"msg_date": "Thu, 16 May 2019 14:26:28 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL 12: Feature Highlights"
},
{
"msg_contents": "On 2019-May-16, Bruce Momjian wrote:\n\n> I think the more specific we make the partition description, the more\n> limited it will appear to be. I think almost all partition operations\n> will appear to be faster with PG 12, even if users can't articulate\n> exactly why.\n\nI don't understand why you want to make things such black boxes. It is\nwhat it is, and some users (not all) will want to understand. People\nwho have already tested pg11 and found it insufficient may give pg12 a\nsecond look if we tell them we have fixed the deficiencies they found.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Thu, 16 May 2019 15:08:06 -0400",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL 12: Feature Highlights"
},
{
"msg_contents": "\nOn 2019-05-16 16:08, Alvaro Herrera wrote:\n> On 2019-May-16, Bruce Momjian wrote:\n>\n>> I think the more specific we make the partition description, the more\n>> limited it will appear to be. I think almost all partition operations\n>> will appear to be faster with PG 12, even if users can't articulate\n>> exactly why.\n> I don't understand why you want to make things such black boxes. It is\n> what it is, and some users (not all) will want to understand. People\n> who have already tested pg11 and found it insufficient may give pg12 a\n> second look if we tell them we have fixed the deficiencies they found.\n\nyep, I agree with Alvaro, I'm one of these users...\n\n\n\n\n",
"msg_date": "Thu, 16 May 2019 16:18:42 -0300",
"msg_from": "Stephen Amell <mrstephenamell@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL 12: Feature Highlights"
},
{
"msg_contents": "On Thu, May 16, 2019 at 12:08 PM Alvaro Herrera\n<alvherre@2ndquadrant.com> wrote:\n> I don't understand why you want to make things such black boxes. It is\n> what it is, and some users (not all) will want to understand. People\n> who have already tested pg11 and found it insufficient may give pg12 a\n> second look if we tell them we have fixed the deficiencies they found.\n\n+1. The major features list should be accessible, but otherwise I\nthink that we should add more detail.\n\n80% - 90% of the items listed in the release notes aren't interesting\nto users, but *which* 80% - 90% it is varies. The release notes don't\nhave to function as a press release, because we'll have an actual\npress release instead.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Thu, 16 May 2019 12:35:25 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL 12: Feature Highlights"
},
{
"msg_contents": "On Thu, May 16, 2019 at 12:35:25PM -0700, Peter Geoghegan wrote:\n> On Thu, May 16, 2019 at 12:08 PM Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\n>> I don't understand why you want to make things such black boxes. It is\n>> what it is, and some users (not all) will want to understand. People\n>> who have already tested pg11 and found it insufficient may give pg12 a\n> second look if we tell them we have fixed the deficiencies they found.\n> \n> +1. The major features list should be accessible, but otherwise I\n> think that we should add more detail.\n\n+1. Another thing which may matter is that if some people do not see\nsome improvements they were expecting then they can come back to pg13\nand help around with having these added. Things are never going to be\nperfect for all users, and it makes little sense to make it sound like\nthings are actually perfect.\n--\nMichael",
"msg_date": "Fri, 17 May 2019 08:52:50 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL 12: Feature Highlights"
},
{
"msg_contents": "On Thu, May 16, 2019 at 4:52 PM Michael Paquier <michael@paquier.xyz> wrote:\n> +1. Another thing which may matter is that if some people do not see\n> some improvements they were expecting then they can come back to pg13\n> and help around with having these added. Things are never going to be\n> perfect for all users, and it makes little sense to make it sound like\n> things are actually perfect.\n\nSpeaking of imperfections: it is well known that most performance\nenhancements have the potential to hurt certain workloads in order to\nhelp others. Maybe that was explicitly considered to be a good\ntrade-off when the feature went in, and maybe that was the right\ndecision at the time, but more often it was something that was simply\noverlooked. There is value in leaving information that helps users\ndiscover where a problem may have been introduced -- they can tell us\nabout it, and we can fix it.\n\nI myself sometimes look through the release notes in an effort to spot\nsomething that might have had undesirable side-effects, especially in\nareas of the code that I am less knowledgeable about, or have only a\nvague recollection of. Sometimes that is a reasonable starting point.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Thu, 16 May 2019 17:05:13 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL 12: Feature Highlights"
},
{
"msg_contents": "On Thu, May 16, 2019 at 03:08:06PM -0400, Alvaro Herrera wrote:\n> On 2019-May-16, Bruce Momjian wrote:\n> \n> > I think the more specific we make the partition description, the more\n> > limited it will appear to be. I think almost all partition operations\n> > will appear to be faster with PG 12, even if users can't articulate\n> > exactly why.\n> \n> I don't understand why you want to make things such black boxes. It is\n> what it is, and some users (not all) will want to understand. People\n> who have already tested pg11 and found it insufficient may give pg12 a\n> second look if we tell them we have fixed the deficiencies they found.\n\nThe release notes are written so they are clear to the average reader, and\nare as short as they can be while remaining clear. If there are some other\ncriteria the community wants to use, I suggest you start a thread on the\nhackers list.\n\n-- \n Bruce Momjian <bruce@momjian.us> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n+ As you are, so once was I. As I am, so you will be. +\n+ Ancient Roman grave inscription +\n\n\n",
"msg_date": "Thu, 16 May 2019 21:27:40 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL 12: Feature Highlights"
},
{
"msg_contents": "On Fri, May 17, 2019 at 08:52:50AM +0900, Michael Paquier wrote:\n> On Thu, May 16, 2019 at 12:35:25PM -0700, Peter Geoghegan wrote:\n> > On Thu, May 16, 2019 at 12:08 PM Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\n> >> I don't understand why you want to make things such black boxes. It is\n> >> what it is, and some users (not all) will want to understand. People\n> >> who have already tested pg11 and found it insufficient may give pg12 a\n> > second look if we tell them we have fixed the deficiencies they found.\n> > \n> > +1. The major features list should be accessible, but otherwise I\n> > think that we should add more detail.\n> \n> +1. Another thing which may matter is that if some people do not see\n> some improvements they were expecting then they can come back to pg13\n> and help around with having these added. Things are never going to be\n> perfect for all users, and it makes little sense to make it sound like\n> things are actually perfect.\n\nWhat changes in the current description would cause people not to retest\nto see if it is better? If anything, the description is too broad.\n\n-- \n Bruce Momjian <bruce@momjian.us> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n+ As you are, so once was I. As I am, so you will be. +\n+ Ancient Roman grave inscription +\n\n\n",
"msg_date": "Thu, 16 May 2019 21:29:10 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL 12: Feature Highlights"
},
{
"msg_contents": "On Thu, May 16, 2019 at 05:05:13PM -0700, Peter Geoghegan wrote:\n> On Thu, May 16, 2019 at 4:52 PM Michael Paquier <michael@paquier.xyz> wrote:\n> > +1. Another thing which may matter is that if some people do not see\n> > some improvements they were expecting then they can come back to pg13\n> > and help around with having these added. Things are never going to be\n> > perfect for all users, and it makes little sense to make it sound like\n> > things are actually perfect.\n> \n> Speaking of imperfections: it is well known that most performance\n> enhancements have the potential to hurt certain workloads in order to\n> help others. Maybe that was explicitly considered to be a good\n> trade-off when the feature went in, and maybe that was the right\n> decision at the time, but more often it was something that was simply\n> overlooked. There is value in leaving information that helps users\n> discover where a problem may have been introduced -- they can tell us\n> about it, and we can fix it.\n> \n> I myself sometimes look through the release notes in an effort to spot\n> something that might have had undesirable side-effects, especially in\n> areas of the code that I am less knowledgeable about, or have only a\n> vague recollection of. Sometimes that is a reasonable starting point.\n\nThe release notes are written for the _average_ reader. The commits are\nthere as comments and making them more visible would be a nice\nimprovement.\n\nWe have already adjusted the partition item, and the btree item. If\nthere is more detail people want, I suggest you post something to the\nhackers list and we will see if it can be added in a consistent way. You\nmust also get agreement that the rules I am using need to be adjusted so\nall future release notes will have that consistency.\n\n-- \n Bruce Momjian <bruce@momjian.us> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n+ As you are, so once was I. As I am, so you will be. +\n+ Ancient Roman grave inscription +\n\n\n",
"msg_date": "Thu, 16 May 2019 21:32:55 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL 12: Feature Highlights"
},
{
"msg_contents": "On Thu, May 16, 2019 at 6:32 PM Bruce Momjian <bruce@momjian.us> wrote:\n> The release notes are written for the _average_ reader. The commits are\n> there as comments and making them more visible would be a nice\n> improvement.\n\nI don't think that making it possible to read the release notes from\nstart to finish is a useful goal. Users probably just skim them. I\ncertainly don't read them from start to finish, nor do I expect to\nunderstand every item.\n\n> We have already adjusted the partition item, and the btree item.\n\nI was making a general point. This comes up every year.\n\n> If there is more detail people want, I suggest you post something to the\n> hackers list and we will see if can be added in a consistent way.\n\nI think that it should be discussed at the developer meeting.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Thu, 16 May 2019 18:49:29 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL 12: Feature Highlights"
},
{
"msg_contents": "On Thu, May 16, 2019 at 06:49:29PM -0700, Peter Geoghegan wrote:\n> On Thu, May 16, 2019 at 6:32 PM Bruce Momjian <bruce@momjian.us> wrote:\n> > The release notes are written for the _average_ reader. The commits are\n> > there as comments and making them more visible would be a nice\n> > improvement.\n> \n> I don't think that making it possible to read the release notes from\n> start to finish is a useful goal. Users probably just skim them. I\n> certainly don't read them from start to finish, nor do I expect to\n> understand every item.\n\nI think we expect people to read them fully because how would they know\nwhat changed and what new features were added. I do think allowing more\ndetail by showing the commit messages would be helpful for people who\nwant to drill down on an item.\n\n> > We have already adjusted the partition item, and the btree item.\n> \n> I was making a general point. This comes up every year.\n> \n> > If there is more detail people want, I suggest you post something to the\n> > hackers list and we will see if can be added in a consistent way.\n> \n> I think that it should be discussed at the developer meeting.\n\nOK, that makes sense.\n\n-- \n Bruce Momjian <bruce@momjian.us> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n+ As you are, so once was I. As I am, so you will be. +\n+ Ancient Roman grave inscription +\n\n\n",
"msg_date": "Thu, 16 May 2019 22:21:35 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL 12: Feature Highlights"
},
{
"msg_contents": "On 2019-May-16, Bruce Momjian wrote:\n\n> The release notes are written for the _average_ reader.\n\nI disagree with this assertion, and frankly I cannot understand why you\nthink that's the most useful thing to do. The release notes are not a\npress release, where you have to make things pretty or understandable to\neveryone. Users can skip items they don't understand or don't care\nabout; but would at least be given the option. If we don't document,\nwe're making the decision for them that they must not care.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Thu, 16 May 2019 22:34:13 -0400",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL 12: Feature Highlights"
},
{
"msg_contents": "On Thu, May 16, 2019 at 10:21:35PM -0400, Bruce Momjian wrote:\n> I think we expect people to read them fully because how would they know\n> what changed and what new features were added. I do think allowing more\n> detail by showing the commit messages would be helpful for people who\n> want to drill down on an item.\n\nThis point is interesting. The release notes include in the sgml\nsources references to the related commits. Could it make sense to\nhave links to the main commits of a feature which can be clicked on\ndirectly from the html docs or such? I understand that producing the\nnotes is a daunting task already, and Bruce does an awesome job. Just\nwondering if we could make that automated in some way if that helps\nwith the user experience... Anyway, this is unrelated to the topic of\nthis thread. My apologies for the digression.\n--\nMichael",
"msg_date": "Fri, 17 May 2019 12:50:12 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL 12: Feature Highlights"
},
{
"msg_contents": "On 2019/05/17 3:26, Bruce Momjian wrote:\n> I think the more specific we make the partition description, the more\n> limited it will appear to be. I think almost all partition operations\n> will appear to be faster with PG 12, even if users can't articulate\n> exactly why.\n\nI agree that the current description captures at a high level the many\nchanges that made it possible. Although, a couple of commits listed with\nthis item don't have much to do with that description, AFAICT. Especially\n959d00e9d [1], which taught the planner to leverage the ordering imposed\non partitions by range partitioning. With that commit, getting ordered\noutput from partitioned tables is now much quicker, especially with a\nLIMIT clause. You can tell that it sounds clearly unrelated to the\ndescription we have now, which is \"processing thousands of partitions is\nquicker when only small numbers of partitions are touched\". Some users of\npartitioning may not be interested in the use case described as vastly\nimproved, but they may be delighted to hear about items such as the\naforementioned commit. Also, I suspect that the users whose use cases\npushed them to use partitioning in the first place may also be the ones\nwho do some of their own research about partitioning and eventually know\nmany optimizations that are possible with it. So, it's perhaps a good\nidea to let them know about such optimizations through release notes if\nit's the only place to put them, which I believe is the case with this\nparticular item. There are not that many commits to be taken out of the\nexisting item and described separately, just this one I think.\n\nThat said, my intention is only to point out that the commit is being\nmixed with an unrelated item. Whether or not to list it as a separate\nitem, I can only give my vote in its favor.\n\nThanks,\nAmit\n\n[1] [959d00e9d] Use Append rather than MergeAppend for scanning ordered\n\n\n\n",
"msg_date": "Fri, 17 May 2019 19:56:55 +0900",
"msg_from": "Amit Langote <Langote_Amit_f8@lab.ntt.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL 12: Feature Highlights"
},
{
"msg_contents": "Long time lurker, first time poster. :)\n\n>\n> 80% - 90% of the items listed in the release notes aren't interesting\n> to users, but *which* 80% - 90% it is varies. The release notes don't\n> have to function as a press release, because we'll have an actual\n> press release instead.\n> --\n> Peter Geoghegan\n>\n\nI think Peter's point here is really important. I read the release notes\nand the press releases, both, from a very different perspective than\nothers, but I find both very valuable. The release notes benefit from\ncompleteness and detail, in my observer's opinion.\n\nBack to my cave.\n\nEvan\n\n-- \nEvan Macbeth - Director of Support - Crunchy Data\n+1 443-421-0343 - evan.macbeth@crunchydata.com\n\nLong time lurker, first time poster. :) 80% - 90% of the items listed in the release notes aren't interesting\nto users, but *which* 80% - 90% it is varies. The release notes don't\nhave to function as a press release, because we'll have an actual\npress release instead.\n-- \nPeter GeogheganI think Peter's point here is really important. I read the release notes and the press releases, both, from a very different perspective than others, but I find both very valuable. The release notes benefit from completeness and detail, in my observer's opinion.Back to my cave.Evan -- Evan Macbeth - Director of Support - Crunchy Data+1 443-421-0343 - evan.macbeth@crunchydata.com",
"msg_date": "Fri, 17 May 2019 08:14:33 -0400",
"msg_from": "Evan Macbeth <evan.macbeth@crunchydata.com>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL 12: Feature Highlights"
},
{
"msg_contents": "On Thu, May 16, 2019 at 10:34:13PM -0400, Alvaro Herrera wrote:\n> On 2019-May-16, Bruce Momjian wrote:\n> \n> > The release notes are written for the _average_ reader.\n> \n> I disagree with this assertion, and frankly I cannot understand why you\n> think that's the most useful thing to do. The release notes are not a\n> press release, where you have to make things pretty or understandable to\n> everyone. Users can skip items they don't understand or don't care\n> about; but would at least be given the option. If we don't document,\n> we're making the decision for them that they must not care.\n\nThe press release is not an exhaustive list of all features, so we can't\njust fall back on the press release as a way for non-internals readers\nto understand all the features in this release.\n\nFrankly, when I am reading a document, if I hit a few items I don't\nunderstand, I stop reading. This is why I tend to write in a\ngenerally-accessible level of detail. You can see this in all my\nwritings, e.g., blogs. I don't know how to write differently without\nfeeling I am being inconsiderate to the reader.\n\nAlso, when I say I write for the average reader, I write for the average\nperson who is likely to read the document, not for the average person in\ngeneral.\n\nI suggest you look at how Tom Lane writes the minor release notes for an\nexample that is better or worse than my style.\n\n-- \n Bruce Momjian <bruce@momjian.us> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n+ As you are, so once was I. As I am, so you will be. +\n+ Ancient Roman grave inscription +\n\n\n",
"msg_date": "Fri, 17 May 2019 10:16:26 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL 12: Feature Highlights"
},
{
"msg_contents": "On Fri, May 17, 2019 at 12:50:12PM +0900, Michael Paquier wrote:\n> On Thu, May 16, 2019 at 10:21:35PM -0400, Bruce Momjian wrote:\n> > I think we expect people to read them fully because how would they know\n> > what changed and what new features were added. I do think allowing more\n> > detail by showing the commit messages would be helpful for people who\n> > want to drill down on an item.\n> \n> This point is interesting. The release notes include in the sgml\n> sources references to the related commits. Could it make sense to\n> have links to the main commits of a feature which can be clicked on\n> directly from the html docs or such? I understand that producing the\n> notes is a daunting task already, and Bruce does an awesome job. Just\n> wondering if we could make that automated in some way if that helps\n> with the user experience... Anyway, this is unrelated to the topic of\n> this thread. My apologies for the digression.\n\nYes, I am sure it can be done. The commits are always in the same\nstyle as output by src/tools/git_changelog, and always to the far left.\nThis has been discussed before, but no one felt it was urgent enough.\n\nThe good news is that once we do this, all release notes, even the\nprevious ones, will have this feature.\n\n-- \n Bruce Momjian <bruce@momjian.us> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n+ As you are, so once was I. As I am, so you will be. +\n+ Ancient Roman grave inscription +\n\n\n",
"msg_date": "Fri, 17 May 2019 10:18:15 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL 12: Feature Highlights"
},
{
"msg_contents": "On Fri, May 17, 2019 at 08:14:33AM -0400, Evan Macbeth wrote:\n> Long time lurker, first time poster. :)\n> \n> \n> 80% - 90% of the items listed in the release notes aren't interesting\n> to users, but *which* 80% - 90% it is varies. The release notes don't\n> have to function as a press release, because we'll have an actual\n> press release instead.\n> --\n> Peter Geoghegan\n> \n> \n> I think Peter's point here is really important. I read the release notes and\n> the press releases, both, from a very different perspective than others, but I\n> find both very valuable. The release notes benefit from completeness and\n> detail, in my observer's opinion.\n\nAs I just stated, if the press release was exhaustive, we could have the\nrelease notes be more detailed, but this is not the case. I don't think\nwe want to get into a case where the items listed on the press release\nget a different level of detail from the items not listed.\n\n\n-- \n Bruce Momjian <bruce@momjian.us> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n+ As you are, so once was I. As I am, so you will be. +\n+ Ancient Roman grave inscription +\n\n\n",
"msg_date": "Fri, 17 May 2019 10:19:30 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL 12: Feature Highlights"
},
{
"msg_contents": "On 5/14/19 10:31 PM, Amit Langote wrote:\n> Hi Jonathan,\n> \n> Thanks for the updated draft.\n> \n> On 2019/05/15 11:03, Jonathan S. Katz wrote:\n>> Without further ado:\n>>\n>> # Feature Highlights\n>>\n>> 1. Indexing\n>>\n>> - Improvements overall performance to standard (B-tree) indexes with\n>> writes as well as with bloat\n>> - REINDEX CONCURRENTLY\n>> - GiST indexes now support covering indexes (INCLUDE clause)\n>> - SP-GiST indexes now support K-NN queries\n>> - WAL overhead reduced on GiST, GIN, & SP-GiST index creation\n>>\n>> 2. Partitioning Improvements\n>>\n>> - Improved partition pruning, which improves performance on queries over\n>> tables with thousands of partitions that only need to use a few partitions\n>> - Improvements to COPY performance and ATTACH PARTITION\n>> - Allow foreign keys to reference partitioned tables\n> \n> About the 1st item in \"Partitioning Improvements\", it's not just partition\n> pruning that's gotten better. How about writing as:\n> \n> - Improved performance of processing tables with thousands of partitions\n> for operations that only need to touch a small number of partitions\n\nThanks, I will use some wording like this:\n\n- Improved performance of processing tables with thousands of partitions\nfor operations that only need to use a small number of partitions\n\n> \n> Per discussion upthread, that covers improvements to both partition\n> pruning and tuple routing.\n> \n> Also, could the title \"2. Partitioning Improvements\" be trimmed down to\n> \"2. Partitioning\", to look like \"1. Indexing\" for consistency?\n\nThese are just my rough notes and do not reflect how it will read in the\nPR itself; however I will keep that in mind.\n\nThanks,\n\nJonathan",
"msg_date": "Fri, 17 May 2019 10:20:29 -0400",
"msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>",
"msg_from_op": true,
"msg_subject": "Re: PostgreSQL 12: Feature Highlights"
},
{
"msg_contents": "On 5/17/19 10:19 AM, Bruce Momjian wrote:\n> On Fri, May 17, 2019 at 08:14:33AM -0400, Evan Macbeth wrote:\n>> Long time lurker, first time poster. :) \n>>\n>>\n>> 80% - 90% of the items listed in the release notes aren't interesting\n>> to users, but *which* 80% - 90% it is varies. The release notes don't\n>> have to function as a press release, because we'll have an actual\n>> press release instead.\n>> --\n>> Peter Geoghegan\n>>\n>>\n>> I think Peter's point here is really important. I read the release notes and\n>> the press releases, both, from a very different perspective than others, but I\n>> find both very valuable. The release notes benefit from completeness and\n>> detail, in my observer's opinion.\n> \n> As I just stated, if the press release was exhaustive, we could have the\n> release notes be more detailed, but this is not the case. I don't think\n> we want to get into a case where the items listed on the press release\n> get a different level of detail from the items not listed.\n\nTo step back, it's important to understand the intended goals of the\npress release.\n\nI view the primary goal of the press release for a major version to be a\nspringboard to further dive into what is in the release. It helps to\ngive some global context to new features / changes and is an effective\n\"getting started\" point. It needs just enough details for people to be\ninterested, and if they want more info, they can go to the release notes\n+ docs.\n\nThe release notes then can provide additional details on what said\nfeatures (effectively a readable \"diff\" between versions) are, and, if\nneeded, the documentation can provide even greater details.\n\n(The \"diff\" point is important: multiple times in recent weeks I've had\nto refer back to the PG10 notes to highlight how the behavior for set\nreturning functions changed in the SELECT lists. So having that there\nwas certainly helpful.)\n\nIn my more heavy practitioner days, I needed both to help analyze and\nmake decisions on say, when to upgrade to a major version, or\n\nBringing it full circle, I view the primary goal of the Beta 1 press\nrelease to be a springboard to _testing_ the release: we need people to\ntest to ensure the GA release is as stable as possible. It should be a\nbit more exhaustive than the final GA press release - ideally I want\npeople to think of use cases they can test the new features in -- but\nsure, it should not be too much more exhaustive.\n\nThanks,\n\nJonathan",
"msg_date": "Fri, 17 May 2019 10:30:58 -0400",
"msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>",
"msg_from_op": true,
"msg_subject": "Re: PostgreSQL 12: Feature Highlights"
},
{
"msg_contents": "On Fri, May 17, 2019 at 07:56:55PM +0900, Amit Langote wrote:\n> On 2019/05/17 3:26, Bruce Momjian wrote:\n> > I think the more specific we make the partition description, the more\n> > limited it will appear to be. I think almost all partition operations\n> > will appear to be faster with PG 12, even if users can't articulate\n> > exactly why.\n> \n> I agree that the current description captures at a high level the many\n> changes that made it possible. Although, a couple of commits listed with\n> this item don't have much to do with that description, AFAICT. Especially\n> 959d00e9d [1], which taught the planner to leverage the ordering imposed\n> on partitions by range partitioning. With that commit, getting ordered\n> output from partitioned tables is now much quicker, especially with a\n> LIMIT clause. You can tell that it sounds clearly unrelated to the\n> description we have now, which is \"processing thousands of partitions is\n\nYes, it does. I added this text and moved the commit item:\n\n Avoid sorting when partitions are already being scanned in the\n necessary order (David Rowley)\n\nI certainly struggled to understand the maze of commits related to\npartitioning.\n\n> quicker when only small numbers of partitions are touched\". Some users of\n> partitioning may not be interested in the use case described as vastly\n> improved, but they may be delighted to hear about items such as the\n> aforementioned commit. Also, I suspect that the users whose use cases\n> pushed them to use partitioning in the first place may also be the ones\n> who do some of their own research about partitioning and eventually know\n> many optimizations that are possible with it. So, it's perhaps a good\n> idea to let them know about such optimizations through release notes if\n> it's the only place to put them, which I believe is the case with this\n> particular item. There are not that many commits to be taken out of the\n> existing item and described separately, just this one I think.\n\nYes.\n\n-- \n Bruce Momjian <bruce@momjian.us> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n+ As you are, so once was I. As I am, so you will be. +\n+ Ancient Roman grave inscription +\n\n\n",
"msg_date": "Fri, 17 May 2019 11:36:53 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL 12: Feature Highlights"
},
{
"msg_contents": "On Sat, 18 May 2019 at 03:36, Bruce Momjian <bruce@momjian.us> wrote:\n> Yes, it does. I added this text and moved the commit item:\n>\n> Avoid sorting when partitions are already being scanned in the\n> necessary order (David Rowley)\n\nThank you for moving that out.\n\n> I certainly strugged to understand the maze of commits related to\n> partitioning.\n\nWith the utmost respect, because I certainly do appreciate the work\nyou did on the release note, I think if that's the case, then it\nshould only make you more willing to listen to the advice from the\npeople who are closer to those commits. However I understand that\nconsistency is also important, so listening to the heckling of\nindividuals sometimes won't lead to a good overall outcome. That\nbeing said, I don't think that's what happened here, as most of the\npeople who had a gripe about it were pretty close to the work that was\ndone.\n\nFWIW, I do think the release notes should be meant as a source of\ninformation which give a brief view on changes made that have a\nreasonable possibility of affecting people (either negative or\npositively) who are upgrading. Leaving out important details because\nthey might confuse a small group of people seems wrong-headed, as it\nmeans the people who are not in that group are left to look at the\ncommit history to determine what's changed, or worse, they might just\nassume that the thing has not changed which could either cause them to\n1) Skip the upgrade because it's not interesting to them, or; 2)\nhaving something break because we didn't list some incompatibility.\nI know you know better than most people that extracting a useful\nsummary from the commit history is a pretty mammoth task, so I doubt\nwe should put that upon the masses.\n\n-- \n David Rowley http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n\n",
"msg_date": "Sat, 18 May 2019 13:38:49 +1200",
"msg_from": "David Rowley <david.rowley@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL 12: Feature Highlights"
},
{
"msg_contents": "On Fri, May 17, 2019 at 6:39 PM David Rowley\n<david.rowley@2ndquadrant.com> wrote:\n> FWIW, I do think the release notes should be meant as a source of\n> information which give a brief view on changes made that have a\n> reasonable possibility of affecting people (either negative or\n> positively) who are upgrading. Leaving out important details because\n> they might confuse a small group of people seems wrong-headed\n\nIf the only thing that comes out of the developer meeting discussion\nitem on release notes is that we make the commits behind each listing\n*discoverable* from the web, then that will still be a big\nimprovement. I would prefer it if we went further, but that seems like\nit solves a lot of problems without creating new ones.\n\n--\nPeter Geoghegan\n\n\n",
"msg_date": "Fri, 17 May 2019 18:47:37 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL 12: Feature Highlights"
},
{
"msg_contents": "On Sat, May 18, 2019 at 01:38:49PM +1200, David Rowley wrote:\n> On Sat, 18 May 2019 at 03:36, Bruce Momjian <bruce@momjian.us> wrote:\n> > Yes, it does. I added this text and moved the commit item:\n> >\n> > Avoid sorting when partitions are already being scanned in the\n> > necessary order (David Rowley)\n> \n> Thank you for moving that out.\n> \n> > I certainly struggled to understand the maze of commits related to\n> > partitioning.\n> \n> With the utmost respect, because I certainly do appreciate the work\n> you did on the release note, I think if that's the case, then it\n> should only make you more willing to listen to the advice from the\n> people who are closer to those commits. However I understand that\n> consistency is also important, so listening to the heckling of\n> individuals sometimes won't lead to a good overall outcome. That\n> being said, I don't think that's what happened here, as most of the\n> people who had a gripe about it were pretty close to the work that was\n> done.\n> \n> FWIW, I do think the release notes should be meant as a source of\n> information which give a brief view on changes made that have a\n> reasonable possibility of affecting people (either negative or\n> positively) who are upgrading. Leaving out important details because\n> they might confuse a small group of people seems wrong-headed, as it\n> means the people who are not in that group are left to look at the\n> commit history to determine what's changed, or worse, they might just\n> assume that the thing has not changed which could either cause them to\n> 1) Skip the upgrade because it's not interesting to them, or; 2)\n> having something break because we didn't list some incompatibility.\n> I know you know better than most people that extracting a useful\n> summary from the commit history is a pretty mammoth task, so I doubt\n> we should put that upon the masses.\n\nNo one has suggested new wording, so I don't know what you are\ncomplaining about. In fact, the wording we have now is by Amit Langote.\n\n-- \n Bruce Momjian <bruce@momjian.us> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n+ As you are, so once was I. As I am, so you will be. +\n+ Ancient Roman grave inscription +\n\n\n",
"msg_date": "Fri, 17 May 2019 22:30:17 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL 12: Feature Highlights"
},
{
"msg_contents": "On Fri, May 17, 2019 at 4:31 PM Jonathan S. Katz <jkatz@postgresql.org>\nwrote:\n\n> On 5/17/19 10:19 AM, Bruce Momjian wrote:\n> > On Fri, May 17, 2019 at 08:14:33AM -0400, Evan Macbeth wrote:\n> >> Long time lurker, first time poster. :)\n> >>\n> >>\n> >> 80% - 90% of the items listed in the release notes aren't\n> interesting\n> >> to users, but *which* 80% - 90% it is varies. The release notes\n> don't\n> >> have to function as a press release, because we'll have an actual\n> >> press release instead.\n> >> --\n> >> Peter Geoghegan\n> >>\n> >>\n> >> I think Peter's point here is really important. I read the release\n> notes and\n> >> the press releases, both, from a very different perspective than\n> others, but I\n> >> find both very valuable. The release notes benefit from completeness and\n> >> detail, in my observer's opinion.\n> >\n> > As I just stated, if the press release was exhaustive, we could have the\n> > release notes be more detailed, but this is not the case. I don't think\n> > we want to get into a case where the items listed on the preess release\n> > get a different level of detail from the items not listed.\n>\n> To step back, it's important to understand the intended goals of the\n> press release.\n>\n> I view the primary goal of the press release for a major version to be a\n> springboard to further dive into what is in the release. It helps to\n> give some global context to new features / changes and is an effective\n> \"getting started\" point. It needs just enough details for people to be\n> interested, and if they want more info, they can go to the release notes\n> + docs.\n>\n> The release notes then can provide additional details on what said\n> features (effectively a readable \"diff\" between versions) are, and, if\n> needed, the documentation can provide even greater details.\n>\n\nThere is in fact a third layer to consider as well, which is the release\nannouncement. This can (and sometimes are) different from the press release.\n\nSo you have the press release. Then more details can be added to the\nrelease announcement. Then even more details can be added to the release\nnotes.\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/ <http://www.hagander.net/>\n Work: https://www.redpill-linpro.com/ <http://www.redpill-linpro.com/>\n\nOn Fri, May 17, 2019 at 4:31 PM Jonathan S. Katz <jkatz@postgresql.org> wrote:On 5/17/19 10:19 AM, Bruce Momjian wrote:\n> On Fri, May 17, 2019 at 08:14:33AM -0400, Evan Macbeth wrote:\n>> Long time lurker, first time poster. :) \n>>\n>>\n>> 80% - 90% of the items listed in the release notes aren't interesting\n>> to users, but *which* 80% - 90% it is varies. The release notes don't\n>> have to function as a press release, because we'll have an actual\n>> press release instead.\n>> --\n>> Peter Geoghegan\n>>\n>>\n>> I think Peter's point here is really important. I read the release notes and\n>> the press releases, both, from a very different perspective than others, but I\n>> find both very valuable. The release notes benefit from completeness and\n>> detail, in my observer's opinion.\n> \n> As I just stated, if the press release was exhaustive, we could have the\n> release notes be more detailed, but this is not the case. I don't think\n> we want to get into a case where the items listed on the preess release\n> get a different level of detail from the items not listed.\n\nTo step back, it's important to understand the intended goals of the\npress release.\n\nI view the primary goal of the press release for a major version to be a\nspringboard to further dive into what is in the release. It helps to\ngive some global context to new features / changes and is an effective\n\"getting started\" point. It needs just enough details for people to be\ninterested, and if they want more info, they can go to the release notes\n+ docs.\n\nThe release notes then can provide additional details on what said\nfeatures (effectively a readable \"diff\" between versions) are, and, if\nneeded, the documentation can provide even greater details.There is in fact a third layer to consider as well, which is the release announcement. This can (and sometimes are) different from the press release.So you have the press release. Then more details can be added to the release announcement. Then even more details can be added to the release notes.-- Magnus Hagander Me: https://www.hagander.net/ Work: https://www.redpill-linpro.com/",
"msg_date": "Sat, 18 May 2019 10:13:17 +0200",
"msg_from": "Magnus Hagander <magnus@hagander.net>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL 12: Feature Highlights"
},
{
"msg_contents": "On Sat, 18 May 2019 at 14:30, Bruce Momjian <bruce@momjian.us> wrote:\n> No one has suggested new wording, so I don't know what you are\n> complaining about. In fact, the wording we have now is by Amit Langote.\n\nhttps://www.postgresql.org/message-id/CAKJS1f8R6DC45bauzeGF-QMaQ90B_NFSJB9mvVOuhKVDkajehg@mail.gmail.com\n\n-- \n David Rowley http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n\n",
"msg_date": "Sat, 18 May 2019 23:34:34 +1200",
"msg_from": "David Rowley <david.rowley@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL 12: Feature Highlights"
},
{
"msg_contents": "On Sat, May 18, 2019 at 11:34:34PM +1200, David Rowley wrote:\n> On Sat, 18 May 2019 at 14:30, Bruce Momjian <bruce@momjian.us> wrote:\n> > No one has suggested new wording, so I don't know what you are\n> > complaining about. In fact, the wording we have now is by Amit Langote.\n> \n> https://www.postgresql.org/message-id/CAKJS1f8R6DC45bauzeGF-QMaQ90B_NFSJB9mvVOuhKVDkajehg@mail.gmail.com\n\nThe second item I added yesterday at the prompting of Amit:\n\n\tcommit 05685897f0\n\tAuthor: Bruce Momjian <bruce@momjian.us>\n\tDate: Fri May 17 11:31:49 2019 -0400\n\t\n\t docs: split out sort-skip partition item in PG 12 release notes\n\t\n\t Discussion:\n\thttps://postgr.es/m/0cf10a27-c6a0-de4a-cd20-ab7493ea7422@lab.ntt.co.jp\n\n\t> Avoid sorting when partitions are already being scanned in the\n\t> necessary order (David Rowley)\n\nThe first one needs to reference visible effects, so I would add:\n\n\tConsider additional optimizations for queries referencing\n\tpartitioned tables that only affect a single partition\n\nDoes that work?\n\nIt seems you never got a reply to that email, perhaps because I thought\nAmit's next email was covering the same issue. I am sorry for that.\n\n-- \n Bruce Momjian <bruce@momjian.us> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n+ As you are, so once was I. As I am, so you will be. +\n+ Ancient Roman grave inscription +\n\n\n",
"msg_date": "Sat, 18 May 2019 09:20:31 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL 12: Feature Highlights"
},
{
"msg_contents": "On Tue, May 14, 2019 at 09:19:45AM -0400, Bruce Momjian wrote:\n> On Tue, May 14, 2019 at 06:25:43PM +0900, Amit Langote wrote:\n> > Considering the quoted discussion here, maybe it's a good idea to note\n> > that only the operations that need to touch a small number of partitions\n> > are now processed efficiently, which covers both SELECT/UPDATE/DELETE that\n> > benefit from improved pruning efficiency and INSERT that benefit from\n> > improved tuple routing efficiency. So, maybe:\n> > \n> > \tTables with thousands of child partitions can now be processed\n> > \tefficiently by operations that only need to touch a small number\n> > of partitions.\n> > \n> > That is, as I mentioned above, as opposed to queries that need to process\n> > all partitions (such as, select count(*) from partitioned_table), which\n> > don't perform any faster in v12 than in v11. The percentage of users who\n> > run such workloads on PostgreSQL may be much smaller today, but perhaps\n> > it's not a good idea to mislead them into thinking that *everything* with\n> > partitioned tables is now faster even with thousands of partitions.\n> \n> Agreed, I changed it to your wording.\n\nI tightened up the wording on this item, and removed 'touch' since that\ncould suggest 'write':\n\n Allow tables with thousands of child partitions to be processed\n efficiently by operations that only affect a small number of\n partitions.\n\n-- \n Bruce Momjian <bruce@momjian.us> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n+ As you are, so once was I. As I am, so you will be. +\n+ Ancient Roman grave inscription +\n\n\n",
"msg_date": "Sat, 18 May 2019 09:24:28 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL 12: Feature Highlights"
},
{
"msg_contents": "On Sun, 19 May 2019 at 01:20, Bruce Momjian <bruce@momjian.us> wrote:\n> The first one needs to reference visible effects, so I would add:\n>\n> Consider additional optimizations for queries referencing\n> partitioned tables that only affect a single partition\n>\n> Does that work?\n\nThanks, typing that up. I think it lacks a bit detail about what's\nactually changed. The change is fairly evident and people can see\nwhen it takes effect when they look at EXPLAIN and see that the\nAppend/MergeAppend node is missing. Also, an Append/MergeAppend may\nexist for inheritance tables and an Append can exist for a simple\nUNION ALL. Both of those cases can have subplans removed to leave only\na single subplan, to which the additional parallel paths will be\nconsidered.\n\nI think something like:\n\n* Make the optimizer only include an Append/MergeAppend node when\nthere is more than one subplan to append.\n\nThis reduces execution overheads and allows additional plan shapes\nthat were previously not possible.\n\nor a little more brief:\n\n* Allow the optimizer to consider more plan types by eliminating\nsingle subplan Append and MergeAppends.\n\nI understand that you might be trying to avoid details of plan node\ntypes, but really anyone who's got inheritance or partitioned tables\nand has looked at an EXPLAIN should have an idea.\n\nAlso, I think Tom should be the main author of this as he rewrote my\nversion of the patch to such an extent that there was next to nothing\nof it left. I just tagged a few extra things on before he committed\nit.\n\n-- \n David Rowley http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n\n",
"msg_date": "Sun, 19 May 2019 02:26:48 +1200",
"msg_from": "David Rowley <david.rowley@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL 12: Feature Highlights"
},
{
"msg_contents": "On 2019/05/18 0:36, Bruce Momjian wrote:\n> On Fri, May 17, 2019 at 07:56:55PM +0900, Amit Langote wrote:\n>> I agree that the current description captures at a high level the many\n>> changes that made it possible. Although, a couple of commits listed with\n>> this item don't have much to do with that description, AFAICT. Especially\n>> 959d00e9d [1], which taught the planner to leverage the ordering imposed\n>> on partitions by range partitioning. With that commit, getting ordered\n>> output from partitioned tables is now much quicker, especially with a\n>> LIMIT clause. You can tell that it sounds clearly unrelated to the\n>> description we have now, which is \"processing thousands of partitions is\n> \n> Yes, it does. I added this text and moved the commit item:\n> \n> Avoid sorting when partitions are already being scanned in the\n> necessary order (David Rowley)\n\nThank you Bruce.\n\nRegards,\nAmit\n\n\n\n",
"msg_date": "Tue, 21 May 2019 09:47:54 +0900",
"msg_from": "Amit Langote <Langote_Amit_f8@lab.ntt.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL 12: Feature Highlights"
},
{
"msg_contents": "On Sun, May 19, 2019 at 02:26:48AM +1200, David Rowley wrote:\n> On Sun, 19 May 2019 at 01:20, Bruce Momjian <bruce@momjian.us> wrote:\n> > The first one needs to reference visible effects, so I would add:\n> >\n> > Consider additional optimizations for queries referencing\n> > partitioned tables that only affect a single partition\n> >\n> > Does that work?\n> \n> Thanks, typing that up. I think it lacks a bit detail about what's\n> actually changed. The change is fairly evident and people can see\n> when it takes effect when they look at EXPLAIN and see that the\n> Append/MergeAppend node is missing. Also, an Append/MergeAppend may\n> exist for inheritance tables and an Append can exist for a simple\n> UNION ALL. Both of those cases can have subplans removed to leave only\n> a single subplan, to which the additional parallel paths will be\n> considered.\n\nThis brings up a few points. First, it seems the change affects\npartitioned tables and UNION ALL, which means it probably needs to be\nlisted in two sections. Second, is it only parallelism paths that are\nadded? I am not sure if people care about a node being removed,\nespecially when the might not even know we do that step, but they do\ncare if there are new optimization possibilities.\n\n> I think something like:\n> \n> * Make the optimizer only include an Append/MergeAppend node when\n> there is more than one subplan to append.\n> \n> This reduces execution overheads and allows additional plan shapes\n> that were previously not possible.\n> \n> or a little more brief:\n> \n> * Allow the optimizer to consider more plan types by eliminating\n> single subplan Append and MergeAppends.\n> \n> I understand that you might be trying to avoid details of plan node\n> types, but really anyone who's got inheritance or partitioned tables\n> and has looked at an EXPLAIN should have an idea.\n\nI would like to have something that people who don't study EXPLAIN will\nstill be excited about.\n\n> Also, I think Tom should be the main author of this as he rewrote my\n> version of the patch to such an extent that there was next to nothing\n> of it left. I just tagged a few extra things on before he committed\n> it.\n\nUh, Tom listed you as author, but I don't have to follow that if you\nfeel it is inaccurate.\n\n-- \n Bruce Momjian <bruce@momjian.us> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n+ As you are, so once was I. As I am, so you will be. +\n+ Ancient Roman grave inscription +\n\n\n",
"msg_date": "Tue, 21 May 2019 10:55:09 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL 12: Feature Highlights"
},
{
"msg_contents": "On 2019/05/21 23:55, Bruce Momjian wrote:\n> On Sun, May 19, 2019 at 02:26:48AM +1200, David Rowley wrote:\n>> Thanks, typing that up. I think it lacks a bit detail about what's\n>> actually changed. The change is fairly evident and people can see\n>> when it takes effect when they look at EXPLAIN and see that the\n>> Append/MergeAppend node is missing. Also, an Append/MergeAppend may\n>> exist for inheritance tables and an Append can exist for a simple\n>> UNION ALL. Both of those cases can have subplans removed to leave only\n>> a single subplan, to which the additional parallel paths will be\n>> considered.\n> \n> This brings up a few points. First, it seems the change affects\n> partitioned tables and UNION ALL, which means it probably needs to be\n> listed in two sections.\n\nHow about putting this only in E.1.3.1.3. Optimizer?\n\nThanks,\nAmit\n\n\n\n",
"msg_date": "Wed, 22 May 2019 09:23:13 +0900",
"msg_from": "Amit Langote <Langote_Amit_f8@lab.ntt.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL 12: Feature Highlights"
},
{
"msg_contents": "On Wed, 22 May 2019 at 02:55, Bruce Momjian <bruce@momjian.us> wrote:\n>\n> On Sun, May 19, 2019 at 02:26:48AM +1200, David Rowley wrote:\n> > On Sun, 19 May 2019 at 01:20, Bruce Momjian <bruce@momjian.us> wrote:\n> > > The first one needs to reference visible effects, so I would add:\n> > >\n> > > Consider additional optimizations for queries referencing\n> > > partitioned tables that only affect a single partition\n> > >\n> > > Does that work?\n> >\n> > Thanks, typing that up. I think it lacks a bit detail about what's\n> > actually changed. The change is fairly evident and people can see\n> > when it takes effect when they look at EXPLAIN and see that the\n> > Append/MergeAppend node is missing. Also, an Append/MergeAppend may\n> > exist for inheritance tables and an Append can exist for a simple\n> > UNION ALL. Both of those cases can have subplans removed to leave only\n> > a single subplan, to which the additional parallel paths will be\n> > considered.\n>\n> This brings up a few points. First, it seems the change affects\n> partitioned tables and UNION ALL, which means it probably needs to be\n> listed in two sections. Second, is it only parallelism paths that are\n> added? I am not sure if people care about a node being removed,\n> especially when the might not even know we do that step, but they do\n> care if there are new optimization possibilities.\n\nLike Amit, I think the optimizer section is fine. Another thing that\nis affected is that you may no longer get a Materialize node in the\nplan. Previously you might have gotten something like Merge Join ->\nMaterialize -> Append -> Seq Scan, now you might just get Merge Join\n-> Seq Scan. This is because Append / MergeAppend don't support mark\nand restore. Removing them would allow the materialize node to be\nskipped in cases where the single subpath of the Append does support\nmark and restore.\n\n> > I think something like:\n> >\n> > * Make the optimizer only include an Append/MergeAppend node when\n> > there is more than one subplan to append.\n> >\n> > This reduces execution overheads and allows additional plan shapes\n> > that were previously not possible.\n> >\n> > or a little more brief:\n> >\n> > * Allow the optimizer to consider more plan types by eliminating\n> > single subplan Append and MergeAppends.\n> >\n> > I understand that you might be trying to avoid details of plan node\n> > types, but really anyone who's got inheritance or partitioned tables\n> > and has looked at an EXPLAIN should have an idea.\n>\n> I would like to have something that people who don't study EXPLAIN will\n> still be excited about.\n\nhmm. I guess it's a pretty hard balance between writing something\nwhere there's so little detail that nobody really knows what it means\nvs adding some detail that some people might not understand.\n\n> > Also, I think Tom should be the main author of this as he rewrote my\n> > version of the patch to such an extent that there was next to nothing\n> > of it left. I just tagged a few extra things on before he committed\n> > it.\n>\n> Uh, Tom listed you as author, but I don't have to follow that if you\n> feel it is inaccurate.\n\nI'd say he was being very generous. I'm happy to come 2nd on this one.\n\n-- \n David Rowley http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n\n",
"msg_date": "Wed, 22 May 2019 12:33:10 +1200",
"msg_from": "David Rowley <david.rowley@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL 12: Feature Highlights"
},
{
"msg_contents": "On Wed, May 22, 2019 at 12:33:10PM +1200, David Rowley wrote:\n> On Wed, 22 May 2019 at 02:55, Bruce Momjian <bruce@momjian.us> wrote:\n> > This brings up a few points. First, it seems the change affects\n> > partitioned tables and UNION ALL, which means it probably needs to be\n> > listed in two sections. Second, is it only parallelism paths that are\n> > added? I am not sure if people care about a node being removed,\n> > especially when the might not even know we do that step, but they do\n> > care if there are new optimization possibilities.\n> \n> Like Amit, I think the optimizer section is fine. Another thing that\n> is affected is that you may no longer get a Materialize node in the\n> plan. Previously you might have gotten something like Merge Join ->\n> Materialize -> Append -> Seq Scan, now you might just get Merge Join\n> -> Seq Scan. This is because Append / MergeAppend don't support mark\n> and restore. Removing them would allow the materialize node to be\n> skipped in cases where the single subpath of the Append does support\n> mark and restore.\n\nHow is this patch for the item? I put it in the Optimizer section.\n\n-- \n Bruce Momjian <bruce@momjian.us> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n+ As you are, so once was I. As I am, so you will be. +\n+ Ancient Roman grave inscription +",
"msg_date": "Thu, 13 Jun 2019 23:24:39 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL 12: Feature Highlights"
},
{
"msg_contents": "On Fri, 14 Jun 2019 at 15:24, Bruce Momjian <bruce@momjian.us> wrote:\n>\n> On Wed, May 22, 2019 at 12:33:10PM +1200, David Rowley wrote:\n> > On Wed, 22 May 2019 at 02:55, Bruce Momjian <bruce@momjian.us> wrote:\n> > > This brings up a few points. First, it seems the change affects\n> > > partitioned tables and UNION ALL, which means it probably needs to be\n> > > listed in two sections. Second, is it only parallelism paths that are\n> > > added? I am not sure if people care about a node being removed,\n> > > especially when the might not even know we do that step, but they do\n> > > care if there are new optimization possibilities.\n> >\n> > Like Amit, I think the optimizer section is fine. Another thing that\n> > is affected is that you may no longer get a Materialize node in the\n> > plan. Previously you might have gotten something like Merge Join ->\n> > Materialize -> Append -> Seq Scan, now you might just get Merge Join\n> > -> Seq Scan. This is because Append / MergeAppend don't support mark\n> > and restore. Removing them would allow the materialize node to be\n> > skipped in cases where the single subpath of the Append does support\n> > mark and restore.\n>\n> How is this patch for the item? I put it in the Optimizer section.\n\nThat looks fine. Thank you.\n\n-- \n David Rowley http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n\n",
"msg_date": "Fri, 14 Jun 2019 15:57:45 +1200",
"msg_from": "David Rowley <david.rowley@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL 12: Feature Highlights"
},
{
"msg_contents": "On Fri, Jun 14, 2019 at 03:57:45PM +1200, David Rowley wrote:\n> On Fri, 14 Jun 2019 at 15:24, Bruce Momjian <bruce@momjian.us> wrote:\n> >\n> > On Wed, May 22, 2019 at 12:33:10PM +1200, David Rowley wrote:\n> > > On Wed, 22 May 2019 at 02:55, Bruce Momjian <bruce@momjian.us> wrote:\n> > > > This brings up a few points. First, it seems the change affects\n> > > > partitioned tables and UNION ALL, which means it probably needs to be\n> > > > listed in two sections. Second, is it only parallelism paths that are\n> > > > added? I am not sure if people care about a node being removed,\n> > > > especially when the might not even know we do that step, but they do\n> > > > care if there are new optimization possibilities.\n> > >\n> > > Like Amit, I think the optimizer section is fine. Another thing that\n> > > is affected is that you may no longer get a Materialize node in the\n> > > plan. Previously you might have gotten something like Merge Join ->\n> > > Materialize -> Append -> Seq Scan, now you might just get Merge Join\n> > > -> Seq Scan. This is because Append / MergeAppend don't support mark\n> > > and restore. Removing them would allow the materialize node to be\n> > > skipped in cases where the single subpath of the Append does support\n> > > mark and restore.\n> >\n> > How is this patch for the item? I put it in the Optimizer section.\n> \n> That looks fine. Thank you.\n\nDone, thanks.\n\n-- \n Bruce Momjian <bruce@momjian.us> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n+ As you are, so once was I. As I am, so you will be. +\n+ Ancient Roman grave inscription +\n\n\n",
"msg_date": "Fri, 14 Jun 2019 09:30:41 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL 12: Feature Highlights"
}
] |
[
{
"msg_contents": "Over in pgsql-bugs [1], we're looking into some bugs associated with\nmistranslation of SQL-spec regexes to POSIX regexes. However, while\npoking at that I couldn't help noticing that there are more ways in\nwhich we're not following the letter of the SQL spec in this area:\n\n* As Andrew noted, somewhere between SQL99 and SQL:2008, the committee\njust up and changed the syntax of <regular expression substring function>.\nSQL99 has\n\n <regular expression substring function> ::=\n SUBSTRING <left paren> <character value expression>\n FROM <character value expression>\n FOR <escape character> <right paren>\n\nbut in recent versions it's\n\n <regular expression substring function> ::=\n SUBSTRING <left paren> <character value expression>\n SIMILAR <character value expression>\n ESCAPE <escape character> <right paren>\n\nI am, frankly, inclined to ignore this as a bad idea. We do have\nSIMILAR and ESCAPE as keywords already, but they're type_func_name_keyword\nand unreserved_keyword respectively. To support this syntax, I'm pretty\nsure we'd have to make them both fully reserved. That seems likely to\nbreak existing applications, and I don't think it's worth it. But it's\nprobably something to discuss.\n\n* Our function similar_escape() is not documented, but it underlies\nthree things in the grammar:\n\n\ta SIMILAR TO b\n\tTranslated as \"a ~ similar_escape(b, null)\"\n\n\ta SIMILAR TO b ESCAPE e\n\tTranslated as \"a ~ similar_escape(b, e)\"\n\n\tsubstring(a, b, e)\n\tThis is a SQL function expanding to\n\tselect pg_catalog.substring($1, pg_catalog.similar_escape($2, $3))\n\nTo support the first usage, similar_escape is non-strict, and it takes\na NULL second argument to mean '\\'. This is already a SQL spec violation,\nbecause as far as I can tell from the spec, if you don't write an ESCAPE\nclause then there is *no* escape character; there certainly is not a\ndefault of '\\'. However, we document this behavior, so I don't know if\nwe want to change it.\n\nThis behavior also causes spec compatibility problems in the second\nsyntax, because \"a SIMILAR TO b ESCAPE NULL\" is treated as though\nit were \"ESCAPE '\\'\", which is again a spec violation: the result\nshould be null.\n\nAnd, just to add icing on the cake, it causes performance problems\nin the third syntax. 3-argument substring is labeled proisstrict,\nwhich is correct behavior per spec (the result is NULL if any of\nthe three arguments are null). But because similar_escape is not\nstrict, the planner fails to inline the SQL function, reasoning\n(quite accurately) that doing so would change the behavior for\nnull inputs. This costs us something like 4x performance compared\nto the underlying 2-argument POSIX-regex substring() function.\n\nI'm not sure what we want to do here, but we probably ought to do\nsomething, because right now substring() and SIMILAR TO aren't even\nin agreement between themselves let alone with the SQL spec. We\ncould either move towards making all these constructs strict in\naccordance with the spec (and possibly breaking some existing\napplications), or we could make substring(a, b, e) not strict so\nthat it inherits similar_escape's idea of what to do for e = NULL.\n\n* similar_escape considers a zero-length escape string to mean\n\"no escape character\". This is contrary to spec which clearly\nsays that a zero-length escape string is an error condition\n(just as more-than-one-character is an error condition). It's\nalso undocumented. Should we tighten that up to conform to spec,\nor document it as an extension?\n\n* Per spec, escape-double-quote must appear exactly twice in\nthe second argument of substring(a, b, e), while it's not valid\nin SIMILAR TO. similar_escape() doesn't enforce this, and it\ncan't do so as long as we are using the same pattern conversion\nfunction for both constructs. However, we could do better than\nwe're doing:\n\n* If there are zero occurrences, then what you get from substring()\nis the whole input string if it matches, as if escape-double-quote\nhad appeared at each end of the string. I think this is fine, but\nwe ought to document it.\n\n* If there are an odd number of occurrences, similar_escape() doesn't\ncomplain, but you'll get this from the regex engine:\n ERROR: invalid regular expression: parentheses () not balanced\nThe fact of an error isn't a problem, but the error message is pretty\nconfusing considering that what the user wrote was not parentheses.\nI think similar_escape() ought to throw its own error with an on-point\nmessage.\n\n* If there are more than two pairs of escape-double-quote, you get\nsome behavior that's completely not per spec --- the patterns\nbetween the additional pairs still contribute to whether there's an\noverall match, but they don't affect the result substring. I'm\ninclined to think we ought to throw an error for this, too.\n\n* The spec is much tighter than we are concerning what's a legal\nescape sequence. This is partly intentional on our part, I think;\nnotably, you can get at POSIX-regex escapes like \"\\d\", which the\nSQL spec doesn't provide. Although I think this is intentional,\nit's not documented. I'm not sure if we want to tighten that\nup or document what we allow ... thoughts?\n\n\nI am not eager to change any of this in released branches, but\nI think there's a good case for doing something about these\npoints in HEAD.\n\n\t\t\tregards, tom lane\n\n[1] https://postgr.es/m/5bb27a41-350d-37bf-901e-9d26f5592dd0@charter.net\n\n\n",
"msg_date": "Sun, 12 May 2019 20:43:34 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "SQL-spec incompatibilities in similar_escape() and related stuff"
},
{
"msg_contents": ">>>>> \"Tom\" == Tom Lane <tgl@sss.pgh.pa.us> writes:\n\n Tom> but in recent versions it's\n\n Tom> <regular expression substring function> ::=\n Tom> SUBSTRING <left paren> <character value expression>\n Tom> SIMILAR <character value expression>\n Tom> ESCAPE <escape character> <right paren>\n\n Tom> I am, frankly, inclined to ignore this as a bad idea. We do have\n Tom> SIMILAR and ESCAPE as keywords already, but they're\n Tom> type_func_name_keyword and unreserved_keyword respectively. To\n Tom> support this syntax, I'm pretty sure we'd have to make them both\n Tom> fully reserved.\n\nI only did a quick trial but it doesn't seem to require reserving them\nmore strictly - just adding the obvious productions to the grammar\ndoesn't introduce any conflicts.\n\n Tom> * Our function similar_escape() is not documented, but it\n Tom> underlies three things in the grammar:\n\n Tom> \ta SIMILAR TO b\n Tom> \tTranslated as \"a ~ similar_escape(b, null)\"\n\n Tom> \ta SIMILAR TO b ESCAPE e\n Tom> \tTranslated as \"a ~ similar_escape(b, e)\"\n\n Tom> \tsubstring(a, b, e)\n Tom> \tThis is a SQL function expanding to\n Tom> \tselect pg_catalog.substring($1, pg_catalog.similar_escape($2, $3))\n\n Tom> To support the first usage, similar_escape is non-strict, and it\n Tom> takes a NULL second argument to mean '\\'. This is already a SQL\n Tom> spec violation, because as far as I can tell from the spec, if you\n Tom> don't write an ESCAPE clause then there is *no* escape character;\n Tom> there certainly is not a default of '\\'. However, we document this\n Tom> behavior, so I don't know if we want to change it.\n\nThis is the same spec violation that we also have for LIKE, which also\nis supposed to have no escape character in the absense of an explicit\nESCAPE clause.\n\n-- \nAndrew (irc:RhodiumToad)\n\n\n",
"msg_date": "Mon, 13 May 2019 07:38:23 +0100",
"msg_from": "Andrew Gierth <andrew@tao11.riddles.org.uk>",
"msg_from_op": false,
"msg_subject": "Re: SQL-spec incompatibilities in similar_escape() and related stuff"
},
{
"msg_contents": ">>>>> \"Andrew\" == Andrew Gierth <andrew@tao11.riddles.org.uk> writes:\n\n Tom> I am, frankly, inclined to ignore this as a bad idea. We do have\n Tom> SIMILAR and ESCAPE as keywords already, but they're\n Tom> type_func_name_keyword and unreserved_keyword respectively. To\n Tom> support this syntax, I'm pretty sure we'd have to make them both\n Tom> fully reserved.\n\n Andrew> I only did a quick trial but it doesn't seem to require\n Andrew> reserving them more strictly - just adding the obvious\n Andrew> productions to the grammar doesn't introduce any conflicts.\n\nDigging deeper, that's because both SIMILAR and ESCAPE have been\nassigned precedence. Ambiguities that exist include:\n\n ... COLNAME ! SIMILAR ( ...\n\nwhich could be COLNAME postfix-op SIMILAR a_expr, or COLNAME infix-op\nfunction-call. Postfix operators strike again... we really should kill\nthose off.\n\nThe ESCAPE part could in theory be ambiguous if the SIMILAR expression\nends in a ... SIMILAR TO xxx operator, since then we wouldn't know\nwhether to attach the ESCAPE to that or keep it as part of the function\nsyntax. But I think this is probably a non-issue. More significant is\nthat ... COLNAME ! ESCAPE ... again has postfix- vs. infix-operator\nambiguities.\n\n-- \nAndrew (irc:RhodiumToad)\n\n\n",
"msg_date": "Mon, 13 May 2019 08:42:56 +0100",
"msg_from": "Andrew Gierth <andrew@tao11.riddles.org.uk>",
"msg_from_op": false,
"msg_subject": "Re: SQL-spec incompatibilities in similar_escape() and related stuff"
},
{
"msg_contents": ">>>>> \"Andrew\" == Andrew Gierth <andrew@tao11.riddles.org.uk> writes:\n\n Andrew> The ESCAPE part could in theory be ambiguous if the SIMILAR\n Andrew> expression ends in a ... SIMILAR TO xxx operator, since then we\n Andrew> wouldn't know whether to attach the ESCAPE to that or keep it\n Andrew> as part of the function syntax. But I think this is probably a\n Andrew> non-issue. More significant is that ... COLNAME ! ESCAPE ...\n Andrew> again has postfix- vs. infix-operator ambiguities.\n\nAnd this ambiguity shows up already in other contexts:\n\nselect 'foo' similar to 'f' || escape escape escape from (values ('oo')) v(escape);\npsql: ERROR: syntax error at or near \"escape\"\nLINE 1: select 'foo' similar to 'f' || escape escape escape from (va...\n\nselect 'foo' similar to 'f' || escape escape from (values ('oo')) v(escape);\npsql: ERROR: operator does not exist: unknown ||\nLINE 1: select 'foo' similar to 'f' || escape escape from (values ('...\n\nI guess this happens because ESCAPE has precedence below POSTFIXOP, so\nthe ('f' ||) gets reduced in preference to shifting in the first ESCAPE\ntoken.\n\n-- \nAndrew (irc:RhodiumToad)\n\n\n",
"msg_date": "Mon, 13 May 2019 18:43:08 +0100",
"msg_from": "Andrew Gierth <andrew@tao11.riddles.org.uk>",
"msg_from_op": false,
"msg_subject": "Re: SQL-spec incompatibilities in similar_escape() and related stuff"
},
{
"msg_contents": "Andrew Gierth <andrew@tao11.riddles.org.uk> writes:\n> \"Andrew\" == Andrew Gierth <andrew@tao11.riddles.org.uk> writes:\n> Andrew> The ESCAPE part could in theory be ambiguous if the SIMILAR\n> Andrew> expression ends in a ... SIMILAR TO xxx operator, since then we\n> Andrew> wouldn't know whether to attach the ESCAPE to that or keep it\n> Andrew> as part of the function syntax. But I think this is probably a\n> Andrew> non-issue. More significant is that ... COLNAME ! ESCAPE ...\n> Andrew> again has postfix- vs. infix-operator ambiguities.\n\n> And this ambiguity shows up already in other contexts:\n\n> select 'foo' similar to 'f' || escape escape escape from (values ('oo')) v(escape);\n> psql: ERROR: syntax error at or near \"escape\"\n> LINE 1: select 'foo' similar to 'f' || escape escape escape from (va...\n\n\nHmm. Oddly, you can't fix it by adding parens:\n\nregression=# select 'foo' similar to ('f' || escape) escape escape from (values ('oo')) v(escape);\npsql: ERROR: syntax error at or near \"escape\"\nLINE 1: select 'foo' similar to ('f' || escape) escape escape from (...\n ^\n\nSince \"escape\" is an unreserved word, I'd have expected that to work.\nOdd.\n\nThe big picture here is that fixing grammar ambiguities by adding\nprecedence is a dangerous business :-(\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 13 May 2019 13:53:17 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: SQL-spec incompatibilities in similar_escape() and related stuff"
},
{
"msg_contents": ">>>>> \"Tom\" == Tom Lane <tgl@sss.pgh.pa.us> writes:\n\n Tom> Hmm. Oddly, you can't fix it by adding parens:\n\n Tom> regression=# select 'foo' similar to ('f' || escape) escape escape from (values ('oo')) v(escape);\n Tom> psql: ERROR: syntax error at or near \"escape\"\n Tom> LINE 1: select 'foo' similar to ('f' || escape) escape escape from (...\n Tom> ^\n\n Tom> Since \"escape\" is an unreserved word, I'd have expected that to\n Tom> work. Odd.\n\nSimpler cases fail too:\n\nselect 'f' || escape from (values ('o')) v(escape);\npsql: ERROR: syntax error at or near \"escape\"\n\nselect 1 + escape from (values (1)) v(escape); -- works\nselect 1 & escape from (values (1)) v(escape); -- fails\n\nin short ESCAPE can't follow any generic operator, because its lower\nprecedence forces the operator to be reduced as a postfix op instead.\n\n Tom> The big picture here is that fixing grammar ambiguities by adding\n Tom> precedence is a dangerous business :-(\n\nYeah. But the alternative is usually reserving words more strictly,\nwhich has its own issues :-(\n\nOr we could kill off postfix operators...\n\n-- \nAndrew (irc:RhodiumToad)\n\n\n",
"msg_date": "Mon, 13 May 2019 19:39:14 +0100",
"msg_from": "Andrew Gierth <andrew@tao11.riddles.org.uk>",
"msg_from_op": false,
"msg_subject": "Re: SQL-spec incompatibilities in similar_escape() and related stuff"
},
{
"msg_contents": "[ backing up to a different sub-discussion ]\n\nAndrew Gierth <andrew@tao11.riddles.org.uk> writes:\n> \"Tom\" == Tom Lane <tgl@sss.pgh.pa.us> writes:\n> Tom> To support the first usage, similar_escape is non-strict, and it\n> Tom> takes a NULL second argument to mean '\\'. This is already a SQL\n> Tom> spec violation, because as far as I can tell from the spec, if you\n> Tom> don't write an ESCAPE clause then there is *no* escape character;\n> Tom> there certainly is not a default of '\\'. However, we document this\n> Tom> behavior, so I don't know if we want to change it.\n\n> This is the same spec violation that we also have for LIKE, which also\n> is supposed to have no escape character in the absense of an explicit\n> ESCAPE clause.\n\nRight. After further thought, I propose that what we ought to do is\nunify LIKE, SIMILAR TO, and 3-arg SUBSTRING on a single set of behaviors\nfor the ESCAPE argument:\n\n1. They are strict, ie a NULL value for the escape string produces a\nNULL result. This is per spec, and we don't document anything different,\nand nobody would really expect something different. (But see below\nabout keeping similar_escape() as a legacy compatibility function.)\n\n2. Omitting the ESCAPE option (not possible for SUBSTRING) results in a\ndefault of '\\'. This is not per spec, but we've long documented it this\nway, and frankly I'd say that it's a far more useful default than the\nspec's behavior of \"there is no escape character\". I propose that we\njust document that this is not-per-spec and move on.\n\n3. Interpret an empty ESCAPE string as meaning \"there is no escape\ncharacter\". 
This is not per spec either (the spec would have us\nthrow an error) but it's our historical behavior, and it seems like\na saner approach than the way the spec wants to do it --- in particular,\nthere's no way to get that behavior in 3-arg SUBSTRING if we don't allow\nthis.\n\nSo only point 1 represents an actual behavioral change from what we've\nbeen doing; the other two just require doc clarifications.\n\nNow, I don't have any problem with changing what happens when somebody\nactually writes \"a LIKE b ESCAPE NULL\"; it seems fairly unlikely that\nanyone would expect that to yield a non-null result. However, we do\nhave a problem with the fact that the implementation is partially\nexposed:\n\nregression=# create view v1 as select f1 similar to 'x*' from text_tbl;\nCREATE VIEW\nregression=# \\d+ v1\n...\nView definition:\n SELECT text_tbl.f1 ~ similar_escape('x*'::text, NULL::text)\n FROM text_tbl;\n\nIf we just change similar_escape() to be strict, then this view will\nstop working, which is a bit hard on users who did not write anything\nnon-spec-compliant.\n\nI propose therefore that we leave similar_escape in place with its\ncurrent behavior, as a compatibility measure for cases like this.\nInstead, invent two new strict functions, say\n\tsimilar_to_escape(pattern)\n\tsimilar_to_escape(pattern, escape)\nand change the parser and the implementation of SUBSTRING() to\nrely on these going forward.\n\nThe net effect will be to make explicit \"ESCAPE NULL\" spec-compliant,\nand to get rid of the performance problem from inlining failure for\nsubstring(). All else is just doc clarifications.\n\nComments?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 14 May 2019 12:22:03 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: SQL-spec incompatibilities in similar_escape() and related stuff"
},
{
"msg_contents": "On Mon, May 13, 2019 at 2:39 PM Andrew Gierth\n<andrew@tao11.riddles.org.uk> wrote:\n> Or we could kill off postfix operators...\n\n/me helps Andrew hijack the thread.\n\nWe wouldn't even have to go that far. We could just restrict it to a\nspecific list of operators that are hard-coded into the lexer and\nparser, like say only '!'.\n\nEven if we killed postfix operators completely, the number of users\nwho would be affected would probably be minimal, because the only\npostfix operator we ship is for factorial, and realistically, that's\nnot exactly a critical thing for most users, especially considering\nthat our implementation is pretty slow. But the number of people\nusing out-of-core postfix operators has got to be really tiny --\nunless, maybe, there's some really popular extension like PostGIS that\nuses them.\n\nI think it's pretty clear that the theoretical beauty of being able to\nhandle postfix operators is not worth the tangible cost they impose on\nour parser. We're losing more users as a result of SQL that other\nsystems can accept and we cannot than we are gaining by being able to\nsupport user-defined postfix operators. The latter is not exactly a\nmainstream need.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Fri, 17 May 2019 11:53:52 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: SQL-spec incompatibilities in similar_escape() and related stuff"
},
{
"msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> I think it's pretty clear that the theoretical beauty of being able to\n> handle postfix operators is not worth the tangible cost they impose on\n> our parser. We're losing more users as a result of SQL that other\n> systems can accept and we cannot than we are gaining by being able to\n> support user-defined postfix operators.\n\nI suppose it's possible to make such an argument, but you haven't\nactually made one --- just asserted something without providing\nevidence.\n\nIf we can lay out some concrete gains that justify zapping postfix\noperators, I'd be willing to do it. I agree that it would likely\nhurt few users ... but we need to be able to explain to those few\nwhy we broke it. And show that the benefits outweigh the cost.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 17 May 2019 13:13:02 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: SQL-spec incompatibilities in similar_escape() and related stuff"
},
{
"msg_contents": ">>>>> \"Robert\" == Robert Haas <robertmhaas@gmail.com> writes:\n\n Robert> But the number of people using out-of-core postfix operators\n Robert> has got to be really tiny -- unless, maybe, there's some really\n Robert> popular extension like PostGIS that uses them.\n\nIf there's any extension that uses them I've so far failed to find it.\n\nFor the record, the result of my Twitter poll was 29:2 in favour of\nremoving them, for what little that's worth.\n\n-- \nAndrew (irc:RhodiumToad)\n\n\n",
"msg_date": "Wed, 22 May 2019 05:54:38 +0100",
"msg_from": "Andrew Gierth <andrew@tao11.riddles.org.uk>",
"msg_from_op": false,
"msg_subject": "Re: SQL-spec incompatibilities in similar_escape() and related stuff"
},
{
"msg_contents": "I wrote:\n> I propose therefore that we leave similar_escape in place with its\n> current behavior, as a compatibility measure for cases like this.\n> Instead, invent two new strict functions, say\n> \tsimilar_to_escape(pattern)\n> \tsimilar_to_escape(pattern, escape)\n> and change the parser and the implementation of SUBSTRING() to\n> rely on these going forward.\n\n> The net effect will be to make explicit \"ESCAPE NULL\" spec-compliant,\n> and to get rid of the performance problem from inlining failure for\n> substring(). All else is just doc clarifications.\n\nHere's a proposed patch for that. I think it's a bit too late to be\nmessing with this kind of thing for v12, so I'll add this to the\nupcoming CF.\n\n\t\t\tregards, tom lane",
"msg_date": "Thu, 23 May 2019 18:10:24 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: SQL-spec incompatibilities in similar_escape() and related stuff"
},
{
"msg_contents": "This discussion seems to have died down. Apparently we have three\ndirections here, from three different people. Are we doing anything?\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Fri, 6 Sep 2019 13:01:32 -0400",
"msg_from": "Alvaro Herrera from 2ndQuadrant <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: SQL-spec incompatibilities in similar_escape() and related stuff"
},
{
"msg_contents": "Alvaro Herrera from 2ndQuadrant <alvherre@alvh.no-ip.org> writes:\n> This discussion seems to have died down. Apparently we have three\n> directions here, from three different people. Are we doing anything?\n\nI don't really want to do anything beyond the patch I submitted in\nthis thread (at <32617.1558649424@sss.pgh.pa.us>). That's what the\nCF entry is for, IMO. I'm not excited about the change-of-keywords\nbusiness, but if someone else is, they should start a new CF entry\nabout that.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 06 Sep 2019 14:11:12 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: SQL-spec incompatibilities in similar_escape() and related stuff"
},
{
"msg_contents": ">>>>> \"Tom\" == Tom Lane <tgl@sss.pgh.pa.us> writes:\n\n > Alvaro Herrera from 2ndQuadrant <alvherre@alvh.no-ip.org> writes:\n >> This discussion seems to have died down. Apparently we have three\n >> directions here, from three different people. Are we doing anything?\n\n Tom> I don't really want to do anything beyond the patch I submitted in\n Tom> this thread (at <32617.1558649424@sss.pgh.pa.us>). That's what the\n Tom> CF entry is for, IMO.\n\nI have no issues with this approach.\n\n Tom> I'm not excited about the change-of-keywords business, but if\n Tom> someone else is, they should start a new CF entry about that.\n\nIt's enough of a can of worms that I don't feel inclined to mess with it\nabsent some good reason (the spec probably isn't a good enough reason).\nIf postfix operators should happen to go away at some point then this\ncan be revisited.\n\n-- \nAndrew (irc:RhodiumToad)\n\n\n",
"msg_date": "Fri, 06 Sep 2019 19:47:58 +0100",
"msg_from": "Andrew Gierth <andrew@tao11.riddles.org.uk>",
"msg_from_op": false,
"msg_subject": "Re: SQL-spec incompatibilities in similar_escape() and related stuff"
},
{
"msg_contents": "On 2019-May-23, Tom Lane wrote:\n\n> + <para>\n> + Another nonstandard extension is that following the escape character\n> + with a letter or digit provides access to the same escape sequences\n> + defined for POSIX regular expressions, below (see\n> + <xref linkend=\"posix-character-entry-escapes-table\"/>,\n> + <xref linkend=\"posix-class-shorthand-escapes-table\"/>, and\n> + <xref linkend=\"posix-constraint-escapes-table\"/>).\n> </para>\n\nI think the word \"same\" in this para is more confusing than helpful;\nalso the tables are an integral part of this rather than just an\nillustration, so they should not be in parenthesis but after only a\nsemicolon or such. So:\n\n> + Another nonstandard extension is that following the escape character\n> + with a letter or digit provides access to the escape sequences\n> + defined for POSIX regular expressions; see\n> + <xref linkend=\"posix-character-entry-escapes-table\"/>,\n> + <xref linkend=\"posix-class-shorthand-escapes-table\"/>, and\n> + <xref linkend=\"posix-constraint-escapes-table\"/> below.\n\nI think it would be useful to provide a trivial example that illustrates\nthis in the <para> below; say '\\mabc\\M' not matching \"zabc\".\n\nAll in all, these are pretty trivial points and I would certainly not be\nmad if it's committed without these changes.\n\nMarked ready for committer.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Fri, 6 Sep 2019 15:54:34 -0400",
"msg_from": "Alvaro Herrera from 2ndQuadrant <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: SQL-spec incompatibilities in similar_escape() and related stuff"
},
{
"msg_contents": "Alvaro Herrera from 2ndQuadrant <alvherre@alvh.no-ip.org> writes:\n> Marked ready for committer.\n\nThanks for reviewing. I adopted your doc change suggestions\nand pushed it.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 07 Sep 2019 14:23:20 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: SQL-spec incompatibilities in similar_escape() and related stuff"
}
] |
[
{
"msg_contents": "Hi hackers!\n\nI was reviewing Paul Ramsey's TOAST patch[0] and noticed that there is big room for improvement in performance of pglz compression and decompression.\n\nWith Vladimir we started to investigate ways to boost byte copying and eventually created a test suite[1] to investigate performance of compression and decompression.\nThis is an extension with a single function test_pglz() which performs tests for different:\n1. Data payloads\n2. Compression implementations\n3. Decompression implementations\n\nCurrently we test mostly decompression improvements against two WALs and one data file taken from pgbench-generated database. Any suggestions on more relevant data payloads are very welcome.\nMy laptop tests show that our decompression implementation [2] can be from 15% to 50% faster.\nAlso I've noted that compression is extremely slow, ~30 times slower than decompression. I believe we can do something about it.\n\nWe focus only on boosting existing codec without any considerations of other compression algorithms.\n\nAny comments are much appreciated.\n\nMost important questions are:\n1. What are relevant data sets?\n2. What are relevant CPUs? I have only XEON-based servers and few laptops\\desktops with intel CPUs\n3. If compression is 30 times slower, should we better focus on compression instead of decompression?\n\nBest regards, Andrey Borodin.\n\n\n[0] https://www.postgresql.org/message-id/flat/CANP8%2BjKcGj-JYzEawS%2BCUZnfeGKq4T5LswcswMP4GUHeZEP1ag%40mail.gmail.com\n[1] https://github.com/x4m/test_pglz\n[2] https://www.postgresql.org/message-id/C2D8E5D5-3E83-469B-8751-1C7877C2A5F2%40yandex-team.ru\n\n",
"msg_date": "Mon, 13 May 2019 07:45:59 +0500",
"msg_from": "Andrey Borodin <x4mmm@yandex-team.ru>",
"msg_from_op": true,
"msg_subject": "pglz performance"
},
{
"msg_contents": "On Mon, May 13, 2019 at 07:45:59AM +0500, Andrey Borodin wrote:\n> I was reviewing Paul Ramsey's TOAST patch[0] and noticed that there\n> is a big room for improvement in performance of pglz compression and\n> decompression.\n\nYes, I believe so too. pglz is a huge CPU-consumer when it comes to\ncompression compared to more modern algos like lz4.\n\n> With Vladimir we started to investigate ways to boost byte copying\n> and eventually created test suit[1] to investigate performance of\n> compression and decompression. This is and extension with single\n> function test_pglz() which performs tests for different: \n> 1. Data payloads\n> 2. Compression implementations\n> 3. Decompression implementations\n\nCool. I got something rather similar in my wallet of plugins:\nhttps://github.com/michaelpq/pg_plugins/tree/master/compress_test\nThis is something I worked on mainly for FPW compression in WAL.\n\n> Currently we test mostly decompression improvements against two WALs\n> and one data file taken from pgbench-generated database. Any\n> suggestion on more relevant data payloads are very welcome.\n\nText strings made of random data and variable length? For any test of\nthis kind I think that it is good to focus on the performance of the\nlow-level calls, even going as far as a simple C wrapper on top of the\npglz APIs to test only the performance and not have extra PG-related\noverhead like palloc() which can be a barrier. Focusing on strings of\nlengths of 1kB up to 16kB may be an idea of size, and it is important\nto keep the same uncompressed strings for performance comparison.\n\n> My laptop tests show that our decompression implementation [2] can\n> be from 15% to 50% faster. Also I've noted that compression is\n> extremely slow, ~30 times slower than decompression. 
I believe we\n> can do something about it.\n\nThat's nice.\n\n> We focus only on boosting existing codec without any considerations\n> of other compression algorithms.\n\nThere is this as well. A couple of algorithms have a license\ncompatible with Postgres, but it may be more simple to just improve\npglz. A 10%~20% improvement is something worth doing.\n\n> Most important questions are:\n> 1. What are relevant data sets?\n> 2. What are relevant CPUs? I have only XEON-based servers and few\n> laptops\\desktops with intel CPUs\n> 3. If compression is 30 times slower, should we better focus on\n> compression instead of decompression?\n\nDecompression can matter a lot for mostly-read workloads and\ncompression can become a bottleneck for heavy-insert loads, so\nimproving compression or decompression should be two separate\nproblems, not two problems linked. Any improvement in one or the\nother, or even both, is nice to have.\n--\nMichael",
"msg_date": "Mon, 13 May 2019 16:14:27 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: pglz performance"
},
{
"msg_contents": "\n\n> On 13 May 2019, at 12:14, Michael Paquier <michael@paquier.xyz> wrote:\n> \n>> Currently we test mostly decompression improvements against two WALs\n>> and one data file taken from pgbench-generated database. Any\n>> suggestion on more relevant data payloads are very welcome.\n> \n> Text strings made of random data and variable length?\nLike text corpus?\n\n> For any test of\n> this kind I think that it is good to focus on the performance of the\n> low-level calls, even going as far as a simple C wrapper on top of the\n> pglz APIs to test only the performance and not have extra PG-related\n> overhead like palloc() which can be a barrier.\nOur test_pglz extension is measuring only time of real compression, doing warmup run, all allocations are done before measurement.\n\n> Focusing on strings of\n> lengths of 1kB up to 16kB may be an idea of size, and it is important\n> to keep the same uncompressed strings for performance comparison.\nWe intentionally avoid using generated data, thus keep test files committed into git repo.\nAlso we check that decompressed data matches source of compression. All tests are done 5 times.\n\nWe use PG extension only for simplicity of deployment of benchmarks to our PG clusters.\n\n\nHere are some test results.\n\nCurrently we test on 4 payloads:\n1. WAL from cluster initialization\n2. 2 WALs from pgbench pgbench -i -s 10\n3. data file taken from pgbench -i -s 10\n\nWe use these decompressors:\n1. pglz_decompress_vanilla - taken from PG source code\n2. pglz_decompress_hacked - use sliced memcpy to imitate byte-by-byte pglz decompression\n3. pglz_decompress_hacked4, pglz_decompress_hacked8, pglz_decompress_hackedX - use memcpy if match is no less than X bytes. We need to determine best X, if this approach is used.\n\nI used three platforms:\n1. Server XEONE5-2660 SM/SYS1027RN3RF/10S2.5/1U/2P (2*INTEL XEON E5-2660/16*DDR3ECCREG/10*SAS-2.5) Under Ubuntu 14, PG 9.6.\n2. 
Desktop Intel(R) Core(TM) i7-6700 CPU @ 3.40GHz Ubuntu 18, PG 12devel\n3. Laptop MB Pro 15 2015 2.2 GHz Core i7 (I7-4770HQ) MacOS, PG 12devel\nOwners of AMD and ARM devices are welcome.\n\nServer results (less is better):\nNOTICE: 00000: Time to decompress one byte in ns:\nNOTICE: 00000: Payload 000000010000000000000001\nNOTICE: 00000: Decompressor pglz_decompress_hacked result 0.647235\nNOTICE: 00000: Decompressor pglz_decompress_hacked4 result 0.671029\nNOTICE: 00000: Decompressor pglz_decompress_hacked8 result 0.699949\nNOTICE: 00000: Decompressor pglz_decompress_hacked16 result 0.739586\nNOTICE: 00000: Decompressor pglz_decompress_hacked32 result 0.787926\nNOTICE: 00000: Decompressor pglz_decompress_vanilla result 1.147282\nNOTICE: 00000: Payload 000000010000000000000006\nNOTICE: 00000: Decompressor pglz_decompress_hacked result 0.201774\nNOTICE: 00000: Decompressor pglz_decompress_hacked4 result 0.211859\nNOTICE: 00000: Decompressor pglz_decompress_hacked8 result 0.212610\nNOTICE: 00000: Decompressor pglz_decompress_hacked16 result 0.214601\nNOTICE: 00000: Decompressor pglz_decompress_hacked32 result 0.221813\nNOTICE: 00000: Decompressor pglz_decompress_vanilla result 0.706005\nNOTICE: 00000: Payload 000000010000000000000008\nNOTICE: 00000: Decompressor pglz_decompress_hacked result 1.370132\nNOTICE: 00000: Decompressor pglz_decompress_hacked4 result 1.388991\nNOTICE: 00000: Decompressor pglz_decompress_hacked8 result 1.388502\nNOTICE: 00000: Decompressor pglz_decompress_hacked16 result 1.529455\nNOTICE: 00000: Decompressor pglz_decompress_hacked32 result 1.520813\nNOTICE: 00000: Decompressor pglz_decompress_vanilla result 1.433527\nNOTICE: 00000: Payload 16398\nNOTICE: 00000: Decompressor pglz_decompress_hacked result 0.606943\nNOTICE: 00000: Decompressor pglz_decompress_hacked4 result 0.623044\nNOTICE: 00000: Decompressor pglz_decompress_hacked8 result 0.624118\nNOTICE: 00000: Decompressor pglz_decompress_hacked16 result 0.620987\nNOTICE: 00000: Decompressor 
pglz_decompress_hacked32 result 0.621183\nNOTICE: 00000: Decompressor pglz_decompress_vanilla result 1.365318\n\nComment: pglz_decompress_hacked is unconditionally optimal. On most of cases it is 2x better than current implementation.\nOn 000000010000000000000008 it is only marginally better. pglz_decompress_hacked8 is few percents worse than pglz_decompress_hacked.\n\nDesktop results:\nNOTICE: Time to decompress one byte in ns:\nNOTICE: Payload 000000010000000000000001\nNOTICE: Decompressor pglz_decompress_hacked result 0.396454\nNOTICE: Decompressor pglz_decompress_hacked4 result 0.429249\nNOTICE: Decompressor pglz_decompress_hacked8 result 0.436413\nNOTICE: Decompressor pglz_decompress_hacked16 result 0.478077\nNOTICE: Decompressor pglz_decompress_hacked32 result 0.491488\nNOTICE: Decompressor pglz_decompress_vanilla result 0.695527\nNOTICE: Payload 000000010000000000000006\nNOTICE: Decompressor pglz_decompress_hacked result 0.110710\nNOTICE: Decompressor pglz_decompress_hacked4 result 0.115669\nNOTICE: Decompressor pglz_decompress_hacked8 result 0.127637\nNOTICE: Decompressor pglz_decompress_hacked16 result 0.120544\nNOTICE: Decompressor pglz_decompress_hacked32 result 0.117981\nNOTICE: Decompressor pglz_decompress_vanilla result 0.399446\nNOTICE: Payload 000000010000000000000008\nNOTICE: Decompressor pglz_decompress_hacked result 0.647402\nNOTICE: Decompressor pglz_decompress_hacked4 result 0.691891\nNOTICE: Decompressor pglz_decompress_hacked8 result 0.693834\nNOTICE: Decompressor pglz_decompress_hacked16 result 0.776815\nNOTICE: Decompressor pglz_decompress_hacked32 result 0.777960\nNOTICE: Decompressor pglz_decompress_vanilla result 0.721192\nNOTICE: Payload 16398\nNOTICE: Decompressor pglz_decompress_hacked result 0.337654\nNOTICE: Decompressor pglz_decompress_hacked4 result 0.355452\nNOTICE: Decompressor pglz_decompress_hacked8 result 0.351224\nNOTICE: Decompressor pglz_decompress_hacked16 result 0.362548\nNOTICE: Decompressor pglz_decompress_hacked32 
result 0.356456\nNOTICE: Decompressor pglz_decompress_vanilla result 0.837042\n\nComment: identical to Server results.\n\nLaptop results:\nNOTICE: Time to decompress one byte in ns:\nNOTICE: Payload 000000010000000000000001\nNOTICE: Decompressor pglz_decompress_hacked result 0.661469\nNOTICE: Decompressor pglz_decompress_hacked4 result 0.638366\nNOTICE: Decompressor pglz_decompress_hacked8 result 0.664377\nNOTICE: Decompressor pglz_decompress_hacked16 result 0.696135\nNOTICE: Decompressor pglz_decompress_hacked32 result 0.634825\nNOTICE: Decompressor pglz_decompress_vanilla result 0.676560\nNOTICE: Payload 000000010000000000000006\nNOTICE: Decompressor pglz_decompress_hacked result 0.213921\nNOTICE: Decompressor pglz_decompress_hacked4 result 0.224864\nNOTICE: Decompressor pglz_decompress_hacked8 result 0.229394\nNOTICE: Decompressor pglz_decompress_hacked16 result 0.218141\nNOTICE: Decompressor pglz_decompress_hacked32 result 0.220954\nNOTICE: Decompressor pglz_decompress_vanilla result 0.242412\nNOTICE: Payload 000000010000000000000008\nNOTICE: Decompressor pglz_decompress_hacked result 1.053417\nNOTICE: Decompressor pglz_decompress_hacked4 result 1.063704\nNOTICE: Decompressor pglz_decompress_hacked8 result 1.007211\nNOTICE: Decompressor pglz_decompress_hacked16 result 1.145089\nNOTICE: Decompressor pglz_decompress_hacked32 result 1.079702\nNOTICE: Decompressor pglz_decompress_vanilla result 1.051557\nNOTICE: Payload 16398\nNOTICE: Decompressor pglz_decompress_hacked result 0.251690\nNOTICE: Decompressor pglz_decompress_hacked4 result 0.268125\nNOTICE: Decompressor pglz_decompress_hacked8 result 0.269248\nNOTICE: Decompressor pglz_decompress_hacked16 result 0.277880\nNOTICE: Decompressor pglz_decompress_hacked32 result 0.270290\nNOTICE: Decompressor pglz_decompress_vanilla result 0.705652\n\nComment: decompress time on WAL segments is statistically indistinguishable between hacked and original versions. 
Hacked decompression of data file is 2x faster.\n\nWe are going to try these tests on cascade lake processors too.\n\nBest regards, Andrey Borodin.\n\n",
"msg_date": "Wed, 15 May 2019 15:06:22 +0500",
"msg_from": "Andrey Borodin <x4mmm@yandex-team.ru>",
"msg_from_op": true,
"msg_subject": "Re: pglz performance"
},
{
"msg_contents": "> On 15 May 2019, at 15:06, Andrey Borodin <x4mmm@yandex-team.ru> wrote:\n> \n> Owners of AMD and ARM devices are welcome.\n\nYandex hardware RND guys gave me ARM server and Power9 server. They are looking for AMD and some new Intel boxes.\n\nMeanwhile I made some enhancements to the test suite:\n1. I've added Shakespeare payload: concatenation of works of this prominent poet.\n2. For each payload compute \"sliced time\" - time to decompress payload if it was sliced by 2Kb pieces or 8Kb pieces.\n3. For each decompressor we compute \"score\": (sum of time to decompress each payload, each payload sliced by 2Kb and 8Kb) * 5 times\n\nI've attached full test logs, meanwhile here are results for different platforms.\n\nIntel Server\nNOTICE: 00000: Decompressor pglz_decompress_hacked result 10.346763\nNOTICE: 00000: Decompressor pglz_decompress_hacked8 result 11.192078\nNOTICE: 00000: Decompressor pglz_decompress_hacked16 result 11.957727\nNOTICE: 00000: Decompressor pglz_decompress_vanilla result 14.262256\n\nARM Server\nNOTICE: Decompressor pglz_decompress_hacked result 12.966668\nNOTICE: Decompressor pglz_decompress_hacked8 result 13.004935\nNOTICE: Decompressor pglz_decompress_hacked16 result 13.043015\nNOTICE: Decompressor pglz_decompress_vanilla result 18.239242\n\nPower9 Server\nNOTICE: Decompressor pglz_decompress_hacked result 10.992974\nNOTICE: Decompressor pglz_decompress_hacked8 result 11.747443\nNOTICE: Decompressor pglz_decompress_hacked16 result 11.026342\nNOTICE: Decompressor pglz_decompress_vanilla result 16.375315\n\nIntel laptop\nNOTICE: Decompressor pglz_decompress_hacked result 9.445808\nNOTICE: Decompressor pglz_decompress_hacked8 result 9.105360\nNOTICE: Decompressor pglz_decompress_hacked16 result 9.621833\nNOTICE: Decompressor pglz_decompress_vanilla result 10.661968\n\nFrom these results pglz_decompress_hacked looks best.\n\nBest regards, Andrey Borodin.",
"msg_date": "Thu, 16 May 2019 22:13:22 +0500",
"msg_from": "Andrey Borodin <x4mmm@yandex-team.ru>",
"msg_from_op": true,
"msg_subject": "Re: pglz performance"
},
{
"msg_contents": "On Thu, May 16, 2019 at 10:13:22PM +0500, Andrey Borodin wrote:\n> Meanwhile I made some enhancements to test suit:\n> Intel Server\n> NOTICE: 00000: Decompressor pglz_decompress_hacked result 10.346763\n> NOTICE: 00000: Decompressor pglz_decompress_hacked8 result 11.192078\n> NOTICE: 00000: Decompressor pglz_decompress_hacked16 result 11.957727\n> NOTICE: 00000: Decompressor pglz_decompress_vanilla result 14.262256\n> \n> ARM Server\n> NOTICE: Decompressor pglz_decompress_hacked result 12.966668\n> NOTICE: Decompressor pglz_decompress_hacked8 result 13.004935\n> NOTICE: Decompressor pglz_decompress_hacked16 result 13.043015\n> NOTICE: Decompressor pglz_decompress_vanilla result 18.239242\n> \n> Power9 Server\n> NOTICE: Decompressor pglz_decompress_hacked result 10.992974\n> NOTICE: Decompressor pglz_decompress_hacked8 result 11.747443\n> NOTICE: Decompressor pglz_decompress_hacked16 result 11.026342\n> NOTICE: Decompressor pglz_decompress_vanilla result 16.375315\n> \n> Intel laptop\n> NOTICE: Decompressor pglz_decompress_hacked result 9.445808\n> NOTICE: Decompressor pglz_decompress_hacked8 result 9.105360\n> NOTICE: Decompressor pglz_decompress_hacked16 result 9.621833\n> NOTICE: Decompressor pglz_decompress_vanilla result 10.661968\n> \n> From these results pglz_decompress_hacked looks best.\n\nThat's nice.\n\nFrom the numbers you are presenting here, all of them are much better\nthan the original, and there is not much difference between any of the\npatched versions. Having a 20%~30% improvement with a patch is very\nnice.\n\nAfter that comes the simplicity and the future maintainability of what\nis proposed. I am not much into accepting a patch which has a 1%~2%\nimpact for some hardwares and makes pglz much more complex and harder\nto understand. But I am really eager to see a patch with at least a\n10% improvement which remains simple, even more if it simplifies the\nlogic used in pglz.\n--\nMichael",
"msg_date": "Fri, 17 May 2019 10:44:24 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: pglz performance"
},
{
"msg_contents": "> 17 мая 2019 г., в 6:44, Michael Paquier <michael@paquier.xyz> написал(а):\n> \n> That's nice.\n> \n> From the numbers you are presenting here, all of them are much better\n> than the original, and there is not much difference between any of the\n> patched versions. Having a 20%~30% improvement with a patch is very\n> nice.\n> \n> After that comes the simplicity and the future maintainability of what\n> is proposed. I am not much into accepting a patch which has a 1%~2%\n> impact for some hardwares and makes pglz much more complex and harder\n> to understand. But I am really eager to see a patch with at least a\n> 10% improvement which remains simple, even more if it simplifies the\n> logic used in pglz.\n\nHere are patches for both winning versions. I'll place them on CF.\nMy gut feeling is pglz_decompress_hacked8 should be better, but on most architectures benchmarks show opposite.\n\n\nBest regards, Andrey Borodin.",
"msg_date": "Fri, 17 May 2019 15:59:58 +0500",
"msg_from": "Andrey Borodin <x4mmm@yandex-team.ru>",
"msg_from_op": true,
"msg_subject": "Re: pglz performance"
},
{
"msg_contents": "On 16. 05. 19 19:13, Andrey Borodin wrote:\n>\n>> 15 мая 2019 г., в 15:06, Andrey Borodin <x4mmm@yandex-team.ru> написал(а):\n>>\n>> Owners of AMD and ARM devices are welcome.\n\nI've tested according to instructions at the test repo\nhttps://github.com/x4m/test_pglz\n\nTest_pglz is at a97f63b and postgres at 6ba500.\n\nHardware is desktop AMD Ryzen 5 2600, 32GB RAM\n\nDecompressor score (summ of all times):\n\nNOTICE: Decompressor pglz_decompress_hacked result 6.988909\nNOTICE: Decompressor pglz_decompress_hacked8 result 7.562619\nNOTICE: Decompressor pglz_decompress_hacked16 result 8.316957\nNOTICE: Decompressor pglz_decompress_vanilla result 10.725826\n\n\nAttached is the full test run, if needed.\n\nKind regards,\n\nGasper\n\n> Yandex hardware RND guys gave me ARM server and Power9 server. They are looking for AMD and some new Intel boxes.\n>\n> Meanwhile I made some enhancements to test suit:\n> 1. I've added Shakespeare payload: concatenation of works of this prominent poet.\n> 2. For each payload compute \"sliced time\" - time to decompress payload if it was sliced by 2Kb pieces or 8Kb pieces.\n> 3. 
For each decompressor we compute \"score\": (sum of time to decompress each payload, each payload sliced by 2Kb and 8Kb) * 5 times\n>\n> I've attached full test logs, meanwhile here's results for different platforms.\n>\n> Intel Server\n> NOTICE: 00000: Decompressor pglz_decompress_hacked result 10.346763\n> NOTICE: 00000: Decompressor pglz_decompress_hacked8 result 11.192078\n> NOTICE: 00000: Decompressor pglz_decompress_hacked16 result 11.957727\n> NOTICE: 00000: Decompressor pglz_decompress_vanilla result 14.262256\n>\n> ARM Server\n> NOTICE: Decompressor pglz_decompress_hacked result 12.966668\n> NOTICE: Decompressor pglz_decompress_hacked8 result 13.004935\n> NOTICE: Decompressor pglz_decompress_hacked16 result 13.043015\n> NOTICE: Decompressor pglz_decompress_vanilla result 18.239242\n>\n> Power9 Server\n> NOTICE: Decompressor pglz_decompress_hacked result 10.992974\n> NOTICE: Decompressor pglz_decompress_hacked8 result 11.747443\n> NOTICE: Decompressor pglz_decompress_hacked16 result 11.026342\n> NOTICE: Decompressor pglz_decompress_vanilla result 16.375315\n>\n> Intel laptop\n> NOTICE: Decompressor pglz_decompress_hacked result 9.445808\n> NOTICE: Decompressor pglz_decompress_hacked8 result 9.105360\n> NOTICE: Decompressor pglz_decompress_hacked16 result 9.621833\n> NOTICE: Decompressor pglz_decompress_vanilla result 10.661968\n>\n> From these results pglz_decompress_hacked looks best.\n>\n> Best regards, Andrey Borodin.\n>",
"msg_date": "Fri, 17 May 2019 15:40:57 +0200",
"msg_from": "Gasper Zejn <zejn@owca.info>",
"msg_from_op": false,
"msg_subject": "Re: pglz performance"
},
{
"msg_contents": "\n\n> 17 мая 2019 г., в 18:40, Gasper Zejn <zejn@owca.info> написал(а):\n> \n> I've tested according to instructions at the test repo\n> https://github.com/x4m/test_pglz\n> \n> Test_pglz is at a97f63b and postgres at 6ba500.\n> \n> Hardware is desktop AMD Ryzen 5 2600, 32GB RAM\n> \n> Decompressor score (summ of all times):\n> \n> NOTICE: Decompressor pglz_decompress_hacked result 6.988909\n> NOTICE: Decompressor pglz_decompress_hacked8 result 7.562619\n> NOTICE: Decompressor pglz_decompress_hacked16 result 8.316957\n> NOTICE: Decompressor pglz_decompress_vanilla result 10.725826\n\nThanks, Gasper! Basically we observe same 0.65 time reduction here.\n\nThat's very good that we have independent scores.\n\nI'm still somewhat not sure that score is fair, on payload 000000010000000000000008 we have vanilla decompression sometimes slower than hacked by few percents. And this is especially visible on AMD. Degradation for 000000010000000000000008 sliced by 8Kb reaches 10%\n\nI think this is because 000000010000000000000008 have highest entropy.It is almost random and matches are very short, but present.\n000000010000000000000008 \nEntropy = 4.360546 bits per byte.\n000000010000000000000006\nEntropy = 1.450059 bits per byte.\n000000010000000000000001 \nEntropy = 2.944235 bits per byte.\nshakespeare.txt \nEntropy = 3.603659 bits per byte\n16398 \nEntropy = 1.897640 bits per byte.\n\nBest regards, Andrey Borodin.\n\n",
"msg_date": "Sat, 18 May 2019 11:44:55 +0500",
"msg_from": "Andrey Borodin <x4mmm@yandex-team.ru>",
"msg_from_op": true,
"msg_subject": "Re: pglz performance"
},
{
"msg_contents": "> 18 мая 2019 г., в 11:44, Andrey Borodin <x4mmm@yandex-team.ru> написал(а):\n> \nHi! \nHere's rebased version of patches.\n\nBest regards, Andrey Borodin.",
"msg_date": "Mon, 24 Jun 2019 13:44:21 +0500",
"msg_from": "Andrey Borodin <x4mmm@yandex-team.ru>",
"msg_from_op": true,
"msg_subject": "Re: pglz performance"
},
{
"msg_contents": "> 13 мая 2019 г., в 12:14, Michael Paquier <michael@paquier.xyz> написал(а):\n> \n> Decompression can matter a lot for mostly-read workloads and\n> compression can become a bottleneck for heavy-insert loads, so\n> improving compression or decompression should be two separate\n> problems, not two problems linked. Any improvement in one or the\n> other, or even both, is nice to have.\n\nHere's patch hacked by Vladimir for compression.\n\nKey differences (as far as I see, maybe Vladimir will post more complete list of optimizations):\n1. Use functions instead of macro-functions: not surprisingly it's easier to optimize them and provide less constraints for compiler to optimize.\n2. More compact hash table: use indexes instead of pointers.\n3. More robust segment comparison: like memcmp, but return index of first different byte\n\nIn weighted mix of different data (same as for compression), overall speedup is x1.43 on my machine.\n\nCurrent implementation is integrated into test_pglz suit for benchmarking purposes[0].\n\nBest regards, Andrey Borodin.\n\n[0] https://github.com/x4m/test_pglz",
"msg_date": "Thu, 27 Jun 2019 23:33:16 +0500",
"msg_from": "Andrey Borodin <x4mmm@yandex-team.ru>",
"msg_from_op": true,
"msg_subject": "Re: pglz performance"
},
{
"msg_contents": "\n\nOn 27.06.2019 21:33, Andrey Borodin wrote:\n>\n>> 13 мая 2019 г., в 12:14, Michael Paquier <michael@paquier.xyz> написал(а):\n>>\n>> Decompression can matter a lot for mostly-read workloads and\n>> compression can become a bottleneck for heavy-insert loads, so\n>> improving compression or decompression should be two separate\n>> problems, not two problems linked. Any improvement in one or the\n>> other, or even both, is nice to have.\n> Here's patch hacked by Vladimir for compression.\n>\n> Key differences (as far as I see, maybe Vladimir will post more complete list of optimizations):\n> 1. Use functions instead of macro-functions: not surprisingly it's easier to optimize them and provide less constraints for compiler to optimize.\n> 2. More compact hash table: use indexes instead of pointers.\n> 3. More robust segment comparison: like memcmp, but return index of first different byte\n>\n> In weighted mix of different data (same as for compression), overall speedup is x1.43 on my machine.\n>\n> Current implementation is integrated into test_pglz suit for benchmarking purposes[0].\n>\n> Best regards, Andrey Borodin.\n>\n> [0] https://github.com/x4m/test_pglz\n\nIt takes me some time to understand that your memcpy optimization is \ncorrect;)\nI have tested different ways of optimizing this fragment of code, but \nfailed tooutperform your implementation!\nResults at my computer is simlar with yours:\n\nDecompressor score (summ of all times):\nNOTICE: Decompressor pglz_decompress_hacked result 6.627355\nNOTICE: Decompressor pglz_decompress_hacked_unrolled result 7.497114\nNOTICE: Decompressor pglz_decompress_hacked8 result 7.412944\nNOTICE: Decompressor pglz_decompress_hacked16 result 7.792978\nNOTICE: Decompressor pglz_decompress_vanilla result 10.652603\n\nCompressor score (summ of all times):\nNOTICE: Compressor pglz_compress_vanilla result 116.970005\nNOTICE: Compressor pglz_compress_hacked result 89.706105\n\n\nBut ... 
below are results for lz4:\n\nDecompressor score (summ of all times):\nNOTICE: Decompressor lz4_decompress result 3.660066\nCompressor score (summ of all times):\nNOTICE: Compressor lz4_compress result 10.288594\n\nThere is 2 times advantage in decompress speed and 10 times advantage in \ncompress speed.\nSo may be instead of \"hacking\" pglz algorithm we should better switch to \nlz4?\n\n\n-- \nKonstantin Knizhnik\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company\n\n\n\n",
"msg_date": "Fri, 2 Aug 2019 16:45:43 +0300",
"msg_from": "Konstantin Knizhnik <k.knizhnik@postgrespro.ru>",
"msg_from_op": false,
"msg_subject": "Re: pglz performance"
},
{
"msg_contents": "On Fri, Aug 02, 2019 at 04:45:43PM +0300, Konstantin Knizhnik wrote:\n>\n>\n>On 27.06.2019 21:33, Andrey Borodin wrote:\n>>\n>>>13 мая 2019 г., в 12:14, Michael Paquier <michael@paquier.xyz> написал(а):\n>>>\n>>>Decompression can matter a lot for mostly-read workloads and\n>>>compression can become a bottleneck for heavy-insert loads, so\n>>>improving compression or decompression should be two separate\n>>>problems, not two problems linked. Any improvement in one or the\n>>>other, or even both, is nice to have.\n>>Here's patch hacked by Vladimir for compression.\n>>\n>>Key differences (as far as I see, maybe Vladimir will post more complete list of optimizations):\n>>1. Use functions instead of macro-functions: not surprisingly it's easier to optimize them and provide less constraints for compiler to optimize.\n>>2. More compact hash table: use indexes instead of pointers.\n>>3. More robust segment comparison: like memcmp, but return index of first different byte\n>>\n>>In weighted mix of different data (same as for compression), overall speedup is x1.43 on my machine.\n>>\n>>Current implementation is integrated into test_pglz suit for benchmarking purposes[0].\n>>\n>>Best regards, Andrey Borodin.\n>>\n>>[0] https://github.com/x4m/test_pglz\n>\n>It takes me some time to understand that your memcpy optimization is \n>correct;)\n>I have tested different ways of optimizing this fragment of code, but \n>failed tooutperform your implementation!\n>Results at my computer is simlar with yours:\n>\n>Decompressor score (summ of all times):\n>NOTICE: Decompressor pglz_decompress_hacked result 6.627355\n>NOTICE: Decompressor pglz_decompress_hacked_unrolled result 7.497114\n>NOTICE: Decompressor pglz_decompress_hacked8 result 7.412944\n>NOTICE: Decompressor pglz_decompress_hacked16 result 7.792978\n>NOTICE: Decompressor pglz_decompress_vanilla result 10.652603\n>\n>Compressor score (summ of all times):\n>NOTICE: Compressor pglz_compress_vanilla result 
116.970005\n>NOTICE: Compressor pglz_compress_hacked result 89.706105\n>\n>\n>But ... below are results for lz4:\n>\n>Decompressor score (summ of all times):\n>NOTICE: Decompressor lz4_decompress result 3.660066\n>Compressor score (summ of all times):\n>NOTICE: Compressor lz4_compress result 10.288594\n>\n>There is 2 times advantage in decompress speed and 10 times advantage \n>in compress speed.\n>So may be instead of \"hacking\" pglz algorithm we should better switch \n>to lz4?\n>\n\nI think we should just bite the bullet and add initdb option to pick\ncompression algorithm. That's been discussed repeatedly, but we never\nended up actually doing that. See for example [1].\n\nIf there's anyone willing to put some effort into getting this feature\nover the line, I'm willing to do reviews & commit. It's a seemingly\nsmall change with rather insane potential impact.\n\nBut even if we end up doing that, it still makes sense to optimize the\nhell out of pglz, because existing systems will still use that\n(pg_upgrade can't switch from one compression algorithm to another).\n\nregards\n\n[1] https://www.postgresql.org/message-id/flat/55341569.1090107%402ndquadrant.com\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services \n\n\n",
"msg_date": "Fri, 2 Aug 2019 16:43:45 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: pglz performance"
},
{
"msg_contents": "Thanks for looking into this!\n\n> 2 авг. 2019 г., в 19:43, Tomas Vondra <tomas.vondra@2ndquadrant.com> написал(а):\n> \n> On Fri, Aug 02, 2019 at 04:45:43PM +0300, Konstantin Knizhnik wrote:\n>> \n>> It takes me some time to understand that your memcpy optimization is correct;)\nSeems that comments are not explanatory enough... will try to fix.\n\n>> I have tested different ways of optimizing this fragment of code, but failed tooutperform your implementation!\nJFYI we tried optimizations with memcpy with const size (optimized into assembly instead of call), unrolling literal loop and some others. All these did not work better.\n\n>> But ... below are results for lz4:\n>> \n>> Decompressor score (summ of all times):\n>> NOTICE: Decompressor lz4_decompress result 3.660066\n>> Compressor score (summ of all times):\n>> NOTICE: Compressor lz4_compress result 10.288594\n>> \n>> There is 2 times advantage in decompress speed and 10 times advantage in compress speed.\n>> So may be instead of \"hacking\" pglz algorithm we should better switch to lz4?\n>> \n> \n> I think we should just bite the bullet and add initdb option to pick\n> compression algorithm. That's been discussed repeatedly, but we never\n> ended up actually doing that. See for example [1].\n> \n> If there's anyone willing to put some effort into getting this feature\n> over the line, I'm willing to do reviews & commit. It's a seemingly\n> small change with rather insane potential impact.\n> \n> But even if we end up doing that, it still makes sense to optimize the\n> hell out of pglz, because existing systems will still use that\n> (pg_upgrade can't switch from one compression algorithm to another).\n\nWe have some kind of \"roadmap\" of \"extensible pglz\". We plan to provide implementation on Novembers CF.\n\nCurrently, pglz starts with empty cache map: there is no prior 4k bytes before start. 
We can add imaginary prefix to any data with common substrings: this will enhance compression ratio.\nIt is hard to decide on training data set for this \"common prefix\". So we want to produce extension with aggregate function which produces some \"adapted common prefix\" from users's data.\nThen we can \"reserve\" few negative bytes for \"decompression commands\". This command can instruct database on which common prefix to use.\nBut also system command can say \"invoke decompression from extension\".\n\nThus, user will be able to train database compression on his data and substitute pglz compression with custom compression method seamlessly.\n\nThis will make hard-choosen compression unneeded, but seems overly hacky. But there will be no need to have lz4, zstd, brotli, lzma and others in core. Why not provide e.g. \"time series compression\"? Or \"DNA compression\"? Whatever gun user wants for his foot.\n\nBest regards, Andrey Borodin.\n\n",
"msg_date": "Fri, 2 Aug 2019 20:40:51 +0500",
"msg_from": "Andrey Borodin <x4mmm@yandex-team.ru>",
"msg_from_op": true,
"msg_subject": "Re: pglz performance"
},
{
"msg_contents": "Hi,\n\nOn 2019-08-02 20:40:51 +0500, Andrey Borodin wrote:\n> We have some kind of \"roadmap\" of \"extensible pglz\". We plan to provide implementation on Novembers CF.\n\nI don't understand why it's a good idea to improve the compression side\nof pglz. There's plenty other people that spent a lot of time developing\nbetter compression algorithms.\n\n\n> Currently, pglz starts with empty cache map: there is no prior 4k bytes before start. We can add imaginary prefix to any data with common substrings: this will enhance compression ratio.\n> It is hard to decide on training data set for this \"common prefix\". So we want to produce extension with aggregate function which produces some \"adapted common prefix\" from users's data.\n> Then we can \"reserve\" few negative bytes for \"decompression commands\". This command can instruct database on which common prefix to use.\n> But also system command can say \"invoke decompression from extension\".\n> \n> Thus, user will be able to train database compression on his data and substitute pglz compression with custom compression method seamlessly.\n> \n> This will make hard-choosen compression unneeded, but seems overly hacky. But there will be no need to have lz4, zstd, brotli, lzma and others in core. Why not provide e.g. \"time series compression\"? Or \"DNA compression\"? Whatever gun user wants for his foot.\n\nI think this is way too complicated, and will provide not particularly\nmuch benefit for the majority users.\n\nIn fact, I'll argue that we should flat out reject any such patch until\nwe have at least one decent default compression algorithm in\ncore. You're trying to work around a poor compression algorithm with\ncomplicated dictionary improvement, that require user interaction, and\nonly will work in a relatively small subset of the cases, and will very\noften increase compression times.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Fri, 2 Aug 2019 09:39:48 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: pglz performance"
},
{
"msg_contents": "On Fri, Aug 02, 2019 at 09:39:48AM -0700, Andres Freund wrote:\n>Hi,\n>\n>On 2019-08-02 20:40:51 +0500, Andrey Borodin wrote:\n>> We have some kind of \"roadmap\" of \"extensible pglz\". We plan to\n>> provide implementation on Novembers CF.\n>\n>I don't understand why it's a good idea to improve the compression side\n>of pglz. There's plenty other people that spent a lot of time\n>developing better compression algorithms.\n>\n\nIsn't it beneficial for existing systems, that will be stuck with pglz\neven if we end up adding other algorithms?\n\n>\n>> Currently, pglz starts with empty cache map: there is no prior 4k\n>> bytes before start. We can add imaginary prefix to any data with\n>> common substrings: this will enhance compression ratio. It is hard\n>> to decide on training data set for this \"common prefix\". So we want\n>> to produce extension with aggregate function which produces some\n>> \"adapted common prefix\" from users's data. Then we can \"reserve\" few\n>> negative bytes for \"decompression commands\". This command can\n>> instruct database on which common prefix to use. But also system\n>> command can say \"invoke decompression from extension\".\n>>\n>> Thus, user will be able to train database compression on his data and\n>> substitute pglz compression with custom compression method\n>> seamlessly.\n>>\n>> This will make hard-choosen compression unneeded, but seems overly\n>> hacky. But there will be no need to have lz4, zstd, brotli, lzma and\n>> others in core. Why not provide e.g. \"time series compression\"? Or\n>> \"DNA compression\"? Whatever gun user wants for his foot.\n>\n>I think this is way too complicated, and will provide not particularly\n>much benefit for the majority users.\n>\n\nI agree with this. I do see value in the feature, but probably not as a\ndrop-in replacement for the default compression algorithm. 
I'd compare\nit to the \"custom compression methods\" patch that was submitted some\ntime ago.\n\n>In fact, I'll argue that we should flat out reject any such patch until\n>we have at least one decent default compression algorithm in core.\n>You're trying to work around a poor compression algorithm with\n>complicated dictionary improvement, that require user interaction, and\n>only will work in a relatively small subset of the cases, and will very\n>often increase compression times.\n>\n\nI wouldn't be so strict, I guess. But I do agree that an algorithm which \nrequires additional steps (training, ...) is unlikely to be a good\ncandidate for the default instance compression algorithm.\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services \n\n\n",
"msg_date": "Fri, 2 Aug 2019 19:00:39 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: pglz performance"
},
{
"msg_contents": "Hi,\n\nOn 2019-08-02 19:00:39 +0200, Tomas Vondra wrote:\n> On Fri, Aug 02, 2019 at 09:39:48AM -0700, Andres Freund wrote:\n> > Hi,\n> > \n> > On 2019-08-02 20:40:51 +0500, Andrey Borodin wrote:\n> > > We have some kind of \"roadmap\" of \"extensible pglz\". We plan to\n> > > provide implementation on Novembers CF.\n> > \n> > I don't understand why it's a good idea to improve the compression side\n> > of pglz. There's plenty other people that spent a lot of time\n> > developing better compression algorithms.\n> > \n> \n> Isn't it beneficial for existing systems, that will be stuck with pglz\n> even if we end up adding other algorithms?\n\nWhy would they be stuck continuing to *compress* with pglz? As we\nfully retoast on write anyway we can just gradually switch over to the\nbetter algorithm. Decompression speed is another story, of course.\n\n\nHere's what I had a few years back:\n\nhttps://www.postgresql.org/message-id/20130621000900.GA12425%40alap2.anarazel.de\nsee also\nhttps://www.postgresql.org/message-id/20130605150144.GD28067%40alap2.anarazel.de\n\nI think we should refresh something like that patch, and:\n- make the compression algorithm GUC an enum, rename\n- add --with-system-lz4\n- obviously refresh the copy of lz4\n- drop snappy\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Fri, 2 Aug 2019 10:12:58 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: pglz performance"
},
{
"msg_contents": "On Fri, Aug 02, 2019 at 10:12:58AM -0700, Andres Freund wrote:\n>Hi,\n>\n>On 2019-08-02 19:00:39 +0200, Tomas Vondra wrote:\n>> On Fri, Aug 02, 2019 at 09:39:48AM -0700, Andres Freund wrote:\n>> > Hi,\n>> >\n>> > On 2019-08-02 20:40:51 +0500, Andrey Borodin wrote:\n>> > > We have some kind of \"roadmap\" of \"extensible pglz\". We plan to\n>> > > provide implementation on Novembers CF.\n>> >\n>> > I don't understand why it's a good idea to improve the compression side\n>> > of pglz. There's plenty other people that spent a lot of time\n>> > developing better compression algorithms.\n>> >\n>>\n>> Isn't it beneficial for existing systems, that will be stuck with pglz\n>> even if we end up adding other algorithms?\n>\n>Why would they be stuck continuing to *compress* with pglz? As we\n>fully retoast on write anyway we can just gradually switch over to the\n>better algorithm. Decompression speed is another story, of course.\n>\n\nHmmm, I don't remember the details of those patches so I didn't realize\nit allows incremental recompression. If that's possible, that would mean \nexisting systems can start using it. 
Which is good.\n\nAnother question is whether we'd actually want to include the code in\ncore directly, or use system libraries (and if some packagers might\ndecide to disable that, for whatever reason).\n\nBut yeah, I agree you may have a point about optimizing pglz compression.\n\n>\n>Here's what I had a few years back:\n>\n>https://www.postgresql.org/message-id/20130621000900.GA12425%40alap2.anarazel.de\n>see also\n>https://www.postgresql.org/message-id/20130605150144.GD28067%40alap2.anarazel.de\n>\n>I think we should refresh something like that patch, and:\n>- make the compression algorithm GUC an enum, rename\n>- add --with-system-lz4\n>- obviously refresh the copy of lz4\n>- drop snappy\n>\n\nThat's a reasonable plan, I guess.\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services \n\n\n",
"msg_date": "Fri, 2 Aug 2019 19:52:39 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: pglz performance"
},
{
"msg_contents": "Hi,\n\nOn 2019-08-02 19:52:39 +0200, Tomas Vondra wrote:\n> Hmmm, I don't remember the details of those patches so I didn't realize\n> it allows incremental recompression. If that's possible, that would mean\n> existing systems can start using it. Which is good.\n\nThat depends on what do you mean by \"incremental\"? A single toasted\ndatum can only have one compression type, because we only update them\nall in one anyway. But different datums can be compressed differently.\n\n\n> Another question is whether we'd actually want to include the code in\n> core directly, or use system libraries (and if some packagers might\n> decide to disable that, for whatever reason).\n\nI'd personally say we should have an included version, and a\n--with-system-... flag that uses the system one.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Fri, 2 Aug 2019 11:20:03 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: pglz performance"
},
{
"msg_contents": "On Fri, Aug 02, 2019 at 11:20:03AM -0700, Andres Freund wrote:\n>Hi,\n>\n>On 2019-08-02 19:52:39 +0200, Tomas Vondra wrote:\n>> Hmmm, I don't remember the details of those patches so I didn't realize\n>> it allows incremental recompression. If that's possible, that would mean\n>> existing systems can start using it. Which is good.\n>\n>That depends on what do you mean by \"incremental\"? A single toasted\n>datum can only have one compression type, because we only update them\n>all in one anyway. But different datums can be compressed differently.\n>\n\nI meant different toast values using different compression algorithm,\nsorry for the confusion.\n\n>\n>> Another question is whether we'd actually want to include the code in\n>> core directly, or use system libraries (and if some packagers might\n>> decide to disable that, for whatever reason).\n>\n>I'd personally say we should have an included version, and a\n>--with-system-... flag that uses the system one.\n>\n\nOK. I'd say to require a system library, but that's a minor detail.\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services \n\n\n",
"msg_date": "Fri, 2 Aug 2019 21:48:59 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: pglz performance"
},
{
"msg_contents": "\n\nOn 02.08.2019 21:20, Andres Freund wrote:\n> Another question is whether we'd actually want to include the code in\n>> core directly, or use system libraries (and if some packagers might\n>> decide to disable that, for whatever reason).\n> I'd personally say we should have an included version, and a\n> --with-system-... flag that uses the system one.\n+1\n\n\n\n",
"msg_date": "Sat, 3 Aug 2019 08:37:55 +0300",
"msg_from": "Konstantin Knizhnik <k.knizhnik@postgrespro.ru>",
"msg_from_op": false,
"msg_subject": "Re: pglz performance"
},
{
"msg_contents": "Hi,\n\nOn 02/08/2019 21:48, Tomas Vondra wrote:\n> On Fri, Aug 02, 2019 at 11:20:03AM -0700, Andres Freund wrote:\n> \n>>\n>>> Another question is whether we'd actually want to include the code in\n>>> core directly, or use system libraries (and if some packagers might\n>>> decide to disable that, for whatever reason).\n>>\n>> I'd personally say we should have an included version, and a\n>> --with-system-... flag that uses the system one.\n>>\n> \n> OK. I'd say to require a system library, but that's a minor detail.\n> \n\nSame here.\n\nJust so that we don't idly talk, what do you think about the attached?\nIt:\n- adds new GUC compression_algorithm with possible values of pglz \n(default) and lz4 (if lz4 is compiled in), requires SIGHUP\n- adds --with-lz4 configure option (default yes, so the configure option \nis actually --without-lz4) that enables the lz4, it's using system library\n- uses the compression_algorithm for both TOAST and WAL compression (if on)\n- supports slicing for lz4 as well (pglz was already supported)\n- supports reading old TOAST values\n- adds 1 byte header to the compressed data where we currently store the \nalgorithm kind, that leaves us with 254 more to add :) (that's an extra \noverhead compared to the current state)\n- changes the rawsize in TOAST header to 31 bits via bit packing\n- uses the extra bit to differentiate between old and new format\n- supports reading from table which has different rows stored with \ndifferent algorithm (so that the GUC itself can be freely changed)\n\nSimple docs and a TAP test included.\n\nI did some basic performance testing (it's not really my thing though, \nso I would appreciate if somebody did more).\nI get about 7x perf improvement on data load with lz4 compared to pglz \non my dataset but strangely only tiny decompression improvement. 
Perhaps \nmore importantly, I also did before-patch and after-patch tests with pglz \nand the performance difference with my data set was <1%.\n\nNote that this will just link against lz4; it does not add lz4 into the \nPostgreSQL code-base.\n\nThe issues I know of:\n- the pg_decompress function really ought to throw an error in the default \nbranch, but that file is also used in the front-end so I'm not sure how to do that\n- the TAP test probably does not work with all possible configurations \n(but that's why it needs to be set in PG_TEST_EXTRA, like for example ssl)\n- we don't really have any automated test for reading the old TOAST format; \nno idea how to do that\n- I expect my changes to configure.in are not the greatest as I have \npretty much zero experience with autoconf\n\n-- \nPetr Jelinek\n2ndQuadrant - PostgreSQL Solutions for the Enterprise\nhttps://www.2ndQuadrant.com/",
"msg_date": "Sun, 4 Aug 2019 02:41:24 +0200",
"msg_from": "Petr Jelinek <petr@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: pglz performance"
},
{
"msg_contents": "\n\n> 2 авг. 2019 г., в 21:39, Andres Freund <andres@anarazel.de> написал(а):\n> \n> On 2019-08-02 20:40:51 +0500, Andrey Borodin wrote:\n>> We have some kind of \"roadmap\" of \"extensible pglz\". We plan to provide implementation on Novembers CF.\n> \n> I don't understand why it's a good idea to improve the compression side\n> of pglz. There's plenty other people that spent a lot of time developing\n> better compression algorithms.\nImproving compression side of pglz has two different projects:\n1. Faster compression with less code and same compression ratio (patch in this thread).\n2. Better compression ratio with at least same compression speed of uncompressed values.\nWhy I want to do patch for 2? Because it's interesting.\nWill 1 or 2 be reviewed or committed? I have no idea.\nWill many users benefit from 1 or 2? Yes, clearly. Unless we force everyone to stop compressing with pglz.\n\n>> Currently, pglz starts with empty cache map: there is no prior 4k bytes before start. We can add imaginary prefix to any data with common substrings: this will enhance compression ratio.\n>> It is hard to decide on training data set for this \"common prefix\". So we want to produce extension with aggregate function which produces some \"adapted common prefix\" from users's data.\n>> Then we can \"reserve\" few negative bytes for \"decompression commands\". This command can instruct database on which common prefix to use.\n>> But also system command can say \"invoke decompression from extension\".\n>> \n>> Thus, user will be able to train database compression on his data and substitute pglz compression with custom compression method seamlessly.\n>> \n>> This will make hard-choosen compression unneeded, but seems overly hacky. But there will be no need to have lz4, zstd, brotli, lzma and others in core. Why not provide e.g. \"time series compression\"? Or \"DNA compression\"? 
Whatever gun user wants for his foot.\n> \n> I think this is way too complicated, and will provide not particularly\n> much benefit for the majority users.\n> \n> In fact, I'll argue that we should flat out reject any such patch until\n> we have at least one decent default compression algorithm in\n> core. You're trying to work around a poor compression algorithm with\n> complicated dictionary improvement\nOK. The idea of something plugged into pglz seemed odd even to me.\nBut looks like it restarted lz4 discussion :)\n\n> , that require user interaction, and\n> only will work in a relatively small subset of the cases, and will very\n> often increase compression times.\nNo, surely, if implementation of \"common prefix\" will increase compression times I will not even post a patch.\nBTW, lz4 also supports \"common prefix\", let's do that too?\nHere's link on Zstd dictionary builder, but it is compatible with lz4\nhttps://github.com/facebook/zstd#the-case-for-small-data-compression\nWe actually have small datums.\n\n> On Aug 4, 2019, at 5:41, Petr Jelinek <petr@2ndquadrant.com> wrote:
\n> \n> Just so that we don't idly talk, what do you think about the attached?\n> It:\n> - adds new GUC compression_algorithm with possible values of pglz (default) and lz4 (if lz4 is compiled in), requires SIGHUP\n> - adds --with-lz4 configure option (default yes, so the configure option is actually --without-lz4) that enables the lz4, it's using system library\n> - uses the compression_algorithm for both TOAST and WAL compression (if on)\n> - supports slicing for lz4 as well (pglz was already supported)\n> - supports reading old TOAST values\n> - adds 1 byte header to the compressed data where we currently store the algorithm kind, that leaves us with 254 more to add :) (that's an extra overhead compared to the current state)\n> - changes the rawsize in TOAST header to 31 bits via bit packing\n> - uses the extra bit to differentiate between old and new format\n> - supports reading from table which has different rows stored with different algorithm (so that the GUC itself can be freely changed)\nThat's cool. I suggest defaulting to lz4 if it is available. You cannot start cluster on non-lz4 binaries which used lz4 once.\nDo we plan the possibility of compression algorithm as extension? Or will all algorithms be packed into that byte in core?\nWhat about lz4 \"common prefix\"? System or user-defined. If lz4 is compiled in we can even offer in-system training, just make sure that trained prefixes will make their way to standbys.\n\nBest regards, Andrey Borodin.\n\n",
"msg_date": "Sun, 4 Aug 2019 14:57:04 +0500",
"msg_from": "Andrey Borodin <x4mmm@yandex-team.ru>",
"msg_from_op": true,
"msg_subject": "Re: pglz performance"
},
{
"msg_contents": "On Sun, Aug 04, 2019 at 02:41:24AM +0200, Petr Jelinek wrote:\n>Hi,\n>\n>On 02/08/2019 21:48, Tomas Vondra wrote:\n>>On Fri, Aug 02, 2019 at 11:20:03AM -0700, Andres Freund wrote:\n>>\n>>>\n>>>>Another question is whether we'd actually want to include the code in\n>>>>core directly, or use system libraries (and if some packagers might\n>>>>decide to disable that, for whatever reason).\n>>>\n>>>I'd personally say we should have an included version, and a\n>>>--with-system-... flag that uses the system one.\n>>>\n>>\n>>OK. I'd say to require a system library, but that's a minor detail.\n>>\n>\n>Same here.\n>\n>Just so that we don't idly talk, what do you think about the attached?\n>It:\n>- adds new GUC compression_algorithm with possible values of pglz \n>(default) and lz4 (if lz4 is compiled in), requires SIGHUP\n>- adds --with-lz4 configure option (default yes, so the configure \n>option is actually --without-lz4) that enables the lz4, it's using \n>system library\n>- uses the compression_algorithm for both TOAST and WAL compression (if on)\n>- supports slicing for lz4 as well (pglz was already supported)\n>- supports reading old TOAST values\n>- adds 1 byte header to the compressed data where we currently store \n>the algorithm kind, that leaves us with 254 more to add :) (that's an \n>extra overhead compared to the current state)\n>- changes the rawsize in TOAST header to 31 bits via bit packing\n>- uses the extra bit to differentiate between old and new format\n>- supports reading from table which has different rows stored with \n>different algorithm (so that the GUC itself can be freely changed)\n>\n\nCool.\n\n>Simple docs and a TAP test included.\n>\n>I did some basic performance testing (it's not really my thing though, \n>so I would appreciate if somebody did more).\n>I get about 7x perf improvement on data load with lz4 compared to pglz \n>on my dataset but strangely only tiny decompression improvement. 
\n>Perhaps more importantly I also did before patch and after patch tests \n>with pglz and the performance difference with my data set was <1%.\n>\n>Note that this will just link against lz4, it does not add lz4 into \n>PostgreSQL code-base.\n>\n\nWFM, although I think Andres wanted to do both (link against system and\nadd lz4 code as a fallback). I think the question is what happens when\nyou run with lz4 for a while, and then switch to binaries without lz4\nsupport. Or when you try to replicate from lz4-enabled instance to an\ninstance without it. Especially for physical replication, but I suppose\nit may affect logical replication with binary protocol?\n\n\nA very brief review:\n\n\n1) I wonder what \"runstatedir\" is about.\n\n\n2) This seems rather suspicious, and obviously the comment is now\nentirely bogus:\n\n /* Check that off_t can represent 2**63 - 1 correctly.\n We can't simply define LARGE_OFF_T to be 9223372036854775807,\n since some C++ compilers masquerading as C compilers\n incorrectly reject 9223372036854775807. 
*/\n-#define LARGE_OFF_T (((off_t) 1 << 62) - 1 + ((off_t) 1 << 62))\n+#define LARGE_OFF_T ((((off_t) 1 << 31) << 31) - 1 + (((off_t) 1 << 31) << 31))\n\n\n3) I can't really build without lz4:\n\nconfig.status: linking src/makefiles/Makefile.linux to src/Makefile.port\npg_lzcompress.c: In function ‘pg_compress_bound’:\npg_lzcompress.c:892:22: error: ‘SIZEOF_PG_COMPRESS_HEADER’ undeclared (first use in this function)\n return slen + 4 + SIZEOF_PG_COMPRESS_HEADER;\n ^~~~~~~~~~~~~~~~~~~~~~~~~\npg_lzcompress.c:892:22: note: each undeclared identifier is reported only once for each function it appears in\n\n\n4) I did a simple test with physical replication, with lz4 enabled on\nboth sides (well, can't build without lz4 anyway, per previous point).\nIt immediately failed like this:\n\n FATAL: failed to restore block image\n CONTEXT: WAL redo at 0/5000A40 for Btree/INSERT_LEAF: off 138\n LOG: startup process (PID 15937) exited with exit code 1\n\nThis is a simple UPDATE on a trivial table:\n\n create table t (a int primary key);\n insert into t select i from generate_series(1,1000) s(i);\n update t set a = a - 100000 where random () < 0.1;\n\nwith some checkpoints to force FPW (and wal_compression=on, of course).\n\nI haven't tried `make check-world` but I suppose some of the TAP tests\nshould fail because of this. And if not, we need to improve coverage.\n\n\n5) I wonder why compression_algorithm is defined as PGC_SIGHUP. Why not\nto allow users to set it per session? I suppose we might have a separate\noption for WAL compression_algorithm.\n\n\n6) It seems a bit strange that pg_compress/pg_decompress are now defined\nin pglz_compress.{c,h}. Maybe we should invent src/common/compress.c?\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services \n\n\n",
"msg_date": "Sun, 4 Aug 2019 13:52:37 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: pglz performance"
},
{
"msg_contents": "Hi,\n\nOn 04/08/2019 11:57, Andrey Borodin wrote:\n> \n> \n>> 2 авг. 2019 г., в 21:39, Andres Freund <andres@anarazel.de> написал(а):\n>>\n>> On 2019-08-02 20:40:51 +0500, Andrey Borodin wrote:\n>>> We have some kind of \"roadmap\" of \"extensible pglz\". We plan to provide implementation on Novembers CF.\n>>\n>> I don't understand why it's a good idea to improve the compression side\n>> of pglz. There's plenty other people that spent a lot of time developing\n>> better compression algorithms.\n> Improving compression side of pglz has two different projects:\n> 1. Faster compression with less code and same compression ratio (patch in this thread).\n> 2. Better compression ratio with at least same compression speed of uncompressed values.\n> Why I want to do patch for 2? Because it's interesting.\n> Will 1 or 2 be reviewed or committed? I have no idea.\n> Will many users benefit from 1 or 2? Yes, clearly. Unless we force everyone to stop compressing with pglz.\n> \n\nFWIW I agree.\n\n>> Just so that we don't idly talk, what do you think about the attached?\n>> It:\n>> - adds new GUC compression_algorithm with possible values of pglz (default) and lz4 (if lz4 is compiled in), requires SIGHUP\n>> - adds --with-lz4 configure option (default yes, so the configure option is actually --without-lz4) that enables the lz4, it's using system library\n>> - uses the compression_algorithm for both TOAST and WAL compression (if on)\n>> - supports slicing for lz4 as well (pglz was already supported)\n>> - supports reading old TOAST values\n>> - adds 1 byte header to the compressed data where we currently store the algorithm kind, that leaves us with 254 more to add :) (that's an extra overhead compared to the current state)\n>> - changes the rawsize in TOAST header to 31 bits via bit packing\n>> - uses the extra bit to differentiate between old and new format\n>> - supports reading from table which has different rows stored with different algorithm (so that the 
GUC itself can be freely changed)\n> That's cool. I suggest defaulting to lz4 if it is available. You cannot start cluster on non-lz4 binaries which used lz4 once.\n> Do we plan the possibility of compression algorithm as extension? Or will all algorithms be packed into that byte in core?\n\nWhat I wrote does not expect extensions providing new compression. We'd \nhave to somehow reserve compression ids for specific extensions and that \nseems like a lot of extra complexity for little benefit. I don't see \nmuch benefit in having more than say 3 generic compressors (I could \nimagine adding zstd). If you are thinking about data type specific \ncompression then I think this is wrong layer.\n\n> What about lz4 \"common prefix\"? System or user-defined. If lz4 is compiled in we can even offer in-system training, just make sure that trained prefixes will make their way to standbys.\n> \n\nI definitely don't plan to work on common prefix. But don't see why that \ncould not be added later.\n\n-- \nPetr Jelinek\n2ndQuadrant - PostgreSQL Solutions for the Enterprise\nhttps://www.2ndQuadrant.com/\n\n\n",
"msg_date": "Sun, 4 Aug 2019 17:52:36 +0200",
"msg_from": "Petr Jelinek <petr@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: pglz performance"
},
{
"msg_contents": "Hi,\n\nOn 04/08/2019 13:52, Tomas Vondra wrote:\n> On Sun, Aug 04, 2019 at 02:41:24AM +0200, Petr Jelinek wrote:\n>> Hi,\n>>\n>> On 02/08/2019 21:48, Tomas Vondra wrote:\n>>> On Fri, Aug 02, 2019 at 11:20:03AM -0700, Andres Freund wrote:\n>>>\n>>>>\n>>>>> Another question is whether we'd actually want to include the code in\n>>>>> core directly, or use system libraries (and if some packagers might\n>>>>> decide to disable that, for whatever reason).\n>>>>\n>>>> I'd personally say we should have an included version, and a\n>>>> --with-system-... flag that uses the system one.\n>>>>\n>>>\n>>> OK. I'd say to require a system library, but that's a minor detail.\n>>>\n>>\n>> Same here.\n>>\n>> Just so that we don't idly talk, what do you think about the attached?\n>> It:\n>> - adds new GUC compression_algorithm with possible values of pglz \n>> (default) and lz4 (if lz4 is compiled in), requires SIGHUP\n>> - adds --with-lz4 configure option (default yes, so the configure \n>> option is actually --without-lz4) that enables the lz4, it's using \n>> system library\n>> - uses the compression_algorithm for both TOAST and WAL compression \n>> (if on)\n>> - supports slicing for lz4 as well (pglz was already supported)\n>> - supports reading old TOAST values\n>> - adds 1 byte header to the compressed data where we currently store \n>> the algorithm kind, that leaves us with 254 more to add :) (that's an \n>> extra overhead compared to the current state)\n>> - changes the rawsize in TOAST header to 31 bits via bit packing\n>> - uses the extra bit to differentiate between old and new format\n>> - supports reading from table which has different rows stored with \n>> different algorithm (so that the GUC itself can be freely changed)\n>>\n> \n> Cool.\n> \n>> Simple docs and a TAP test included.\n>>\n>> I did some basic performance testing (it's not really my thing though, \n>> so I would appreciate if somebody did more).\n>> I get about 7x perf improvement on data 
load with lz4 compared to pglz \n>> on my dataset but strangely only tiny decompression improvement. \n>> Perhaps more importantly I also did before patch and after patch tests \n>> with pglz and the performance difference with my data set was <1%.\n>>\n>> Note that this will just link against lz4, it does not add lz4 into \n>> PostgreSQL code-base.\n>>\n> \n> WFM, although I think Andres wanted to do both (link against system and\n> add lz4 code as a fallback). I think the question is what happens when\n> you run with lz4 for a while, and then switch to binaries without lz4\n> support. Or when you try to replicate from lz4-enabled instance to an\n> instance without it. Especially for physical replication, but I suppose\n> it may affect logical replication with binary protocol?\n> \n\nI generally prefer having system library, we don't include for example \nICU either.\n\n> \n> A very brief review:\n> \n> \n> 1) I wonder what \"runstatedir\" is about.\n> \n\nNo idea, that stuff is generated by autoconf from configure.in.\n\n> \n> 2) This seems rather suspicious, and obviously the comment is now\n> entirely bogus:\n> \n> /* Check that off_t can represent 2**63 - 1 correctly.\n> We can't simply define LARGE_OFF_T to be 9223372036854775807,\n> since some C++ compilers masquerading as C compilers\n> incorrectly reject 9223372036854775807. */\n> -#define LARGE_OFF_T (((off_t) 1 << 62) - 1 + ((off_t) 1 << 62))\n> +#define LARGE_OFF_T ((((off_t) 1 << 31) << 31) - 1 + (((off_t) 1 << 31) \n> << 31))\n> \n\nSame as above. 
TBH I am not sure why we even include configure in git \nrepo given that different autoconf versions can build different outputs.\n\n> \n> 3) I can't really build without lz4:\n> \n> config.status: linking src/makefiles/Makefile.linux to src/Makefile.port\n> pg_lzcompress.c: In function ‘pg_compress_bound’:\n> pg_lzcompress.c:892:22: error: ‘SIZEOF_PG_COMPRESS_HEADER’ undeclared \n> (first use in this function)\n> return slen + 4 + SIZEOF_PG_COMPRESS_HEADER;\n> ^~~~~~~~~~~~~~~~~~~~~~~~~\n> pg_lzcompress.c:892:22: note: each undeclared identifier is reported \n> only once for each function it appears in\n> \n\nOkay, that's just problem of SIZEOF_PG_COMPRESS_HEADER being defined \ninside the HAVE_LZ4 ifdef while it should be defined above ifdef.\n\n> \n> 4) I did a simple test with physical replication, with lz4 enabled on\n> both sides (well, can't build without lz4 anyway, per previous point).\n> It immediately failed like this:\n> \n> FATAL: failed to restore block image\n> CONTEXT: WAL redo at 0/5000A40 for Btree/INSERT_LEAF: off 138\n> LOG: startup process (PID 15937) exited with exit code 1\n> \n> This is a simple UPDATE on a trivial table:\n> \n> create table t (a int primary key);\n> insert into t select i from generate_series(1,1000) s(i);\n> update t set a = a - 100000 where random () < 0.1;\n> \n> with some checkpoints to force FPW (and wal_compression=on, of course).\n> \n> I haven't tried `make check-world` but I suppose some of the TAP tests\n> should fail because of this. And if not, we need to improve coverage.\n> \n\nFWIW I did run check-world without problems, will have to look into this.\n\n> \n> 5) I wonder why compression_algorithm is defined as PGC_SIGHUP. Why not\n> to allow users to set it per session? I suppose we might have a separate\n> option for WAL compression_algorithm.\n>\n\nYeah I was thinking we might want to change wal_compression to enum as \nwell. 
Although that complicates the code quite a bit (the caller has to \ndecide algorithm instead compression system doing it).\n\n> \n> 6) It seems a bit strange that pg_compress/pg_decompress are now defined\n> in pglz_compress.{c,h}. Maybe we should invent src/common/compress.c?\n> \n\nIt does not seem worth it to invent new module for like 20 lines of \nwrapper code.\n\n-- \nPetr Jelinek\n2ndQuadrant - PostgreSQL Solutions for the Enterprise\nhttps://www.2ndQuadrant.com/\n\n\n",
"msg_date": "Sun, 4 Aug 2019 17:53:26 +0200",
"msg_from": "Petr Jelinek <petr@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: pglz performance"
},
{
"msg_contents": "On Sun, Aug 04, 2019 at 05:53:26PM +0200, Petr Jelinek wrote:\n>\n> ...\n>\n>>\n>>4) I did a simple test with physical replication, with lz4 enabled on\n>>both sides (well, can't build without lz4 anyway, per previous point).\n>>It immediately failed like this:\n>>\n>>FATAL:� failed to restore block image\n>>CONTEXT:� WAL redo at 0/5000A40 for Btree/INSERT_LEAF: off 138\n>>LOG:� startup process (PID 15937) exited with exit code 1\n>>\n>>This is a simple UPDATE on a trivial table:\n>>\n>>create table t (a int primary key);\n>>insert into t select i from generate_series(1,1000) s(i);\n>>update t set a = a - 100000 where random () < 0.1;\n>>\n>>with some checkpoints to force FPW (and wal_compression=on, of course).\n>>\n>>I haven't tried `make check-world` but I suppose some of the TAP tests\n>>should fail because of this. And if not, we need to improve coverage.\n>>\n>\n>FWIW I did run check-world without problems, will have to look into this.\n>\n\nNot sure if we bother to set wal_compression=on for check-world (I don't\nthink we do, but I may be missing something), so maybe check-world does\nnot really test wal compression.\n\nIMO the issue is that RestoreBlockImage() still calls pglz_decompress\ndirectly, instead of going through pg_decompress().\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services \n\n\n",
"msg_date": "Sun, 4 Aug 2019 19:30:04 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: pglz performance"
},
{
"msg_contents": "Hi,\n\nOn 2019-08-04 02:41:24 +0200, Petr Jelinek wrote:\n> Same here.\n> \n> Just so that we don't idly talk, what do you think about the attached?\n\nCool!\n\n> It:\n> - adds new GUC compression_algorithm with possible values of pglz (default)\n> and lz4 (if lz4 is compiled in), requires SIGHUP\n\nAs Tomas remarked, I think it shouldn't be SIGHUP but USERSET. And I\nthink lz4 should be preferred, if available. I could see us using a\nlist style guc, so we could set it to lz4, pglz, and the first available\none would be used.\n\n> - adds 1 byte header to the compressed data where we currently store the\n> algorithm kind, that leaves us with 254 more to add :) (that's an extra\n> overhead compared to the current state)\n\nHm. Why do we need an additional byte? IIRC my patch added that only\nfor the case we would run out of space for compression formats without\nextending any sizes?\n\n\n> - changes the rawsize in TOAST header to 31 bits via bit packing\n> - uses the extra bit to differentiate between old and new format\n\nHm. Wouldn't it be easier to just use a different vartag for this?\n\n\n> - I expect my changes to configure.in are not the greatest as I don't have\n> pretty much zero experience with autoconf\n\nFWIW the configure output changes are likely because you used a modified\nversion of autoconf. Unfortunately debian/ubuntu ship one with vendor\npatches.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Sun, 4 Aug 2019 12:20:34 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: pglz performance"
},
{
"msg_contents": "Hi,\n\nOn 04/08/2019 21:20, Andres Freund wrote:\n> On 2019-08-04 02:41:24 +0200, Petr Jelinek wrote:\n>> Same here.\n>>\n>> Just so that we don't idly talk, what do you think about the attached?\n> \n> Cool!\n> \n>> It:\n>> - adds new GUC compression_algorithm with possible values of pglz (default)\n>> and lz4 (if lz4 is compiled in), requires SIGHUP\n> \n> As Tomas remarked, I think it shouldn't be SIGHUP but USERSET. And I\n> think lz4 should be preferred, if available. I could see us using a\n> list style guc, so we could set it to lz4, pglz, and the first available\n> one would be used.\n> \n\nSounds reasonable.\n\n>> - adds 1 byte header to the compressed data where we currently store the\n>> algorithm kind, that leaves us with 254 more to add :) (that's an extra\n>> overhead compared to the current state)\n> \n> Hm. Why do we need an additional byte? IIRC my patch added that only\n> for the case we would run out of space for compression formats without\n> extending any sizes?\n> \n\nYeah your patch worked differently (I didn't actually use any code from \nit). The main reason why I add the byte is that I am storing the \nalgorithm in the compressed value itself, not in varlena header. I was \nmainly trying to not have every caller care about storing and loading \nthe compression algorithm. I also can't say I particularly like that \nhack in your patch.\n\nHowever if we'd want to have separate GUCs for TOAST and WAL then we'll \nhave to do that anyway so maybe it does not matter anymore (we can't use \nsimilar hack there AFAICS though).\n\n> \n>> - changes the rawsize in TOAST header to 31 bits via bit packing\n>> - uses the extra bit to differentiate between old and new format\n> \n> Hm. Wouldn't it be easier to just use a different vartag for this?\n> \n\nThat would only work for external TOAST pointers right? 
The compressed \nvarlena can also be stored inline and potentially in index tuple.\n\n> \n>> - I expect my changes to configure.in are not the greatest as I don't have\n>> pretty much zero experience with autoconf\n> \n> FWIW the configure output changes are likely because you used a modified\n> version of autoconf. Unfortunately debian/ubuntu ship one with vendor\n> patches.\n> \n\nYeah, Ubuntu here, that explains.\n\n-- \nPetr Jelinek\n2ndQuadrant - PostgreSQL Solutions for the Enterprise\nhttps://www.2ndQuadrant.com/\n\n\n",
"msg_date": "Mon, 5 Aug 2019 00:08:37 +0200",
"msg_from": "Petr Jelinek <petr@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: pglz performance"
},
{
"msg_contents": "Hi,\n\nOn 2019-08-04 17:53:26 +0200, Petr Jelinek wrote:\n> > 5) I wonder why compression_algorithm is defined as PGC_SIGHUP. Why not\n> > to allow users to set it per session? I suppose we might have a separate\n> > option for WAL compression_algorithm.\n> > \n> \n> Yeah I was thinking we might want to change wal_compression to enum as well.\n> Although that complicates the code quite a bit (the caller has to decide\n> algorithm instead compression system doing it).\n\nIsn't that basically required anyway? The WAL record will need to carry\ninformation about the type of compression used, independent of\nPGC_SIGHUP/PGC_USERSET, unless you want to make it an initdb option or\nsomething super heavyweight like that.\n\nWe could just have the wal_compression assign hook set a the compression\ncallback and a compression type integer or such if we want to avoid\ndoing that kind of thing at runtime.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Sun, 4 Aug 2019 15:15:33 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: pglz performance"
},
{
"msg_contents": "Hi,\n\nOn 05/08/2019 00:15, Andres Freund wrote:\n> Hi,\n> \n> On 2019-08-04 17:53:26 +0200, Petr Jelinek wrote:\n>>> 5) I wonder why compression_algorithm is defined as PGC_SIGHUP. Why not\n>>> to allow users to set it per session? I suppose we might have a separate\n>>> option for WAL compression_algorithm.\n>>>\n>>\n>> Yeah I was thinking we might want to change wal_compression to enum as well.\n>> Although that complicates the code quite a bit (the caller has to decide\n>> algorithm instead compression system doing it).\n> \n> Isn't that basically required anyway? The WAL record will need to carry\n> information about the type of compression used, independent of\n> PGC_SIGHUP/PGC_USERSET, unless you want to make it an initdb option or\n> something super heavyweight like that.\n> \n\nIt carries that information inside the compressed value, like I said in \nthe other reply, that's why the extra byte.\n\n-- \nPetr Jelinek\n2ndQuadrant - PostgreSQL Solutions for the Enterprise\nhttps://www.2ndQuadrant.com/\n\n\n",
"msg_date": "Mon, 5 Aug 2019 00:19:28 +0200",
"msg_from": "Petr Jelinek <petr@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: pglz performance"
},
{
"msg_contents": "On Fri, Aug 02, 2019 at 07:52:39PM +0200, Tomas Vondra wrote:\n> On Fri, Aug 02, 2019 at 10:12:58AM -0700, Andres Freund wrote:\n>> Why would they be stuck continuing to *compress* with pglz? As we\n>> fully retoast on write anyway we can just gradually switch over to the\n>> better algorithm. Decompression speed is another story, of course.\n> \n> Hmmm, I don't remember the details of those patches so I didn't realize\n> it allows incremental recompression. If that's possible, that would mean\n> existing systems can start using it. Which is good.\n\nIt may become a problem on some platforms though (Windows?), so\npatches to improve either the compression or decompression of pglz are\nnot that much crazy as they are still likely going to be used, and for\nread-mostly switching to a new algo may not be worth the extra cost so\nit is not like we are going to drop it completely either. My take,\nstill the same as upthread, is that it mostly depends on the amount of\ncomplexity each patch introduces compared to the performance gain. \n\n> Another question is whether we'd actually want to include the code in\n> core directly, or use system libraries (and if some packagers might\n> decide to disable that, for whatever reason).\n\nLinking to system libraries would make our maintenance much easier,\nand when it comes to have a copy of something else in the tree we\nwould be stuck with more maintenance around it. These tend to rot\neasily. After that comes the case where the compression algo is not\nin the binary across one server to another, in which case we have an\nautomatic ERROR in case of a mismatching algo, or FATAL for\ndeompression of FPWs at recovery when wal_compression is used.\n--\nMichael",
"msg_date": "Mon, 5 Aug 2019 16:04:46 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: pglz performance"
},
{
"msg_contents": "Hi,\n\nOn 2019-08-05 00:19:28 +0200, Petr Jelinek wrote:\n> It carries that information inside the compressed value, like I said in the\n> other reply, that's why the extra byte.\n\nI'm not convinced that that is a good plan - imo the reference to the\ncompressed data should carry that information.\n\nI.e. in the case of toast, at least toast pointers should hold enough\ninformation to determine the compression algorithm. And in the case of\nWAL, the WAL record should contain that.\n\nConsider e.g. adding support for slice fetching of datums compressed\nwith some algorithm - we should be able to determine whether that's\npossible without fetching the datum (so we can take a non-exceptional\npath for datums compressed otherwise). Similarly, for WAL, we should be\nable to detect whether an incompatible compression format is used,\nwithout having to invoke a generic compression routine that then fails\nin some way. Or adding compression reporting for WAL to xlogdump.\n\nI also don't particularly like baking in the assumption that we don't\nsupport tuples larger than 1GB in further places. To me it seems likely\nthat we're going to have to fix that, and it's hard enough already... I\nknow that my patch did that too...\n\nFor external datums I suggest encoding the compression method as a\ndistinct VARTAG_ONDISK_COMPRESSED, and then have that include the\ncompression method as a field.\n\nFor in-line compressed values (so VARATT_IS_4B_C), doing something\nroughly like you did, indicating the type of metadata following using\nthe high bit sounds reasonable. But I think I'd make it so that if the\nhighbit is set, the struct is instead entirely different, keeping a full\n4 byte byte length, and including the compression type header inside the\nstruct. Perhaps I'd combine the compression type with the high-bit-set\npart? So when the high bit is set, it'd be something like\n\n{\n int32\t\tvl_len_;\t\t/* varlena header (do not touch directly!) 
*/\n\n /*\n * Actually only 7 bit, the high bit determines whether this\n * is the old compression header (if unset), or this type of header\n * (if set).\n */\n uint8 type;\n\n /*\n * Stored as uint8[4], to avoid unnecessary alignment padding.\n */\n uint8[4] length;\n\n char va_data[FLEXIBLE_ARRAY_MEMBER];\n}\n\nI think it's worth spending some effort trying to get this right - we'll\nbe bound by design choices for a while.\n\n\nIt's kinda annoying that toast datums aren't better designed. Creating\nthem from scratch, I'd make it something like:\n\n1) variable-width integer describing the \"physical length\", so that\n tuple deforming can quickly determine length - all the ifs necessary\n to determine lengths are a bottleneck. I'd probably just use a 127bit\n encoding, if the high bit is set, there's a following length byte.\n\n2) type of toasted datum, probably also in a variable width encoding,\n starting at 1byte. Not that I think it's likely we'd overrun 256\n types - but it's cheap enough to just declare the high bit as an\n length extension bit.\n\nThese are always stored unaligned. So there's no need to deal with\npadding bytes having to be zero to determine whether we're dealing with\na 1byte datum etc.\n\nThen, type dependant:\n\nFor In-line uncompressed datums\n3a) alignment padding, amount determined by 2) above, i.e. we'd just\n have different types for different amounts of alignment. 
Probably\n using some heuristic to use unaligned when either dealing with data\n that doesn't need alignment, or when the datum is fairly small, so\n copying to get the data as unaligned won't be a significant penalty.\n4a) data\n\nFor in-line compressed datums\n3b) compression metadata {varint rawsize, varint compression algorithm}\n4b) unaligned compressed data - there's no benefit in keeping it aligned\n\nFor External toast for uncompressed data:\n3d) {toastrelid, valueid, varint rawsize}\n\nFor External toast for compressed data:\n3e) {valueid, toastrelid, varint compression_algorithm, varint rawsize, varint extsize}\n\n\nThat'd make it a lot more extensible, easier to understand, faster to\ndecode in a lot of cases, remove a lot of arbitrary limits. Yes, it'd\nincrease the header size for small datums to two bytes, but I think\nthat'd be more than bought back by the other improvements.\n\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 5 Aug 2019 00:26:25 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: pglz performance"
},
{
"msg_contents": "Hi,\n\nOn 2019-08-05 16:04:46 +0900, Michael Paquier wrote:\n> On Fri, Aug 02, 2019 at 07:52:39PM +0200, Tomas Vondra wrote:\n> > On Fri, Aug 02, 2019 at 10:12:58AM -0700, Andres Freund wrote:\n> >> Why would they be stuck continuing to *compress* with pglz? As we\n> >> fully retoast on write anyway we can just gradually switch over to the\n> >> better algorithm. Decompression speed is another story, of course.\n> > \n> > Hmmm, I don't remember the details of those patches so I didn't realize\n> > it allows incremental recompression. If that's possible, that would mean\n> > existing systems can start using it. Which is good.\n> \n> It may become a problem on some platforms though (Windows?), so\n> patches to improve either the compression or decompression of pglz are\n> not that much crazy as they are still likely going to be used, and for\n> read-mostly switching to a new algo may not be worth the extra cost so\n> it is not like we are going to drop it completely either.\n\nWhat's the platform dependency that you're thinking of? And how's\ncompression speed relevant to \"read mostly\"? Switching would just\nhappen whenever tuple fields are changed. And it'll not have an\nadditional cost, because all it does is reduce the cost of a toast write\nthat'd otherwise happened with pglz.\n\n\n> Linking to system libraries would make our maintenance much easier,\n> and when it comes to have a copy of something else in the tree we\n> would be stuck with more maintenance around it. These tend to rot\n> easily.\n\nI don't think it's really our experience that they \"rot easily\".\n\n\n> After that comes the case where the compression algo is not\n> in the binary across one server to another, in which case we have an\n> automatic ERROR in case of a mismatching algo, or FATAL for\n> deompression of FPWs at recovery when wal_compression is used.\n\nHuh? 
That's a failure case that only exists if you don't include it in\nthe tree (with the option to use an out-of-tree lib)?\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 6 Aug 2019 00:04:31 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: pglz performance"
},
{
"msg_contents": "On 2019-06-24 10:44, Andrey Borodin wrote:\n>> 18 мая 2019 г., в 11:44, Andrey Borodin <x4mmm@yandex-team.ru> написал(а):\n>>\n> Hi! \n> Here's rebased version of patches.\n> \n> Best regards, Andrey Borodin.\n\nI think this is the most recent patch for the CF entry\n<https://commitfest.postgresql.org/24/2119/>.\n\nWhat about the two patches? Which one is better?\n\nHave you also considered using memmove() to deal with the overlap issue?\n\nBenchmarks have been posted in this thread. Where is the benchmarking\ntool? Should we include that in the source somehow?\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Wed, 4 Sep 2019 11:09:27 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: pglz performance"
},
{
"msg_contents": "Hi, Peter! Thanks for looking into this.\n\n> 4 сент. 2019 г., в 14:09, Peter Eisentraut <peter.eisentraut@2ndquadrant.com> написал(а):\n> \n> On 2019-06-24 10:44, Andrey Borodin wrote:\n>>> 18 мая 2019 г., в 11:44, Andrey Borodin <x4mmm@yandex-team.ru> написал(а):\n>>> \n>> Hi! \n>> Here's rebased version of patches.\n>> \n>> Best regards, Andrey Borodin.\n> \n> I think this is the most recent patch for the CF entry\n> <https://commitfest.postgresql.org/24/2119/>.\n> \n> What about the two patches? Which one is better?\nOn our observations pglz_decompress_hacked.patch is best for most of tested platforms.\nDifference is that pglz_decompress_hacked8.patch will not appply optimization if decompressed match is not greater than 8 bytes. This optimization was suggested by Tom, that's why we benchmarked it specifically.\n\n> Have you also considered using memmove() to deal with the overlap issue?\nYes, memmove() resolves ambiguity of copying overlapping regions in a way that is not compatible with pglz. In proposed patch we never copy overlapping regions.\n\n> Benchmarks have been posted in this thread. Where is the benchmarking\n> tool? Should we include that in the source somehow?\n\nBenchmarking tool is here [0]. Well, code of the benchmarking tool do not adhere to our standards in some places, we did not consider its inclusion in core.\nHowever, most questionable part of benchmarking is choice of test data. It's about 100Mb of useless WALs, datafile and valuable Shakespeare writings.\n\nBest regards, Andrey Borodin.\n\n\n[0] https://github.com/x4m/test_pglz\n\n\n\n",
"msg_date": "Wed, 4 Sep 2019 14:22:20 +0500",
"msg_from": "Andrey Borodin <x4mmm@yandex-team.ru>",
"msg_from_op": true,
"msg_subject": "Re: pglz performance"
},
{
"msg_contents": "On 2019-09-04 11:22, Andrey Borodin wrote:\n>> What about the two patches? Which one is better?\n> On our observations pglz_decompress_hacked.patch is best for most of tested platforms.\n> Difference is that pglz_decompress_hacked8.patch will not appply optimization if decompressed match is not greater than 8 bytes. This optimization was suggested by Tom, that's why we benchmarked it specifically.\n\nThe patches attached to the message I was replying to are named\n\n0001-Use-memcpy-in-pglz-decompression-for-long-matches.patch\n0001-Use-memcpy-in-pglz-decompression.patch\n\nAre those the same ones?\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Wed, 4 Sep 2019 14:40:07 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: pglz performance"
},
{
"msg_contents": "\n\n> 4 сент. 2019 г., в 17:40, Peter Eisentraut <peter.eisentraut@2ndquadrant.com> написал(а):\n> \n> On 2019-09-04 11:22, Andrey Borodin wrote:\n>>> What about the two patches? Which one is better?\n>> On our observations pglz_decompress_hacked.patch is best for most of tested platforms.\n>> Difference is that pglz_decompress_hacked8.patch will not appply optimization if decompressed match is not greater than 8 bytes. This optimization was suggested by Tom, that's why we benchmarked it specifically.\n> \n> The patches attached to the message I was replying to are named\n> \n> 0001-Use-memcpy-in-pglz-decompression-for-long-matches.patch\n> 0001-Use-memcpy-in-pglz-decompression.patch\n> \n> Are those the same ones?\n\nYes. Sorry for this confusion.\n\nThe only difference of 0001-Use-memcpy-in-pglz-decompression-for-long-matches.patch is that it fallbacks to byte-loop if len is <= 8.\n\nBest regards, Andrey Borodin.\n\n",
"msg_date": "Wed, 4 Sep 2019 17:45:19 +0500",
"msg_from": "Andrey Borodin <x4mmm@yandex-team.ru>",
"msg_from_op": true,
"msg_subject": "Re: pglz performance"
},
{
"msg_contents": "I just noticed we had two CF items pointing to this thread,\n\nhttps://commitfest.postgresql.org/24/2119/\nhttps://commitfest.postgresql.org/24/2180/\n\nso I marked the newer one as withdrawn.\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Thu, 12 Sep 2019 15:58:18 -0300",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: pglz performance"
},
{
"msg_contents": "On Wed, Sep 4, 2019 at 12:22 PM Andrey Borodin <x4mmm@yandex-team.ru> wrote:\n>\n> Hi, Peter! Thanks for looking into this.\n>\n> > 4 сент. 2019 г., в 14:09, Peter Eisentraut <peter.eisentraut@2ndquadrant.com> написал(а):\n> >\n> > On 2019-06-24 10:44, Andrey Borodin wrote:\n> >>> 18 мая 2019 г., в 11:44, Andrey Borodin <x4mmm@yandex-team.ru> написал(а):\n> >>>\n> >> Hi!\n> >> Here's rebased version of patches.\n> >>\n> >> Best regards, Andrey Borodin.\n> >\n> > I think this is the most recent patch for the CF entry\n> > <https://commitfest.postgresql.org/24/2119/>.\n> >\n> > What about the two patches? Which one is better?\n> On our observations pglz_decompress_hacked.patch is best for most of tested platforms.\n> Difference is that pglz_decompress_hacked8.patch will not appply optimization if decompressed match is not greater than 8 bytes. This optimization was suggested by Tom, that's why we benchmarked it specifically.\n>\n> > Have you also considered using memmove() to deal with the overlap issue?\n> Yes, memmove() resolves ambiguity of copying overlapping regions in a way that is not compatible with pglz. In proposed patch we never copy overlapping regions.\n>\n> > Benchmarks have been posted in this thread. Where is the benchmarking\n> > tool? Should we include that in the source somehow?\n>\n> Benchmarking tool is here [0]. Well, code of the benchmarking tool do not adhere to our standards in some places, we did not consider its inclusion in core.\n> However, most questionable part of benchmarking is choice of test data. It's about 100Mb of useless WALs, datafile and valuable Shakespeare writings.\n\nWhy not use 'Silesia compression corpus'\n(http://sun.aei.polsl.pl/~sdeor/index.php?page=silesia), which used by\nlzbench (https://github.com/inikep/lzbench) ? 
I and Teodor remember\nthat testing on non-english texts could be very important.\n\n\n>\n> Best regards, Andrey Borodin.\n>\n>\n> [0] https://github.com/x4m/test_pglz\n>\n>\n>\n\n\n-- \nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company\n\n\n",
"msg_date": "Sun, 15 Sep 2019 13:57:48 +0300",
"msg_from": "Oleg Bartunov <obartunov@postgrespro.ru>",
"msg_from_op": false,
"msg_subject": "Re: pglz performance"
},
{
"msg_contents": "Hi,\n\nOn 05/08/2019 09:26, Andres Freund wrote:\n> Hi,\n> \n> On 2019-08-05 00:19:28 +0200, Petr Jelinek wrote:\n>> It carries that information inside the compressed value, like I said in the\n>> other reply, that's why the extra byte.\n> \n> I'm not convinced that that is a good plan - imo the reference to the\n> compressed data should carry that information.\n> \n> I.e. in the case of toast, at least toast pointers should hold enough\n> information to determine the compression algorithm. And in the case of\n> WAL, the WAL record should contain that.\n> \nPoint taken.\n\n> \n> For external datums I suggest encoding the compression method as a\n> distinct VARTAG_ONDISK_COMPRESSED, and then have that include the\n> compression method as a field.\n\nSo the new reads/writes will use this and reads of old format won't \nchange? Sounds fine.\n\n> \n> For in-line compressed values (so VARATT_IS_4B_C), doing something\n> roughly like you did, indicating the type of metadata following using\n> the high bit sounds reasonable. But I think I'd make it so that if the\n> highbit is set, the struct is instead entirely different, keeping a full\n> 4 byte byte length, and including the compression type header inside the\n> struct. Perhaps I'd combine the compression type with the high-bit-set\n> part? So when the high bit is set, it'd be something like\n> \n> {\n> int32\t\tvl_len_;\t\t/* varlena header (do not touch directly!) 
*/\n> \n> /*\n> * Actually only 7 bit, the high bit determines whether this\n> * is the old compression header (if unset), or this type of header\n> * (if set).\n> */\n> uint8 type;\n> \n> /*\n> * Stored as uint8[4], to avoid unnecessary alignment padding.\n> */\n> uint8[4] length;\n> \n> char va_data[FLEXIBLE_ARRAY_MEMBER];\n> }\n> \n\nWon't this break BW compatibility on big-endian (if I understand \ncorretly what you are trying to achieve here)?\n\n> I think it's worth spending some effort trying to get this right - we'll\n> be bound by design choices for a while.\n> \n\nSure, however I am not in business of redesigning TOAST from scratch \nright now, even if I do agree that the current header is far from ideal.\n\n-- \nPetr Jelinek\n2ndQuadrant - PostgreSQL Solutions for the Enterprise\nhttps://www.2ndQuadrant.com/\n\n\n",
"msg_date": "Wed, 25 Sep 2019 23:24:18 +0200",
"msg_from": "Petr Jelinek <petr@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: pglz performance"
},
{
"msg_contents": "On 2019-09-04 14:45, Andrey Borodin wrote:\n>> On 2019-09-04 11:22, Andrey Borodin wrote:\n>>>> What about the two patches? Which one is better?\n>>> On our observations pglz_decompress_hacked.patch is best for most of tested platforms.\n>>> Difference is that pglz_decompress_hacked8.patch will not appply optimization if decompressed match is not greater than 8 bytes. This optimization was suggested by Tom, that's why we benchmarked it specifically.\n>>\n>> The patches attached to the message I was replying to are named\n>>\n>> 0001-Use-memcpy-in-pglz-decompression-for-long-matches.patch\n>> 0001-Use-memcpy-in-pglz-decompression.patch\n>>\n>> Are those the same ones?\n> \n> Yes. Sorry for this confusion.\n> \n> The only difference of 0001-Use-memcpy-in-pglz-decompression-for-long-matches.patch is that it fallbacks to byte-loop if len is <= 8.\n\nAfter reviewing this thread and more testing, I think\n0001-Use-memcpy-in-pglz-decompression.patch appears to be a solid win\nand we should move ahead with it.\n\nI don't, however, fully understand the code changes, and I think this\ncould use more and better comments. In particular, I wonder about\n\noff *= 2;\n\nThis is new logic that isn't explained anywhere.\n\nThis whole function is commented a bit strangely. It begins with\n\"Otherwise\", but there is nothing before it. And what does \"from OUTPUT\nto OUTPUT\" mean? There is no \"output\" variable. We should make this\nmatch the code better.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Fri, 27 Sep 2019 13:41:35 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: pglz performance"
},
{
"msg_contents": "Oleg, Peter, thanks for looking into this!\n\nI hope to benchmark decompression on Silesian corpus soon.\n\nPFA v2 with better comments.\n\n> 27 сент. 2019 г., в 14:41, Peter Eisentraut <peter.eisentraut@2ndquadrant.com> написал(а):\n> \n> After reviewing this thread and more testing, I think\n> 0001-Use-memcpy-in-pglz-decompression.patch appears to be a solid win\n> and we should move ahead with it.\n> \n> I don't, however, fully understand the code changes, and I think this\n> could use more and better comments. In particular, I wonder about\n> \n> off *= 2;\nI've changed this to\noff += off;\n\n> \n> This is new logic that isn't explained anywhere.\n> \n> This whole function is commented a bit strangely. It begins with\n> \"Otherwise\", but there is nothing before it. And what does \"from OUTPUT\n> to OUTPUT\" mean? There is no \"output\" variable. We should make this\n> match the code better.\n\n\nI've added small example to illustrate what is going on.\n\nThanks!\n\n--\nAndrey Borodin\nOpen source RDBMS development team leader\nYandex.Cloud",
"msg_date": "Sat, 28 Sep 2019 13:29:18 +0500",
"msg_from": "Andrey Borodin <x4mmm@yandex-team.ru>",
"msg_from_op": true,
"msg_subject": "Re: pglz performance"
},
{
"msg_contents": "\n\n> 28 сент. 2019 г., в 10:29, Andrey Borodin <x4mmm@yandex-team.ru> написал(а):\n> \n> I hope to benchmark decompression on Silesian corpus soon.\n\nI've done it. And results are quite controversial.\nDataset adds 12 payloads to our 5. Payloads have relatively high entropy. In many cases pglz cannot compress them at all, so decompression is nop, data is stored as is.\n\nDecompressor pglz_decompress_hacked result 48.281747\nDecompressor pglz_decompress_hacked8 result 33.868779\nDecompressor pglz_decompress_vanilla result 42.510165\n\nTested on Intel(R) Xeon(R) CPU E5-2650 v2 @ 2.60GHz\n\nWith Silesian corpus pglz_decompress_hacked is actually decreasing performance on high-entropy data.\nMeanwhile pglz_decompress_hacked8 is still faster than usual pglz_decompress.\nIn spite of this benchmarks, I think that pglz_decompress_hacked8 is safer option.\n\nI've updated test suite [0] and anyone interested can verify benchmarks.\n\n--\nAndrey Borodin\nOpen source RDBMS development team leader\nYandex.Cloud\n\n[0] https://github.com/x4m/test_pglz\n\n",
"msg_date": "Mon, 21 Oct 2019 11:09:29 +0200",
"msg_from": "Andrey Borodin <x4mmm@yandex-team.ru>",
"msg_from_op": true,
"msg_subject": "Re: pglz performance"
},
{
"msg_contents": "> 21 окт. 2019 г., в 14:09, Andrey Borodin <x4mmm@yandex-team.ru> написал(а):\n> \n> With Silesian corpus pglz_decompress_hacked is actually decreasing performance on high-entropy data.\n> Meanwhile pglz_decompress_hacked8 is still faster than usual pglz_decompress.\n> In spite of this benchmarks, I think that pglz_decompress_hacked8 is safer option.\n\nHere's v3 which takes into account recent benchmarks with Silesian Corpus and have better comments.\n\nThanks!\n\n\n--\nAndrey Borodin\nOpen source RDBMS development team leader\nYandex.Cloud",
"msg_date": "Fri, 25 Oct 2019 10:05:13 +0500",
"msg_from": "Andrey Borodin <x4mmm@yandex-team.ru>",
"msg_from_op": true,
"msg_subject": "Re: pglz performance"
},
{
"msg_contents": "On 2019-10-25 07:05, Andrey Borodin wrote:\n>> 21 окт. 2019 г., в 14:09, Andrey Borodin <x4mmm@yandex-team.ru> написал(а):\n>>\n>> With Silesian corpus pglz_decompress_hacked is actually decreasing performance on high-entropy data.\n>> Meanwhile pglz_decompress_hacked8 is still faster than usual pglz_decompress.\n>> In spite of this benchmarks, I think that pglz_decompress_hacked8 is safer option.\n> \n> Here's v3 which takes into account recent benchmarks with Silesian Corpus and have better comments.\n\nYour message from 21 October appears to say that this change makes the \nperformance worse. So I don't know how to proceed with this.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Fri, 1 Nov 2019 13:33:32 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: pglz performance"
},
{
"msg_contents": "On 2019-Nov-01, Peter Eisentraut wrote:\n\n> On 2019-10-25 07:05, Andrey Borodin wrote:\n> > > 21 окт. 2019 г., в 14:09, Andrey Borodin <x4mmm@yandex-team.ru> написал(а):\n> > > \n> > > With Silesian corpus pglz_decompress_hacked is actually decreasing performance on high-entropy data.\n> > > Meanwhile pglz_decompress_hacked8 is still faster than usual pglz_decompress.\n> > > In spite of this benchmarks, I think that pglz_decompress_hacked8 is safer option.\n> > \n> > Here's v3 which takes into account recent benchmarks with Silesian Corpus and have better comments.\n> \n> Your message from 21 October appears to say that this change makes the\n> performance worse. So I don't know how to proceed with this.\n\nAs I understand that report, in these results \"less is better\", so the\nhacked8 variant shows better performance (33.8) than current (42.5).\nThe \"hacked\" variant shows worse performance (48.2) that the current\ncode. The \"in spite\" phrase seems to have been a mistake.\n\nI am surprised that there is so much variability in the performance\nnumbers, though, based on such small tweaks of the code.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Fri, 1 Nov 2019 12:48:28 -0300",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: pglz performance"
},
{
"msg_contents": "On Fri, Nov 01, 2019 at 12:48:28PM -0300, Alvaro Herrera wrote:\n>On 2019-Nov-01, Peter Eisentraut wrote:\n>\n>> On 2019-10-25 07:05, Andrey Borodin wrote:\n>> > > 21 окт. 2019 г., в 14:09, Andrey Borodin <x4mmm@yandex-team.ru> написал(а):\n>> > >\n>> > > With Silesian corpus pglz_decompress_hacked is actually decreasing performance on high-entropy data.\n>> > > Meanwhile pglz_decompress_hacked8 is still faster than usual pglz_decompress.\n>> > > In spite of this benchmarks, I think that pglz_decompress_hacked8 is safer option.\n>> >\n>> > Here's v3 which takes into account recent benchmarks with Silesian Corpus and have better comments.\n>>\n>> Your message from 21 October appears to say that this change makes the\n>> performance worse. So I don't know how to proceed with this.\n>\n>As I understand that report, in these results \"less is better\", so the\n>hacked8 variant shows better performance (33.8) than current (42.5).\n>The \"hacked\" variant shows worse performance (48.2) that the current\n>code. The \"in spite\" phrase seems to have been a mistake.\n>\n>I am surprised that there is so much variability in the performance\n>numbers, though, based on such small tweaks of the code.\n>\n\nI'd try running the benchmarks to verify the numbers, and maybe do some\nadditional tests, but it's not clear to me which patches should I use.\n\nI think the last patches with 'hacked' and 'hacked8' in the name are a\ncouple of months old, and the recent posts attach just a single patch.\nAndrey, can you post current versions of both patches?\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services \n\n\n",
"msg_date": "Fri, 1 Nov 2019 17:59:50 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: pglz performance"
},
{
"msg_contents": "> 1 нояб. 2019 г., в 18:48, Alvaro Herrera <alvherre@2ndquadrant.com> написал(а):\n> \n> On 2019-Nov-01, Peter Eisentraut wrote:\n> \n>> On 2019-10-25 07:05, Andrey Borodin wrote:\n>>>> 21 окт. 2019 г., в 14:09, Andrey Borodin <x4mmm@yandex-team.ru> написал(а):\n>>>> \n>>>> With Silesian corpus pglz_decompress_hacked is actually decreasing performance on high-entropy data.\n>>>> Meanwhile pglz_decompress_hacked8 is still faster than usual pglz_decompress.\n>>>> In spite of this benchmarks, I think that pglz_decompress_hacked8 is safer option.\n>>> \n>>> Here's v3 which takes into account recent benchmarks with Silesian Corpus and have better comments.\n>> \n>> Your message from 21 October appears to say that this change makes the\n>> performance worse. So I don't know how to proceed with this.\n> \n> As I understand that report, in these results \"less is better\", so the\n> hacked8 variant shows better performance (33.8) than current (42.5).\n> The \"hacked\" variant shows worse performance (48.2) that the current\n> code.\nThis is correct. Thanks, Álvaro.\n\n> The \"in spite\" phrase seems to have been a mistake.\nYes. Sorry, I actually thought that \"in spite\" is a contradiction of \"despite\" and means \"In view of\".\n\n> I am surprised that there is so much variability in the performance\n> numbers, though, based on such small tweaks of the code.\nSilesian Corpus is very different from WALs and PG data files. Data files are rich in long sequences of same byte. This sequences are long, thus unrolled very effectively by memcpy method.\nBut Silesian corpus is rich in short matches of few bytes.\n\n> 1 нояб. 
2019 г., в 19:59, Tomas Vondra <tomas.vondra@2ndquadrant.com> написал(а):\n> I'd try running the benchmarks to verify the numbers, and maybe do some\n> additional tests, but it's not clear to me which patches should I use.\nCool, thanks!\n\n> I think the last patches with 'hacked' and 'hacked8' in the name are a\n> couple of months old, and the recent posts attach just a single patch.\n> Andrey, can you post current versions of both patches?\nPFA two patches:\nv4-0001-Use-memcpy-in-pglz-decompression.patch (known as 'hacked' in test_pglz extension)\nv4-0001-Use-memcpy-in-pglz-decompression-for-long-matches.patch (known as 'hacked8')\n\nBest regards, Andrey Borodin.",
"msg_date": "Sat, 2 Nov 2019 14:30:22 +0300",
"msg_from": "Andrey Borodin <x4mmm@yandex-team.ru>",
"msg_from_op": true,
"msg_subject": "Re: pglz performance"
},
{
"msg_contents": "Hello Andrey,\n\nOn 2019-11-02 12:30, Andrey Borodin wrote:\n>> 1 нояб. 2019 г., в 18:48, Alvaro Herrera <alvherre@2ndquadrant.com> \n>> написал(а):\n> PFA two patches:\n> v4-0001-Use-memcpy-in-pglz-decompression.patch (known as 'hacked' in\n> test_pglz extension)\n> v4-0001-Use-memcpy-in-pglz-decompression-for-long-matches.patch (known\n> as 'hacked8')\n\nLooking at the patches, it seems only the case of a match is changed. \nBut when we observe a literal byte, this is copied byte-by-byte with:\n\n else\n {\n * An unset control bit means LITERAL BYTE. So we just\n * copy one from INPUT to OUTPUT.\n */\n *dp++ = *sp++;\n }\n\nMaybe we can optimize this, too. For instance, you could just increase a \ncounter:\n\n else\n {\n /*\n * An unset control bit means LITERAL BYTE. We count\n * these and copy them later.\n */\n literal_bytes ++;\n }\n\nand in the case of:\n\n if (ctrl & 1)\n {\n /* First copy all the literal bytes */\n if (literal_bytes > 0)\n {\n memcpy( sp, dp, literal_bytes);\n sp += literal_bytes;\n dp += literal_bytes;\n literal_bytes = 0;\n }\n\n(Code untested!)\n\nThe same would need to be done at the very end, if the input ends \nwithout any new CTRL-byte.\n\nWether that gains us anything depends on how common literal bytes are. \nIt might be that highly compressible input has almost none, while input \nthat is a mix of incompressible strings and compressible ones might have \nlonger stretches. One example would be something like an SHA-256, that \nis repeated twice. The first instance would be incompressible, the \nsecond one would be just a copy. This might not happens that often in \npractical inputs, though.\n\nI wonder if you agree and what would happen if you try this variant on \nyour corpus tests.\n\nBest regards,\n\nTels\n\n\n",
"msg_date": "Sun, 03 Nov 2019 10:24:43 +0100",
"msg_from": "Tels <nospam-pg-abuse@bloodgate.com>",
"msg_from_op": false,
"msg_subject": "Re: pglz performance"
},
{
"msg_contents": "Hi Tels!\nThanks for your interest in fast decompression.\n\n> 3 нояб. 2019 г., в 12:24, Tels <nospam-pg-abuse@bloodgate.com> написал(а):\n> \n> I wonder if you agree and what would happen if you try this variant on your corpus tests.\n\nI've tried some different optimization for literals. For example loop unrolling[0] and literals bulk-copying.\nThis approaches were brining some performance improvement. But with noise. Statistically they were somewhere better, somewhere worse, net win, but that \"net win\" depends on what we consider important data and important platform.\n\nProposed patch makes clearly decompression faster on any dataset, and platform.\nI believe improving pglz further is viable, but optimizations like common data prefix seems more promising to me.\nAlso, I think we actually need real codecs like lz4, zstd and brotli instead of our own invented wheel.\n\nIf you have some spare time - Pull Requests to test_pglz are welcome, lets benchmark more micro optimizations, it brings a lot of fun :)\n\n\n--\nAndrey Borodin\nOpen source RDBMS development team leader\nYandex.Cloud\n\n[0] https://github.com/x4m/test_pglz/blob/master/pg_lzcompress_hacked.c#L166\n\n",
"msg_date": "Mon, 4 Nov 2019 10:14:35 +0300",
"msg_from": "Andrey Borodin <x4mmm@yandex-team.ru>",
"msg_from_op": true,
"msg_subject": "Re: pglz performance"
},
{
"msg_contents": "On 2019-11-01 16:48, Alvaro Herrera wrote:\n> As I understand that report, in these results \"less is better\", so the\n> hacked8 variant shows better performance (33.8) than current (42.5).\n> The \"hacked\" variant shows worse performance (48.2) that the current\n> code.\n\nWhich appears to be the opposite of, or at least inconsistent with, \nresults earlier in the thread.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Wed, 6 Nov 2019 09:03:48 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: pglz performance"
},
{
"msg_contents": "On 2019-11-01 17:59, Tomas Vondra wrote:\n> I'd try running the benchmarks to verify the numbers, and maybe do some\n> additional tests, but it's not clear to me which patches should I use.\n> \n> I think the last patches with 'hacked' and 'hacked8' in the name are a\n> couple of months old, and the recent posts attach just a single patch.\n> Andrey, can you post current versions of both patches?\n\nOK, waiting on some independent verification of benchmark numbers.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Wed, 6 Nov 2019 09:04:25 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: pglz performance"
},
{
"msg_contents": "On Wed, Nov 06, 2019 at 09:04:25AM +0100, Peter Eisentraut wrote:\n> OK, waiting on some independent verification of benchmark numbers.\n\nStill waiting for these after 19 days, so the patch has been marked as\nreturned with feedback.\n--\nMichael",
"msg_date": "Mon, 25 Nov 2019 17:03:09 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: pglz performance"
},
{
"msg_contents": "\n\n> 25 нояб. 2019 г., в 13:03, Michael Paquier <michael@paquier.xyz> написал(а):\n> \n> On Wed, Nov 06, 2019 at 09:04:25AM +0100, Peter Eisentraut wrote:\n>> OK, waiting on some independent verification of benchmark numbers.\n> \n> Still waiting for these after 19 days, so the patch has been marked as\n> returned with feedback.\n\nI think status Needs Review describes what is going on better. It's not like something is awaited from my side.\n\nThanks.\n\nBest regards, Andrey Borodin.\n\n",
"msg_date": "Mon, 25 Nov 2019 13:21:27 +0500",
"msg_from": "Andrey Borodin <x4mmm@yandex-team.ru>",
"msg_from_op": true,
"msg_subject": "Re: pglz performance"
},
{
"msg_contents": "On Mon, Nov 25, 2019 at 01:21:27PM +0500, Andrey Borodin wrote:\n> I think status Needs Review describes what is going on better. It's\n> not like something is awaited from my side.\n\nIndeed. You are right so I have moved the patch instead, with \"Needs\nreview\". The patch status was actually incorrect in the CF app, as it\nwas marked as waiting on author.\n\n@Tomas: updated versions of the patches have been sent by Andrey. \n--\nMichael",
"msg_date": "Mon, 25 Nov 2019 17:29:40 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: pglz performance"
},
{
"msg_contents": "On Mon, Nov 25, 2019 at 05:29:40PM +0900, Michael Paquier wrote:\n>On Mon, Nov 25, 2019 at 01:21:27PM +0500, Andrey Borodin wrote:\n>> I think status Needs Review describes what is going on better. It's\n>> not like something is awaited from my side.\n>\n>Indeed. You are right so I have moved the patch instead, with \"Needs\n>review\". The patch status was actually incorrect in the CF app, as it\n>was marked as waiting on author.\n>\n>@Tomas: updated versions of the patches have been sent by Andrey.\n\nI've done benchmarks on the two last patches, using the data sets from\ntest_pglz repository [1], but using three simple queries:\n\n1) prefix - first 100 bytes of the value\n\n SELECT length(substr(value, 0, 100)) FROM t\n\n2) infix - 100 bytes from the middle\n\n SELECT length(substr(value, test_length/2, 100)) FROM t\n\n3) suffix - last 100 bytes\n\n SELECT length(substr(value, test_length - 100, 100)) FROM t\n\nSee the two attached scripts, implementing this benchmark. 
The test\nitself did a 60-second pgbench runs (single client) measuring tps on two\ndifferent machines.\n\npatch 1: v4-0001-Use-memcpy-in-pglz-decompression.patch\npatch 2: v4-0001-Use-memcpy-in-pglz-decompression-for-long-matches.patch\n\nThe results (compared to master) from the first machine (i5-2500k CPU)\nlook like this:\n\n patch 1 | patch 2\ndataset prefix infix suffix | prefix infix suffix\n-------------------------------------------------------------------------\n000000010000000000000001 99% 134% 161% | 100% 126% 152%\n000000010000000000000006 99% 260% 287% | 100% 257% 279%\n000000010000000000000008 100% 100% 100% | 100% 95% 91%\n16398 100% 168% 221% | 100% 159% 215%\nshakespeare.txt 100% 138% 141% | 100% 116% 117%\nmr 99% 120% 128% | 100% 107% 108%\ndickens 100% 129% 132% | 100% 100% 100%\nmozilla 100% 119% 120% | 100% 102% 104%\nnci 100% 149% 141% | 100% 143% 135%\nooffice 99% 121% 123% | 100% 97% 98%\nosdb 100% 99% 99% | 100% 100% 99%\nreymont 99% 130% 132% | 100% 106% 107%\nsamba 100% 126% 132% | 100% 105% 111%\nsao 100% 100% 99% | 100% 100% 100%\nwebster 100% 127% 127% | 100% 106% 106%\nx-ray 99% 99% 99% | 100% 100% 100%\nxml 100% 144% 144% | 100% 130% 128%\n\nand on the other one (xeon e5-2620v4) looks like this:\n\n patch 1 | patch 2\ndataset prefix infix suffix | prefix infix suffix\n------------------------------------------------------------------------\n000000010000000000000001 98% 147% 170% | 98% 132% 159%\n000000010000000000000006 100% 340% 314% | 98% 334% 355%\n000000010000000000000008 99% 100% 105% | 99% 99% 101%\n16398 101% 153% 205% | 99% 148% 201%\nshakespeare.txt 100% 147% 149% | 99% 117% 118%\nmr 100% 131% 139% | 99% 112% 108%\ndickens 100% 143% 143% | 99% 103% 102%\nmozilla 100% 122% 122% | 99% 105% 106%\nnci 100% 151% 135% | 100% 135% 125%\nooffice 99% 127% 129% | 98% 101% 102%\nosdb 102% 100% 101% | 102% 100% 99%\nreymont 101% 142% 143% | 100% 108% 108%\nsamba 100% 132% 136% | 99% 109% 112%\nsao 99% 101% 100% | 99% 100% 100%\nwebster 
100% 132% 129% | 100% 106% 106%\nx-ray 99% 101% 100% | 90% 101% 101%\nxml 100% 147% 148% | 100% 127% 125%\n\nIn general, I think the results for both patches seem clearly a win, but\nmaybe patch 1 is bit better, especially on the newer (xeon) CPU. So I'd\nprobably go with that one.\n\n[1] https://github.com/x4m/test_pglz\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services",
"msg_date": "Tue, 26 Nov 2019 10:43:24 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: pglz performance"
},
{
"msg_contents": "On 2019-11-26 10:43, Tomas Vondra wrote:\n> In general, I think the results for both patches seem clearly a win, but\n> maybe patch 1 is bit better, especially on the newer (xeon) CPU. So I'd\n> probably go with that one.\n\nPatch 1 is also the simpler patch, so it seems clearly preferable.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Tue, 26 Nov 2019 20:17:13 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: pglz performance"
},
{
"msg_contents": "On Tue, Nov 26, 2019 at 08:17:13PM +0100, Peter Eisentraut wrote:\n>On 2019-11-26 10:43, Tomas Vondra wrote:\n>>In general, I think the results for both patches seem clearly a win, but\n>>maybe patch 1 is bit better, especially on the newer (xeon) CPU. So I'd\n>>probably go with that one.\n>\n>Patch 1 is also the simpler patch, so it seems clearly preferable.\n>\n\nYeah, although the difference is minimal. We could probably construct a\nbenchmark where #2 wins, but I think these queries are fairly realistic.\nSo I'd just go with #1.\n\nCode-wise I think the patches are mostly fine, although the comments\nmight need some proof-reading.\n\n1) I wasn't really sure what a \"nibble\" is, but maybe it's just me and\nit's a well-known term.\n\n2) First byte use lower -> First byte uses lower\n\n3) nibble contain upper -> nibble contains upper\n\n4) to preven possible uncertanity -> to prevent possible uncertainty\n\n5) I think we should briefly explain why memmove would be incompatible\nwith pglz, it's not quite clear to me.\n\n6) I'm pretty sure the comment in the 'while (off < len)' branch will be\nbadly mangled by pgindent.\n\n7) The last change moving \"copy\" to the next line seems unnecessary.\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services \n\n\n",
"msg_date": "Tue, 26 Nov 2019 21:05:59 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: pglz performance"
},
{
"msg_contents": "On Tue, Nov 26, 2019 at 09:05:59PM +0100, Tomas Vondra wrote:\n> Yeah, although the difference is minimal. We could probably construct a\n> benchmark where #2 wins, but I think these queries are fairly realistic.\n> So I'd just go with #1.\n\nNice results. Using your benchmarks it indeed looks like patch 1 is a\nwinner here.\n\n> Code-wise I think the patches are mostly fine, although the comments\n> might need some proof-reading.\n> \n> 1) I wasn't really sure what a \"nibble\" is, but maybe it's just me and\n> it's a well-known term.\n> \n> 2) First byte use lower -> First byte uses lower\n> \n> 3) nibble contain upper -> nibble contains upper\n> \n> 4) to preven possible uncertanity -> to prevent possible uncertainty\n> \n> 5) I think we should briefly explain why memmove would be incompatible\n> with pglz, it's not quite clear to me.\n> \n> 6) I'm pretty sure the comment in the 'while (off < len)' branch will be\n> badly mangled by pgindent.\n> \n> 7) The last change moving \"copy\" to the next line seems unnecessary.\n\nPatch 1 has a typo as well here:\n+ * When offset is smaller than lengh - source and\ns/lengh/length/\n\nOkay, if we are reaching a conclusion here, Tomas or Peter, are you\nplanning to finish brushing the patch and potentially commit it?\n--\nMichael",
"msg_date": "Wed, 27 Nov 2019 17:01:47 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: pglz performance"
},
{
"msg_contents": "On Wed, Nov 27, 2019 at 05:01:47PM +0900, Michael Paquier wrote:\n>On Tue, Nov 26, 2019 at 09:05:59PM +0100, Tomas Vondra wrote:\n>> Yeah, although the difference is minimal. We could probably construct a\n>> benchmark where #2 wins, but I think these queries are fairly realistic.\n>> So I'd just go with #1.\n>\n>Nice results. Using your benchmarks it indeed looks like patch 1 is a\n>winner here.\n>\n>> Code-wise I think the patches are mostly fine, although the comments\n>> might need some proof-reading.\n>>\n>> 1) I wasn't really sure what a \"nibble\" is, but maybe it's just me and\n>> it's a well-known term.\n>>\n>> 2) First byte use lower -> First byte uses lower\n>>\n>> 3) nibble contain upper -> nibble contains upper\n>>\n>> 4) to preven possible uncertanity -> to prevent possible uncertainty\n>>\n>> 5) I think we should briefly explain why memmove would be incompatible\n>> with pglz, it's not quite clear to me.\n>>\n>> 6) I'm pretty sure the comment in the 'while (off < len)' branch will be\n>> badly mangled by pgindent.\n>>\n>> 7) The last change moving \"copy\" to the next line seems unnecessary.\n>\n>Patch 1 has a typo as well here:\n>+ * When offset is smaller than lengh - source and\n>s/lengh/length/\n>\n>Okay, if we are reaching a conclusion here, Tomas or Peter, are you\n>planning to finish brushing the patch and potentially commit it?\n\nI'd like some feedback from Andrey regarding the results and memmove\ncomment, ant then I'll polish and push.\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services \n\n\n",
"msg_date": "Wed, 27 Nov 2019 10:42:17 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: pglz performance"
},
{
"msg_contents": "Hi Tomas!\n\nThanks for benchmarking this!\n\n> 26 нояб. 2019 г., в 14:43, Tomas Vondra <tomas.vondra@2ndquadrant.com> написал(а):\n> \n> On Mon, Nov 25, 2019 at 05:29:40PM +0900, Michael Paquier wrote:\n>> On Mon, Nov 25, 2019 at 01:21:27PM +0500, Andrey Borodin wrote:\n>>> I think status Needs Review describes what is going on better. It's\n>>> not like something is awaited from my side.\n>> \n>> Indeed. You are right so I have moved the patch instead, with \"Needs\n>> review\". The patch status was actually incorrect in the CF app, as it\n>> was marked as waiting on author.\n>> \n>> @Tomas: updated versions of the patches have been sent by Andrey.\n> \n> I've done benchmarks on the two last patches, using the data sets from\n> test_pglz repository [1], but using three simple queries:\n> \n> 1) prefix - first 100 bytes of the value\n> \n> SELECT length(substr(value, 0, 100)) FROM t\n> \n> 2) infix - 100 bytes from the middle\n> \n> SELECT length(substr(value, test_length/2, 100)) FROM t\n> \n> 3) suffix - last 100 bytes\n> \n> SELECT length(substr(value, test_length - 100, 100)) FROM t\n> \n> See the two attached scripts, implementing this benchmark. 
The test\n> itself did a 60-second pgbench runs (single client) measuring tps on two\n> different machines.\n> \n> patch 1: v4-0001-Use-memcpy-in-pglz-decompression.patch\n> patch 2: v4-0001-Use-memcpy-in-pglz-decompression-for-long-matches.patch\n> \n> The results (compared to master) from the first machine (i5-2500k CPU)\n> look like this:\n> \n> patch 1 | patch 2\n> dataset prefix infix suffix | prefix infix suffix\n> -------------------------------------------------------------------------\n> 000000010000000000000001 99% 134% 161% | 100% 126% 152%\n> 000000010000000000000006 99% 260% 287% | 100% 257% 279%\n> 000000010000000000000008 100% 100% 100% | 100% 95% 91%\n> 16398 100% 168% 221% | 100% 159% 215%\n> shakespeare.txt 100% 138% 141% | 100% 116% 117%\n> mr 99% 120% 128% | 100% 107% 108%\n> dickens 100% 129% 132% | 100% 100% 100%\n> mozilla 100% 119% 120% | 100% 102% 104%\n> nci 100% 149% 141% | 100% 143% 135%\n> ooffice 99% 121% 123% | 100% 97% 98%\n> osdb 100% 99% 99% | 100% 100% 99%\n> reymont 99% 130% 132% | 100% 106% 107%\n> samba 100% 126% 132% | 100% 105% 111%\n> sao 100% 100% 99% | 100% 100% 100%\n> webster 100% 127% 127% | 100% 106% 106%\n> x-ray 99% 99% 99% | 100% 100% 100%\n> xml 100% 144% 144% | 100% 130% 128%\n> \n> and on the other one (xeon e5-2620v4) looks like this:\n> \n> patch 1 | patch 2\n> dataset prefix infix suffix | prefix infix suffix\n> ------------------------------------------------------------------------\n> 000000010000000000000001 98% 147% 170% | 98% 132% 159%\n> 000000010000000000000006 100% 340% 314% | 98% 334% 355%\n> 000000010000000000000008 99% 100% 105% | 99% 99% 101%\n> 16398 101% 153% 205% | 99% 148% 201%\n> shakespeare.txt 100% 147% 149% | 99% 117% 118%\n> mr 100% 131% 139% | 99% 112% 108%\n> dickens 100% 143% 143% | 99% 103% 102%\n> mozilla 100% 122% 122% | 99% 105% 106%\n> nci 100% 151% 135% | 100% 135% 125%\n> ooffice 99% 127% 129% | 98% 101% 102%\n> osdb 102% 100% 101% | 102% 100% 99%\n> reymont 101% 142% 143% | 100% 
108% 108%\n> samba 100% 132% 136% | 99% 109% 112%\n> sao 99% 101% 100% | 99% 100% 100%\n> webster 100% 132% 129% | 100% 106% 106%\n> x-ray 99% 101% 100% | 90% 101% 101%\n> xml 100% 147% 148% | 100% 127% 125%\n> \n> In general, I think the results for both patches seem clearly a win, but\n> maybe patch 1 is bit better, especially on the newer (xeon) CPU. So I'd\n> probably go with that one.\n\n\n\nFrom my POV there are two interesting new points in your benchmarks:\n1. They are more or lesss end-to-end benchmarks with whole system involved.\n2. They provide per-payload breakdown\n\nPrefix experiment is mostly related to reading from page cache and not directly connected with decompression. It's a bit strange that we observe 1% degradation in certain experiments, but I believe it's a noise.\nInfix and Suffix results are correlated. We observe no impact of the patch on compressed data.\n\ntest_pglz also includes slicing by 2Kb and 8Kb. This was done to imitate toasting. But as far as I understand, in your test data payload will be inserted into toast table too, won't it? If so, I agree that patch 1 looks like a better option.\n\n> 27 нояб. 
2019 г., в 1:05, Tomas Vondra <tomas.vondra@2ndquadrant.com> написал(а):\n> \n> Code-wise I think the patches are mostly fine, although the comments\n> might need some proof-reading.\n> \n> 1) I wasn't really sure what a \"nibble\" is, but maybe it's just me and\n> it's a well-known term.\nI've took the word from pg_lzcompress.c comments\n * The offset is in the upper nibble of T1 and in T2.\n * The length is in the lower nibble of T1.\n> \n> 2) First byte use lower -> First byte uses lower\n> \n> 3) nibble contain upper -> nibble contains upper\n> \n> 4) to preven possible uncertanity -> to prevent possible uncertainty\n> \n> 5) I think we should briefly explain why memmove would be incompatible\n> with pglz, it's not quite clear to me.\nHere's the example\n+\t\t\t\t\t * Consider input: 112341234123412341234\n+\t\t\t\t\t * At byte 5 here ^ we have match with length 16 and\n+\t\t\t\t\t * offset 4. 11234M(len=16, off=4)\nIf we simply memmove() this 16 bytes we will produce 112341234XXXXXXXXXXXX, where series of X is 12 undefined bytes, that were at bytes [6:18].\n\n> \n> 6) I'm pretty sure the comment in the 'while (off < len)' branch will be\n> badly mangled by pgindent.\nI think I can just write it without line limit and then run pgindent. Will try to do it this evening. Also, I will try to write more about memmove.\n> \n> 7) The last change moving \"copy\" to the next line seems unnecessary.\n\n\nOh, looks like I had been rewording this comment, and eventually came to the same text..Yes, this change is absolutely unnecessary.\n\nThanks!\n\nBest regards, Andrey Borodin.\n\n",
"msg_date": "Wed, 27 Nov 2019 17:47:25 +0500",
"msg_from": "Andrey Borodin <x4mmm@yandex-team.ru>",
"msg_from_op": true,
"msg_subject": "Re: pglz performance"
},
{
"msg_contents": "On Wed, Nov 27, 2019 at 05:47:25PM +0500, Andrey Borodin wrote:\n>Hi Tomas!\n>\n>Thanks for benchmarking this!\n>\n>> 26 нояб. 2019 г., в 14:43, Tomas Vondra <tomas.vondra@2ndquadrant.com> написал(а):\n>>\n>> On Mon, Nov 25, 2019 at 05:29:40PM +0900, Michael Paquier wrote:\n>>> On Mon, Nov 25, 2019 at 01:21:27PM +0500, Andrey Borodin wrote:\n>>>> I think status Needs Review describes what is going on better. It's\n>>>> not like something is awaited from my side.\n>>>\n>>> Indeed. You are right so I have moved the patch instead, with \"Needs\n>>> review\". The patch status was actually incorrect in the CF app, as it\n>>> was marked as waiting on author.\n>>>\n>>> @Tomas: updated versions of the patches have been sent by Andrey.\n>>\n>> I've done benchmarks on the two last patches, using the data sets from\n>> test_pglz repository [1], but using three simple queries:\n>>\n>> 1) prefix - first 100 bytes of the value\n>>\n>> SELECT length(substr(value, 0, 100)) FROM t\n>>\n>> 2) infix - 100 bytes from the middle\n>>\n>> SELECT length(substr(value, test_length/2, 100)) FROM t\n>>\n>> 3) suffix - last 100 bytes\n>>\n>> SELECT length(substr(value, test_length - 100, 100)) FROM t\n>>\n>> See the two attached scripts, implementing this benchmark. 
The test\n>> itself did a 60-second pgbench runs (single client) measuring tps on two\n>> different machines.\n>>\n>> patch 1: v4-0001-Use-memcpy-in-pglz-decompression.patch\n>> patch 2: v4-0001-Use-memcpy-in-pglz-decompression-for-long-matches.patch\n>>\n>> The results (compared to master) from the first machine (i5-2500k CPU)\n>> look like this:\n>>\n>> patch 1 | patch 2\n>> dataset prefix infix suffix | prefix infix suffix\n>> -------------------------------------------------------------------------\n>> 000000010000000000000001 99% 134% 161% | 100% 126% 152%\n>> 000000010000000000000006 99% 260% 287% | 100% 257% 279%\n>> 000000010000000000000008 100% 100% 100% | 100% 95% 91%\n>> 16398 100% 168% 221% | 100% 159% 215%\n>> shakespeare.txt 100% 138% 141% | 100% 116% 117%\n>> mr 99% 120% 128% | 100% 107% 108%\n>> dickens 100% 129% 132% | 100% 100% 100%\n>> mozilla 100% 119% 120% | 100% 102% 104%\n>> nci 100% 149% 141% | 100% 143% 135%\n>> ooffice 99% 121% 123% | 100% 97% 98%\n>> osdb 100% 99% 99% | 100% 100% 99%\n>> reymont 99% 130% 132% | 100% 106% 107%\n>> samba 100% 126% 132% | 100% 105% 111%\n>> sao 100% 100% 99% | 100% 100% 100%\n>> webster 100% 127% 127% | 100% 106% 106%\n>> x-ray 99% 99% 99% | 100% 100% 100%\n>> xml 100% 144% 144% | 100% 130% 128%\n>>\n>> and on the other one (xeon e5-2620v4) looks like this:\n>>\n>> patch 1 | patch 2\n>> dataset prefix infix suffix | prefix infix suffix\n>> ------------------------------------------------------------------------\n>> 000000010000000000000001 98% 147% 170% | 98% 132% 159%\n>> 000000010000000000000006 100% 340% 314% | 98% 334% 355%\n>> 000000010000000000000008 99% 100% 105% | 99% 99% 101%\n>> 16398 101% 153% 205% | 99% 148% 201%\n>> shakespeare.txt 100% 147% 149% | 99% 117% 118%\n>> mr 100% 131% 139% | 99% 112% 108%\n>> dickens 100% 143% 143% | 99% 103% 102%\n>> mozilla 100% 122% 122% | 99% 105% 106%\n>> nci 100% 151% 135% | 100% 135% 125%\n>> ooffice 99% 127% 129% | 98% 101% 102%\n>> osdb 102% 100% 101% | 102% 
100% 99%\n>> reymont 101% 142% 143% | 100% 108% 108%\n>> samba 100% 132% 136% | 99% 109% 112%\n>> sao 99% 101% 100% | 99% 100% 100%\n>> webster 100% 132% 129% | 100% 106% 106%\n>> x-ray 99% 101% 100% | 90% 101% 101%\n>> xml 100% 147% 148% | 100% 127% 125%\n>>\n>> In general, I think the results for both patches seem clearly a win, but\n>> maybe patch 1 is bit better, especially on the newer (xeon) CPU. So I'd\n>> probably go with that one.\n>\n>\n>\n>From my POV there are two interesting new points in your benchmarks:\n>1. They are more or lesss end-to-end benchmarks with whole system involved.\n>2. They provide per-payload breakdown\n>\n\nYes. I was considering using the test_pglz extension first, but in the\nend I decided an end-to-end test is easier to do and more relevant.\n\n>Prefix experiment is mostly related to reading from page cache and not\n>directly connected with decompression. It's a bit strange that we\n>observe 1% degradation in certain experiments, but I believe it's a\n>noise.\n>\n\nYes, I agree it's probably noise - it's not always a degradation, there\nare cases where it actually improves by ~1%. Perhaps more runs would\neven this out, or maybe it's due to different bin layout or something.\n\nI should have some results from a test with longer (10-minute) run soon,\nbut I don't think this is a massive issue.\n\n>Infix and Suffix results are correlated. We observe no impact of the\n>patch on compressed data.\n>\n\nTBH I have not looked at which data sets are compressible etc. so I\ncan't really comment on this.\n\nFWIW the reason why I did the prefix/infix/suffix is primarily that I\nwas involved in some recent patches tweaking TOAST slicing, so I wanted\nto se if this happens to negatively affect it somehow. And it does not.\n\n>test_pglz also includes slicing by 2Kb and 8Kb. This was done to\n>imitate toasting. But as far as I understand, in your test data payload\n>will be inserted into toast table too, won't it? 
If so, I agree that\n>patch 1 looks like a better option.\n>\n\nYes, the tests simply do whatever PostgreSQL would do when loading and\nstoring this data, including TOASTing.\n\n>> 27 нояб. 2019 г., в 1:05, Tomas Vondra <tomas.vondra@2ndquadrant.com> написал(а):\n>>\n>> Code-wise I think the patches are mostly fine, although the comments\n>> might need some proof-reading.\n>>\n>> 1) I wasn't really sure what a \"nibble\" is, but maybe it's just me and\n>> it's a well-known term.\n>I've took the word from pg_lzcompress.c comments\n> * The offset is in the upper nibble of T1 and in T2.\n> * The length is in the lower nibble of T1.\n\nAha, good. I haven't noticed that word before, so I assumed it's\nintroduced by those patches. And the first thing I thought of was\n\"nibbles\" video game [1]. Which obviously left me a bit puzzled ;-)\n\nBut it seems to be a well-known term, I just never heard it before.\n\n[1] https://en.wikipedia.org/wiki/Nibbles_(video_game)\n\n>>\n>> 2) First byte use lower -> First byte uses lower\n>>\n>> 3) nibble contain upper -> nibble contains upper\n>>\n>> 4) to preven possible uncertanity -> to prevent possible uncertainty\n>>\n>> 5) I think we should briefly explain why memmove would be incompatible\n>> with pglz, it's not quite clear to me.\n>Here's the example\n>+ * Consider input: 112341234123412341234\n>+ * At byte 5 here ^ we have match with length 16 and\n>+ * offset 4. 11234M(len=16, off=4)\n>\n>If we simply memmove() this 16 bytes we will produce\n>112341234XXXXXXXXXXXX, where series of X is 12 undefined bytes, that\n>were at bytes [6:18].\n>\n\nOK, thanks.\n\n>>\n>> 6) I'm pretty sure the comment in the 'while (off < len)' branch will be\n>> badly mangled by pgindent.\n>\n>I think I can just write it without line limit and then run pgindent.\n>Will try to do it this evening. 
Also, I will try to write more about\n>memmove.\n>\n>>\n>> 7) The last change moving \"copy\" to the next line seems unnecessary.\n>\n>Oh, looks like I had been rewording this comment, and eventually came\n>to the same text..Yes, this change is absolutely unnecessary.\n>\n>Thanks!\n>\n\nGood. I'll wait for an updated version of the patch and then try to get\nit pushed by the end of the CF.\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Wed, 27 Nov 2019 16:28:18 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: pglz performance"
},
{
"msg_contents": "> 27 нояб. 2019 г., в 20:28, Tomas Vondra <tomas.vondra@2ndquadrant.com> написал(а):\n> \n>>> \n>>> 6) I'm pretty sure the comment in the 'while (off < len)' branch will be\n>>> badly mangled by pgindent.\n>> \n>> I think I can just write it without line limit and then run pgindent.\n>> Will try to do it this evening. Also, I will try to write more about\n>> memmove.\nWell, yes, I could not make pgindent format some parts of that comment, gave up and left only simple text.\n>> \n>>> \n>>> 7) The last change moving \"copy\" to the next line seems unnecessary.\n>> \n>> Oh, looks like I had been rewording this comment, and eventually came\n>> to the same text..Yes, this change is absolutely unnecessary.\n>> \n>> Thanks!\n>> \n> \n> Good. I'll wait for an updated version of the patch and then try to get\n> it pushed by the end of the CF.\n\nPFA v5.\nThanks!\n\nBest regards, Andrey Borodin.",
"msg_date": "Wed, 27 Nov 2019 23:27:49 +0500",
"msg_from": "Andrey Borodin <x4mmm@yandex-team.ru>",
"msg_from_op": true,
"msg_subject": "Re: pglz performance"
},
{
"msg_contents": "On 2019-Nov-27, Andrey Borodin wrote:\n\n> \n> \n> > 27 нояб. 2019 г., в 20:28, Tomas Vondra <tomas.vondra@2ndquadrant.com> написал(а):\n> > \n> >>> \n> >>> 6) I'm pretty sure the comment in the 'while (off < len)' branch will be\n> >>> badly mangled by pgindent.\n> >> \n> >> I think I can just write it without line limit and then run pgindent.\n> >> Will try to do it this evening. Also, I will try to write more about\n> >> memmove.\n> Well, yes, I could not make pgindent format some parts of that\n> comment, gave up and left only simple text.\n\nPlease don't. The way to avoid pgindent from messing with the comment\nis to surround it with dashes, /*--------- and end it with *------- */\nJust make sure that you use well-aligned and lines shorter than 80, for\ncleanliness; but whatever you do, if you use the dashes then pgindent\nwon't touch it.\n\n(I think the closing dash line is not necessary, but it looks better for\nthings to be symmetrical.)\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Wed, 27 Nov 2019 15:41:59 -0300",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: pglz performance"
},
{
"msg_contents": "On Wed, Nov 27, 2019 at 04:28:18PM +0100, Tomas Vondra wrote:\n> Yes. I was considering using the test_pglz extension first, but in the\n> end I decided an end-to-end test is easier to do and more relevant.\n\nI actually got something in this area in one of my trees:\nhttps://github.com/michaelpq/pg_plugins/tree/master/compress_test\n\n> Good. I'll wait for an updated version of the patch and then try to get\n> it pushed by the end of the CF.\n\nSounds like a plan. Thanks.\n--\nMichael",
"msg_date": "Thu, 28 Nov 2019 11:39:42 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: pglz performance"
},
{
"msg_contents": "On Wed, Nov 27, 2019 at 11:27:49PM +0500, Andrey Borodin wrote:\n>\n>\n>> 27 нояб. 2019 г., в 20:28, Tomas Vondra <tomas.vondra@2ndquadrant.com> написал(а):\n>>\n>>>>\n>>>> 6) I'm pretty sure the comment in the 'while (off < len)' branch will be\n>>>> badly mangled by pgindent.\n>>>\n>>> I think I can just write it without line limit and then run pgindent.\n>>> Will try to do it this evening. Also, I will try to write more about\n>>> memmove.\n>Well, yes, I could not make pgindent format some parts of that comment, gave up and left only simple text.\n>>>\n>>>>\n>>>> 7) The last change moving \"copy\" to the next line seems unnecessary.\n>>>\n>>> Oh, looks like I had been rewording this comment, and eventually came\n>>> to the same text..Yes, this change is absolutely unnecessary.\n>>>\n>>> Thanks!\n>>>\n>>\n>> Good. I'll wait for an updated version of the patch and then try to get\n>> it pushed by the end of the CF.\n>\n\nOK, pushed, with some minor cosmetic tweaks on the comments (essentially\nusing the formatting trick pointed out by Alvaro), and removing one\nunnecessary change in pglz_maximum_compressed_size.\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services \n\n\n",
"msg_date": "Thu, 28 Nov 2019 23:43:56 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: pglz performance"
},
{
"msg_contents": "\n\n> 29 нояб. 2019 г., в 3:43, Tomas Vondra <tomas.vondra@2ndquadrant.com> написал(а):\n> \n> OK, pushed, with some minor cosmetic tweaks on the comments (essentially\n> using the formatting trick pointed out by Alvaro), and removing one\n> unnecessary change in pglz_maximum_compressed_size.\n\nCool, thanks!\n\n> pglz_maximum_compressed_size\nIt was an artifact of pgindent.\n\nBest regards, Andrey Borodin.\n\n",
"msg_date": "Fri, 29 Nov 2019 10:18:18 +0500",
"msg_from": "Andrey Borodin <x4mmm@yandex-team.ru>",
"msg_from_op": true,
"msg_subject": "Re: pglz performance"
},
{
"msg_contents": "On Fri, Nov 29, 2019 at 10:18:18AM +0500, Andrey Borodin wrote:\n>> 29 нояб. 2019 г., в 3:43, Tomas Vondra <tomas.vondra@2ndquadrant.com> написал(а):\n>>\n>> OK, pushed, with some minor cosmetic tweaks on the comments (essentially\n>> using the formatting trick pointed out by Alvaro), and removing one\n>> unnecessary change in pglz_maximum_compressed_size.\n> \n> Cool, thanks!\n\nYippee. Thanks.\n--\nMichael",
"msg_date": "Sun, 1 Dec 2019 11:50:54 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: pglz performance"
},
{
"msg_contents": "Hi Petr!\n\n> 4 авг. 2019 г., в 05:41, Petr Jelinek <petr@2ndquadrant.com> написал(а):\n> \n> Just so that we don't idly talk, what do you think about the attached?\n> It:\n> - adds new GUC compression_algorithm with possible values of pglz (default) and lz4 (if lz4 is compiled in), requires SIGHUP\n> - adds --with-lz4 configure option (default yes, so the configure option is actually --without-lz4) that enables the lz4, it's using system library\n\nDo you plan to work on lz4 aiming at 13 or 14?\nMaybe let's register it on CF 2020-07?\n\nBest regards, Andrey Borodin.\n\n\n",
"msg_date": "Fri, 13 Mar 2020 10:32:20 +0500",
"msg_from": "\"Andrey M. Borodin\" <x4mmm@yandex-team.ru>",
"msg_from_op": false,
"msg_subject": "Re: pglz performance"
}
]
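The thread above is about replacing byte-by-byte copies with memcpy in pglz decompression, and why a plain memcpy/memmove is wrong when the match offset is smaller than the match length: the match definition re-reads bytes that the copy itself has just produced (input "11234" plus match(len=16, off=4) must yield "112341234123412341234"). A minimal standalone sketch of that chunked-copy idea — a hypothetical helper written for illustration, not the actual pg_lzcompress.c code:

```c
#include <assert.h>
#include <string.h>

/*
 * Copy a history match of `len` bytes starting `off` bytes behind the
 * output cursor `dp`.  While off < len the regions overlap, so we copy
 * in chunks of `off` bytes: within one chunk, source [dp-off, dp) and
 * destination [dp, dp+off) are disjoint, and each completed chunk
 * doubles the distance we can safely copy in the next iteration.
 */
static void
copy_match(unsigned char *dp, int off, int len)
{
	while (off < len)
	{
		memcpy(dp, dp - off, off);	/* disjoint source and destination */
		dp += off;
		len -= off;
		off += off;			/* copied bytes now extend the usable source */
	}
	memcpy(dp, dp - off, len);	/* tail: len <= off, so no overlap remains */
}
```

With the thread's example, calling `copy_match(buf + 5, 4, 16)` on a buffer holding "11234" appends the 4-byte period four times, producing "112341234123412341234" — which a single 16-byte memmove could not do, since 12 of the 16 source bytes do not exist yet when the copy starts.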
[
{
"msg_contents": "Hello,\n\nThe process crashed when running in bootstrap mode and received signal to shutdown.\n\nFrom the call stack we can see that the transaction id is 1, which is BootstrapTransactionId.\n\nDuring TransactionLogFetch function, which fetch commit status of specified transaction id, it will return COMITTED when transcation id is BootstrapTransactionId.\n\nThen it will crash because we cannot abort transaction while it was already committed.\n\n\n(gdb) bt\n#0 0x00007f4598f02617 in raise () from /lib64/libc.so.6\n#1 0x00007f4598f03d08 in abort () from /lib64/libc.so.6\n#2 0x000000000106001d in errfinish (dummy=0) at elog.c:564\n#3 0x0000000001064788 in elog_finish (elevel=22, fmt=0x1190408 \"cannot abort transaction %u, it was already committed\") at elog.c:1385\n#4 0x0000000000633e5a in RecordTransactionAbort (isSubXact=false) at xact.c:1584\n#5 0x00000000006361fa in AbortTransaction () at xact.c:2614\n#6 0x000000000063a5b9 in AbortOutOfAnyTransaction () at xact.c:4423\n#7 0x000000000108f417 in ShutdownPostgres (code=1, arg=0) at postinit.c:1221\n#8 0x0000000000d3554a in shmem_exit (code=1) at ipc.c:239\n#9 0x0000000000d35395 in proc_exit_prepare (code=1) at ipc.c:194\n#10 0x0000000000d352bb in proc_exit (code=1) at ipc.c:107\n#11 0x000000000105ffbb in errfinish (dummy=0) at elog.c:550\n#12 0x0000000000d9a020 in ProcessInterrupts () at postgres.c:3019\n#13 0x00000000005489aa in heapgetpage (scan=0x34895f0, page=1) at heapam.c:384\n#14 0x000000000054c48d in heapgettup_pagemode (scan=0x34895f0, dir=ForwardScanDirection, nkeys=1, key=0x33cf150) at heapam.c:1052\n#15 0x000000000054e3e0 in heap_getnext (scan=0x34895f0, direction=ForwardScanDirection) at heapam.c:1850\n#16 0x00000000006fe364 in index_update_stats (rel=0x7f459b770030, hasindex=true, reltuples=0) at index.c:2167\n#17 0x00000000006ff40b in index_build (heapRelation=0x7f459b770030, indexRelation=0x7f459b704ca8, indexInfo=0x345c008, isprimary=false, isreindex=false, parallel=false) at 
index.c:2398\n#18 0x00000000006e3ae3 in build_indices () at bootstrap.c:1112\n#19 0x00000000006dbc8b in boot_yyparse () at bootparse.y:399\n#20 0x00000000006e17f4 in BootstrapModeMain () at bootstrap.c:516\n#21 0x00000000006e1578 in AuxiliaryProcessMain (argc=6, argv=0x33299b8) at bootstrap.c:436\n#22 0x0000000000aab0be in main (argc=7, argv=0x33299b0) at main.c:220\n(gdb) f 4\n#4 0x0000000000633e5a in RecordTransactionAbort (isSubXact=false) at xact.c:1584\n1584 elog(PANIC, \"cannot abort transaction %u, it was already committed\",\n(gdb) p xid\n$4 = 1\n\nI try to fix this issue and check whether it's normal transaction id before we do abort.\n\n\ndiff --git a/src/backend/access/transam/xact.c b/src/backend/access/transam/xact.c\nindex 20feeec327..dbf2bf567a 100644\n--- a/src/backend/access/transam/xact.c\n+++ b/src/backend/access/transam/xact.c\n@@ -4504,8 +4504,13 @@ RollbackAndReleaseCurrentSubTransaction(void)\n void\n AbortOutOfAnyTransaction(void)\n {\n+ TransactionId xid = GetCurrentTransactionIdIfAny();\n TransactionState s = CurrentTransactionState;\n \n+ /* Check to see if the transaction ID is a permanent one because we cannot abort it */\n+ if (!TransactionIdIsNormal(xid))\n+ return;\n+\n /* Ensure we're not running in a doomed memory context */\n AtAbort_Memory();\n\nCan we fix in this way?\n\nThanks\n\nRay",
"msg_date": "Mon, 13 May 2019 12:45:25 +0800 (CST)",
"msg_from": "Thunder <thunder1@126.com>",
"msg_from_op": true,
"msg_subject": "PANIC :Call AbortTransaction when transaction id is no normal"
},
{
"msg_contents": "Hello,\n\nOn Mon, May 13, 2019 at 10:15 AM Thunder <thunder1@126.com> wrote:\n\n> I try to fix this issue and check whether it's normal transaction id\n> before we do abort.\n>\n> diff --git a/src/backend/access/transam/xact.c\n> b/src/backend/access/transam/xact.c\n> index 20feeec327..dbf2bf567a 100644\n> --- a/src/backend/access/transam/xact.c\n> +++ b/src/backend/access/transam/xact.c\n> @@ -4504,8 +4504,13 @@ RollbackAndReleaseCurrentSubTransaction(void)\n> void\n> AbortOutOfAnyTransaction(void)\n> {\n> + TransactionId xid = GetCurrentTransactionIdIfAny();\n> TransactionState s = CurrentTransactionState;\n>\n> + /* Check to see if the transaction ID is a permanent one because\n> we cannot abort it */\n> + if (!TransactionIdIsNormal(xid))\n> + return;\n> +\n> /* Ensure we're not running in a doomed memory context */\n> AtAbort_Memory();\n>\n> Can we fix in this way?\n>\n> If we fix the issue in this way, we're certainly not going to do all those\nportal,locks,memory,resource owner cleanups that are done\ninside AbortTransaction() for a normal transaction ID. 
But, I'm not sure\nhow relevant those steps are since the database is anyway shutting down.\n\n-- \nThanks & Regards,\nKuntal Ghosh\nEnterpriseDB: http://www.enterprisedb.com",
"msg_date": "Mon, 13 May 2019 13:25:19 +0530",
"msg_from": "Kuntal Ghosh <kuntalghosh.2007@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: PANIC :Call AbortTransaction when transaction id is no normal"
},
{
"msg_contents": "On Mon, May 13, 2019 at 01:25:19PM +0530, Kuntal Ghosh wrote:\n> If we fix the issue in this way, we're certainly not going to do all\n> those portal,locks,memory,resource owner cleanups that are done\n> inside AbortTransaction() for a normal transaction ID. But, I'm not\n> sure how relevant those steps are since the database is anyway\n> shutting down.\n\nAnd it is happening in bootstrap, meaning that the data folder is most\nlikely toast, and needs to be reinitialized. TransactionLogFetch()\ntreats bootstrap and frozen XIDs as always committed, so from this\nperspective it is not wrong either to complain that this transaction\nhas already been committed when attempting to abort it. Not sure\nwhat's a more user-friendly behavior in this case though.\n--\nMichael",
"msg_date": "Mon, 13 May 2019 17:10:18 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: PANIC :Call AbortTransaction when transaction id is no normal"
},
{
"msg_contents": "Michael Paquier <michael@paquier.xyz> writes:\n> On Mon, May 13, 2019 at 01:25:19PM +0530, Kuntal Ghosh wrote:\n>> If we fix the issue in this way, we're certainly not going to do all\n>> those portal,locks,memory,resource owner cleanups that are done\n>> inside AbortTransaction() for a normal transaction ID. But, I'm not\n>> sure how relevant those steps are since the database is anyway\n>> shutting down.\n\n> And it is happening in bootstrap, meaning that the data folder is most\n> likely toast, and needs to be reinitialized.\n\nIndeed, initdb is going to remove the data directory if the bootstrap run\ncrashes.\n\nIf we do anything at all about this, my thought would just be to change\nbootstrap_signals() so that it points all the signal handlers at\nquickdie(), or maybe something equivalent to quickdie() but printing\na more apropos message, or even just set them all to SIGDFL since that\nmeans process termination for all of these. die() isn't really the right\nthing, precisely because it thinks it can trigger transaction abort,\nwhich makes no sense in bootstrap mode.\n\nBut ... that code's been like that for decades and nobody's complained\nbefore. Why are we worried about bootstrap's response to signals at all?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 13 May 2019 09:37:32 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: PANIC :Call AbortTransaction when transaction id is no normal"
},
{
"msg_contents": "On Mon, May 13, 2019 at 7:07 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Michael Paquier <michael@paquier.xyz> writes:\n> > On Mon, May 13, 2019 at 01:25:19PM +0530, Kuntal Ghosh wrote:\n> >> If we fix the issue in this way, we're certainly not going to do all\n> >> those portal,locks,memory,resource owner cleanups that are done\n> >> inside AbortTransaction() for a normal transaction ID. But, I'm not\n> >> sure how relevant those steps are since the database is anyway\n> >> shutting down.\n>\n> > And it is happening in bootstrap, meaning that the data folder is most\n> > likely toast, and needs to be reinitialized.\n>\n> Indeed, initdb is going to remove the data directory if the bootstrap run\n> crashes.\n>\n> But ... that code's been like that for decades and nobody's complained\n> before. Why are we worried about bootstrap's response to signals at all?\n>\n> +1\n\n-- \nThanks & Regards,\nKuntal Ghosh\nEnterpriseDB: http://www.enterprisedb.com",
"msg_date": "Mon, 13 May 2019 19:43:24 +0530",
"msg_from": "Kuntal Ghosh <kuntalghosh.2007@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: PANIC :Call AbortTransaction when transaction id is no normal"
},
{
"msg_contents": "I wrote:\n> If we do anything at all about this, my thought would just be to change\n> bootstrap_signals() so that it points all the signal handlers at\n> quickdie(), or maybe something equivalent to quickdie() but printing\n> a more apropos message, or even just set them all to SIGDFL since that\n> means process termination for all of these. die() isn't really the right\n> thing, precisely because it thinks it can trigger transaction abort,\n> which makes no sense in bootstrap mode.\n\nAfter further thought I like the SIG_DFL answer, as per attached proposed\npatch. With this, you get SIGINT behavior like this if you manage to\ncatch it in bootstrap mode (which is not that easy these days):\n\nselecting default max_connections ... 100\nselecting default shared_buffers ... 128MB\nselecting default timezone ... America/New_York\ncreating configuration files ... ok\nrunning bootstrap script ... ^Cchild process was terminated by signal 2: Interrupt\ninitdb: removing data directory \"/home/postgres/testversion/data\"\n\nThat seems perfectly fine from here.\n\n> But ... that code's been like that for decades and nobody's complained\n> before. Why are we worried about bootstrap's response to signals at all?\n\nI'm still wondering why the OP cares. Still, the PANIC message that you\nget right now is potentially confusing.\n\n\t\t\tregards, tom lane",
"msg_date": "Mon, 13 May 2019 14:15:10 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: PANIC :Call AbortTransaction when transaction id is no normal"
},
{
"msg_contents": "On Mon, May 13, 2019 at 09:37:32AM -0400, Tom Lane wrote:\n> But ... that code's been like that for decades and nobody's complained\n> before. Why are we worried about bootstrap's response to signals at all?\n\nYeah, I don't think that it is something worth bothering either. As\nyou mentioned the data folder would be removed by default. Or perhaps\nthe reporter has another case in mind which could justify a change in\nthe signal handlers? I am ready to hear that case, but there is\nnothing about the reason why it could be a benefit.\n\nThe patch proposed upthread is not something I find correct anyway,\nI'd rather have the abort path complain loudly about a bootstrap\ntransaction that fails instead of just ignoring it, because it is the\nkind of transaction which must never fail. And it seems to me that it\ncan be handy for development purposes.\n--\nMichael",
"msg_date": "Tue, 14 May 2019 08:53:36 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: PANIC :Call AbortTransaction when transaction id is no normal"
},
{
"msg_contents": "On our server, when a process crashes and a core dump file is generated, we receive complaining phone calls.\nThat's why I tried to fix it.\n\nAt 2019-05-14 07:53:36, \"Michael Paquier\" <michael@paquier.xyz> wrote:\n>On Mon, May 13, 2019 at 09:37:32AM -0400, Tom Lane wrote:\n>> But ... that code's been like that for decades and nobody's complained\n>> before. Why are we worried about bootstrap's response to signals at all?\n>\n>Yeah, I don't think that it is something worth bothering either. As\n>you mentioned the data folder would be removed by default. Or perhaps\n>the reporter has another case in mind which could justify a change in\n>the signal handlers? I am ready to hear that case, but there is\n>nothing about the reason why it could be a benefit.\n>\n>The patch proposed upthread is not something I find correct anyway,\n>I'd rather have the abort path complain loudly about a bootstrap\n>transaction that fails instead of just ignoring it, because it is the\n>kind of transaction which must never fail. And it seems to me that it\n>can be handy for development purposes.\n>--\n>Michael",
"msg_date": "Tue, 14 May 2019 10:56:04 +0800 (CST)",
"msg_from": "Thunder <thunder1@126.com>",
"msg_from_op": false,
"msg_subject": "Re:Re: PANIC :Call AbortTransaction when transaction id is no\n normal"
},
{
"msg_contents": "[ please don't top-post on the PG lists ]\n\nThunder <thunder1@126.com> writes:\n> At 2019-05-14 07:53:36, \"Michael Paquier\" <michael@paquier.xyz> wrote:\n>> On Mon, May 13, 2019 at 09:37:32AM -0400, Tom Lane wrote:\n>>> But ... that code's been like that for decades and nobody's complained\n>>> before. Why are we worried about bootstrap's response to signals at all?\n\n> On our server when process crash and core dump file generated we will receive complaining phone call.\n> That's why i try to fix it.\n\nOK, that's fair. The SIG_DFL change I suggested will fix that problem\nfor SIGINT etc (except SIGQUIT, for which you should be *expecting*\na core file). I agree with Michael that we do not wish to change what\nhappens for an internal error; but external signals do not represent\na bug in PG, so forcing a PANIC for those seems unwarranted.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 13 May 2019 23:28:51 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: PANIC :Call AbortTransaction when transaction id is no normal"
},
{
"msg_contents": "On Mon, May 13, 2019 at 11:28:51PM -0400, Tom Lane wrote:\n> OK, that's fair. The SIG_DFL change I suggested will fix that problem\n> for SIGINT etc (except SIGQUIT, for which you should be *expecting*\n> a core file). I agree with Michael that we do not wish to change what\n> happens for an internal error; but external signals do not represent\n> a bug in PG, so forcing a PANIC for those seems unwarranted.\n\nNo objections from here to change the signal handlers. Still, I would\nlike to understand why the bootstrap process has been signaled to\nbegin with, particularly for an initdb, which is not really something\nthat should happen on a server where an instance runs. If you have a\ntoo aggressive monitoring job, you may want to revisit that as well,\nbecause it is able to complain just with an initdb.\n--\nMichael",
"msg_date": "Tue, 14 May 2019 12:37:39 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: PANIC :Call AbortTransaction when transaction id is no normal"
},
{
"msg_contents": "Hi,\n\nOn 2019-05-14 12:37:39 +0900, Michael Paquier wrote:\n> Still, I would like to understand why the bootstrap process has been\n> signaled to begin with, particularly for an initdb, which is not\n> really something that should happen on a server where an instance\n> runs. If you have a too aggressive monitoring job, you may want to\n> revisit that as well, because it is able to complain just with an\n> initdb.\n\nShutdown, timeout, resource exhaustion all seem like possible\ncauses. Don't think any of them warrant a core file - as the OP\nexplains, that'll often trigger pages etc.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 13 May 2019 20:50:21 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: PANIC :Call AbortTransaction when transaction id is no normal"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2019-05-14 12:37:39 +0900, Michael Paquier wrote:\n>> Still, I would like to understand why the bootstrap process has been\n>> signaled to begin with, particularly for an initdb, which is not\n>> really something that should happen on a server where an instance\n>> runs. If you have a too aggressive monitoring job, you may want to\n>> revisit that as well, because it is able to complain just with an\n>> initdb.\n\n> Shutdown, timeout, resource exhaustion all seem like possible\n> causes. Don't think any of them warrant a core file - as the OP\n> explains, that'll often trigger pages etc.\n\nYeah. The case I was thinking about was mostly \"start initdb,\ndecide I didn't want to do that, hit control-C\". That cleans up\nwithout much fuss *except* if you manage to hit the window\nwhere it's running bootstrap, and then it spews this scary-looking\nerror. It's less scary-looking with the SIG_DFL patch, which\nI've now pushed.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 14 May 2019 10:25:50 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: PANIC :Call AbortTransaction when transaction id is no normal"
}
] |
[
{
"msg_contents": "pg_stat_reset_single_table_counters/pg_stat_reset_single_function_counters\nonly update the pg_stat_database column stats_reset.\nstats_reset should be updated only when all the columns are reset.\n\nsample:\ndrop database if exists lzzhang_db;\ncreate database lzzhang_db;\n\\c lzzhang_db\n\ncreate table lzzhang_tab(id int);\ninsert into lzzhang_tab values(1);\ninsert into lzzhang_tab values(1);\n\nselect tup_fetched, stats_reset from pg_stat_database where\ndatname='lzzhang_db';\nselect pg_sleep(1);\n\nselect pg_stat_reset_single_table_counters('lzzhang_tab'::regclass::oid);\nselect tup_fetched, stats_reset from pg_stat_database where\ndatname='lzzhang_db';\n\nresult:\n tup_fetched | stats_reset\n-------------+-------------------------------\n 514 | 2019-05-12 03:22:55.702753+08\n(1 row)\n tup_fetched | stats_reset\n-------------+-------------------------------\n 710 | 2019-05-12 03:22:56.729336+08\n(1 row)\ntup_fetched is not reset but stats_reset is reset.",
"msg_date": "Mon, 13 May 2019 15:30:54 +0800",
"msg_from": "=?UTF-8?B?5byg6L+e5aOu?= <lianzhuangzhang@gmail.com>",
"msg_from_op": true,
"msg_subject": "pg_stat_database update stats_reset only by pg_stat_reset"
},
{
"msg_contents": "it resets statistics for a single table and updates the column stats_reset of\npg_stat_database.\nbut i think that stats_reset should be a database-level statistic; a single\ntable should not update the column stats_reset.\n\ni am monitoring xact_commit every 5 minutes. when stats_reset is reset but\nother columns are not reset, i can't decide\nwhether to recount xact_commit, because pg_stat_reset sets every column to\nzero while pg_stat_reset_single_table_counters\nonly resets the column stats_reset.\n\n\n张连壮 <lianzhuangzhang@gmail.com> wrote on Mon, May 13, 2019 at 15:30:\n\n> pg_stat_reset_single_table_counters/pg_stat_reset_single_function_counters\n> only update the pg_stat_database column stats_reset.\n> stats_reset should be updated only when all the columns are reset.\n>\n> sample:\n> drop database if exists lzzhang_db;\n> create database lzzhang_db;\n> \\c lzzhang_db\n>\n> create table lzzhang_tab(id int);\n> insert into lzzhang_tab values(1);\n> insert into lzzhang_tab values(1);\n>\n> select tup_fetched, stats_reset from pg_stat_database where\n> datname='lzzhang_db';\n> select pg_sleep(1);\n>\n> select pg_stat_reset_single_table_counters('lzzhang_tab'::regclass::oid);\n> select tup_fetched, stats_reset from pg_stat_database where\n> datname='lzzhang_db';\n>\n> result:\n> tup_fetched | stats_reset\n> -------------+-------------------------------\n> 514 | 2019-05-12 03:22:55.702753+08\n> (1 row)\n> tup_fetched | stats_reset\n> -------------+-------------------------------\n> 710 | 2019-05-12 03:22:56.729336+08\n> (1 row)\n> tup_fetched is not reset but stats_reset is reset.\n>",
"msg_date": "Tue, 14 May 2019 17:00:24 +0800",
"msg_from": "=?UTF-8?B?5byg6L+e5aOu?= <lianzhuangzhang@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: pg_stat_database update stats_reset only by pg_stat_reset"
},
{
"msg_contents": "\t张连壮 wrote:\n\n> it reset statistics for a single table and update the column stats_reset of\n> pg_stat_database.\n> but i think that stats_reset shoud be database-level statistics, a single\n> table should not update the column stats_reset.\n\nThis patch is a current CF entry at\nhttps://commitfest.postgresql.org/23/2116/\n\nThe issue it addresses was submitted as bug #15801:\nhttps://www.postgresql.org/message-id/flat/15801-21c7fbff08b6c10c%40postgresql.org\n\nAs mentioned in the discussion on -bugs, it's not necessarily a bug\nbecause:\n\n* the comment in the code specifically states that it's intentional,\nin pgstat_recv_resetsinglecounter():\n\n\t/* Set the reset timestamp for the whole database */\n\tdbentry->stat_reset_timestamp = GetCurrentTimestamp();\n\n* the commit message also states the same:\n\ncommit 4c468b37a281941afd3bf61c782b20def8c17047\nAuthor: Magnus Hagander <magnus@hagander.net>\nDate:\tThu Feb 10 15:09:35 2011 +0100\n\n Track last time for statistics reset on databases and bgwriter\n\n Tracks one counter for each database, which is reset whenever\n the statistics for any individual object inside the database is\n reset, and one counter for the background writer.\n\n Tomas Vondra, reviewed by Greg Smith\n\n\nI can understand why you'd want that resetting the stats for a single object\nwould not reset the per-database timestamp, but this would revert a 8+ years\nold decision that seems intentional and has apparently not been criticized\nsince then (based on searching for pg_stat_reset_single_table_counters in \nthe archives) . More opinions are probably needed in favor of this\nchange (or against, in which case the fate of the patch might be a\nrejection).\n\n\nBest regards,\n-- \nDaniel Vérité\nPostgreSQL-powered mailer: http://www.manitou-mail.org\nTwitter: @DanielVerite\n\n\n",
"msg_date": "Thu, 11 Jul 2019 16:34:20 +0200",
"msg_from": "\"Daniel Verite\" <daniel@manitou-mail.org>",
"msg_from_op": false,
"msg_subject": "Re: pg_stat_database update stats_reset only by pg_stat_reset"
},
{
"msg_contents": "On Thu, Jul 11, 2019 at 04:34:20PM +0200, Daniel Verite wrote:\n> I can understand why you'd want that resetting the stats for a single object\n> would not reset the per-database timestamp, but this would revert a 8+ years\n> old decision that seems intentional and has apparently not been criticized\n> since then (based on searching for pg_stat_reset_single_table_counters in \n> the archives). More opinions are probably needed in favor of this\n> change (or against, in which case the fate of the patch might be a\n> rejection).\n\nI agree with Daniel that breaking an 8-year-old behavior may not be to\nthe taste of folks relying on the current behavior, particularly\nbecause we have not had complaints about the current behavior being\nbad. So -1 from me.\n--\nMichael",
"msg_date": "Fri, 12 Jul 2019 13:51:50 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: pg_stat_database update stats_reset only by pg_stat_reset"
},
{
"msg_contents": "On Fri, Jul 12, 2019 at 01:51:50PM +0900, Michael Paquier wrote:\n>On Thu, Jul 11, 2019 at 04:34:20PM +0200, Daniel Verite wrote:\n>> I can understand why you'd want that resetting the stats for a single object\n>> would not reset the per-database timestamp, but this would revert a 8+ years\n>> old decision that seems intentional and has apparently not been criticized\n>> since then (based on searching for pg_stat_reset_single_table_counters in\n>> the archives). More opinions are probably needed in favor of this\n>> change (or against, in which case the fate of the patch might be a\n>> rejection).\n>\n>I agree with Daniel that breaking an 8-year-old behavior may not be to\n>the taste of folks relying on the current behavior, particularly\n>because we have not had complaints about the current behavior being\n>bad. So -1 from me.\n\nYeah, I agree. There are several reasons why it's done this way:\n\n1) overhead\n\nNow we only store two timestamps - for a database and for bgwriter. We\ncould track a timestamp for each object, of course ...\n\n2) complexity\n\nUpdating the timestamps would be fairly simple, but what about querying\nthe data? Currently you fetch the data, see if the stats_reset changed\nsince the last snapshot, and if not you're good. If it changed, you know\nsome object (or the whole db) has reset counters, so you can't rely on\nthe data being consistent.\n\nIf we had stats_reset for each object, figuring out which data is still\nvalid and what has been reset would be far more complicated.\n\nBut resetting stats is not expected to be a common operation, so this\nseemed like an acceptable tradeoff (and I'd argue it still is).\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n\n",
"msg_date": "Fri, 12 Jul 2019 15:07:19 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_stat_database update stats_reset only by pg_stat_reset"
},
{
"msg_contents": "Yeah, I agree. This is not necessary; I will remove the commitfest entry at\n'2019-07-19'.\n\nTomas Vondra <tomas.vondra@2ndquadrant.com> wrote on Fri, Jul 12, 2019 at 21:07:\n\n> On Fri, Jul 12, 2019 at 01:51:50PM +0900, Michael Paquier wrote:\n> >On Thu, Jul 11, 2019 at 04:34:20PM +0200, Daniel Verite wrote:\n> >> I can understand why you'd want that resetting the stats for a single\n> object\n> >> would not reset the per-database timestamp, but this would revert a 8+\n> years\n> >> old decision that seems intentional and has apparently not been\n> criticized\n> >> since then (based on searching for pg_stat_reset_single_table_counters\n> in\n> >> the archives). More opinions are probably needed in favor of this\n> >> change (or against, in which case the fate of the patch might be a\n> >> rejection).\n> >\n> >I agree with Daniel that breaking an 8-year-old behavior may not be to\n> >the taste of folks relying on the current behavior, particularly\n> >because we have not had complaints about the current behavior being\n> >bad. So -1 from me.\n>\n> Yeah, I agree. There are several reasons why it's done this way:\n>\n> 1) overhead\n>\n> Now we only store two timestamps - for a database and for bgwriter. We\n> could track a timestamp for each object, of course ...\n>\n> 2) complexity\n>\n> Updating the timestamps would be fairly simple, but what about querying\n> the data? Currently you fetch the data, see if the stats_reset changed\n> since the last snapshot, and if not you're good. If it changed, you know\n> some object (or the whole db) has reset counters, so you can't rely on\n> the data being consistent.\n>\n> If we had stats_reset for each object, figuring out which data is still\n> valid and what has been reset would be far more complicated.\n>\n> But resetting stats is not expected to be a common operation, so this\n> seemed like an acceptable tradeoff (and I'd argue it still is).\n>\n>\n> regards\n>\n> --\n> Tomas Vondra http://www.2ndQuadrant.com\n> PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n>\n>",
"msg_date": "Wed, 17 Jul 2019 13:52:46 +0800",
"msg_from": "=?UTF-8?B?5byg6L+e5aOu?= <lianzhuangzhang@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: pg_stat_database update stats_reset only by pg_stat_reset"
}
] |
[
{
"msg_contents": "Hello Community,\n\nWhile I was searching for a logon trigger in Postgres similar to that of\nOracle, I came across \"login_hook\", which can be installed as a Postgres\ndatabase extension to mimic a logon trigger.\n\nI tried to install it but failed: the error is that it could not find its .so\nfile. Could you please help me install this login_hook?\n\nLooking forward to hearing from you. Any suggestions would be of great help!\n\nThanks & Regards,\nPavan\n\n\n\n--\nSent from: http://www.postgresql-archive.org/PostgreSQL-hackers-f1928748.html\n\n\n",
"msg_date": "Mon, 13 May 2019 04:18:26 -0700 (MST)",
"msg_from": "pavan95 <pavan.postgresdba@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: How to install login_hook in Postgres 10.5"
},
{
"msg_contents": "Hello,\n\nThis extension https://github.com/splendiddata/login_hook\nseems very interesting!\nBut I didn't test it myself, and maybe the best place to ask for\nsupport is there:\nhttps://github.com/splendiddata/login_hook/issues\n\nFor information, there is something equivalent in core:\n\"[PATCH] A hook for session start\"\nhttps://www.postgresql.org/message-id/flat/20171103164305.1f952c0f%40asp437-24-g082ur#d897fd6ec551d242493815312de5f5d5\n\nthat ended up committed:\n\"pgsql: Add hooks for session start and session end\"\nhttps://www.postgresql.org/message-id/flat/575d6fa2-78d0-4456-8600-302fc35b2591%40dunslane.net#0819e315c6e44c49a36c69080cab644d\n\nbut was finally rolled back because it didn't pass the installcheck test\n...\n\nMaybe you are able to patch your pg installation;\nit would be a solution of choice (there is a nice example\nof extension included).\n\nShowing interest for this may also help getting this feature back ;o)\n\nRegards\nPAscal",
"msg_date": "Mon, 13 May 2019 13:06:10 -0700 (MST)",
"msg_from": "legrand legrand <legrand_legrand@hotmail.com>",
"msg_from_op": false,
"msg_subject": "Re: How to install login_hook in Postgres 10.5"
},
{
"msg_contents": "On Mon, May 13, 2019 at 01:06:10PM -0700, legrand legrand wrote:\n> that finished commited\n> \"pgsql: Add hooks for session start and session end\"\n> https://www.postgresql.org/message-id/flat/575d6fa2-78d0-4456-8600-302fc35b2591%40dunslane.net#0819e315c6e44c49a36c69080cab644d\n> \n> but was finally rollbacked because it didn't pass installcheck test\n> ...\n> \n> Maybe you are able to patch your pg installation, \n> it would be a solution of choice (there is a nice exemple \n> of extension included)\n\nYou will need to patch Postgres to add this hook, and you could\nbasically reuse the patch which has been committed once. I don't\nthink that it would be that much amount of work to get it working\ncorrectly on the test side to be honest, so we may be able to get\nsomething into v13 at this stage. This is mainly a matter of\nresources though, and of folks willing to actually push for it.\n--\nMichael",
"msg_date": "Tue, 14 May 2019 09:32:04 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: How to install login_hook in Postgres 10.5"
},
{
"msg_contents": "On 5/13/19 8:32 PM, Michael Paquier wrote:\n> On Mon, May 13, 2019 at 01:06:10PM -0700, legrand legrand wrote:\n>> that finished commited\n>> \"pgsql: Add hooks for session start and session end\"\n>> https://www.postgresql.org/message-id/flat/575d6fa2-78d0-4456-8600-302fc35b2591%40dunslane.net#0819e315c6e44c49a36c69080cab644d\n>> \n>> but was finally rollbacked because it didn't pass installcheck test\n>> ...\n>> \n>> Maybe you are able to patch your pg installation, \n>> it would be a solution of choice (there is a nice exemple \n>> of extension included)\n> \n> You will need to patch Postgres to add this hook, and you could\n> basically reuse the patch which has been committed once. I don't\n> think that it would be that much amount of work to get it working\n> correctly on the test side to be honest, so we may be able to get\n> something into v13 at this stage. This is mainly a matter of\n> resources though, and of folks willing to actually push for it.\n\n\nI am interested in this, so if Andrew wants to create a buildfarm module\nI will either add it to rhinoceros or stand up another buildfarm animal\nfor it. I am also happy to help push it for v13.\n\nJoe\n\n-- \nCrunchy Data - http://crunchydata.com\nPostgreSQL Support for Secure Enterprises\nConsulting, Training, & Open Source Development",
"msg_date": "Tue, 14 May 2019 08:34:59 -0400",
"msg_from": "Joe Conway <mail@joeconway.com>",
"msg_from_op": false,
"msg_subject": "Re: How to install login_hook in Postgres 10.5"
},
{
"msg_contents": "\nOn 5/14/19 8:34 AM, Joe Conway wrote:\n> On 5/13/19 8:32 PM, Michael Paquier wrote:\n>> On Mon, May 13, 2019 at 01:06:10PM -0700, legrand legrand wrote:\n>>> that finished commited\n>>> \"pgsql: Add hooks for session start and session end\"\n>>> https://www.postgresql.org/message-id/flat/575d6fa2-78d0-4456-8600-302fc35b2591%40dunslane.net#0819e315c6e44c49a36c69080cab644d\n>>>\n>>> but was finally rollbacked because it didn't pass installcheck test\n>>> ...\n>>>\n>>> Maybe you are able to patch your pg installation, \n>>> it would be a solution of choice (there is a nice exemple \n>>> of extension included)\n>> You will need to patch Postgres to add this hook, and you could\n>> basically reuse the patch which has been committed once. I don't\n>> think that it would be that much amount of work to get it working\n>> correctly on the test side to be honest, so we may be able to get\n>> something into v13 at this stage. This is mainly a matter of\n>> resources though, and of folks willing to actually push for it.\n>\n> I am interested in this, so if Andrew wants to create a buildfarm module\n> I will either add it to rhinoceros or stand up another buildfarm animal\n> for it. I am also happy to help push it for v13.\n>\n\n\nI've just been looking at this again. I think the right way to test it\nis not to use the regression framework but to use a TAP test that would\npreload the test module. Then it would just happen as part of all the\nother TAP tests with no extra buildfarm module needed.\n\n\ncheers\n\n\nandrew\n\n\n-- \nAndrew Dunstan https://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n\n",
"msg_date": "Sat, 18 May 2019 11:51:20 -0400",
"msg_from": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: How to install login_hook in Postgres 10.5"
},
{
"msg_contents": "Hello,\n\nshouldn't we update associated commitfest entry\nhttps://commitfest.postgresql.org/15/1318/\n\nto give it a chance to be included in pg13 ?\n\nRegards\nPAscal\n\n\n\n--\nSent from: https://www.postgresql-archive.org/PostgreSQL-hackers-f1928748.html\n\n\n",
"msg_date": "Thu, 1 Aug 2019 05:01:17 -0700 (MST)",
"msg_from": "legrand legrand <legrand_legrand@hotmail.com>",
"msg_from_op": false,
"msg_subject": "Re: How to install login_hook in Postgres 10.5"
},
{
"msg_contents": "On Thu, Aug 01, 2019 at 05:01:17AM -0700, legrand legrand wrote:\n> Shouldn't we update associated commitfest entry\n> https://commitfest.postgresql.org/15/1318/\n> \n> to give it a chance to be included in pg13 ?\n\nWell, it is mainly a matter of finding somebody willing to do the\nlegwork, in which case I would let the past commit fest entry as it\nis, and create a new one once a new patch is ready to be reviewed.\n--\nMichael",
"msg_date": "Fri, 2 Aug 2019 13:07:43 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: How to install login_hook in Postgres 10.5"
},
{
"msg_contents": "pavan95 wrote\n> Hello Community,\n> \n> While I was searching for logon trigger in postgres similar to that of\n> Oracle, I came across \"login_hook\", which can be installed as a Postgres\n> database extension to mimic a logon trigger.\n> \n> But I tried to install but failed. Error is that it could not find its .so\n> file. Could you please help me in installing this login_hook ??\n> \n> Looking forward to hear from you. Any suggestions would be of great help!\n> \n> Thanks & Regards,\n> Pavan\n> \n> \n> \n> --\n> Sent from:\n> http://www.postgresql-archive.org/PostgreSQL-hackers-f1928748.html\n\nissue\nERROR: could not access file \"login_hook.so\": No such file or directory\nhas been fixed see:\nhttps://github.com/splendiddata/login_hook/issues/1\n\nRegards\nPAscal\n\n\n\n--\nSent from: https://www.postgresql-archive.org/PostgreSQL-hackers-f1928748.html\n\n\n",
"msg_date": "Sat, 5 Oct 2019 13:45:56 -0700 (MST)",
"msg_from": "legrand legrand <legrand_legrand@hotmail.com>",
"msg_from_op": false,
"msg_subject": "Re: How to install login_hook in Postgres 10.5"
}
] |
[
{
"msg_contents": "Hi Andres, Hari, David,\n\nIn the latest PostgreSQL code, I could see that we are passing\nCopyMultiInsertInfo structure to CopyMultiInsertInfoNextFreeSlot() although\nit is not being used anywhere in that function. Could you please let me\nknow if it has been done intentionally or it is just an overlook that needs\nto be corrected. AFAIU, CopyMultiInsertInfoNextFreeSlot() is just intended\nto return the next free slot available in the multi insert buffer and we\nalready have that buffer stored in ResultRelInfo structure which is also\nbeing passed to that function so not sure what could be the purpose of\npassing CopyMultiInsertInfo structure as well. Please let me know if i am\nmissing something here. Thank you.\n\n-- \nWith Regards,\nAshutosh Sharma\nEnterpriseDB:*http://www.enterprisedb.com <http://www.enterprisedb.com/>*\n\nHi Andres, Hari, David,In the latest PostgreSQL code, I could see that we are passing CopyMultiInsertInfo structure to CopyMultiInsertInfoNextFreeSlot() although it is not being used anywhere in that function. Could you please let me know if it has been done intentionally or it is just an overlook that needs to be corrected. AFAIU, CopyMultiInsertInfoNextFreeSlot() is just intended to return the next free slot available in the multi insert buffer and we already have that buffer stored in ResultRelInfo structure which is also being passed to that function so not sure what could be the purpose of passing CopyMultiInsertInfo structure as well. Please let me know if i am missing something here. Thank you.-- With Regards,Ashutosh SharmaEnterpriseDB:http://www.enterprisedb.com",
"msg_date": "Mon, 13 May 2019 16:50:35 +0530",
"msg_from": "Ashutosh Sharma <ashu.coek88@gmail.com>",
"msg_from_op": true,
"msg_subject": "Passing CopyMultiInsertInfo structure to\n CopyMultiInsertInfoNextFreeSlot()"
},
{
"msg_contents": "On Mon, 13 May 2019 at 23:20, Ashutosh Sharma <ashu.coek88@gmail.com> wrote:\n> In the latest PostgreSQL code, I could see that we are passing CopyMultiInsertInfo structure to CopyMultiInsertInfoNextFreeSlot() although it is not being used anywhere in that function. Could you please let me know if it has been done intentionally or it is just an overlook that needs to be corrected. AFAIU, CopyMultiInsertInfoNextFreeSlot() is just intended to return the next free slot available in the multi insert buffer and we already have that buffer stored in ResultRelInfo structure which is also being passed to that function so not sure what could be the purpose of passing CopyMultiInsertInfo structure as well. Please let me know if i am missing something here. Thank you.\n\nThere's likely no good reason for that. The attached removes it.\n\n-- \n David Rowley http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services",
"msg_date": "Tue, 14 May 2019 01:46:00 +1200",
"msg_from": "David Rowley <david.rowley@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Passing CopyMultiInsertInfo structure to\n CopyMultiInsertInfoNextFreeSlot()"
},
{
"msg_contents": "On Mon, May 13, 2019 at 7:16 PM David Rowley <david.rowley@2ndquadrant.com>\nwrote:\n\n> On Mon, 13 May 2019 at 23:20, Ashutosh Sharma <ashu.coek88@gmail.com>\n> wrote:\n> > In the latest PostgreSQL code, I could see that we are passing\n> CopyMultiInsertInfo structure to CopyMultiInsertInfoNextFreeSlot() although\n> it is not being used anywhere in that function. Could you please let me\n> know if it has been done intentionally or it is just an overlook that needs\n> to be corrected. AFAIU, CopyMultiInsertInfoNextFreeSlot() is just intended\n> to return the next free slot available in the multi insert buffer and we\n> already have that buffer stored in ResultRelInfo structure which is also\n> being passed to that function so not sure what could be the purpose of\n> passing CopyMultiInsertInfo structure as well. Please let me know if i am\n> missing something here. Thank you.\n>\n> There's likely no good reason for that. The attached removes it.\n>\n\nThanks for the confirmation David. The patch looks good to me.\n\n\n>\n> --\n> David Rowley http://www.2ndQuadrant.com/\n> PostgreSQL Development, 24x7 Support, Training & Services\n>\n\nOn Mon, May 13, 2019 at 7:16 PM David Rowley <david.rowley@2ndquadrant.com> wrote:On Mon, 13 May 2019 at 23:20, Ashutosh Sharma <ashu.coek88@gmail.com> wrote:\n> In the latest PostgreSQL code, I could see that we are passing CopyMultiInsertInfo structure to CopyMultiInsertInfoNextFreeSlot() although it is not being used anywhere in that function. Could you please let me know if it has been done intentionally or it is just an overlook that needs to be corrected. AFAIU, CopyMultiInsertInfoNextFreeSlot() is just intended to return the next free slot available in the multi insert buffer and we already have that buffer stored in ResultRelInfo structure which is also being passed to that function so not sure what could be the purpose of passing CopyMultiInsertInfo structure as well. Please let me know if i am missing something here. Thank you.\n\nThere's likely no good reason for that. The attached removes it.Thanks for the confirmation David. The patch looks good to me. \n\n-- \n David Rowley http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services",
"msg_date": "Mon, 13 May 2019 20:17:49 +0530",
"msg_from": "Ashutosh Sharma <ashu.coek88@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Passing CopyMultiInsertInfo structure to\n CopyMultiInsertInfoNextFreeSlot()"
},
{
"msg_contents": "On Mon, May 13, 2019 at 08:17:49PM +0530, Ashutosh Sharma wrote:\n> Thanks for the confirmation David. The patch looks good to me.\n\nIt looks to me that it can be a matter a consistency with the other\nAPIs dealing with multi-inserts in COPY. For now I have added an open\nitem on that.\n--\nMichael",
"msg_date": "Tue, 14 May 2019 10:00:27 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Passing CopyMultiInsertInfo structure to\n CopyMultiInsertInfoNextFreeSlot()"
},
{
"msg_contents": "On Tue, 14 May 2019 at 13:00, Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Mon, May 13, 2019 at 08:17:49PM +0530, Ashutosh Sharma wrote:\n> > Thanks for the confirmation David. The patch looks good to me.\n>\n> It looks to me that it can be a matter a consistency with the other\n> APIs dealing with multi-inserts in COPY. For now I have added an open\n> item on that.\n\nWhen I wrote the code I admit that I was probably wearing my\nobject-orientated programming hat. I had in mind that the whole\nfunction series would have made a good class. Passing the\nCopyMultiInsertInfo was sort of the non-OOP equivalent to having\nthis/Me/self available, as it would be for any instance method of a\nclass. Back to reality, this isn't OOP, so I was wearing the wrong\nhat. I think the unused parameter should likely be removed. It's\nprobably not doing a great deal of harm since the function is static\ninline and the compiler should be producing any code for the unused\nparam, but for the sake of preventing confusion, it should be removed.\nAshutosh had to ask about it, so it wasn't immediately clear what the\npurpose of it was. Since there's none, be gone with it, I say.\n\n-- \n David Rowley http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n\n",
"msg_date": "Tue, 14 May 2019 13:19:30 +1200",
"msg_from": "David Rowley <david.rowley@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Passing CopyMultiInsertInfo structure to\n CopyMultiInsertInfoNextFreeSlot()"
},
{
"msg_contents": "On Tue, May 14, 2019 at 01:19:30PM +1200, David Rowley wrote:\n> When I wrote the code I admit that I was probably wearing my\n> object-orientated programming hat. I had in mind that the whole\n> function series would have made a good class. Passing the\n> CopyMultiInsertInfo was sort of the non-OOP equivalent to having\n> this/Me/self available, as it would be for any instance method of a\n> class. Back to reality, this isn't OOP, so I was wearing the wrong\n> hat. I think the unused parameter should likely be removed. It's\n> probably not doing a great deal of harm since the function is static\n> inline and the compiler should be producing any code for the unused\n> param, but for the sake of preventing confusion, it should be removed.\n> Ashutosh had to ask about it, so it wasn't immediately clear what the\n> purpose of it was. Since there's none, be gone with it, I say.\n\nSounds fair to me. This has been introduced by 86b8504, so let's see\nwhat's Andres take.\n--\nMichael",
"msg_date": "Tue, 14 May 2019 11:52:38 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Passing CopyMultiInsertInfo structure to\n CopyMultiInsertInfoNextFreeSlot()"
},
{
"msg_contents": "On 2019-May-14, Michael Paquier wrote:\n\n> On Tue, May 14, 2019 at 01:19:30PM +1200, David Rowley wrote:\n> > When I wrote the code I admit that I was probably wearing my\n> > object-orientated programming hat. I had in mind that the whole\n> > function series would have made a good class. Passing the\n> > CopyMultiInsertInfo was sort of the non-OOP equivalent to having\n> > this/Me/self available, as it would be for any instance method of a\n> > class. Back to reality, this isn't OOP, so I was wearing the wrong\n> > hat. I think the unused parameter should likely be removed. It's\n> > probably not doing a great deal of harm since the function is static\n> > inline and the compiler should be producing any code for the unused\n> > param, but for the sake of preventing confusion, it should be removed.\n> > Ashutosh had to ask about it, so it wasn't immediately clear what the\n> > purpose of it was. Since there's none, be gone with it, I say.\n> \n> Sounds fair to me. This has been introduced by 86b8504, so let's see\n> what's Andres take.\n\nIf this were up to me, I'd leave the function signature alone, and just add\n\t(void) miinfo;\t\t/* unused parameter */\nto the function code. It seems perfectly reasonable to have that\nfunction argument, and a little weird not to have it.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Thu, 16 May 2019 17:40:19 -0400",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Passing CopyMultiInsertInfo structure to\n CopyMultiInsertInfoNextFreeSlot()"
},
{
"msg_contents": "On Fri, May 17, 2019 at 3:10 AM Alvaro Herrera <alvherre@2ndquadrant.com>\nwrote:\n\n> On 2019-May-14, Michael Paquier wrote:\n>\n> > On Tue, May 14, 2019 at 01:19:30PM +1200, David Rowley wrote:\n> > > When I wrote the code I admit that I was probably wearing my\n> > > object-orientated programming hat. I had in mind that the whole\n> > > function series would have made a good class. Passing the\n> > > CopyMultiInsertInfo was sort of the non-OOP equivalent to having\n> > > this/Me/self available, as it would be for any instance method of a\n> > > class. Back to reality, this isn't OOP, so I was wearing the wrong\n> > > hat. I think the unused parameter should likely be removed. It's\n> > > probably not doing a great deal of harm since the function is static\n> > > inline and the compiler should be producing any code for the unused\n> > > param, but for the sake of preventing confusion, it should be removed.\n> > > Ashutosh had to ask about it, so it wasn't immediately clear what the\n> > > purpose of it was. Since there's none, be gone with it, I say.\n> >\n> > Sounds fair to me. This has been introduced by 86b8504, so let's see\n> > what's Andres take.\n>\n> If this were up to me, I'd leave the function signature alone, and just add\n> (void) miinfo; /* unused parameter */\n> to the function code. It seems perfectly reasonable to have that\n> function argument, and a little weird not to have it.\n>\n>\nI think, we should only do that if at all there is any circumstances under\nwhich 'miinfo' might be used otherwise it would good to remove it unless\nthere is some specific reason for having it. As an example, please refer to\nthe following code in printTableAddHeader() or printTableAddCell().\n\n#ifndef ENABLE_NLS\n (void) translate; /* unused parameter */\n#endif\n\nThe function argument *translate* has been marked as unsed but only for\nnon-nls build which means it will be used if it is nls enabled build. But,\nI do not see any such requirement in our case. Please let me know if I\nmissing anything here.\n\nThanks,\n\n-- \nWith Regards,\nAshutosh Sharma\nEnterpriseDB:*http://www.enterprisedb.com <http://www.enterprisedb.com/>*\n\nOn Fri, May 17, 2019 at 3:10 AM Alvaro Herrera <alvherre@2ndquadrant.com> wrote:On 2019-May-14, Michael Paquier wrote:\n\n> On Tue, May 14, 2019 at 01:19:30PM +1200, David Rowley wrote:\n> > When I wrote the code I admit that I was probably wearing my\n> > object-orientated programming hat. I had in mind that the whole\n> > function series would have made a good class. Passing the\n> > CopyMultiInsertInfo was sort of the non-OOP equivalent to having\n> > this/Me/self available, as it would be for any instance method of a\n> > class. Back to reality, this isn't OOP, so I was wearing the wrong\n> > hat. I think the unused parameter should likely be removed. It's\n> > probably not doing a great deal of harm since the function is static\n> > inline and the compiler should be producing any code for the unused\n> > param, but for the sake of preventing confusion, it should be removed.\n> > Ashutosh had to ask about it, so it wasn't immediately clear what the\n> > purpose of it was. Since there's none, be gone with it, I say.\n> \n> Sounds fair to me. This has been introduced by 86b8504, so let's see\n> what's Andres take.\n\nIf this were up to me, I'd leave the function signature alone, and just add\n (void) miinfo; /* unused parameter */\nto the function code. It seems perfectly reasonable to have that\nfunction argument, and a little weird not to have it.\nI think, we should only do that if at all there is any circumstances under which 'miinfo' might be used otherwise it would good to remove it unless there is some specific reason for having it. As an example, please refer to the following code in printTableAddHeader() or printTableAddCell().#ifndef ENABLE_NLS (void) translate; /* unused parameter */#endifThe function argument *translate* has been marked as unsed but only for non-nls build which means it will be used if it is nls enabled build. But, I do not see any such requirement in our case. Please let me know if I missing anything here.Thanks,-- With Regards,Ashutosh SharmaEnterpriseDB:http://www.enterprisedb.com",
"msg_date": "Fri, 17 May 2019 11:09:41 +0530",
"msg_from": "Ashutosh Sharma <ashu.coek88@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Passing CopyMultiInsertInfo structure to\n CopyMultiInsertInfoNextFreeSlot()"
},
{
"msg_contents": "On 2019-05-17 11:09:41 +0530, Ashutosh Sharma wrote:\n> On Fri, May 17, 2019 at 3:10 AM Alvaro Herrera <alvherre@2ndquadrant.com>\n> wrote:\n> \n> > On 2019-May-14, Michael Paquier wrote:\n> >\n> > > On Tue, May 14, 2019 at 01:19:30PM +1200, David Rowley wrote:\n> > > > When I wrote the code I admit that I was probably wearing my\n> > > > object-orientated programming hat. I had in mind that the whole\n> > > > function series would have made a good class. Passing the\n> > > > CopyMultiInsertInfo was sort of the non-OOP equivalent to having\n> > > > this/Me/self available, as it would be for any instance method of a\n> > > > class. Back to reality, this isn't OOP, so I was wearing the wrong\n> > > > hat. I think the unused parameter should likely be removed. It's\n> > > > probably not doing a great deal of harm since the function is static\n> > > > inline and the compiler should be producing any code for the unused\n> > > > param, but for the sake of preventing confusion, it should be removed.\n> > > > Ashutosh had to ask about it, so it wasn't immediately clear what the\n> > > > purpose of it was. Since there's none, be gone with it, I say.\n> > >\n> > > Sounds fair to me. This has been introduced by 86b8504, so let's see\n> > > what's Andres take.\n> >\n> > If this were up to me, I'd leave the function signature alone, and just add\n> > (void) miinfo; /* unused parameter */\n> > to the function code. It seems perfectly reasonable to have that\n> > function argument, and a little weird not to have it.\n\nI'd do that, or simply nothing. We've plenty of unused args, so I don't\nsee much point in these kind of \"unused parameter\" warning suppressions\nin isolated places.\n\n\n> I think, we should only do that if at all there is any circumstances under\n> which 'miinfo' might be used otherwise it would good to remove it unless\n> there is some specific reason for having it.\n\nIt seems like it could entirely be reasonable to e.g. reuse slots across\npartitions, which'd require CopyMultiInsertInfo to be around.\n\nAlso, from a consistency point, it seems the caller doesn't need to know\nwhether all the necessary information is in the ResultRelInfo and not in\nCopyMultiInsertInfo for *some* of the CopyMultiInsertInfo functions.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Fri, 17 May 2019 13:04:30 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Passing CopyMultiInsertInfo structure to\n CopyMultiInsertInfoNextFreeSlot()"
},
{
"msg_contents": "On Sat, May 18, 2019 at 1:34 AM Andres Freund <andres@anarazel.de> wrote:\n\n> On 2019-05-17 11:09:41 +0530, Ashutosh Sharma wrote:\n> > On Fri, May 17, 2019 at 3:10 AM Alvaro Herrera <alvherre@2ndquadrant.com\n> >\n> > wrote:\n> >\n> > > On 2019-May-14, Michael Paquier wrote:\n> > >\n> > > > On Tue, May 14, 2019 at 01:19:30PM +1200, David Rowley wrote:\n> > > > > When I wrote the code I admit that I was probably wearing my\n> > > > > object-orientated programming hat. I had in mind that the whole\n> > > > > function series would have made a good class. Passing the\n> > > > > CopyMultiInsertInfo was sort of the non-OOP equivalent to having\n> > > > > this/Me/self available, as it would be for any instance method of a\n> > > > > class. Back to reality, this isn't OOP, so I was wearing the wrong\n> > > > > hat. I think the unused parameter should likely be removed. It's\n> > > > > probably not doing a great deal of harm since the function is\n> static\n> > > > > inline and the compiler should be producing any code for the unused\n> > > > > param, but for the sake of preventing confusion, it should be\n> removed.\n> > > > > Ashutosh had to ask about it, so it wasn't immediately clear what\n> the\n> > > > > purpose of it was. Since there's none, be gone with it, I say.\n> > > >\n> > > > Sounds fair to me. This has been introduced by 86b8504, so let's see\n> > > > what's Andres take.\n> > >\n> > > If this were up to me, I'd leave the function signature alone, and\n> just add\n> > > (void) miinfo; /* unused parameter */\n> > > to the function code. It seems perfectly reasonable to have that\n> > > function argument, and a little weird not to have it.\n>\n> I'd do that, or simply nothing. We've plenty of unused args, so I don't\n> see much point in these kind of \"unused parameter\" warning suppressions\n> in isolated places.\n>\n>\n> > I think, we should only do that if at all there is any circumstances\n> under\n> > which 'miinfo' might be used otherwise it would good to remove it unless\n> > there is some specific reason for having it.\n>\n> It seems like it could entirely be reasonable to e.g. reuse slots across\n> partitions, which'd require CopyMultiInsertInfo to be around.\n>\n>\nConsidering that we can have MAX_BUFFERED_TUPLES slots in each multi-insert\nbuffer and we do flush the buffer after MAX_BUFFERED_TUPLES tuples have\nbeen stored, it seems unlikely that we would ever come across a situation\nwhere one partition would need to reuse the slot of another partition.\n\nAlso, from a consistency point, it seems the caller doesn't need to know\n> whether all the necessary information is in the ResultRelInfo and not in\n> CopyMultiInsertInfo for *some* of the CopyMultiInsertInfo functions.\n>\n>\nI actually feel that the function name itself is not correct here, it\nappears to be confusing and inconsistent considering the kind of work that\nit is doing. I think, the function name should have been CopyMultiInsert\n*Buffer*NextFreeSlot() instead of CopyMultiInsert*Info*NextFreeSlot(). What\ndo you think, Andres, David, Alvaro ?\n\nThanks,\n\n-- \nWith Regards,\nAshutosh Sharma\nEnterpriseDB:*http://www.enterprisedb.com <http://www.enterprisedb.com/>*\n\nOn Sat, May 18, 2019 at 1:34 AM Andres Freund <andres@anarazel.de> wrote:On 2019-05-17 11:09:41 +0530, Ashutosh Sharma wrote:\n> On Fri, May 17, 2019 at 3:10 AM Alvaro Herrera <alvherre@2ndquadrant.com>\n> wrote:\n> \n> > On 2019-May-14, Michael Paquier wrote:\n> >\n> > > On Tue, May 14, 2019 at 01:19:30PM +1200, David Rowley wrote:\n> > > > When I wrote the code I admit that I was probably wearing my\n> > > > object-orientated programming hat. I had in mind that the whole\n> > > > function series would have made a good class. Passing the\n> > > > CopyMultiInsertInfo was sort of the non-OOP equivalent to having\n> > > > this/Me/self available, as it would be for any instance method of a\n> > > > class. Back to reality, this isn't OOP, so I was wearing the wrong\n> > > > hat. I think the unused parameter should likely be removed. It's\n> > > > probably not doing a great deal of harm since the function is static\n> > > > inline and the compiler should be producing any code for the unused\n> > > > param, but for the sake of preventing confusion, it should be removed.\n> > > > Ashutosh had to ask about it, so it wasn't immediately clear what the\n> > > > purpose of it was. Since there's none, be gone with it, I say.\n> > >\n> > > Sounds fair to me. This has been introduced by 86b8504, so let's see\n> > > what's Andres take.\n> >\n> > If this were up to me, I'd leave the function signature alone, and just add\n> > (void) miinfo; /* unused parameter */\n> > to the function code. It seems perfectly reasonable to have that\n> > function argument, and a little weird not to have it.\n\nI'd do that, or simply nothing. We've plenty of unused args, so I don't\nsee much point in these kind of \"unused parameter\" warning suppressions\nin isolated places.\n\n\n> I think, we should only do that if at all there is any circumstances under\n> which 'miinfo' might be used otherwise it would good to remove it unless\n> there is some specific reason for having it.\n\nIt seems like it could entirely be reasonable to e.g. reuse slots across\npartitions, which'd require CopyMultiInsertInfo to be around.\nConsidering that we can have MAX_BUFFERED_TUPLES slots in each multi-insert buffer and we do flush the buffer after MAX_BUFFERED_TUPLES tuples have been stored, it seems unlikely that we would ever come across a situation where one partition would need to reuse the slot of another partition.\nAlso, from a consistency point, it seems the caller doesn't need to know\nwhether all the necessary information is in the ResultRelInfo and not in\nCopyMultiInsertInfo for *some* of the CopyMultiInsertInfo functions.\nI actually feel that the function name itself is not correct here, it appears to be confusing and inconsistent considering the kind of work that it is doing. I think, the function name should have been CopyMultiInsertBufferNextFreeSlot() instead of CopyMultiInsertInfoNextFreeSlot(). What do you think, Andres, David, Alvaro ?Thanks,-- With Regards,Ashutosh SharmaEnterpriseDB:http://www.enterprisedb.com",
"msg_date": "Sat, 18 May 2019 06:14:15 +0530",
"msg_from": "Ashutosh Sharma <ashu.coek88@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Passing CopyMultiInsertInfo structure to\n CopyMultiInsertInfoNextFreeSlot()"
},
{
"msg_contents": "Hi,\n\nOn 2019-05-18 06:14:15 +0530, Ashutosh Sharma wrote:\n> On Sat, May 18, 2019 at 1:34 AM Andres Freund <andres@anarazel.de> wrote:\n> Considering that we can have MAX_BUFFERED_TUPLES slots in each multi-insert\n> buffer and we do flush the buffer after MAX_BUFFERED_TUPLES tuples have\n> been stored, it seems unlikely that we would ever come across a situation\n> where one partition would need to reuse the slot of another partition.\n\nI don't think this is right. Obviously it'd not be without a bit more\nchanges, but we definitely *should* try to reuse slots from other\npartitions (including the root partition if compatible). Creating them\nisn't that cheap, compared to putting slots onto a freelist for a wihle.\n\n\n> Also, from a consistency point, it seems the caller doesn't need to know\n> > whether all the necessary information is in the ResultRelInfo and not in\n> > CopyMultiInsertInfo for *some* of the CopyMultiInsertInfo functions.\n> >\n> >\n> I actually feel that the function name itself is not correct here, it\n> appears to be confusing and inconsistent considering the kind of work that\n> it is doing. I think, the function name should have been CopyMultiInsert\n> *Buffer*NextFreeSlot() instead of CopyMultiInsert*Info*NextFreeSlot(). What\n> do you think, Andres, David, Alvaro ?\n\nUnless somebody else presses back hard against doing so *soon*, I'm\ngoing to close this open issue. I don't think it's worth spending\nfurther time arguing about a few characters.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Fri, 17 May 2019 17:49:47 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Passing CopyMultiInsertInfo structure to\n CopyMultiInsertInfoNextFreeSlot()"
},
{
"msg_contents": "On Sat, 18 May 2019 at 12:49, Andres Freund <andres@anarazel.de> wrote:\n>\n> On 2019-05-18 06:14:15 +0530, Ashutosh Sharma wrote:\n> > I actually feel that the function name itself is not correct here, it\n> > appears to be confusing and inconsistent considering the kind of work that\n> > it is doing. I think, the function name should have been CopyMultiInsert\n> > *Buffer*NextFreeSlot() instead of CopyMultiInsert*Info*NextFreeSlot(). What\n> > do you think, Andres, David, Alvaro ?\n>\n> Unless somebody else presses back hard against doing so *soon*, I'm\n> going to close this open issue. I don't think it's worth spending\n> further time arguing about a few characters.\n\nI'd say if we're not going to bother removing the unused param that\nthere's not much point in renaming the function. The proposed name\nmight make sense if the function was:\n\nstatic inline TupleTableSlot *\nCopyMultiInsertBufferNextFreeSlot(CopyMultiInsertBuffer *buffer, Relation rel)\n\nthen that might be worth a commit, but giving it that name without\nchanging the signature to that does not seem like an improvement to\nme.\n\nI'm personally about +0.1 for making the above change, which is well\nbelow my threshold for shouting and screaming.\n\n-- \n David Rowley http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n\n",
"msg_date": "Sat, 18 May 2019 13:14:07 +1200",
"msg_from": "David Rowley <david.rowley@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Passing CopyMultiInsertInfo structure to\n CopyMultiInsertInfoNextFreeSlot()"
},
{
"msg_contents": "David Rowley <david.rowley@2ndquadrant.com> writes:\n> On Sat, 18 May 2019 at 12:49, Andres Freund <andres@anarazel.de> wrote:\n>> Unless somebody else presses back hard against doing so *soon*, I'm\n>> going to close this open issue. I don't think it's worth spending\n>> further time arguing about a few characters.\n\n> I'd say if we're not going to bother removing the unused param that\n> there's not much point in renaming the function.\n\nFWIW, I'm on the side of \"we shouldn't change this\". There's lots of\nunused parameters in PG functions, and in most of those cases the API\nis reasonable as it stands.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 17 May 2019 21:30:07 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Passing CopyMultiInsertInfo structure to\n CopyMultiInsertInfoNextFreeSlot()"
},
{
"msg_contents": "On Sat, May 18, 2019 at 6:44 AM David Rowley <david.rowley@2ndquadrant.com>\nwrote:\n\n> On Sat, 18 May 2019 at 12:49, Andres Freund <andres@anarazel.de> wrote:\n> >\n> > On 2019-05-18 06:14:15 +0530, Ashutosh Sharma wrote:\n> > > I actually feel that the function name itself is not correct here, it\n> > > appears to be confusing and inconsistent considering the kind of work\n> that\n> > > it is doing. I think, the function name should have been\n> CopyMultiInsert\n> > > *Buffer*NextFreeSlot() instead of CopyMultiInsert*Info*NextFreeSlot().\n> What\n> > > do you think, Andres, David, Alvaro ?\n> >\n> > Unless somebody else presses back hard against doing so *soon*, I'm\n> > going to close this open issue. I don't think it's worth spending\n> > further time arguing about a few characters.\n>\n> I'd say if we're not going to bother removing the unused param that\n> there's not much point in renaming the function. The proposed name\n> might make sense if the function was:\n>\n> static inline TupleTableSlot *\n> CopyMultiInsertBufferNextFreeSlot(CopyMultiInsertBuffer *buffer, Relation\n> rel)\n>\n>\nWell, that's what I suggested but seems like Andres is not in favour of it\nbecause he feels that in the future we *should* add a facility to reuse the\nslots across the partitions because reusing a free slot is quite cheaper\nthan creating a new one and in that sense we would in future need to pass\nmiinfo to *NextFreeSlot function\n\n\n> then that might be worth a commit, but giving it that name without\n> changing the signature to that does not seem like an improvement to\n> me.\n>\n>\nThat's right, we cannot have this name without changing it's signature.\n\n\n> I'm personally about +0.1 for making the above change, which is well\n> below my threshold for shouting and screaming.\n>\n>\nI think, as Andres pointed out in his earlier reply, we should probably\nstop this discussion here, if in future we add the support to reuse the\nslots across the 
partition then probably we will have to undo the changes\nthat we will be doing here. Anyways, thanks to Andres and David for\nclearing my doubts.\n\n\n> --\n> David Rowley http://www.2ndQuadrant.com/\n> PostgreSQL Development, 24x7 Support, Training & Services\n>",
"msg_date": "Sat, 18 May 2019 07:01:05 +0530",
"msg_from": "Ashutosh Sharma <ashu.coek88@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Passing CopyMultiInsertInfo structure to\n CopyMultiInsertInfoNextFreeSlot()"
}
] |
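The signature David sketches in the thread (a `CopyMultiInsertBufferNextFreeSlot()` that takes only the buffer), together with Andres's point that creating slots is expensive compared to caching them for reuse, can be illustrated with a small stand-alone sketch. Everything below is a simplified, hypothetical stand-in invented for illustration — the struct layouts, `make_slot()`, and the slot-pool bookkeeping are not the real PostgreSQL definitions, which live in `src/backend/commands/copy.c` and the executor headers.

```c
#include <assert.h>
#include <stddef.h>

#define MAX_BUFFERED_TUPLES 1000

/* Hypothetical, stripped-down stand-in for the executor's slot type. */
typedef struct TupleTableSlot
{
    int dummy;                  /* a real slot holds a tuple's values */
} TupleTableSlot;

/* Hypothetical stand-in for the per-partition multi-insert buffer. */
typedef struct CopyMultiInsertBuffer
{
    TupleTableSlot *slots[MAX_BUFFERED_TUPLES]; /* created lazily */
    int nused;                  /* slots currently holding buffered tuples */
} CopyMultiInsertBuffer;

static TupleTableSlot slot_pool[MAX_BUFFERED_TUPLES];
static int slots_created = 0;

/* Stand-in for slot creation: this is the "expensive" step Andres notes
 * is worth avoiding by keeping already-created slots around for reuse. */
static TupleTableSlot *
make_slot(void)
{
    assert(slots_created < MAX_BUFFERED_TUPLES);
    return &slot_pool[slots_created++];
}

/* The shape debated above: the buffer alone identifies the next free
 * slot, so no CopyMultiInsertInfo parameter is needed.  A slot is created
 * only the first time a buffer position is reached; after a flush resets
 * nused, the cached slots are handed out again instead of being rebuilt. */
static TupleTableSlot *
CopyMultiInsertBufferNextFreeSlot(CopyMultiInsertBuffer *buffer)
{
    int nused = buffer->nused;

    assert(nused < MAX_BUFFERED_TUPLES);
    if (buffer->slots[nused] == NULL)
        buffer->slots[nused] = make_slot();
    return buffer->slots[nused];
}
```

A caller would fetch a slot, fill it, and bump `nused`; flushing resets `nused` to zero while leaving the cached slots in place. That intra-buffer caching is the same idea, one level down, as the cross-partition freelist Andres argues the code should eventually grow.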