[ { "msg_contents": "Currently there's no good way that I'm aware of for monitoring\nsoftware to check what the xmin horizon is being blocked at. You can\ncheck pg_stat_replication and pg_replication_slots and\ntxid_snapshot_xmin(txid_current_snapshot()) and so on, but that list\ncan grow, and it means monitoring setups need to update any time any\nnew feature might hold another snapshot and expose it in a different\nway.\n\nTo my knowledge the current oldest xmin (GetOldestXmin() if I'm not\nmistaken) isn't exposed directly in any view or function by Postgres.\n\nAm I missing anything in the above description? And if not, would\nthere be any reason why we would want to avoid exposing that\ninformation? And if not, then would exposing it as a function be\nacceptable?\n\nThanks,\nJames\n\n\n", "msg_date": "Wed, 1 Apr 2020 17:12:41 -0400", "msg_from": "James Coleman <jtc331@gmail.com>", "msg_from_op": true, "msg_subject": "Proposal: Expose oldest xmin as SQL function for monitoring" }, { "msg_contents": "James Coleman <jtc331@gmail.com> writes:\n> To my knowledge the current oldest xmin (GetOldestXmin() if I'm not\n> mistaken) isn't exposed directly in any view or function by Postgres.\n\nYou could do something like\n\nselect max(age(backend_xmin)) from pg_stat_activity;\n\nthough I'm not sure whether that accounts for absolutely every process.\n\n> Am I missing anything in the above description? And if not, would\n> there be any reason why we would want to avoid exposing that\n> information? And if not, then would exposing it as a function be\n> acceptable?\n\nThe fact that I had to use max(age(...)) in that sample query\nhints at one reason: it's really hard to do arithmetic correctly\non raw XIDs. Dealing with wraparound is a problem, and knowing\nwhat's past or future is even harder. 
What use-case do you\nforesee exactly?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 01 Apr 2020 17:45:02 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Proposal: Expose oldest xmin as SQL function for monitoring" }, { "msg_contents": "On 2020-Apr-01, Tom Lane wrote:\n\n> James Coleman <jtc331@gmail.com> writes:\n> > To my knowledge the current oldest xmin (GetOldestXmin() if I'm not\n> > mistaken) isn't exposed directly in any view or function by Postgres.\n> \n> You could do something like\n> \n> select max(age(backend_xmin)) from pg_stat_activity;\n> \n> though I'm not sure whether that accounts for absolutely every process.\n>\n> > Am I missing anything in the above description? And if not, would\n> > there be any reason why we would want to avoid exposing that\n> > information? And if not, then would exposing it as a function be\n> > acceptable?\n> \n> The fact that I had to use max(age(...)) in that sample query\n> hints at one reason: it's really hard to do arithmetic correctly\n> on raw XIDs. Dealing with wraparound is a problem, and knowing\n> what's past or future is even harder. What use-case do you\n> foresee exactly?\n\nMaybe it would make sense to start exposing fullXids in these views and\nfunctions, for this reason. There's no good reason to continue to\nexpose bare Xids to userspace, we should use them only for storage.\n\nBut I think James' point is precisely that it's not easy to know where\nto look for things that keep Xmin from advancing. Currently it's\nbackends, replication slots, prepared transactions, and replicas with\nhot_standby_feedback. 
If you forget to monitor just one of these, your\nvacuums might be useless and you won't notice until disaster strikes.\n\n\nMaybe a useful value to publish in some monitoring view is\nRecentGlobalXmin -- which has a valid value when reading a view, since\nyou had to acquire a snapshot to read the view in the first place.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Wed, 1 Apr 2020 18:58:31 -0300", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Proposal: Expose oldest xmin as SQL function for monitoring" }, { "msg_contents": "On Wed, Apr 1, 2020 at 5:58 PM Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\n>\n> On 2020-Apr-01, Tom Lane wrote:\n>\n> > James Coleman <jtc331@gmail.com> writes:\n> > > To my knowledge the current oldest xmin (GetOldestXmin() if I'm not\n> > > mistaken) isn't exposed directly in any view or function by Postgres.\n> >\n> > You could do something like\n> >\n> > select max(age(backend_xmin)) from pg_stat_activity;\n> >\n> > though I'm not sure whether that accounts for absolutely every process.\n\nThat doesn't account for replication slots that aren't active, right?\n\n> > > Am I missing anything in the above description? And if not, would\n> > > there be any reason why we would want to avoid exposing that\n> > > information? And if not, then would exposing it as a function be\n> > > acceptable?\n> >\n> > The fact that I had to use max(age(...)) in that sample query\n> > hints at one reason: it's really hard to do arithmetic correctly\n> > on raw XIDs. Dealing with wraparound is a problem, and knowing\n> > what's past or future is even harder. What use-case do you\n> > foresee exactly?\n\nSo the use case we've encountered multiple times is some (at that\npoint unknown) process or object that's preventing the xmin from\nadvancing, and thus blocking vacuum. 
That kind of situation can pretty\nquickly lead to query plans that can result in significant business\nimpact.\n\nAs such, it'd be helpful to be able to monitor something like \"how old\nis the current xmin on the cluster\". Ideally it would also tell you\nwhat process or object is holding that xmin, but starting with the\nxmin itself would at least alert to the problem so you can\ninvestigate.\n\nOn that note, for this particular use case it would be sufficient to\nhave something like pg_timestamp_of_oldest_xmin() (given we have\ncommit timestamps tracking enabled) or even a function returning the\nnumber of xids consumed since the oldest xmin, but it seems more\nbroadly useful to provide a function that gives the oldest xmin and\nallow users to build on top of that.\n\n> Maybe it would make sense to start exposing fullXids in these views and\n> functions, for this reason. There's no good reason to continue to\n> expose bare Xids to userspace, we should use them only for storage.\n\nThis would be useful too (and for more reasons than the above).\n\n> But I think James' point is precisely that it's not easy to know where\n> to look for things that keep Xmin from advancing. Currently it's\n> backends, replication slots, prepared transactions, and replicas with\n> hot_standby_feedback. 
If you forget to monitor just one of these, your\n> vacuums might be useless and you won't notice until disaster strikes.\n>\n>\n> Maybe a useful value to publish in some monitoring view is\n> RecentGlobalXmin -- which has a valid value when reading a view, since\n> you had to acquire a snapshot to read the view in the first place.\n\nIf we went down that path what view do you think would be best -- an\nexisting one or a new one?\n\nI go back and forth on whether this is best exposed as a monitoring\noriented view or as part of the suite of txid functions we already\nhave that seem to have broader applicability.\n\nJames\n\n\n", "msg_date": "Wed, 1 Apr 2020 18:40:31 -0400", "msg_from": "James Coleman <jtc331@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Proposal: Expose oldest xmin as SQL function for monitoring" }, { "msg_contents": "Alvaro Herrera <alvherre@2ndquadrant.com> writes:\n> On 2020-Apr-01, Tom Lane wrote:\n>> The fact that I had to use max(age(...)) in that sample query\n>> hints at one reason: it's really hard to do arithmetic correctly\n>> on raw XIDs. Dealing with wraparound is a problem, and knowing\n>> what's past or future is even harder. What use-case do you\n>> foresee exactly?\n\n> Maybe it would make sense to start exposing fullXids in these views and\n> functions, for this reason. There's no good reason to continue to\n> expose bare Xids to userspace, we should use them only for storage.\n\n+1, that would help a lot.\n\n> But I think James' point is precisely that it's not easy to know where\n> to look for things that keep Xmin from advancing. Currently it's\n> backends, replication slots, prepared transactions, and replicas with\n> hot_standby_feedback. If you forget to monitor just one of these, your\n> vacuums might be useless and you won't notice until disaster strikes.\n\nAgreed, but just knowing what the oldest xmin is doesn't help you\nfind *where* it is. 
Maybe what we need is a view showing all of\nthese potential sources of an old xmin.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 01 Apr 2020 19:57:32 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Proposal: Expose oldest xmin as SQL function for monitoring" }, { "msg_contents": "On Thu, 2 Apr 2020 at 07:57, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Alvaro Herrera <alvherre@2ndquadrant.com> writes:\n> > On 2020-Apr-01, Tom Lane wrote:\n> >> The fact that I had to use max(age(...)) in that sample query\n> >> hints at one reason: it's really hard to do arithmetic correctly\n> >> on raw XIDs. Dealing with wraparound is a problem, and knowing\n> >> what's past or future is even harder. What use-case do you\n> >> foresee exactly?\n>\n> > Maybe it would make sense to start exposing fullXids in these views and\n> > functions, for this reason. There's no good reason to continue to\n> > expose bare Xids to userspace, we should use them only for storage.\n>\n> +1, that would help a lot.\n>\n> > But I think James' point is precisely that it's not easy to know where\n> > to look for things that keep Xmin from advancing. Currently it's\n> > backends, replication slots, prepared transactions, and replicas with\n> > hot_standby_feedback. If you forget to monitor just one of these, your\n> > vacuums might be useless and you won't notice until disaster strikes.\n>\n> Agreed, but just knowing what the oldest xmin is doesn't help you\n> find *where* it is. Maybe what we need is a view showing all of\n> these potential sources of an old xmin.\n\n\n Strongly agree.\n\nhttps://www.postgresql.org/message-id/CAMsr+YGSS6JBHmEHbxqMdc1XJ7sobDSq62YwaEkOHN-KBQYr-A@mail.gmail.com\n\nI was aiming to write such a view, but folks seemed opposed. 
I still think\nit'd be a very good thing to have built-in as Pg grows more complex.\n\n\n-- \n Craig Ringer http://www.2ndQuadrant.com/\n 2ndQuadrant - PostgreSQL Solutions for the Enterprise", "msg_date": "Thu, 2 Apr 2020 12:13:01 +0800", "msg_from": "Craig Ringer <craig@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Proposal: Expose oldest xmin as SQL function for monitoring" }, { "msg_contents": "On Thu, Apr 2, 2020 at 12:13 AM Craig Ringer <craig@2ndquadrant.com> wrote:\n>\n>\n>\n>\n> On Thu, 2 Apr 2020 at 07:57, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>>\n>> Alvaro Herrera <alvherre@2ndquadrant.com> writes:\n>> > On 2020-Apr-01, Tom Lane wrote:\n>> >> The fact that I had to use max(age(...)) in that sample query\n>> >> hints at one reason: it's really hard to do arithmetic correctly\n>> >> on raw XIDs. Dealing with wraparound is a problem, and knowing\n>> >> what's past or future is even harder. What use-case do you\n>> >> foresee exactly?\n>>\n>> > Maybe it would make sense to start exposing fullXids in these views and\n>> > functions, for this reason. There's no good reason to continue to\n>> > expose bare Xids to userspace, we should use them only for storage.\n>>\n>> +1, that would help a lot.\n>>\n>> > But I think James' point is precisely that it's not easy to know where\n>> > to look for things that keep Xmin from advancing. Currently it's\n>> > backends, replication slots, prepared transactions, and replicas with\n>> > hot_standby_feedback. If you forget to monitor just one of these, your\n>> > vacuums might be useless and you won't notice until disaster strikes.\n>>\n>> Agreed, but just knowing what the oldest xmin is doesn't help you\n>> find *where* it is. 
Maybe what we need is a view showing all of\n>> these potential sources of an old xmin.\n>\n>\n> Strongly agree.\n>\n> https://www.postgresql.org/message-id/CAMsr+YGSS6JBHmEHbxqMdc1XJ7sobDSq62YwaEkOHN-KBQYr-A@mail.gmail.com\n>\n> I was aiming to write such a view, but folks seemed opposed. I still think it'd be a very good thing to have built-in as Pg grows more complex.\n\nDid you by any chance prototype anything/are you still interested?\n\nThis sounds extremely valuable to me, and while I don't want to\nresurrect the old thread (it seemed like a bit of a tangent there\nanyway), in my view this kind of basic diagnostic capability is\nexactly the kind of thing that *has* to be in core, and then other\nmonitoring packages can take advantage of it.\n\nFinding things holding back xmin from advancing is easily one of the\nsingle biggest operational things we care about. We need to\ninvestigate quickly when an issue occurs, so being able to do so\ndirectly on the server (and having it be up-to-date with any new\nfeatures as they're released) is essential. And it's also one of the\nareas where in my experience tracking things down is the hardest [with\ncapabilities in core]; you basically need to have this list in your\nhead of all of the things you need to check.\n\nJames\n\n\n", "msg_date": "Thu, 2 Apr 2020 11:04:20 -0400", "msg_from": "James Coleman <jtc331@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Proposal: Expose oldest xmin as SQL function for monitoring" }, { "msg_contents": "Hi,\n\nOn 2020-04-01 19:57:32 -0400, Tom Lane wrote:\n> Alvaro Herrera <alvherre@2ndquadrant.com> writes:\n> > On 2020-Apr-01, Tom Lane wrote:\n> >> The fact that I had to use max(age(...)) in that sample query\n> >> hints at one reason: it's really hard to do arithmetic correctly\n> >> on raw XIDs. Dealing with wraparound is a problem, and knowing\n> >> what's past or future is even harder. 
What use-case do you\n> >> foresee exactly?\n>\n> > Maybe it would make sense to start exposing fullXids in these views and\n> > functions, for this reason. There's no good reason to continue to\n> > expose bare Xids to userspace, we should use them only for storage.\n>\n> +1, that would help a lot.\n\nI agree.\n\n\n> > But I think James' point is precisely that it's not easy to know where\n> > to look for things that keep Xmin from advancing. Currently it's\n> > backends, replication slots, prepared transactions, and replicas with\n> > hot_standby_feedback. If you forget to monitor just one of these, your\n> > vacuums might be useless and you won't notice until disaster strikes.\n>\n> Agreed, but just knowing what the oldest xmin is doesn't help you\n> find *where* it is. Maybe what we need is a view showing all of\n> these potential sources of an old xmin.\n\n+1. This would be extermely useful. It's a very common occurance to\nhave to ask for a number of nontrivial queries when debugging xmin\nrelated bloat issues.\n\nThere's the slight complexity that one of the various xmin horizons is\ndatabase specific...\n\nWhich different xmin horizons, and which sources do we have? 
I can think\nof:\n\n- global xmin horizon from backends (for shared tables)\n- per-database xmin horizon from backends (for local tables)\n- catalog xmin horizon (from logical replication slots)\n- data xmin horizon (from physical replication slots)\n- streaming replication xmin horizon\n\n\nHaving a view that lists something like:\n\n- shared xmin horizon\n- pid of backend with oldest xmin across all backends\n\n- database xmin horizon of current database\n- pid of backend with oldest xmin in current database\n\n- catalog xmin of oldest slot by catalog xmin\n- name of oldest slot by catalog xmin\n\n- data xmin of oldest slot by data xmin\n- name of oldest slot by data xmin\n\n- xid of oldest prepared transaction\n- gid of oldest prepared transaction\n- database of oldest transaction?\n\n- xmin of oldest walsender with hot_standby_feedback active\n- pid of oldest ...\n\nwould be awesome. I think it'd make sense to also add the database with\nthe oldest datfrozenxid, the current database's relation with the oldest\nrelfrozenxid.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Thu, 2 Apr 2020 10:50:28 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Proposal: Expose oldest xmin as SQL function for monitoring" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2020-04-01 19:57:32 -0400, Tom Lane wrote:\n>> Agreed, but just knowing what the oldest xmin is doesn't help you\n>> find *where* it is. Maybe what we need is a view showing all of\n>> these potential sources of an old xmin.\n\n> +1. This would be extremely useful. 
It's a very common occurrence to\n> have to ask for a number of nontrivial queries when debugging xmin\n> related bloat issues.\n\n> Having a view that lists something like:\n\n> - shared xmin horizon\n> - pid of backend with oldest xmin across all backends\n\nI was envisioning a view that would show you *all* the active processes\nand their related xmins, then more entries for all active replication\nslots, prepared xacts, etc etc. Picking out the ones causing trouble is\nthen the user's concern. If the XID column is actually fullXid then\nsorting, aggregating, etc. is easy.\n\nThe point about database-local vs not is troublesome. Maybe two\nsuch views would be needed?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 02 Apr 2020 15:15:44 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Proposal: Expose oldest xmin as SQL function for monitoring" } ]
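[Editor's note: the thread above converges on the idea that xmin holders are scattered across several catalogs. Until a built-in "xmin sources" view exists, the sources the participants list can be approximated with a union query. The sketch below is illustrative, not the proposed feature: it uses only columns that exist in stock PostgreSQL (pg_stat_activity.backend_xmin, pg_replication_slots.xmin and catalog_xmin, pg_prepared_xacts.transaction, pg_stat_replication.backend_xmin) and does not capture every internal horizon Andres enumerates, e.g. the shared vs. per-database distinction.]

```sql
-- Sketch of a combined "what is holding back xmin" query, assembled from
-- the sources named in this thread.  age() converts raw xids into
-- "transactions ago", sidestepping the wraparound arithmetic Tom warns about.
SELECT source, holder, xmin, age(xmin) AS xmin_age
FROM (
    SELECT 'backend'::text AS source, pid::text AS holder, backend_xmin AS xmin
      FROM pg_stat_activity
     WHERE backend_xmin IS NOT NULL
    UNION ALL
    SELECT 'replication slot', slot_name::text, xmin
      FROM pg_replication_slots
     WHERE xmin IS NOT NULL
    UNION ALL
    SELECT 'replication slot (catalog)', slot_name::text, catalog_xmin
      FROM pg_replication_slots
     WHERE catalog_xmin IS NOT NULL
    UNION ALL
    SELECT 'prepared transaction', gid, transaction
      FROM pg_prepared_xacts
    UNION ALL
    SELECT 'walsender', pid::text, backend_xmin
      FROM pg_stat_replication
     WHERE backend_xmin IS NOT NULL
) AS xmin_sources
ORDER BY age(xmin) DESC;
```

The worst offender sorts first; as Alvaro notes, forgetting even one of these sources in external monitoring is exactly the failure mode a single built-in view would eliminate.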
[ { "msg_contents": "Hi,\n\nIn thread [1], we are discussing to expose WAL usage data for each\nstatement in a way quite similar to how we expose BufferUsage data.\nThe way it exposes seems reasonable to me and no one else raises any\nobjection. It could be that it appears fine to others who have\nreviewed the patch but I thought it would be a good idea to write a\nseparate email just for its UI and see if anybody has objection.\n\nIt exposes three variables (a) wal_records (Number of WAL records\nproduced), (b) wal_num_fpw (Number of WAL full page image records),\n(c) wal_bytes (size of WAL records produced).\n\nThe patch has exposed these three variables via explain (analyze, wal)\n<statement>, auto_explain and pg_stat_statements.\n\nExposed via Explain\n------------------------------------\nNote the usage via line displaying WAL. This parameter may only be\nused when ANALYZE is also enabled.\n\npostgres=# explain (analyze, buffers, wal) update t1 set c2='cccc';\n QUERY PLAN\n-----------------------------------------------------------------------------------------------------------\n Update on t1 (cost=0.00..53.99 rows=1199 width=414) (actual\ntime=6.030..6.030 rows=0 loops=1)\n Buffers: shared hit=2484 dirtied=44\n WAL: records=2359 full page records=42 bytes=447788\n -> Seq Scan on t1 (cost=0.00..53.99 rows=1199 width=414) (actual\ntime=0.040..0.540 rows=1199 loops=1)\n Buffers: shared hit=42\n Planning Time: 0.179 ms\n Execution Time: 6.119 ms\n(7 rows)\n\nExposed via auto_explain\n------------------------------------------\nUsers need to set auto_explain.log_wal to print WAL usage statistics.\nThis parameter has no effect unless auto_explain.log_analyze is\nenabled. 
Note the usage via line displaying WAL.\n\nLOG: duration: 0.632 ms plan:\nQuery Text: update t1 set c2='cccc';\nUpdate on t1 (cost=0.00..16.10 rows=610 width=414) (actual\ntime=0.629..0.629 rows=0 loops=1)\n Buffers: shared hit=206 dirtied=5 written=2\n WAL: records=200 full page records=2 bytes=37387\n -> Seq Scan on t1 (cost=0.00..16.10 rows=610 width=414) (actual\ntime=0.022..0.069 rows=100 loops=1)\n Buffers: shared hit=2 dirtied=1\n\nExposed via pg_stat_statements\n------------------------------------------------\nThree new parameters are added to pg_stat_statements function.\n\nselect query, wal_bytes, wal_records, wal_num_fpw from\npg_stat_statements where query like 'VACUUM%';\n query | wal_bytes | wal_records | wal_num_fpw\n--------------------------+-----------+-------------+-------------\n VACUUM test | 72814331 | 8857 | 8855\n\nAny objections/suggestions?\n\n[1] - https://www.postgresql.org/message-id/CAB-hujrP8ZfUkvL5OYETipQwA%3De3n7oqHFU%3D4ZLxWS_Cza3kQQ%40mail.gmail.com\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 2 Apr 2020 10:13:18 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": true, "msg_subject": "User Interface for WAL usage data" }, { "msg_contents": "On Thu, Apr 02, 2020 at 10:13:18AM +0530, Amit Kapila wrote:\n> In thread [1], we are discussing to expose WAL usage data for each\n> statement in a way quite similar to how we expose BufferUsage data.\n> The way it exposes seems reasonable to me and no one else raises any\n> objection. 
It could be that it appears fine to others who have\n> reviewed the patch but I thought it would be a good idea to write a\n> separate email just for its UI and see if anybody has objection.\n\n+1\n\nRegarding v10-0004-Add-option-to-report-WAL-usage-in-EXPLAIN-and-au.patch:\nI think there should be additional spaces before \"full\" and before \"bytes\":\n\n> WAL: records=2359 full page records=42 bytes=447788\n\nCompare with these:\n\n\t \"Sort Method: %s %s: %ldkB\\n\",\n\t \"Buckets: %d (originally %d) Batches: %d (originally %d) Memory Usage: %ldkB\\n\",\n\t \"Buckets: %d Batches: %d Memory Usage: %ldkB\\n\",\n\nOtherwise \"records=2359 full page records=42\" is hard to parse.\n\n> Exposed via auto_explain\n> WAL: records=200 full page records=2 bytes=37387\n\nSame\n\nIn v10-0002:\n+\t * BufferUsage and WalUsage during executing maintenance command can be\nshould say \"during execution of a maintenance command\".\nI'm afraid that'll cause merge conflicts for you :(\n\nIn 0003:\n+\t/* Provide WAL update data to the instrumentation */\nRemove \"data\" ??\n\n-- \nJustin\n\n\n", "msg_date": "Thu, 2 Apr 2020 00:41:20 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: User Interface for WAL usage data" }, { "msg_contents": "At Thu, 2 Apr 2020 00:41:20 -0500, Justin Pryzby <pryzby@telsasoft.com> wrote in \n> On Thu, Apr 02, 2020 at 10:13:18AM +0530, Amit Kapila wrote:\n> > In thread [1], we are discussing to expose WAL usage data for each\n> > statement in a way quite similar to how we expose BufferUsage data.\n> > The way it exposes seems reasonable to me and no one else raises any\n> > objection. 
It could be that it appears fine to others who have\n> > reviewed the patch but I thought it would be a good idea to write a\n> > separate email just for its UI and see if anybody has objection.\n> \n> +1\n> \n> Regarding v10-0004-Add-option-to-report-WAL-usage-in-EXPLAIN-and-au.patch:\n> I think there should be additional spaces before \"full\" and before \"bytes\":\n> \n> > WAL: records=2359 full page records=42 bytes=447788\n> \n> Compare with these:\n> \n> \t \"Sort Method: %s %s: %ldkB\\n\",\n> \t \"Buckets: %d (originally %d) Batches: %d (originally %d) Memory Usage: %ldkB\\n\",\n> \t \"Buckets: %d Batches: %d Memory Usage: %ldkB\\n\",\n> \n> Otherwise \"records=2359 full page records=42\" is hard to parse.\n\nI got the same feeling seeing the line.\n\n\"full page records\" seems to be showing the number of full page\nimages, not the record having full page images.\n\n> > Exposed via auto_explain\n> > WAL: records=200 full page records=2 bytes=37387\n> \n> Same\n> \n> In v10-0002:\n> +\t * BufferUsage and WalUsage during executing maintenance command can be\n> should say \"during execution of a maintenance command\".\n> I'm afraid that'll cause merge conflicts for you :(\n> \n> In 0003:\n> +\t/* Provide WAL update data to the instrumentation */\n> Remove \"data\" ??\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Thu, 02 Apr 2020 14:58:26 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: User Interface for WAL usage data" }, { "msg_contents": "On Thu, Apr 2, 2020 at 11:28 AM Kyotaro Horiguchi\n<horikyota.ntt@gmail.com> wrote:\n>\n> At Thu, 2 Apr 2020 00:41:20 -0500, Justin Pryzby <pryzby@telsasoft.com> wrote in\n> > On Thu, Apr 02, 2020 at 10:13:18AM +0530, Amit Kapila wrote:\n> > > In thread [1], we are discussing to expose WAL usage data for each\n> > > statement in a way quite similar to how we expose BufferUsage data.\n> > > The way it exposes seems reasonable 
to me and no one else raises any\n> > > objection. It could be that it appears fine to others who have\n> > > reviewed the patch but I thought it would be a good idea to write a\n> > > separate email just for its UI and see if anybody has objection.\n> >\n> > +1\n> >\n> > Regarding v10-0004-Add-option-to-report-WAL-usage-in-EXPLAIN-and-au.patch:\n> > I think there should be additional spaces before \"full\" and before \"bytes\":\n> >\n> > > WAL: records=2359 full page records=42 bytes=447788\n> >\n> > Compare with these:\n> >\n> > \"Sort Method: %s %s: %ldkB\\n\",\n> > \"Buckets: %d (originally %d) Batches: %d (originally %d) Memory Usage: %ldkB\\n\",\n> > \"Buckets: %d Batches: %d Memory Usage: %ldkB\\n\",\n> >\n> > Otherwise \"records=2359 full page records=42\" is hard to parse.\n>\n> I got the same feeling seeing the line.\n>\n\nBut isn't this same as we have BufferUsage data? We can probably\ndisplay it as full_page_writes or something like that.\n\n> \"full page records\" seems to be showing the number of full page\n> images, not the record having full page images.\n>\n\nI am not sure what exactly is a difference but it is the records\nhaving full page images. 
Julien correct me if I am wrong.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 2 Apr 2020 11:32:16 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": true, "msg_subject": "Re: User Interface for WAL usage data" }, { "msg_contents": "Thanks all for the feedback.\n\nOn Thu, Apr 2, 2020 at 8:02 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Thu, Apr 2, 2020 at 11:28 AM Kyotaro Horiguchi\n> <horikyota.ntt@gmail.com> wrote:\n> >\n> > At Thu, 2 Apr 2020 00:41:20 -0500, Justin Pryzby <pryzby@telsasoft.com> wrote in\n> > > On Thu, Apr 02, 2020 at 10:13:18AM +0530, Amit Kapila wrote:\n> > > > In thread [1], we are discussing to expose WAL usage data for each\n> > > > statement in a way quite similar to how we expose BufferUsage data.\n> > > > The way it exposes seems reasonable to me and no one else raises any\n> > > > objection. It could be that it appears fine to others who have\n> > > > reviewed the patch but I thought it would be a good idea to write a\n> > > > separate email just for its UI and see if anybody has objection.\n> > >\n> > > +1\n> > >\n> > > Regarding v10-0004-Add-option-to-report-WAL-usage-in-EXPLAIN-and-au.patch:\n> > > I think there should be additional spaces before \"full\" and before \"bytes\":\n> > >\n> > > > WAL: records=2359 full page records=42 bytes=447788\n> > >\n> > > Compare with these:\n> > >\n> > > \"Sort Method: %s %s: %ldkB\\n\",\n> > > \"Buckets: %d (originally %d) Batches: %d (originally %d) Memory Usage: %ldkB\\n\",\n> > > \"Buckets: %d Batches: %d Memory Usage: %ldkB\\n\",\n> > >\n> > > Otherwise \"records=2359 full page records=42\" is hard to parse.\n> >\n> > I got the same feeling seeing the line.\n> >\n>\n> But isn't this same as we have BufferUsage data? 
We can probably\n> display it as full_page_writes or something like that.\n>\n> > \"full page records\" seems to be showing the number of full page\n> > images, not the record having full page images.\n> >\n>\n> I am not sure what exactly is a difference but it is the records\n> having full page images. Julien correct me if I am wrong.\n\nThis counter should be showing the number of full page image included\nin the WAL record(s). The goal is to try to estimate how much FPI are\namplifying WAL records for a given cumulated size of WAL data. I had\nseen some pretty high amplification recently due to an autovacuum\nfreeze. Using pg_waldump to analyze a sample of ~ 100GB of WALs\nshowed that a 1.5% increase in the number of freeze records lead to\nmore than 15% increase on the total amount of WAL, due to a high\namount of those records being FPW.\n\nAlso note that the patchset Amit is referencing adds the same\ninstrumentation for vacuum (verbose) and autovacuum, although this\npart is showing the intended \"full page write\" label rather than the\nbogus \"full page records\". 
Example output:\n\nLOG: automatic vacuum of table \"rjuju.public.t1\": index scans: 0\n pages: 0 removed, 2213 remain, 0 skipped due to pins, 0 skipped frozen\n tuples: 250000 removed, 250000 remain, 0 are dead but not yet\nremovable, oldest xmin: 502\n buffer usage: 4448 hits, 4 misses, 4 dirtied\n avg read rate: 0.160 MB/s, avg write rate: 0.160 MB/s\n system usage: CPU: user: 0.13 s, system: 0.00 s, elapsed: 0.19 s\n WAL usage: 6643 records, 4 full page writes, 1402679 bytes\n\nVACUUM log sample:\n\n# vacuum VERBOSE t1;\nINFO: vacuuming \"public.t1\"\nINFO: \"t1\": removed 50000 row versions in 443 pages\nINFO: \"t1\": found 50000 removable, 0 nonremovable row versions in 443\nout of 443 pages\nDETAIL: 0 dead row versions cannot be removed yet, oldest xmin: 512\nThere were 50000 unused item identifiers.\nSkipped 0 pages due to buffer pins, 0 frozen pages.\n0 pages are entirely empty.\n1332 WAL records, 4 WAL full page writes, 306901 WAL bytes\nCPU: user: 0.01 s, system: 0.00 s, elapsed: 0.01 s.\nINFO: \"t1\": truncated 443 to 0 pages\nDETAIL: CPU: user: 0.00 s, system: 0.00 s, elapsed: 0.00 s\nINFO: vacuuming \"pg_toast.pg_toast_16385\"\nINFO: index \"pg_toast_16385_index\" now contains 0 row versions in 1 pages\nDETAIL: 0 index row versions were removed.\n0 index pages have been deleted, 0 are currently reusable.\nCPU: user: 0.00 s, system: 0.00 s, elapsed: 0.00 s.\nINFO: \"pg_toast_16385\": found 0 removable, 0 nonremovable row\nversions in 0 out of 0 pages\nDETAIL: 0 dead row versions cannot be removed yet, oldest xmin: 513\nThere were 0 unused item identifiers.\nSkipped 0 pages due to buffer pins, 0 frozen pages.\n0 pages are entirely empty.\n0 WAL records, 0 WAL full page writes, 0 WAL bytes\nCPU: user: 0.00 s, system: 0.00 s, elapsed: 0.00 s.\nVACUUM\n\nObviously previous complaints about the meaning and parsability of\n\"full page writes\" should be addressed here for consistency.\n\n\n", "msg_date": "Thu, 2 Apr 2020 08:29:31 +0200", "msg_from": "Julien 
Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: User Interface for WAL usage data" }, { "msg_contents": "On Thu, Apr 02, 2020 at 11:32:16AM +0530, Amit Kapila wrote:\n> On Thu, Apr 2, 2020 at 11:28 AM Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote:\n> >\n> > At Thu, 2 Apr 2020 00:41:20 -0500, Justin Pryzby <pryzby@telsasoft.com> wrote in\n> > > Regarding v10-0004-Add-option-to-report-WAL-usage-in-EXPLAIN-and-au.patch:\n> > > I think there should be additional spaces before \"full\" and before \"bytes\":\n> > >\n> > > > WAL: records=2359 full page records=42 bytes=447788\n> > >\n> > > Compare with these:\n> > >\n> > > \"Sort Method: %s %s: %ldkB\\n\",\n> > > \"Buckets: %d (originally %d) Batches: %d (originally %d) Memory Usage: %ldkB\\n\",\n> > > \"Buckets: %d Batches: %d Memory Usage: %ldkB\\n\",\n> > >\n> > > Otherwise \"records=2359 full page records=42\" is hard to parse.\n> >\n> > I got the same feeling seeing the line.\n> \n> But isn't this same as we have BufferUsage data? We can probably\n> display it as full_page_writes or something like that.\n\nI guess you mean this:\n Buffers: shared hit=994 read=11426 dirtied=466\n\nWhich can show shared/local/temp. Actually I would probably make the same\nsuggestion for \"Buffers\" (if it were a new patch). I would find this to be\npretty unfriendly output:\n\n Buffers: shared hit=12345 read=12345 dirtied=12345 local hit=12345 read=12345 dirtied=12345 temp hit=12345 read=12345 dirtied=12345\n\nAdding two extra spaces \" local\" and \" temp\" would have helped there, so\nwould commas, or parenthesis, dashes or almost anything - other than a\nbackslash.\n\nSo I think you're right that WAL is very similar to the Buffers case, but I\nsuggest that's not a good example to follow, especially since you're adding a\n\"field\" with spaces in it.\n\nI thought maybe the \"two spaces\" convention predated \"Buffers\". But sort has\nhad two spaces since it was added 2009-08-10 (9bd27b7c9). 
Buffers since it was\nadded 2009-12-15 (cddca5ec). And buckets since it was added 2010-02-01\n(42a8ab0a). \n\n-- \nJustin\n\n\n", "msg_date": "Thu, 2 Apr 2020 01:35:40 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: User Interface for WAL usage data" }, { "msg_contents": "On Thu, Apr 2, 2020 at 12:05 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n>\n> On Thu, Apr 02, 2020 at 11:32:16AM +0530, Amit Kapila wrote:\n> > On Thu, Apr 2, 2020 at 11:28 AM Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote:\n> > >\n> > > At Thu, 2 Apr 2020 00:41:20 -0500, Justin Pryzby <pryzby@telsasoft.com> wrote in\n> > > > Regarding v10-0004-Add-option-to-report-WAL-usage-in-EXPLAIN-and-au.patch:\n> > > > I think there should be additional spaces before \"full\" and before \"bytes\":\n> > > >\n> > > > > WAL: records=2359 full page records=42 bytes=447788\n> > > >\n> > > > Compare with these:\n> > > >\n> > > > \"Sort Method: %s %s: %ldkB\\n\",\n> > > > \"Buckets: %d (originally %d) Batches: %d (originally %d) Memory Usage: %ldkB\\n\",\n> > > > \"Buckets: %d Batches: %d Memory Usage: %ldkB\\n\",\n> > > >\n> > > > Otherwise \"records=2359 full page records=42\" is hard to parse.\n> > >\n> > > I got the same feeling seeing the line.\n> >\n> > But isn't this same as we have BufferUsage data? We can probably\n> > display it as full_page_writes or something like that.\n>\n> I guess you mean this:\n> Buffers: shared hit=994 read=11426 dirtied=466\n>\n> Which can show shared/local/temp. Actually I would probably make the same\n> suggestion for \"Buffers\" (if it were a new patch). 
I would find this to be\n> pretty unfriendly output:\n>\n> Buffers: shared hit=12345 read=12345 dirtied=12345 local hit=12345 read=12345 dirtied=12345 temp hit=12345 read=12345 dirtied=12345\n>\n> Adding two extra spaces \" local\" and \" temp\" would have helped there, so\n> would commas, or parenthesis, dashes or almost anything - other than a\n> backslash.\n>\n> So I think you're right that WAL is very similar to the Buffers case, but I\n> suggest that's not a good example to follow, especially since you're adding a\n> \"field\" with spaces in it.\n>\n\nAgreed, this is a good reason to keep two spaces as we have in cases\nof Buckets. We can display the variable as \"full page writes\".\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 3 Apr 2020 08:10:12 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": true, "msg_subject": "Re: User Interface for WAL usage data" }, { "msg_contents": "> > > > > Regarding v10-0004-Add-option-to-report-WAL-usage-in-EXPLAIN-and-au.patch:\n> > > > > I think there should be additional spaces before \"full\" and before \"bytes\":\n> > > > >\n> > > > > > WAL: records=2359 full page records=42 bytes=447788\n> > > > >\n> > > > > Compare with these:\n> > > > >\n> > > > > \"Sort Method: %s %s: %ldkB\\n\",\n> > > > > \"Buckets: %d (originally %d) Batches: %d (originally %d) Memory Usage: %ldkB\\n\",\n> > > > > \"Buckets: %d Batches: %d Memory Usage: %ldkB\\n\",\n> > > > >\n> > > > > Otherwise \"records=2359 full page records=42\" is hard to parse.\n> > > >\n> > > > I got the same feeling seeing the line.\n> > >\n> > > But isn't this same as we have BufferUsage data? We can probably\n> > > display it as full_page_writes or something like that.\n> >\n> > I guess you mean this:\n> > Buffers: shared hit=994 read=11426 dirtied=466\n> >\n> > Which can show shared/local/temp. 
Actually I would probably make the same\n> > suggestion for \"Buffers\" (if it were a new patch). I would find this to be\n> > pretty unfriendly output:\n> >\n> > Buffers: shared hit=12345 read=12345 dirtied=12345 local hit=12345 read=12345 dirtied=12345 temp hit=12345 read=12345 dirtied=12345\n> >\n> > Adding two extra spaces \" local\" and \" temp\" would have helped there, so\n> > would commas, or parenthesis, dashes or almost anything - other than a\n> > backslash.\n> >\n> > So I think you're right that WAL is very similar to the Buffers case[...]\n\nActually, I take that back. If I understand correctly, there should be two\nspaces not only to make the 2nd field more clear, but because the three values\nhave different units:\n\n> > > > > > WAL: records=2359 full page records=42 bytes=447788\n\n1) records; 2) pages (\"full page images\"); 3) bytes\n\nThat is exactly like sort (method/type/size) and hash (buckets/batches/size),\nand *not* like buffers, which shows various values all in units of \"pages\".\n\n-- \nJustin\n\n\n", "msg_date": "Fri, 3 Apr 2020 00:11:12 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: User Interface for WAL usage data" }, { "msg_contents": "On Fri, Apr 3, 2020 at 10:41 AM Justin Pryzby <pryzby@telsasoft.com> wrote:\n>\n> > > > > > > WAL: records=2359 full page records=42 bytes=447788\n>\n> 1) records; 2) pages (\"full page images\"); 3) bytes\n>\n> That is exactly like sort (method/type/size) and hash (buckets/batches/size),\n> and *not* like buffers, which shows various values all in units of \"pages\".\n>\n\nThe way you have written (2) appears to bit awkward. 
I would prefer\n\"full page writes\" or \"full page images\".\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 3 Apr 2020 10:52:02 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": true, "msg_subject": "Re: User Interface for WAL usage data" }, { "msg_contents": "On Fri, Apr 03, 2020 at 10:52:02AM +0530, Amit Kapila wrote:\n> On Fri, Apr 3, 2020 at 10:41 AM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> >\n> > > > > > > > WAL: records=2359 full page records=42 bytes=447788\n> >\n> > 1) records; 2) pages (\"full page images\"); 3) bytes\n> >\n> > That is exactly like sort (method/type/size) and hash (buckets/batches/size),\n> > and *not* like buffers, which shows various values all in units of \"pages\".\n> >\n> \n> The way you have written (2) appears to bit awkward. I would prefer\n> \"full page writes\" or \"full page images\".\n\nI didn't mean it to be the description used in the patch or anywhere else, just\nthe list of units.\n\nI wonder if it should use colons instead of equals ? As in:\n| WAL: Records: 2359 Full Page Images: 42 Size: 437kB\n\nNote, that has: 1) two spaces; 2) capitalized \"fields\"; 3) size rather than\n\"bytes\". That's similar to Buckets:\n| Buckets: 1024 Batches: 1 Memory Usage: 44kB\n\nI'm not sure if it should say \"WAL: \" or \"WAL \", or perhaps \"WAL: \" If\nthere's no colon, then it looks like the first field is \"WAL Records\", but then\n\"size\" isn't as tightly associated with WAL. It could say:\n| WAL Records: n Full Page Images: n WAL Size: nkB\n\nFor comparison, buffers uses \"equals\" for the case showing multiple \"fields\",\nwhich are all in units of pages:\n| Buffers: shared hit=15 read=2006\n\nAlso, for now, the output can be in kB, but I think in the future we should\ntake a recent suggestion from Andres to make an ExplainPropertyBytes() which\nhandles conversion to and display of a reasonable unit. 
\n\n-- \nJustin\n\n\n", "msg_date": "Fri, 3 Apr 2020 00:44:51 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: User Interface for WAL usage data" }, { "msg_contents": "On Fri, Apr 3, 2020 at 11:14 AM Justin Pryzby <pryzby@telsasoft.com> wrote:\n>\n> On Fri, Apr 03, 2020 at 10:52:02AM +0530, Amit Kapila wrote:\n> > On Fri, Apr 3, 2020 at 10:41 AM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> > >\n> > > > > > > > > WAL: records=2359 full page records=42 bytes=447788\n> > >\n> > > 1) records; 2) pages (\"full page images\"); 3) bytes\n> > >\n> > > That is exactly like sort (method/type/size) and hash (buckets/batches/size),\n> > > and *not* like buffers, which shows various values all in units of \"pages\".\n> > >\n> >\n> > The way you have written (2) appears to bit awkward. I would prefer\n> > \"full page writes\" or \"full page images\".\n>\n> I didn't mean it to be the description used in the patch or anywhere else, just\n> the list of units.\n>\n> I wonder if it should use colons instead of equals ? As in:\n> | WAL: Records: 2359 Full Page Images: 42 Size: 437kB\n>\n> Note, that has: 1) two spaces; 2) capitalized \"fields\"; 3) size rather than\n> \"bytes\". That's similar to Buckets:\n> | Buckets: 1024 Batches: 1 Memory Usage: 44kB\n>\n> I'm not sure if it should say \"WAL: \" or \"WAL \", or perhaps \"WAL: \" If\n> there's no colon, then it looks like the first field is \"WAL Records\", but then\n> \"size\" isn't as tightly associated with WAL. It could say:\n> | WAL Records: n Full Page Images: n WAL Size: nkB\n>\n> For comparison, buffers uses \"equals\" for the case showing multiple \"fields\",\n> which are all in units of pages:\n> | Buffers: shared hit=15 read=2006\n>\n\nI think this is more close to the case of Buffers where all fields are\ndirectly related to buffers/blocks. 
Here all the fields we want to\ndisplay are related to WAL, so we should try to make it display\nsimilar to Buffers.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 3 Apr 2020 11:29:52 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": true, "msg_subject": "Re: User Interface for WAL usage data" }, { "msg_contents": "On Fri, Apr 3, 2020 at 11:29 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Fri, Apr 3, 2020 at 11:14 AM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> >\n> > On Fri, Apr 03, 2020 at 10:52:02AM +0530, Amit Kapila wrote:\n> > > On Fri, Apr 3, 2020 at 10:41 AM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> > > >\n> > > > > > > > > > WAL: records=2359 full page records=42 bytes=447788\n> > > >\n> > > > 1) records; 2) pages (\"full page images\"); 3) bytes\n> > > >\n> > > > That is exactly like sort (method/type/size) and hash (buckets/batches/size),\n> > > > and *not* like buffers, which shows various values all in units of \"pages\".\n> > > >\n> > >\n> > > The way you have written (2) appears to bit awkward. I would prefer\n> > > \"full page writes\" or \"full page images\".\n> >\n> > I didn't mean it to be the description used in the patch or anywhere else, just\n> > the list of units.\n> >\n> > I wonder if it should use colons instead of equals ? As in:\n> > | WAL: Records: 2359 Full Page Images: 42 Size: 437kB\n> >\n> > Note, that has: 1) two spaces; 2) capitalized \"fields\"; 3) size rather than\n> > \"bytes\". That's similar to Buckets:\n> > | Buckets: 1024 Batches: 1 Memory Usage: 44kB\n> >\n> > I'm not sure if it should say \"WAL: \" or \"WAL \", or perhaps \"WAL: \" If\n> > there's no colon, then it looks like the first field is \"WAL Records\", but then\n> > \"size\" isn't as tightly associated with WAL. 
It could say:\n> > | WAL Records: n Full Page Images: n WAL Size: nkB\n> >\n> > For comparison, buffers uses \"equals\" for the case showing multiple \"fields\",\n> > which are all in units of pages:\n> > | Buffers: shared hit=15 read=2006\n> >\n>\n> I think this is more close to the case of Buffers where all fields are\n> directly related to buffers/blocks. Here all the fields we want to\n> display are related to WAL, so we should try to make it display\n> similar to Buffers.\n>\n\nDilip, Julien, others, do you have any suggestions here? I think we\nneed to decide something now. We can change a few things like from\n'two spaces' to 'one space' between fields later as well.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 3 Apr 2020 13:56:51 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": true, "msg_subject": "Re: User Interface for WAL usage data" }, { "msg_contents": "On Fri, Apr 3, 2020 at 1:57 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Fri, Apr 3, 2020 at 11:29 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Fri, Apr 3, 2020 at 11:14 AM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> > >\n> > > On Fri, Apr 03, 2020 at 10:52:02AM +0530, Amit Kapila wrote:\n> > > > On Fri, Apr 3, 2020 at 10:41 AM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> > > > >\n> > > > > > > > > > > WAL: records=2359 full page records=42 bytes=447788\n> > > > >\n> > > > > 1) records; 2) pages (\"full page images\"); 3) bytes\n> > > > >\n> > > > > That is exactly like sort (method/type/size) and hash (buckets/batches/size),\n> > > > > and *not* like buffers, which shows various values all in units of \"pages\".\n> > > > >\n> > > >\n> > > > The way you have written (2) appears to bit awkward. 
I would prefer\n> > > > \"full page writes\" or \"full page images\".\n> > >\n> > > I didn't mean it to be the description used in the patch or anywhere else, just\n> > > the list of units.\n> > >\n> > > I wonder if it should use colons instead of equals ? As in:\n> > > | WAL: Records: 2359 Full Page Images: 42 Size: 437kB\n> > >\n> > > Note, that has: 1) two spaces; 2) capitalized \"fields\"; 3) size rather than\n> > > \"bytes\". That's similar to Buckets:\n> > > | Buckets: 1024 Batches: 1 Memory Usage: 44kB\n> > >\n> > > I'm not sure if it should say \"WAL: \" or \"WAL \", or perhaps \"WAL: \" If\n> > > there's no colon, then it looks like the first field is \"WAL Records\", but then\n> > > \"size\" isn't as tightly associated with WAL. It could say:\n> > > | WAL Records: n Full Page Images: n WAL Size: nkB\n> > >\n> > > For comparison, buffers uses \"equals\" for the case showing multiple \"fields\",\n> > > which are all in units of pages:\n> > > | Buffers: shared hit=15 read=2006\n> > >\n> >\n> > I think this is more close to the case of Buffers where all fields are\n> > directly related to buffers/blocks. Here all the fields we want to\n> > display are related to WAL, so we should try to make it display\n> > similar to Buffers.\n> >\n>\n> Dilip, Julien, others, do you have any suggestions here? I think we\n> need to decide something now. We can change a few things like from\n> 'two spaces' to 'one space' between fields later as well.\n\nI also think it is more close to the BufferUsage so better to keep\nsimilar to that. 
If we think the parsing is the problem we can keep\n'_' in the multi-word name as shown below.\nWAL: records=n full_page_writes=n bytes=n\n\nAnd, all three fields are related to WAL so we can use WAL: followed\nby other fields as we are doing now in the current patch.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 3 Apr 2020 15:28:11 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": false, "msg_subject": "Re: User Interface for WAL usage data" }, { "msg_contents": "On Fri, Apr 3, 2020 at 11:58 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>\n> On Fri, Apr 3, 2020 at 1:57 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Fri, Apr 3, 2020 at 11:29 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >\n> > > On Fri, Apr 3, 2020 at 11:14 AM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> > > >\n> > > > On Fri, Apr 03, 2020 at 10:52:02AM +0530, Amit Kapila wrote:\n> > > > > On Fri, Apr 3, 2020 at 10:41 AM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> > > > > >\n> > > > > > > > > > > > WAL: records=2359 full page records=42 bytes=447788\n> > > > > >\n> > > > > > 1) records; 2) pages (\"full page images\"); 3) bytes\n> > > > > >\n> > > > > > That is exactly like sort (method/type/size) and hash (buckets/batches/size),\n> > > > > > and *not* like buffers, which shows various values all in units of \"pages\".\n> > > > > >\n> > > > >\n> > > > > The way you have written (2) appears to bit awkward. I would prefer\n> > > > > \"full page writes\" or \"full page images\".\n> > > >\n> > > > I didn't mean it to be the description used in the patch or anywhere else, just\n> > > > the list of units.\n> > > >\n> > > > I wonder if it should use colons instead of equals ? As in:\n> > > > | WAL: Records: 2359 Full Page Images: 42 Size: 437kB\n> > > >\n> > > > Note, that has: 1) two spaces; 2) capitalized \"fields\"; 3) size rather than\n> > > > \"bytes\". 
That's similar to Buckets:\n> > > > | Buckets: 1024 Batches: 1 Memory Usage: 44kB\n> > > >\n> > > > I'm not sure if it should say \"WAL: \" or \"WAL \", or perhaps \"WAL: \" If\n> > > > there's no colon, then it looks like the first field is \"WAL Records\", but then\n> > > > \"size\" isn't as tightly associated with WAL. It could say:\n> > > > | WAL Records: n Full Page Images: n WAL Size: nkB\n> > > >\n> > > > For comparison, buffers uses \"equals\" for the case showing multiple \"fields\",\n> > > > which are all in units of pages:\n> > > > | Buffers: shared hit=15 read=2006\n> > > >\n> > >\n> > > I think this is more close to the case of Buffers where all fields are\n> > > directly related to buffers/blocks. Here all the fields we want to\n> > > display are related to WAL, so we should try to make it display\n> > > similar to Buffers.\n> > >\n> >\n> > Dilip, Julien, others, do you have any suggestions here? I think we\n> > need to decide something now. We can change a few things like from\n> > 'two spaces' to 'one space' between fields later as well.\n>\n> I also think it is more close to the BufferUsage so better to keep\n> similar to that.\n\n+1 too for keeping consistency with BufferUsage, and adding extra\nspaces if needed.\n\n> If we think the parsing is the problem we can keep\n> '_' in the multi-word name as shown below.\n> WAL: records=n full_page_writes=n bytes=n\n\nI'm fine with it too.\n\nTo answer Justin too:\n\n> Also, for now, the output can be in kB, but I think in the future we should\n> take a recent suggestion from Andres to make an ExplainPropertyBytes() which\n> handles conversion to and display of a reasonable unit.\n\nThis could be nice, but I think that it raises some extra concerns.\nThere are multiple tools that parse those outputs, and having to deal\nwith a new and non-fixed units may cause some issues. 
And probably\nthe non text output would also need to be displayed differently.\n\n\n", "msg_date": "Fri, 3 Apr 2020 15:42:40 +0200", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: User Interface for WAL usage data" }, { "msg_contents": "On Thu, Apr 02, 2020 at 08:29:31AM +0200, Julien Rouhaud wrote:\n> > > \"full page records\" seems to be showing the number of full page\n> > > images, not the record having full page images.\n> >\n> > I am not sure what exactly is a difference but it is the records\n> > having full page images. Julien correct me if I am wrong.\n\n> Obviously previous complaints about the meaning and parsability of\n> \"full page writes\" should be addressed here for consistency.\n\nThere's a couple places that say \"full page image records\" which I think is\nlanguage you were trying to avoid. It's the number of pages, not the number of\nrecords, no ? I see explain and autovacuum say what I think is wanted, but\nthese say the wrong thing? Find attached slightly larger patch.\n\n$ git grep 'image record'\ncontrib/pg_stat_statements/pg_stat_statements.c: int64 wal_num_fpw; /* # of WAL full page image records generated */\ndoc/src/sgml/ref/explain.sgml: number of records, number of full page image records and amount of WAL\n\n-- \nJustin", "msg_date": "Mon, 6 Apr 2020 11:34:20 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: User Interface for WAL usage data" }, { "msg_contents": "On Mon, Apr 6, 2020 at 10:04 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n>\n> On Thu, Apr 02, 2020 at 08:29:31AM +0200, Julien Rouhaud wrote:\n> > > > \"full page records\" seems to be showing the number of full page\n> > > > images, not the record having full page images.\n> > >\n> > > I am not sure what exactly is a difference but it is the records\n> > > having full page images. 
Julien correct me if I am wrong.\n>\n> > Obviously previous complaints about the meaning and parsability of\n> > \"full page writes\" should be addressed here for consistency.\n>\n> There's a couple places that say \"full page image records\" which I think is\n> language you were trying to avoid. It's the number of pages, not the number of\n> records, no ? I see explain and autovacuum say what I think is wanted, but\n> these say the wrong thing? Find attached slightly larger patch.\n>\n> $ git grep 'image record'\n> contrib/pg_stat_statements/pg_stat_statements.c: int64 wal_num_fpw; /* # of WAL full page image records generated */\n> doc/src/sgml/ref/explain.sgml: number of records, number of full page image records and amount of WAL\n>\n\nFew comments:\n1.\n- int64 wal_num_fpw; /* # of WAL full page image records generated */\n+ int64 wal_num_fpw; /* # of WAL full page images generated */\n\nLet's change comment as \" /* # of WAL full page writes generated */\"\nto be consistent with other places like instrument.h. Also, make a\nsimilar change at other places if required.\n\n2.\n <entry>\n- Total amount of WAL bytes generated by the statement\n+ Total number of WAL bytes generated by the statement\n </entry>\n\nI feel the previous text was better as this field can give us the size\nof WAL with which we can answer \"how much WAL data is generated by a\nparticular statement?\". Julien, do you have any thoughts on this?\n\nCan we please post/discuss patches on the main thread [1]?\n\n[1] - https://www.postgresql.org/message-id/CAB-hujrP8ZfUkvL5OYETipQwA%3De3n7oqHFU%3D4ZLxWS_Cza3kQQ%40mail.gmail.com\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 7 Apr 2020 08:01:09 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": true, "msg_subject": "Re: User Interface for WAL usage data" } ]
[ { "msg_contents": "Imagine a function that was going to take a table of, say, images, and\ndo something to them over and over, like:\n\n SELECT pixel_count(img), clr_name, img_name\n FROM images img\n CROSS JOIN colors clr\n\nWhen you run this function, you find that a great amount of time is\nbeing spent in the decompression/detoasting routines, so you think: I\nhave a nested loop here, driven on the 'img' side; if I can avoid\nre-loading the big image object over and over I can make things\nfaster.\n\nGetting the datum value is really fast, so I can have a cache that\nkeeps the latest detoasted object around, and update it when the datum\nchanges, and store the cache information in the parent context. Like\nso:\n\ntypedef struct {\n Datum d;\n bytea *ba;\n} DatumCache;\n\nPG_FUNCTION_INFO_V1(pixel_count);\nDatum pixel_count(PG_FUNCTION_ARGS)\n{\n Datum d = PG_GETARG_DATUM(0);\n DatumCache *dcache = fcinfo->flinfo->fn_extra;\n bytea *ba;\n\n if (!dcache)\n {\n dcache = MemoryContextAllocZero(fcinfo->flinfo->fn_mcxt,\nsizeof(DatumCache));\n fcinfo->flinfo->fn_extra = dcache;\n }\n\n if (dcache->d != d)\n {\n if (dcache->ba) pfree(dcache->ba);\n MemoryContext old_context =\nMemoryContextSwitchTo(fcinfo->flinfo->fn_mcxt);\n dcache->ba = PG_GETARG_BYTEA_P_COPY(0);\n MemoryContextSwitchTo(old_context);\n }\n\n ba = dcache->ba;\n\n /* now do things with ba here */\n}\n\nNow, notwithstanding any concerns about the particularities of my\nexample (I've found order-of-magnitude improvements on PostGIS\nworkloads avoiding the detoasting overhead this way) is my core\nassumption correct: within the context of a single SQL statement, will\nthe Datum values for a particular object remain constant?\n\nThey *seem* to, in the examples I'm running. 
But do they always?\n\n\n", "msg_date": "Thu, 2 Apr 2020 15:48:28 -0700", "msg_from": "Paul Ramsey <pramsey@cleverelephant.ca>", "msg_from_op": true, "msg_subject": "Datum values consistency within one query" }, { "msg_contents": "Paul Ramsey <pramsey@cleverelephant.ca> writes:\n> Getting the datum value is really fast, so I can have a cache that\n> keeps the latest detoasted object around, and update it when the datum\n> changes, and store the cache information in the parent context. Like\n> so:\n\nJeez, no, not like that. You're just testing a pointer. Most likely,\nif this is happening in a table scan, the pointer is pointing into\nsome shared buffer. If that buffer gets re-used to hold some other\npage, you could receive the identical pointer value but it's pointing\nto completely different data. The risk of false pointer match would\nbe even higher at plan levels above a scan, I think, because it'd\npossibly just be pointing into a plan node's output tuple slot.\n\nThe case where this would actually be worth doing, probably, is where\nyou are receiving a toasted-out-of-line datum. In that case you could\nlegitimately use the toast pointer ID values (va_valueid + va_toastrelid)\nas a lookup key for a cache, as long as it had a lifespan of a statement\nor less. 
You'd have to get a bit in bed with the details of toast\npointers, but it's not like those are going anywhere.\n\nIt would be interesting to tie that into the \"expanded object\"\ninfrastructure, perhaps, especially if the contents of the objects\nyou're interested in aren't just flat blobs of data.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 02 Apr 2020 19:30:20 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Datum values consistency within one query" }, { "msg_contents": "\n\n> On Apr 2, 2020, at 4:30 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> \n> Paul Ramsey <pramsey@cleverelephant.ca> writes:\n>> Getting the datum value is really fast, so I can have a cache that\n>> keeps the latest detoasted object around, and update it when the datum\n>> changes, and store the cache information in the parent context. Like\n>> so:\n> \n> Jeez, no, not like that. You're just testing a pointer. \n> ...\n> The case where this would actually be worth doing, probably, is where\n> you are receiving a toasted-out-of-line datum. In that case you could\n> legitimately use the toast pointer ID values (va_valueid + va_toastrelid)\n> as a lookup key for a cache, as long as it had a lifespan of a statement\n> or less. You'd have to get a bit in bed with the details of toast\n> pointers, but it's not like those are going anywhere.\n\nSo, if I tested for VARATT_IS_EXTENDED(), and then for VARATT_IS_EXTERNAL_ONDISK(attr) and then did VARATT_EXTERNAL_GET_POINTER(toast_pointer, attr), I could use va_valueid + va_toastrelid as keys in the cache for things that passed that filter?\nWhat about large const values that haven't been stored in a table yet? 
(eg, ST_Buffer(ST_MakePoint(0, 0), 100, 10000)) is there a stable key I can use for them?\n\n> It would be interesting to tie that into the \"expanded object\"\n> infrastructure, perhaps, especially if the contents of the objects\n> you're interested in aren't just flat blobs of data.\n\nYeah, I'm wrestling with the right place to do this stuff; it's not just the detoasting going on: I also build in-memory trees on large objects and hold them around for as long as the object keeps showing up repeatedly in the query. I just test the cache right now by using memcmp on the previous value, and that's really pricey.\n\nP\n\n\n\n", "msg_date": "Fri, 3 Apr 2020 10:23:39 -0700", "msg_from": "Paul Ramsey <pramsey@cleverelephant.ca>", "msg_from_op": true, "msg_subject": "Re: Datum values consistency within one query" }, { "msg_contents": "Paul Ramsey <pramsey@cleverelephant.ca> writes:\n> So, if I tested for VARATT_IS_EXTENDED(), and then for VARATT_IS_EXTERNAL_ONDISK(attr) and then did VARATT_EXTERNAL_GET_POINTER(toast_pointer, attr), I could use va_valueid + va_toastrelid as keys in the cache for things that passed that filter?\n\nI'm pretty sure VARATT_IS_EXTERNAL_ONDISK subsumes the other, so you\ndon't need to check VARATT_IS_EXTENDED, but yeah.\n\n> What about large const values that haven't been stored in a table yet? (eg, ST_Buffer(ST_MakePoint(0, 0), 100, 10000)) is there a stable key I can use for them?\n\nNope. If you could convert them into \"expanded datums\" then you\nmight have something ... but without any knowledge about where they\nare coming from it's hard to see how to detect that a value is\nthe same one you dealt with before.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 03 Apr 2020 13:35:07 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Datum values consistency within one query" } ]
[ { "msg_contents": "Hello,\n\nI guess no human or machine ever runs $SUBJECT, because when I tried\nit while hunting down users of txid_XXX functions, it failed (see\nend). To run it, you need a primary/standby pair, here 5441/5442, and\nthen:\n\nPGPORT=5441 psql postgres -f sql/hs_primary_setup.sql\nPGPORT=5442 ./pg_regress --use-existing --dbname=postgres --schedule\nstandby_schedule\n\nPerhaps the output changed in January with commit 2eb34ac3. Easy to\nfix, but I wonder if anyone has a good idea for how to get check-world\nto run it (probably via the \"recovery\" stuff).\n\ndiff -U3 /home/tmunro/projects/postgresql/src/test/regress/expected/hs_standby_disallowed.out\n/home/tmunro/projects/postgresql/src/test/regress/results/hs_standby_disallowed.out\n--- /home/tmunro/projects/postgresql/src/test/regress/expected/hs_standby_disallowed.out\n 2020-03-24 09:02:24.835023971 +1300\n+++ /home/tmunro/projects/postgresql/src/test/regress/results/hs_standby_disallowed.out\n2020-04-03 13:09:24.339672898 +1300\n@@ -64,7 +64,7 @@\n (1 row)\n\n COMMIT PREPARED 'foobar';\n-ERROR: COMMIT PREPARED cannot run inside a transaction block\n+ERROR: cannot execute COMMIT PREPARED during recovery\n ROLLBACK;\n BEGIN;\n SELECT count(*) FROM hs1;\n@@ -86,7 +86,7 @@\n (1 row)\n\n ROLLBACK PREPARED 'foobar';\n-ERROR: ROLLBACK PREPARED cannot run inside a transaction block\n+ERROR: cannot execute ROLLBACK PREPARED during recovery\n ROLLBACK;\n -- Locks\n BEGIN;\n\n\n", "msg_date": "Fri, 3 Apr 2020 13:24:10 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": true, "msg_subject": "src/test/regress/standby_schedule" }, { "msg_contents": "Thomas Munro <thomas.munro@gmail.com> writes:\n> I guess no human or machine ever runs $SUBJECT, because when I tried\n> it while hunting down users of txid_XXX functions, it failed (see\n> end). 
To run it, you need a primary/standby pair, here 5441/5442, and\n> then:\n\n> PGPORT=5441 psql postgres -f sql/hs_primary_setup.sql\n> PGPORT=5442 ./pg_regress --use-existing --dbname=postgres --schedule\n> standby_schedule\n\n> Perhaps the output changed in January with commit 2eb34ac3. Easy to\n> fix, but I wonder if anyone has a good idea for how to get check-world\n> to run it (probably via the \"recovery\" stuff).\n\nThat stuff is very very ancient. I'd suggest nuking it and writing\nan equivalent TAP test, assuming that there's anything it does that's\nnot already covered by our existing TAP tests.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 02 Apr 2020 20:29:19 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: src/test/regress/standby_schedule" } ]
[ { "msg_contents": "I’m using PostgreSQL Version 11.2. Try this:\n\ncreate type rt as (a text, b text);\ncreate table t(k serial primary key, r rt);\ninsert into t(r) values\n ('(\"a\",\"b\")'),\n ('( \"e\" , \"f\" )'),\n ('( \"g (h)\" , \"i, j\" )');\nselect\n k,\n '>'||(r).a||'<' as a,\n '>'||(r).b||'<' as b\nfrom t order by k;\n\nThis is the result.\n\n k | a | b \n---+-----------+----------\n 1 | >a< | >b<\n 2 | > e < | > f <\n 3 | > g (h) < | > i, j <\n\nThe documentation in section “8.16.2. Constructing Composite Values” here:\n\nhttps://www.postgresql.org/docs/11/rowtypes.html#ROWTYPES-IO-SYNTAX <https://www.postgresql.org/docs/11/rowtypes.html#ROWTYPES-IO-SYNTAX>\n\nshows examples of using double quotes to surround a text value inside the literal for a user-defined record type—and it states that this is optional unless the text value contains a comma or a parenthesis. But in every example there is no case where the syntax elements (the surrounding parentheses and the comma separators) outside of the values themselves have space(s) on either side. So it gives no basis to explain the result that I show for “k=2” and “k=3”.\n\nIntuition tells me that if a text value is surrounded by double quotes, then these delimit the string and that anything outside of this (baring the special case of a text value that is *not* surrounded by double quotes), then whitespace can be used—as it is in every other case that I can think of in PostgreSQL’s SQL and PL/pgSQL, at the programers discretion to improve readability.\n\nThis, by the way, is the rule for a JSON string value.\n\nIn fact, the rule seems to be this:\n\n“When a text value is written inside the literal for a user-defined type (which data type is given by its declaration), the entire run of characters between the syntax elements—the opening left parenthesis, an interior comma, or the closing right parenthesis— is taken to be the text value, including spaces outside of the quotes.”\n\nHave I stumbled on a bug? 
If not, please explain the rationale for what seems to me to be a counter-intuitive syntax design choice.\n", "msg_date": "Thu, 2 Apr 2020 18:56:35 -0700", "msg_from": "Bryn Llewellyn <bryn@yugabyte.com>", "msg_from_op": true, "msg_subject": "=?utf-8?Q?Syntax_rules_for_a_text_value_inside_the_literal_for_a_?=\n =?utf-8?Q?user-defined_type=E2=80=94doc_section_=E2=80=9C8=2E16=2E2=2E_Co?=\n =?utf-8?Q?nstructing_Composite_Values=E2=80=9D?=" }, { "msg_contents": "Bryn Llewellyn <bryn@yugabyte.com> writes:\n> The documentation in section “8.16.2. Constructing Composite Values” here:\n> https://www.postgresql.org/docs/11/rowtypes.html#ROWTYPES-IO-SYNTAX <https://www.postgresql.org/docs/11/rowtypes.html#ROWTYPES-IO-SYNTAX>\n\nThe authoritative documentation for that is at 8.16.6 \"Composite Type\nInput and Output Syntax\", and it says quite clearly that whitespace is\nnot ignored (except for before and after the outer parentheses).\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 02 Apr 2020 22:25:39 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re:\n =?utf-8?Q?Syntax_rules_for_a_text_value_inside_the_literal_for_a_?=\n =?utf-8?Q?user-defined_type=E2=80=94doc_section_=E2=80=9C8=2E16=2E2=2E_Co?=\n =?utf-8?Q?nstructing_Composite_Values=E2=80=9D?=" },
{ "msg_contents": "On 02-Apr-2020, at 19:25, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\nBryn Llewellyn <bryn@yugabyte.com> writes:\n> The documentation in section “8.16.2. Constructing Composite Values” here:\n> https://www.postgresql.org/docs/11/rowtypes.html#ROWTYPES-IO-SYNTAX <https://www.postgresql.org/docs/11/rowtypes.html#ROWTYPES-IO-SYNTAX>\n\nThe authoritative documentation for that is at 8.16.6 \"Composite Type\nInput and Output Syntax\", and it says quite clearly that whitespace is\nnot ignored (except for before and after the outer parentheses).\n\n\t\t\tregards, tom lane\n\n\nThanks for the super-fast reply, Tom. Yes, I had read that part. 
I should have said this:\n\n“The documentation in section “8.16.2. Constructing Composite Values” et seq shows examples…”\n\nThis is the text to which you refer me:\n\n“Whitespace outside the parentheses is ignored, but within the parentheses it is considered part of the field value, and might or might not be significant depending on the input conversion rules for the field data type. For example, in:\n\n'( 42)'\n\nthe whitespace will be ignored if the field type is integer, but not if it is text. As shown previously, when writing a composite value you can write double quotes around any individual field value.”\n\nNotice the wording “double quotes around any individual field value.” The word “around” was the source of my confusion. For the docs to communicate what, it seems, they ought to, then the word should be “within”. This demonstrates my point:\n\ncreate type rt as (a text, b text);\nwith v as (select '(a \"b c\" d, e \"f,g\" h)'::rt as r)\nselect\n '>'||(r).a||'<' as a,\n '>'||(r).b||'<' as b\nfrom v;\n\nIt shows this:\n\n a | b \n-----------+------------\n >a b c d< | > e f,g h<\n\nSo these are the resulting parsed-out text values:\n\na b c d\n\nand\n\n e f,g h\n\nThis demonstrates that, in my input, the double quotes are *within* each of the two text values—and definitely *do not surround* them.\n\nI really would appreciate a reply to the second part of my earlier question:\n\n“please explain the rationale for what seems to me to be a counter-intuitive syntax design choice.”\n\nI say “counter-intuitive” because JSON had to solve the same high-level goal—to distinguish between a string value on the one hand and, for example, a number or boolean value on the other hand. They chose the rule that a string value *must* be surrounded by double quotes and that other values must *not* be so surrounded. The JSON model resonates with my intuition. It also has mechanisms to escape interior double quotes and other special characters. 
I am very interested to know why the PostgreSQL designers preferred their model.\n", "msg_date": "Thu, 2 Apr 2020 20:46:37 -0700", "msg_from": "Bryn Llewellyn <bryn@yugabyte.com>", "msg_from_op": true, "msg_subject": "=?utf-8?Q?Re=3A_Syntax_rules_for_a_text_value_inside_the_literal_?=\n =?utf-8?Q?for_a_user-defined_type=E2=80=94doc_section_=E2=80=9C8=2E16=2E2?=\n =?utf-8?Q?=2E_Constructing_Composite_Values=E2=80=9D?=" }, { "msg_contents": "On Thu, Apr 2, 2020 at 8:46 PM Bryn Llewellyn <bryn@yugabyte.com> wrote:\n\n> On 02-Apr-2020, at 19:25, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Bryn Llewellyn <bryn@yugabyte.com> writes:\n>\n> The documentation in section “8.16.2. 
Constructing Composite Values” here:\n> https://www.postgresql.org/docs/11/rowtypes.html#ROWTYPES-IO-SYNTAX <\n> https://www.postgresql.org/docs/11/rowtypes.html#ROWTYPES-IO-SYNTAX>\n>\n>\n> The authoritative documentation for that is at 8.16.6 \"Composite Type\n> Input and Output Syntax\", and it says quite clearly that whitespace is\n> not ignored (except for before and after the outer parentheses).\n>\n[...]\n\n>\n> “Whitespace outside the parentheses is ignored, but within the parentheses\n> it is considered part of the field value, and might or might not be\n> significant depending on the input conversion rules for the field data\n> type. For example, in:\n>\n> '( 42)'\n>\n> the whitespace will be ignored if the field type is integer, but not if it\n> is text. As shown previously, when writing a composite value you can write\n> double quotes around any individual field value.”\n>\n> Notice the wording “double quotes around any individual field value.” The\n> word “around” was the source of my confusion. For the docs to communicate\n> what, it seems, they ought to, then the word should be “within”. This\n> demonstrates my point:\n>\n\nActually, they do mean around (as in, the double-quotes must be adjacent to\nthe commas that delimit the fields, or the parens).\n\nThe next two sentences clear things up a bit:\n\n\"You must do so if the field value would otherwise confuse the\ncomposite-value parser. In particular, fields containing parentheses,\ncommas, double quotes, or backslashes must be double-quoted.\"\n\nThat said the documentation doesn't match the behavior (which is\nconsiderably more forgiving and also willing to simply discard\ndouble-quotes instead of error-ing out when the documented rules are not\nadhered to)\n\nSpecifically: '(a \\\"b c\\\" d, e \\\"f,g\\\" h)'::rt leaves the double-quote\nwhile '(a \"\"b c\"\" d, e \"\"f,g\"\" h)'::rt does not. Neither have the\nfield surround with double-quotes so should be invalid per the\ndocumentation. 
When you follow the documentation they then both retain the\ndouble-quote.\n\nSo if you follow the guidelines set forth in the documentation you get the\nresult the documentation promises. If you fail to follow the guidelines\nyou may still get a result but there is no promise made as to its\ncontents. Not ideal but also not likely to be changed after all this time.\n\n\n>\n>\n>\n>\n> *create type rt as (a text, b text);with v as (select '(a \"b c\" d, e\n> \"f,g\" h)'::rt as r)select '>'||(r).a||'<' as a, '>'||(r).b||'<' as bfrom\n> v;*\n>\n> This demonstrates that, in my input, the double quotes are *within* each\n> of the two text values—and definitely *do not surround* them.\n>\n\nYep, which is why you have an issue. The \"surround them\" is indeed what\nthe text meant to say.\n\n\n> I really would appreciate a reply to the second part of my earlier\n> question:\n>\n> “please explain the rationale for what seems to me to be a\n> counter-intuitive syntax design choice.”\n>\n[...]\n\n> They chose the rule that a string value *must* be surrounded by double\n> quotes and that other values must *not* be so surrounded. The JSON model\n> resonates with my intuition.\n>\n\nThis point wasn't answered because there is no good answer to be given.\nThe above is how someone in the mid-90s or so decided PostgreSQL should\nhandle this. I'll personally agree that more verbose but explicit, and\nless forgiving, syntax and parsing, would have been a better choice. But\nthe choice has been made and isn't subject to change.\n\nBut regardless of what drove the original design choice if you really care\nabout it in a \"want to learn\" mode then it is very easy to observe the\ndefined behavior and critique it independently of how it came to be. 
If\nall you want to do is make a left-handed jab at PostgreSQL for having a\nnon-intuitive to you design choice do act surprised when people don't\nchoose to respond - especially when none of us made the original choice.\n\nThe only thing we can do today is describe the system more clearly if we\nhave failed to do so adequately. You are probably in the best position,\nthen, to learn what it does and propose new wording that someone with\ninexperienced (or biased toward a different system) eyes would understand\nquickly and clearly.\n\nDavid J.\n", "msg_date": "Fri, 3 Apr 2020 00:05:58 -0700", "msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>", "msg_from_op": false, "msg_subject": "=?UTF-8?Q?Re=3A_Syntax_rules_for_a_text_value_inside_the_literal?=\n\t=?UTF-8?Q?_for_a_user=2Ddefined_type=E2=80=94doc_section_=E2=80=9C8=2E16=2E2=2E_Constructi?=\n\t=?UTF-8?Q?ng_Composite_Values=E2=80=9D?=" },
{ "msg_contents": "On 03-Apr-2020, at 00:05, David G. Johnston <david.g.johnston@gmail.com> wrote:\n\nOn Thu, Apr 2, 2020 at 8:46 PM Bryn Llewellyn <bryn@yugabyte.com <mailto:bryn@yugabyte.com>> wrote:\nOn 02-Apr-2020, at 19:25, Tom Lane <tgl@sss.pgh.pa.us <mailto:tgl@sss.pgh.pa.us>> wrote:\n\nBryn Llewellyn <bryn@yugabyte.com <mailto:bryn@yugabyte.com>> writes:\n> The documentation in section “8.16.2. 
Constructing Composite Values” here:\n> https://www.postgresql.org/docs/11/rowtypes.html#ROWTYPES-IO-SYNTAX <https://www.postgresql.org/docs/11/rowtypes.html#ROWTYPES-IO-SYNTAX> <https://www.postgresql.org/docs/11/rowtypes.html#ROWTYPES-IO-SYNTAX <https://www.postgresql.org/docs/11/rowtypes.html#ROWTYPES-IO-SYNTAX>>\n\nThe authoritative documentation for that is at 8.16.6 \"Composite Type\nInput and Output Syntax\", and it says quite clearly that whitespace is\nnot ignored (except for before and after the outer parentheses). \n[...] \n\n“Whitespace outside the parentheses is ignored, but within the parentheses it is considered part of the field value, and might or might not be significant depending on the input conversion rules for the field data type. For example, in:\n\n'( 42)'\n\nthe whitespace will be ignored if the field type is integer, but not if it is text. As shown previously, when writing a composite value you can write double quotes around any individual field value.”\n\nNotice the wording “double quotes around any individual field value.” The word “around” was the source of my confusion. For the docs to communicate what, it seems, they ought to, then the word should be “within”. This demonstrates my point:\n\nActually, they do mean around (as in, the double-quotes must be adjacent to the commas that delimit the fields, or the parens).\n\nThe next two sentences clear things up a bit:\n\n\"You must do so if the field value would otherwise confuse the composite-value parser. In particular, fields containing parentheses, commas, double quotes, or backslashes must be double-quoted.\"\n\nThat said the documentation doesn't match the behavior (which is considerably more forgiving and also willing to simply discard double-quotes instead of error-ing out when the documented rules are not adhered to)\n\nSpecifically: '(a \\\"b c\\\" d, e \\\"f,g\\\" h)'::rt leaves the double-quote while '(a \"\"b c\"\" d, e \"\"f,g\"\" h)'::rt does not. 
Neither have the field surround with double-quotes so should be invalid per the documentation. When you follow the documentation they then both retain the double-quote.\n\nSo if you follow the guidelines set forth in the documentation you get the result the documentation promises. If you fail to follow the guidelines you may still get a result but there is no promise made as to its contents. Not ideal but also not likely to be changed after all this time.\n\ncreate type rt as (a text, b text);\nwith v as (select '(a \"b c\" d, e \"f,g\" h)'::rt as r)\nselect\n '>'||(r).a||'<' as a,\n '>'||(r).b||'<' as b\nfrom v;\n\nThis demonstrates that, in my input, the double quotes are *within* each of the two text values—and definitely *do not surround* them.\n\nYep, which is why you have an issue. The \"surround them\" is indeed what the text meant to say.\n\n\nI really would appreciate a reply to the second part of my earlier question:\n\n“please explain the rationale for what seems to me to be a counter-intuitive syntax design choice.”\n[...]\nThey chose the rule that a string value *must* be surrounded by double quotes and that other values must *not* be so surrounded. The JSON model resonates with my intuition.\n\nThis point wasn't answered because there is no good answer to be given. The above is how someone in the mid-90s or so decided PostgreSQL should handle this. I'll personally agree that more verbose but explicit, and less forgiving, syntax and parsing, would have been a better choice. But the choice has been made and isn't subject to change.\n\nBut regardless of what drove the original design choice if you really care about it in a \"want to learn\" mode then it is very easy to observe the defined behavior and critique it independently of how it came to be. 
If all you want to do is make a left-handed jab at PostgreSQL for having a non-intuitive to you design choice do act surprised when people don't choose to respond - especially when none of us made the original choice.\n\nThe only thing we can do today is describe the system more clearly if we have failed to do so adequately. You are probably in the best position, then, to learn what it does and propose new wording that someone with inexperienced (or biased toward a different system) eyes would understand quickly and clearly.\n\nDavid J.\n\n\nThanks for the explanation, David. It helped me a lot. You said “If all you want to do is make a left-handed jab at PostgreSQL”. That was not at all my intention. Rather, my question was driven by my experience of learning things over the years—and by my experience of teaching others. Sometimes, at the early stage of learning, a new idea strikes one as counter intuitive, and one feels that there must, surely, have been a better way. The common answer goes along these lines: “No, you’re missing the big picture. Consider use case X. And use case Y. See how your proposed design would fail to handle these. And see how the adopted design does handle them.” And so the questioner is enlightened—but only by asking. This is a key aspect of learning. Occasionally, the answer is “You’re right. A different design would have been nicer. But it’s too late, now, to change things. However, if you follow the rules, you can handle all use cases.” That answer, too, is very useful. It tells the questioner that they are not missing any subtlety. This is *key*. Then they can simply move on and use the feature as it is.\n\nThanks for suggesting that I might propose some rewording of the PG doc. I believe that I can see how this might be done. 
I just submitted my proposal using this form:\n\nhttps://www.postgresql.org/account/comments/new/11/rowtypes.html/ <https://www.postgresql.org/account/comments/new/11/rowtypes.html/>\n\n\n\n\n\n\n\n\n\n\n\n\nOn 03-Apr-2020, at 00:05, David G. Johnston <david.g.johnston@gmail.com> wrote:On Thu, Apr 2, 2020 at 8:46 PM Bryn Llewellyn <bryn@yugabyte.com> wrote:On 02-Apr-2020, at 19:25, Tom Lane <tgl@sss.pgh.pa.us> wrote:Bryn Llewellyn <bryn@yugabyte.com> writes:The documentation in section “8.16.2. Constructing Composite Values” here:https://www.postgresql.org/docs/11/rowtypes.html#ROWTYPES-IO-SYNTAX <https://www.postgresql.org/docs/11/rowtypes.html#ROWTYPES-IO-SYNTAX>The authoritative documentation for that is at 8.16.6 \"Composite TypeInput and Output Syntax\", and it says quite clearly that whitespace isnot ignored (except for before and after the outer parentheses). [...] “Whitespace outside the parentheses is ignored, but within the parentheses it is considered part of the field value, and might or might not be significant depending on the input conversion rules for the field data type. For example, in:'(  42)'the whitespace will be ignored if the field type is integer, but not if it is text. As shown previously, when writing a composite value you can write double quotes around any individual field value.”Notice the wording “double quotes around any individual field value.” The word “around” was the source of my confusion. For the docs to communicate what, it seems, they ought to, then the word should be “within”. This demonstrates my point:Actually, they do mean around (as in, the double-quotes must be adjacent to the commas that delimit the fields, or the parens).The next two sentences clear things up a bit:\"You must do so if the field value would otherwise confuse the composite-value parser.  
In particular, fields containing parentheses, commas, double quotes, or backslashes must be double-quoted.\"That said the documentation doesn't match the behavior (which is considerably more forgiving and also willing to simply discard double-quotes instead of error-ing out when the documented rules are not adhered to)Specifically:  '(a \\\"b c\\\" d,     e \\\"f,g\\\" h)'::rt leaves the double-quote while '(a \"\"b c\"\" d,     e \"\"f,g\"\" h)'::rt does not.  Neither have the field surround with double-quotes so should be invalid per the documentation.  When you follow the documentation they then both retain the double-quote.So if you follow the guidelines set forth in the documentation you get the result the documentation promises.  If you fail to follow the guidelines you may still get a result but there is no promise made as to its contents.  Not ideal but also not likely to be changed after all this time.create type rt as (a text, b text);with v as (select '(a \"b c\" d,     e \"f,g\" h)'::rt as r)select  '>'||(r).a||'<' as a,  '>'||(r).b||'<' as bfrom v;This demonstrates that, in my input, the double quotes are *within* each of the two text values—and definitely *do not surround* them.Yep, which is why you have an issue.   The \"surround them\" is indeed what the text meant to say.I really would appreciate a reply to the second part of my earlier question:“please explain the rationale for what seems to me to be a counter-intuitive syntax design choice.”[...]They chose the rule that a string value *must* be surrounded by double quotes and that other values must *not* be so surrounded. The JSON model resonates with my intuition.This point wasn't answered because there is no good answer to be given.  The above is how someone in the mid-90s or so decided PostgreSQL should handle this.  I'll personally agree that more verbose but explicit, and less forgiving, syntax and parsing, would have been a better choice.  
But the choice has been made and isn't subject to change.But regardless of what drove the original design choice if you really care about it in a \"want to learn\" mode then it is very easy to observe the defined behavior and critique it independently of how it came to be.  If all you want to do is make a left-handed jab at PostgreSQL for having a non-intuitive to you design choice do act surprised when people don't choose to respond - especially when none of us made the original choice.The only thing we can do today is describe the system more clearly if we have failed to do so adequately.  You are probably in the best position, then, to learn what it does and propose new wording that someone with inexperienced (or biased toward a different system) eyes would understand quickly and clearly.David J.\nThanks for the explanation, David. It helped me a lot. You said “If all you want to do is make a left-handed jab at PostgreSQL”. That was not at all my intention. Rather, my question was driven by my experience of learning things over the years—and by my experience of teaching others. Sometimes, at the early stage of learning, a new idea strikes one as counter intuitive, and one feels that there must, surely, have been a better way. The common answer goes along these lines: “No, you’re missing the big picture. Consider use case X. And use case Y. See how your proposed design would fail to handle these. And see how the adopted design does handle them.” And so the questioner is enlightened—but only by asking. This is a key aspect of learning. Occasionally, the answer is “You’re right. A different design would have been nicer. But it’s too late, now, to change things. However, if you follow the rules, you can handle all use cases.” That answer, too, is very useful. It tells the questioner that they are not missing any subtlety. This is *key*. Then they can simply move on and use the feature as it is.Thanks for suggesting that I might propose some rewording of the PG doc. 
I believe that I can see how this might be done. I just submitted my proposal using this form: https://www.postgresql.org/account/comments/new/11/rowtypes.html/", "msg_date": "Fri, 3 Apr 2020 11:14:00 -0700", "msg_from": "Bryn Llewellyn <bryn@yugabyte.com>", "msg_from_op": true, "msg_subject": "=?utf-8?Q?Re=3A_Syntax_rules_for_a_text_value_inside_the_literal_?=\n =?utf-8?Q?for_a_user-defined_type=E2=80=94doc_section_=E2=80=9C8=2E16=2E2?=\n =?utf-8?Q?=2E_Constructing_Composite_Values=E2=80=9D?=" } ]
[ { "msg_contents": "Currently there is no way to check if CAST will succeed.\n\nTherefore I propose adding new function: is_castable\n\nSELECT is_castable('foo' as time) // false\nSELECT is_castable('123' as numeric) // true\nSELECT is_castable(1.5 as int) // true\nSELECT is_castable('1.5' as int) // false\n\nMany users write their own functions:\n\nhttps://stackoverflow.com/q/10306830/2446102 (11k views, ~25 upvotes)\nhttps://stackoverflow.com/q/36775814/2446102\nhttps://stackoverflow.com/a/16206123/2446102 (72k views, 70 upvotes)\nhttps://stackoverflow.com/q/2082686/2446102 (174k views, ~150 upvotes)\n\nSimilar features are implemented in:\n- SQL Server (as TRY_CONVERT)\n- Oracle (as CONVERT([val] DEFAULT [expr] ON CONVERSION ERROR)\n\nI would love to implement it myself, but my knowledge of C is superficial.\n\nThanks,\nMichał Wadas\n", "msg_date": "Fri, 3 Apr 2020 13:45:31 +0200", "msg_from": "=?UTF-8?Q?Micha=C5=82_Wadas?= <michalwadas@gmail.com>", "msg_from_op": true, "msg_subject": "Proposal: is_castable" }, { "msg_contents": "Hi\n\npá 3. 4. 
2020 v 13:45 odesílatel Michał Wadas <michalwadas@gmail.com>\nnapsal:\n\n> Currently there is no way to check if CAST will succeed.\n>\n> Therefore I propose adding new function: is_castable\n>\n> SELECT is_castable('foo' as time) // false\n> SELECT is_castable('123' as numeric) // true\n> SELECT is_castable(1.5 as int) // true\n> SELECT is_castable('1.5' as int) // false\n>\n> Many users write their own functions:\n>\n> https://stackoverflow.com/q/10306830/2446102 (11k views, ~25 upvotes)\n> https://stackoverflow.com/q/36775814/2446102\n> https://stackoverflow.com/a/16206123/2446102 (72k views, 70 upvotes)\n> https://stackoverflow.com/q/2082686/2446102 (174k views, ~150 upvotes)\n>\n> Similar features are implemented in:\n> - SQL Server (as TRY_CONVERT)\n> - Oracle (as CONVERT([val] DEFAULT [expr] ON CONVERSION ERROR)\n>\n> I would love to implement it myself, but my knowledge of C is superficial.\n>\n\nIt is an interesting feature - and the implementation can be very easy - but\nwithout enhancing the type API this function can be pretty slow.\n\nSo there is a dilemma - a simple implementation (little work) but a possibly very\nnegative performance impact under higher load due to work with savepoints, or\nmuch larger work (probably easy) without the necessity to use savepoints.\n\nRegards\n\nPavel\n\n\n\n>\n> Thanks,\n> Michał Wadas\n>\n>\n", "msg_date": "Fri, 3 Apr 2020 14:05:55 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Proposal: is_castable" }, { "msg_contents": "=?UTF-8?Q?Micha=C5=82_Wadas?= <michalwadas@gmail.com> writes:\n> Currently there is no way to check if CAST will succeed.\n> Therefore I propose adding new function: is_castable\n\n> SELECT is_castable('foo' as time) // false\n\nWhat would you actually do with it?\n\n> Similar features are implemented in:\n> - SQL Server (as TRY_CONVERT)\n> - Oracle (as CONVERT([val] DEFAULT [expr] ON CONVERSION ERROR)\n\nSomehow, I don't think those have the semantics of what you suggest here.\n\nI suspect you are imagining that you could write something like\n\nCASE WHEN is_castable(x as y) THEN cast(x as y) ELSE ...\n\nbut that will not work. 
The THEN condition has to pass parse analysis\nwhether or not execution will ever reach it.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 03 Apr 2020 10:19:20 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Proposal: is_castable" }, { "msg_contents": "> What would you actually do with it?\n\nI am one of the users of these do-it-yourself functions, and I use them in\nmy ETL pipelines heavily.\n\nFor me, data gets loaded into a staging table, all columns text, and I run\na whole bunch of validation queries\non the data prior to it moving to the next stage in the pipeline, a\nstrongly typed staging table, where more\nvalidations are performed. So I currently check each column type with my\ncustom can_convert_sometype(text)\nfunctions, and if the row has any columns that cannot convert, it marks a\nboolean to ignore moving that row\nto the next strongly typed table (thus avoiding the cast for those rows).\n\nFor this ETL process, I need to give users feedback about why specific\nrows failed to be processed, so\neach of those validations also logs an error message for the user for each\nrow failing a specific validation.\n\nSo it's a two step process for me currently because of this, I would love\nif there was a better way to handle\nthis type of work though, because my plpgsql functions using exception\nblocks are not exactly great\nfor performance.\n\n>> Similar features are implemented in:\n>> - SQL Server (as TRY_CONVERT)\n>> - Oracle (as CONVERT([val] DEFAULT [expr] ON CONVERSION ERROR)\n>\n> Somehow, I don't think those have the semantics of what you suggest\nhere.\n\nAgreed that they aren't the same exact feature, but I would very much love\nthe ability to both\nknow \"will this cast fail?\", and also be able to \"try and cast, but if it\nfails just put this value and don't error\".\n\nThey both have uses IMO, and while having is_castable() functions built in\nwould be great, I just want to\nexpress my desire for something like the above feature in SQL Server or\nOracle as well.\n\n> \n", "msg_date": "Fri, 3 Apr 2020 21:05:58 -0400", "msg_from": "Adam Brusselback <adambrusselback@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Proposal: is_castable" }, { "msg_contents": "Hi\n\n\n> So it's a two step process for me currently because of this, I would love\n> if there was a better way to handle\n> this type of work though, because my plpgsql functions using exception\n> blocks are not exactly great\n> for performance.\n>\n\nProbably we can for some important built-in types write a method \"is_valid\",\nand this method can be called directly. For custom types or for types\nwithout this method, the solution based on exceptions can be used.\n\nThis should not be too much code, and can be fast for often used types.\n", "msg_date": "Sat, 4 Apr 2020 06:38:57 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Proposal: is_castable" } ]
[ { "msg_contents": "Hi,\n\nSuppose that the server is executing a lengthy query, and the client\nbreaks the connection. The operating system will be aware that the\nconnection is no more, but PostgreSQL doesn't notice, because it's not\ntrying to read from or write to the socket. It's not paying attention to\nthe socket at all. In theory, the query could be one that runs for a\nmillion years and continue to chew up CPU and I/O, or at the very\nleast a connection slot, essentially forever. That's sad.\n\nI don't have a terribly specific idea about how to improve this, but\nis there some way that we could, at least periodically, check the\nsocket to see whether it's dead? Noticing the demise of the client\nafter a configurable interval (maybe 60s by default?) would be\ninfinitely better than never.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Fri, 3 Apr 2020 08:29:52 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "zombie connections" }, { "msg_contents": "pá 3. 4. 2020 v 14:30 odesílatel Robert Haas <robertmhaas@gmail.com> napsal:\n\n> Hi,\n>\n> Suppose that the server is executing a lengthy query, and the client\n> breaks the connection. The operating system will be aware that the\n> connection is no more, but PostgreSQL doesn't notice, because it's not\n> trying to read from or write to the socket. It's not paying attention to\n> the socket at all. In theory, the query could be one that runs for a\n> million years and continue to chew up CPU and I/O, or at the very\n> least a connection slot, essentially forever. That's sad.\n>\n> I don't have a terribly specific idea about how to improve this, but\n> is there some way that we could, at least periodically, check the\n> socket to see whether it's dead? Noticing the demise of the client\n> after a configurable interval (maybe 60s by default?) would be\n> infinitely better than never.\n>\n\n+ 1\n\nPavel\n\n\n> --\n> Robert Haas\n> EnterpriseDB: http://www.enterprisedb.com\n> The Enterprise PostgreSQL Company\n>\n>\n>\n", "msg_date": "Fri, 3 Apr 2020 14:40:30 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": false, "msg_subject": "Re: zombie connections" }, { "msg_contents": "On Fri, Apr 03, 2020 at 02:40:30PM +0200, Pavel Stehule wrote:\n> pá 3. 4. 2020 v 14:30 odesílatel Robert Haas <robertmhaas@gmail.com> napsal:\n> >\n> > Suppose that the server is executing a lengthy query, and the client\n> > breaks the connection. The operating system will be aware that the\n> > connection is no more, but PostgreSQL doesn't notice, because it's not\n> > trying to read from or write to the socket. It's not paying attention to\n> > the socket at all. In theory, the query could be one that runs for a\n> > million years and continue to chew up CPU and I/O, or at the very\n> > least a connection slot, essentially forever. 
That's sad.\n> >\n> > I don't have a terribly specific idea about how to improve this, but\n> > is there some way that we could, at least periodically, check the\n> > socket to see whether it's dead? Noticing the demise of the client\n> > after a configurable interval (maybe 60s by default?) would be\n> > infinitely better than never.\n> >\n> \n> + 1\n\n\n+1 too, I already saw such behavior.\n\n\nMaybe the postmaster could send some new PROCSIG SIGUSR1 signal to backends at\na configurable interval and let ProcessInterrupts handle it?\n\n\n", "msg_date": "Fri, 3 Apr 2020 14:57:10 +0200", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: zombie connections" }, { "msg_contents": "\n\nOn 03.04.2020 15:29, Robert Haas wrote:\n> Hi,\n>\n> Suppose that the server is executing a lengthy query, and the client\n> breaks the connection. The operating system will be aware that the\n> connection is no more, but PostgreSQL doesn't notice, because it's not\n> try to read from or write to the socket. It's not paying attention to\n> the socket at all. In theory, the query could be one that runs for a\n> million years and continue to chew up CPU and I/O, or at the very\n> least a connection slot, essentially forever. That's sad.\n>\n> I don't have a terribly specific idea about how to improve this, but\n> is there some way that we could, at least periodically, check the\n> socket to see whether it's dead? Noticing the demise of the client\n> after a configurable interval (maybe 60s by default?) 
would be\n> infinitely better than never.\n>\n\nThere was a patch on commitfest addressing this problem:\nhttps://commitfest.postgresql.org/21/1882/\nIt is currently included in PostgresPro EE, but the author of the patch \nis not working in our company any more.\nShould we resurrect this patch, or is there something wrong with the \nproposed approach?\n\n\n\n", "msg_date": "Fri, 3 Apr 2020 16:34:17 +0300", "msg_from": "Konstantin Knizhnik <k.knizhnik@postgrespro.ru>", "msg_from_op": false, "msg_subject": "Re: zombie connections" }, { "msg_contents": "On Fri, Apr 3, 2020 at 9:34 AM Konstantin Knizhnik\n<k.knizhnik@postgrespro.ru> wrote:\n> There was a patch on commitfest addressing this problem:\n> https://commitfest.postgresql.org/21/1882/\n> It is currently included in PostgresPro EE, but the author of the patch\n> is not working in our company any more.\n> Should we resurrect this patch, or is there something wrong with the\n> proposed approach?\n\nThanks for the link.\n\nTom seems to have offered some fairly specific criticism in\nhttps://www.postgresql.org/message-id/31564.1563426253%40sss.pgh.pa.us\n\nI haven't studied that thread in detail, but I would suggest that if\nyou want to resurrect the patch, that might be a good place to start.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Fri, 3 Apr 2020 09:50:22 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: zombie connections" }, { "msg_contents": "On Fri, 3 Apr 2020 at 08:30, Robert Haas <robertmhaas@gmail.com> wrote:\n\n>\n> I don't have a terribly specific idea about how to improve this, but\n> is there some way that we could, at least periodically, check the\n> socket to see whether it's dead? Noticing the demise of the client\n> after a configurable interval (maybe 60s by default?) 
would be\n> infinitely better than never.\n>\n\nDoes it make any difference if the query is making changes? If the query is\njust computing a result and returning it to the client, there is no point\nin continuing once the socket is closed. But if it is updating data or\nmaking DDL changes, then at least some of the time it would be preferable\nfor the changes to be made. Having said that, in normal operation one\nwants, at the client end, to see the message from the server that the\nchanges have been completed, not just fire off a change and hope\nit completes.\n", "msg_date": "Fri, 3 Apr 2020 09:52:22 -0400", "msg_from": "Isaac Morland <isaac.morland@gmail.com>", "msg_from_op": false, "msg_subject": "Re: zombie connections" }, { "msg_contents": "On Fri, Apr 3, 2020 at 9:52 AM Isaac Morland <isaac.morland@gmail.com> wrote:\n> Does it make any difference if the query is making changes? If the query is just computing a result and returning it to the client, there is no point in continuing once the socket is closed. 
But if it is updating data or making DDL changes, then at least some of the time it would be preferable for the changes to be made. Having said that, in normal operation one wants, at the client end, to see the message from the server that the changes have been completed, not just fire off a change and hope it completes.\n\nThe system can't know whether the query is going to change anything,\nbecause even if the query is a SELECT, it doesn't know whether any of\nthe functions or operators called from that SELECT might write data.\n\nI don't think it would be smart to make behavior like this depend on\nwhether the statement is a SELECT vs. INSERT/UPDATE/DELETE, or on\nthings like whether there is an explicit transaction open. I think we\nshould just have a feature that kills the server process if the\nconnection goes away. If some people don't want that, it can be\noptional.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Fri, 3 Apr 2020 09:57:18 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: zombie connections" }, { "msg_contents": "pá 3. 4. 2020 v 15:52 odesílatel Isaac Morland <isaac.morland@gmail.com>\nnapsal:\n\n> On Fri, 3 Apr 2020 at 08:30, Robert Haas <robertmhaas@gmail.com> wrote:\n>\n>>\n>> I don't have a terribly specific idea about how to improve this, but\n>> is there some way that we could, at least periodically, check the\n>> socket to see whether it's dead? Noticing the demise of the client\n>> after a configurable interval (maybe 60s by default?) would be\n>> infinitely better than never.\n>>\n>\n> Does it make any difference if the query is making changes? If the query\n> is just computing a result and returning it to the client, there is no\n> point in continuing once the socket is closed. But if it is updating data\n> or making DDL changes, then at least some of the time it would be\n> preferable for the changes to be made. 
Having said that, in normal\n> operation one wants, at the client end, to see the message from the server\n> that the changes have been completed, not just fire off a change and hope\n> it completes.\n>\n\nI prefer simple solution without any \"intelligence\". It is much safer to\nclose connect and rollback. Then it is clean protocol - when server didn't\nreported successful end of operation, then operation was reverted - always.\n", "msg_date": "Fri, 3 Apr 2020 16:00:47 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": false, "msg_subject": "Re: zombie connections" }, { "msg_contents": "Pavel Stehule <pavel.stehule@gmail.com> writes:\n> I prefer simple solution without any \"intelligence\". It is much safer to\n> close connect and rollback. 
Then it is clean protocol - when server didn't\n> reported successful end of operation, then operation was reverted - always.\n\nIt would be a fatal mistake to imagine that this feature would offer any\ngreater guarantees in that line than we have today (which is to say,\nnone really). It can be no better than the OS network stack's error\ndetection/reporting, which is necessarily pretty weak. The fact that\nthe kernel accepted a \"command complete\" message from us doesn't mean\nthat the client was still alive at that instant, much less that the\nmessage will be deliverable.\n\nIn general I think the threshold problem for a patch like this will be\n\"how do you keep the added overhead down\". As Robert noted upthread,\ntimeout.c is quite a bit shy of being able to handle timeouts that\npersist across statements. I don't think that there's any fundamental\nreason it can't be improved, but it will need improvements.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 03 Apr 2020 10:34:37 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: zombie connections" }, { "msg_contents": "On Fri, Apr 3, 2020 at 10:34 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> In general I think the threshold problem for a patch like this will be\n> \"how do you keep the added overhead down\". As Robert noted upthread,\n> timeout.c is quite a bit shy of being able to handle timeouts that\n> persist across statements. I don't think that there's any fundamental\n> reason it can't be improved, but it will need improvements.\n\nWhy do we need that? If we're not executing a statement, we're\nprobably trying to read() from the socket, and we'll notice if that\nreturns 0 or -1. 
So it seems like we only need periodic checks while\nthere's a statement in progress.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Fri, 3 Apr 2020 10:43:48 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: zombie connections" }, { "msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> On Fri, Apr 3, 2020 at 10:34 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> In general I think the threshold problem for a patch like this will be\n>> \"how do you keep the added overhead down\". As Robert noted upthread,\n>> timeout.c is quite a bit shy of being able to handle timeouts that\n>> persist across statements. I don't think that there's any fundamental\n>> reason it can't be improved, but it will need improvements.\n\n> Why do we need that? If we're not executing a statement, we're\n> probably trying to read() from the socket, and we'll notice if that\n> returns 0 or -1. So it seems like we only need periodic checks while\n> there's a statement in progress.\n\nMaybe you could build it that way, but I'm not sure it's a better way.\n\n(1) You'll need to build a concept of a timeout that's not a statement\ntimeout, but nonetheless should be canceled exactly when the statement\ntimeout is (not before or after, unless you'd like to incur additional\nsetitimer() calls). That's going to involve either timeout.c surgery\nor fragile requirements on the callers.\n\n(2) It only wins if a statement timeout is active, otherwise it makes\nthings worse, because then you need setitimer() at statement start\nand end just to enable/disable the socket check timeout. Whereas\nif you just let a once-a-minute timeout continue to run, you don't\nincur those kernel calls.\n\nIt's possible that we should run this timeout differently depending\non whether or not a statement timeout is active, though I'd prefer to\navoid such complexity if possible. 
On the whole, if we have to\noptimize just one of those cases, it should be the no-statement-timeout\ncase; with that timeout active, you're paying two setitimers per\nstatement anyway.\n\nAnyway, the core problem with the originally-submitted patch was that\nit was totally ignorant that timeout.c had restrictions it was breaking.\nYou can either fix the restrictions, or you can try to design around them,\nbut you've got to be aware of what that code can and can't do today.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 03 Apr 2020 11:50:18 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: zombie connections" }, { "msg_contents": "On Fri, Apr 3, 2020 at 11:50 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> (2) It only wins if a statement timeout is active, otherwise it makes\n> things worse, because then you need setitimer() at statement start\n> and end just to enable/disable the socket check timeout. Whereas\n> if you just let a once-a-minute timeout continue to run, you don't\n> incur those kernel calls.\n\nOh, that's a really good point. I should have thought of that.\n\n> Anyway, the core problem with the originally-submitted patch was that\n> it was totally ignorant that timeout.c had restrictions it was breaking.\n> You can either fix the restrictions, or you can try to design around them,\n> but you've got to be aware of what that code can and can't do today.\n\nNo disagreement there.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Fri, 3 Apr 2020 12:32:01 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: zombie connections" } ]
[ { "msg_contents": "There are a couple of things that pg_basebackup can't do that might be\nan issue for some users. One of them is that you might want to do\nsomething like encrypt your backup. Another is that you might want to\nstore someplace other than in the filesystem, like maybe S3. We could\ncertainly teach pg_basebackup how to do specifically those things, and\nmaybe that is worthwhile. However, I wonder if it would be useful to\nprovide a more general capability, either instead of doing those more\nspecific things or in addition to doing those more specific things.\n\nWhat I'm thinking about is: suppose we add an option to pg_basebackup\nwith a name like --pipe-output. This would be mutually exclusive with\n-D, but would work at least with -Ft and maybe also with -Fp. The\nargument to --pipe-output would be a shell command to be executed once\nper output file. Any instance of %f in the shell command would be\nreplaced with the name of the file that would have been written (and\n%% would turn into a single %). The shell command itself would be\nexecuted via system(). So if you want to compress, but using some\nother compression program instead of gzip, you could do something\nlike:\n\npg_basebackup -Ft --pipe-output 'bzip > %f.bz2'\n\nAnd if you want to encrypt, you could do something like:\n\npg_basebackup -Ft --pipe-output 'gpg -e -o %f.gpg'\n\nAnd if you want to ship it off to be stored in a concrete bunker deep\nunderground, you can just do something like:\n\npg_basebackup -Ft --pipe-output 'send-to-underground-storage.sh\nbackup-2020-04-03 %f'\n\nYou still have to write send-to-underground-storage.sh, of course, and\nthat may involve some work, and maybe also some expensive\nconstruction. But what you don't have to do is first copy the entire\nbackup to your local filesystem and then as a second step figure out\nhow to put it through whatever post-processing it needs. 
Instead, you\ncan simply take your backup and stick it anywhere you like.\n\nThoughts?\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Fri, 3 Apr 2020 10:19:21 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "where should I stick that backup?" }, { "msg_contents": "On Fri, Apr 03, 2020 at 10:19:21AM -0400, Robert Haas wrote:\n> What I'm thinking about is: suppose we add an option to pg_basebackup\n> with a name like --pipe-output. This would be mutually exclusive with\n> -D, but would work at least with -Ft and maybe also with -Fp. The\n> argument to --pipe-output would be a shell command to be executed once\n> per output file. Any instance of %f in the shell command would be\n> replaced with the name of the file that would have been written (and\n> %% would turn into a single %). The shell command itself would be\n> executed via system(). So if you want to compress, but using some\n> other compression program instead of gzip, you could do something\n> like:\n> \n> pg_basebackup -Ft --pipe-output 'bzip > %f.bz2'\n\nSeems good to me. I agree -Fp is a \"maybe\" since the overhead will be high\nfor small files.\n\n\n", "msg_date": "Sun, 5 Apr 2020 06:53:28 +0000", "msg_from": "Noah Misch <noah@leadboat.com>", "msg_from_op": false, "msg_subject": "Re: where should I stick that backup?" }, { "msg_contents": "Greetings,\n\n* Noah Misch (noah@leadboat.com) wrote:\n> On Fri, Apr 03, 2020 at 10:19:21AM -0400, Robert Haas wrote:\n> > What I'm thinking about is: suppose we add an option to pg_basebackup\n> > with a name like --pipe-output. This would be mutually exclusive with\n> > -D, but would work at least with -Ft and maybe also with -Fp. The\n> > argument to --pipe-output would be a shell command to be executed once\n> > per output file. 
Any instance of %f in the shell command would be\n> > replaced with the name of the file that would have been written (and\n> > %% would turn into a single %). The shell command itself would be\n> > executed via system(). So if you want to compress, but using some\n> > other compression program instead of gzip, you could do something\n> > like:\n> > \n> > pg_basebackup -Ft --pipe-output 'bzip > %f.bz2'\n> \n> Seems good to me. I agree -Fp is a \"maybe\" since the overhead will be high\n> for small files.\n\nFor my 2c, at least, introducing more shell commands into critical parts\nof the system is absolutely the wrong direction to go in.\narchive_command continues to be a mess that we refuse to clean up or\neven properly document and the project would be much better off by\ntrying to eliminate it rather than add in new ways for users to end up\nwith bad or invalid backups.\n\nFurther, having a generic shell script approach like this would result\nin things like \"well, we don't need to actually add support for X, Y or\nZ, because we have this wonderful generic shell script thing and you can\nwrite your own, and therefore we won't accept patches which do add those\ncapabilities because then we'd have to actually maintain that support.\"\n\nIn short, -1 from me.\n\nThanks,\n\nStephen", "msg_date": "Mon, 6 Apr 2020 10:45:12 -0400", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: where should I stick that backup?" 
}, { "msg_contents": "On Mon, Apr 6, 2020 at 10:45 AM Stephen Frost <sfrost@snowman.net> wrote:\n> For my 2c, at least, introducing more shell commands into critical parts\n> of the system is absolutely the wrong direction to go in.\n> archive_command continues to be a mess that we refuse to clean up or\n> even properly document and the project would be much better off by\n> trying to eliminate it rather than add in new ways for users to end up\n> with bad or invalid backups.\n>\n> Further, having a generic shell script approach like this would result\n> in things like \"well, we don't need to actually add support for X, Y or\n> Z, because we have this wonderful generic shell script thing and you can\n> write your own, and therefore we won't accept patches which do add those\n> capabilities because then we'd have to actually maintain that support.\"\n>\n> In short, -1 from me.\n\nI'm not sure that there's any point in responding to this because I\nbelieve that the wording of this email suggests that you've made up\nyour mind that it's bad and that position is not subject to change no\nmatter what anyone else may say. However, I'm going to try to reply\nanyway, on the theory that (1) I might be wrong and (2) even if I'm\nright, it might influence the opinions of others who have not spoken\nyet, and whose opinions may be less settled.\n\nFirst of all, while I agree that archive_command has some problems, I\ndon't think that means that every case where we use a shell command\nfor anything is a hopeless mess. The only problem I really see in this\ncase is that if you route to a local file via an intermediate program\nyou wouldn't get an fsync() any more. But we could probably figure out\nsome clever things to work around that problem, if that's the issue.\nIf there's some other problem, what is it?\n\nSecond, PostgreSQL is not realistically going to link pg_basebackup\nagainst every compression, encryption, and remote storage library out\nthere. 
One, yeah, we don't want to maintain that. Two, we don't want\nPostgreSQL to have build-time dependencies on a dozen or more\nlibraries that people might want to use for stuff like this. We might\nwell want to incorporate support for a few of the more popular things\nin this area, but people will always want support for newer things\nthan what existing server releases feature, and for more of them.\n\nThird, I am getting pretty tired of being told every time I try to do\nsomething that is related in any way to backup that it's wrong. If\nyour experience with pgbackrest motivated you to propose ways of\nimproving backup and restore functionality in the community, that\nwould be great. But in my experience so far, it seems to mostly\ninvolve making a lot of negative comments that make it hard to get\nanything done. I would appreciate it if you would adopt a more\nconstructive tone.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Mon, 6 Apr 2020 11:13:24 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: where should I stick that backup?" }, { "msg_contents": "On Mon, Apr 6, 2020 at 4:45 PM Stephen Frost <sfrost@snowman.net> wrote:\n>\n> Greetings,\n>\n> * Noah Misch (noah@leadboat.com) wrote:\n> > On Fri, Apr 03, 2020 at 10:19:21AM -0400, Robert Haas wrote:\n> > > What I'm thinking about is: suppose we add an option to pg_basebackup\n> > > with a name like --pipe-output. This would be mutually exclusive with\n> > > -D, but would work at least with -Ft and maybe also with -Fp. The\n> > > argument to --pipe-output would be a shell command to be executed once\n> > > per output file. Any instance of %f in the shell command would be\n> > > replaced with the name of the file that would have been written (and\n> > > %% would turn into a single %). The shell command itself would be\n> > > executed via system(). 
So if you want to compress, but using some\n> > > other compression program instead of gzip, you could do something\n> > > like:\n> > >\n> > > pg_basebackup -Ft --pipe-output 'bzip > %f.bz2'\n> >\n> > Seems good to me. I agree -Fp is a \"maybe\" since the overhead will be high\n> > for small files.\n>\n> For my 2c, at least, introducing more shell commands into critical parts\n> of the system is absolutely the wrong direction to go in.\n> archive_command continues to be a mess that we refuse to clean up or\n> even properly document and the project would be much better off by\n> trying to eliminate it rather than add in new ways for users to end up\n> with bad or invalid backups.\n\nI think the bigger problem with archive_command more comes from how\nit's defined to work tbh. Which leaves a lot of things open.\n\nThis sounds to me like a much narrower use-case, which makes it a lot\nmore OK. But I agree we have to be careful not to get back into that\nwhole mess. One thing would be to clearly document such things *from\nthe beginning*, and not try to retrofit it years later like we ended\nup doing with archive_command.\n\nAnd as Robert mentions downthread, the fsync() issue is definitely a\nreal one, but if that is documented clearly ahead of time, that's a\nreasonable level foot-gun I'd say.\n\n\n> Further, having a generic shell script approach like this would result\n> in things like \"well, we don't need to actually add support for X, Y or\n> Z, because we have this wonderful generic shell script thing and you can\n> write your own, and therefore we won't accept patches which do add those\n> capabilities because then we'd have to actually maintain that support.\"\n\nIn principle, I agree with \"shellscripts suck\".\n\nNow, if we were just talking about compression, it would actually be\ninteresting to implement some sort of \"postgres compression API\" if\nyou will, that is implemented by a shared library. 
This library could\nthen be used from pg_basebackup or from anything else that needs\ncompression. And anybody who wants could then do a \"<compression X>\nfor PostgreSQL\" module, removing the need for us to carry such code\nupstream.\n\nThere's been discussions of that for the backend before IIRC, but I\ndon't recall the conclusions. And in particular, I don't recall if it\nincluded the idea of being able to use it in situations like this as\nwell, and with *run-time loading*.\n\nAnd that said, then we'd limit ourselves to compression. We'd still\nneed a way to deal with encryption...\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/\n Work: https://www.redpill-linpro.com/\n\n\n", "msg_date": "Mon, 6 Apr 2020 19:32:45 +0200", "msg_from": "Magnus Hagander <magnus@hagander.net>", "msg_from_op": false, "msg_subject": "Re: where should I stick that backup?" }, { "msg_contents": "Greetings,\n\n* Robert Haas (robertmhaas@gmail.com) wrote:\n> On Mon, Apr 6, 2020 at 10:45 AM Stephen Frost <sfrost@snowman.net> wrote:\n> > For my 2c, at least, introducing more shell commands into critical parts\n> > of the system is absolutely the wrong direction to go in.\n> > archive_command continues to be a mess that we refuse to clean up or\n> > even properly document and the project would be much better off by\n> > trying to eliminate it rather than add in new ways for users to end up\n> > with bad or invalid backups.\n> >\n> > Further, having a generic shell script approach like this would result\n> > in things like \"well, we don't need to actually add support for X, Y or\n> > Z, because we have this wonderful generic shell script thing and you can\n> > write your own, and therefore we won't accept patches which do add those\n> > capabilities because then we'd have to actually maintain that support.\"\n> >\n> > In short, -1 from me.\n> \n> I'm not sure that there's any point in responding to this because I\n> believe that the wording of this email suggests that you've made 
up\n> your mind that it's bad and that position is not subject to change no\n> matter what anyone else may say. However, I'm going to try to reply\n> anyway, on the theory that (1) I might be wrong and (2) even if I'm\n> right, it might influence the opinions of others who have not spoken\n> yet, and whose opinions may be less settled.\n\nChances certainly aren't good that you'll convince me that putting more\nabsolutely critical-to-get-perfect shell scripts into the backup path is\na good idea.\n\n> First of all, while I agree that archive_command has some problems, I\n> don't think that means that every case where we use a shell command\n> for anything is a hopeless mess. The only problem I really see in this\n> case is that if you route to a local file via an intermediate program\n> you wouldn't get an fsync() any more. But we could probably figure out\n> some clever things to work around that problem, if that's the issue.\n> If there's some other problem, what is it?\n\nWe certainly haven't solved the issues with archive_command (at least,\nnot in core), so this \"well, maybe we could fix all the issues\" claim\nreally doesn't hold any water. 
Having commands like this ends up just\npunting on the whole problem and saying \"here user, you deal with it.\"\n*Maybe* if we *also* wrote dedicated tools to be used with these\ncommands (as has been proposed multiple times with archive_command, but\nhasn't actually happened, at least, not in core), we could build\nsomething where this would work reasonably well and it'd be alright, but\nthat wasn't what seemed to be suggested here, and if we're going to\nwrite all that code anyway, it doesn't really seem like a shell\ninterface is the best one to go with.\n\nThere's also been something of an expectation that if we're going to\nprovide an interface then we should have an example of something that\nuses it- but when it comes to something like archive_command, the\nexample we came up with was terrible and yet it's still in our\ndocumentation and is commonly used, much to the disservice of our users.\nSure, we can point to our users and say \"well, that's not how you should\nactually use that feature, you should do all this other stuff in that\ncommand\" and punt on this and push it back on our users and tell them\nthat they're using the interface we provide wrong but the only folks who\ncan possibly actually like that answer is ourselves- our users aren't\nhappy with it because they're left with a broken backup that they can't\nrestore from when they needed to.\n\nThat your initial email had more-or-less the exact same kind of\n\"example\" certainly doesn't inspire confidence that this would end up\nbeing used sensibly by our users.\n\nYes, fsync() is part of the issue but it's not the only one- retry\nlogic, and making sure the results are correct, is pretty darn important\ntoo, especially with things like s3 (even dedicated tools have issues in\nthis area- I just saw a report about wal-g failing to archive a WAL file\nproperly because there was an error which resulted in a 0-byte WAL file\nbeing stored; wal-g did properly retry, but then it saw the file was\nthere 
and figured \"all is well\" and returned success even though the\nfile was 0-byte in s3). I don't doubt that David could point out a few\nother issues- he routinely does whenever I chat with him about various\nideas I've got.\n\nSo, instead of talking about 'bzip2 > %f.bz2', and then writing into our\ndocumentation that that's how this feature can be used, what about\nproposing something that would actually work reliably with this\ninterface? Which properly fsync's everything, has good retry logic for\nwhen failures happen, is able to actually detect when a failure\nhappened, how to restore from a backup taken this way, and it'd probably\nbe good to show how pg_verifybackup could be used to make sure the\nbackup is actually correct and valid too.\n\n> Second, PostgreSQL is not realistically going to link pg_basebackup\n> against every compression, encryption, and remote storage library out\n> there. One, yeah, we don't want to maintain that. Two, we don't want\n> PostgreSQL to have build-time dependencies on a dozen or more\n> libraries that people might want to use for stuff like this. We might\n> well want to incorporate support for a few of the more popular things\n> in this area, but people will always want support for newer things\n> than what existing server releases feature, and for more of them.\n\nWe don't need to link to 'every compression, encryption and remote\nstorage library out there'. In some cases, yes, it makes sense to use\nan existing library (OpenSSL, zlib, lz4), but in many other cases it\nmakes more sense to build support directly into the system (s3, gcs,\nprobably others) because a good library doesn't exist. 
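The retry-and-verify behavior that wal-g anecdote argues for, retrying failed transfers but also re-checking the stored result rather than trusting a bare success code, can be sketched as follows; the driver callbacks are hypothetical stand-ins for whatever storage backend is in use:

```c
#include <stdbool.h>
#include <stddef.h>

/*
 * Hypothetical storage driver: upload() performs the transfer,
 * stored_size() reports the size of the stored object (-1 if absent).
 */
typedef struct storage_driver
{
	bool		(*upload) (const char *name, void *state);
	long		(*stored_size) (const char *name, void *state);
	void	   *state;
} storage_driver;

static bool
archive_with_verify(const storage_driver *drv, const char *name,
					long expected_size, int max_attempts)
{
	for (int attempt = 0; attempt < max_attempts; attempt++)
	{
		if (!drv->upload(name, drv->state))
			continue;			/* the transfer itself failed: retry */
		if (drv->stored_size(name, drv->state) == expected_size)
			return true;		/* stored, and the size checks out */
		/* "success" but wrong size (e.g. a 0-byte object): retry */
	}
	return false;
}

/*
 * Mock driver for illustration: one network error, then one truncated
 * upload, then a good one.
 */
static int	mock_calls = 0;
static long mock_stored = -1;

static bool
mock_upload(const char *name, void *state)
{
	(void) name;
	(void) state;
	mock_calls++;
	if (mock_calls == 1)
		return false;			/* simulated network error */
	mock_stored = (mock_calls == 2) ? 0 : 16777216;
	return true;
}

static long
mock_stored_size(const char *name, void *state)
{
	(void) name;
	(void) state;
	return mock_stored;
}
```

The point of the size check is precisely the wal-g failure mode above: without it, the second attempt's 0-byte "success" would have been accepted.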
It'd also be\ngood to build a nicely extensible system which people can add to, to\nsupport other storage or compression options but I don't think that's\nreasonable to do with a shell-script based interface- maybe with\nshared libraries, as Magnus suggests elsewhere, maybe, but even there I\nhave some doubts.\n\n> Third, I am getting pretty tired of being told every time I try to do\n> something that is related in any way to backup that it's wrong. If\n> your experience with pgbackrest motivated you to propose ways of\n> improving backup and restore functionality in the community, that\n> would be great. But in my experience so far, it seems to mostly\n> involve making a lot of negative comments that make it hard to get\n> anything done. I would appreciate it if you would adopt a more\n> constructive tone.\n\npgbackrest is how we're working to improve backup and restore\nfunctionality in the community, and we've come a long way and gone\nthrough a great deal of fire getting there. I appreciate that it's not\nin core and I'd love to discuss how we can change that, but it's\nabsolutely a part of the PG community and ecosystem- with changes being\nmade in core routinely which improve the in-core tools as well as\npgbackrest by the authors contributing back.\n\nAs far as my tone, I'm afraid that's simply coming from having dealt\nwith and discussed many of these, well, shortcuts, to trying to improve\nbackup and recovery. Did David and I discuss using s3cmd? Of course.\nDid we research various s3 libraries? http libraries? SSL libraries?\ncompression libraries? 
Absolutely, which is why we ended up using\nOpenSSL (PG links to it already, so if you're happy enough with PG's SSL\nthen you'll probably accept pgbackrest using the same one- and yes,\nwe've talked about supporting others as PG is moving in that direction\ntoo), and zlib (same reasons), we've now added lz4 (after researching it\nand deciding it was pretty reasonable to include), but when it came to\ndealing with s3, we wrote our own HTTP and s3 code- none of the existing\nlibraries were a great answer and trying to make it work with s3cmd was,\nwell, about like saying that you should just use CSV files and forget\nabout this whole database thing. We're very likely to write our own\ncode for gcs too, but we already have the HTTP code, which means it's\nnot actually all that heavy of a lift to do.\n\nI'm not against trying to improve the situation in core, and I've even\ntalked about and tried to give feedback about what would make the most\nsense for that to look like, but I feel like every time I do that\nthere's a bunch of push-back that I want it to look like pgbackrest or\nthat I'm being negative about things that don't look like pgbackrest.\nGuess what? Yes, I do think it should look like pgbackrest, but that's\nnot because I have some not invented here syndrome issue, it's because\nwe've been through this and have learned a great deal and have taken\nwhat we've learned and worked to build the best tool we can, much the\nway the PG community works to make the best database we can.\n\nYes, we were able to argue and make it clear that a manifest really did\nmake sense and even that it should be in json format, and then argue\nthat checking WAL was a pretty important part of verifying any backup,\nbut each and every one of these ends up being a long and drawn out\nargument and it's draining. 
The thing is, this stuff isn't new to us.\n\nThanks,\n\nStephen", "msg_date": "Mon, 6 Apr 2020 14:23:07 -0400", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: where should I stick that backup?" }, { "msg_contents": "Greetings,\n\n* Magnus Hagander (magnus@hagander.net) wrote:\n> On Mon, Apr 6, 2020 at 4:45 PM Stephen Frost <sfrost@snowman.net> wrote:\n> > * Noah Misch (noah@leadboat.com) wrote:\n> > > On Fri, Apr 03, 2020 at 10:19:21AM -0400, Robert Haas wrote:\n> > > > What I'm thinking about is: suppose we add an option to pg_basebackup\n> > > > with a name like --pipe-output. This would be mutually exclusive with\n> > > > -D, but would work at least with -Ft and maybe also with -Fp. The\n> > > > argument to --pipe-output would be a shell command to be executed once\n> > > > per output file. Any instance of %f in the shell command would be\n> > > > replaced with the name of the file that would have been written (and\n> > > > %% would turn into a single %). The shell command itself would be\n> > > > executed via system(). So if you want to compress, but using some\n> > > > other compression program instead of gzip, you could do something\n> > > > like:\n> > > >\n> > > > pg_basebackup -Ft --pipe-output 'bzip > %f.bz2'\n> > >\n> > > Seems good to me. I agree -Fp is a \"maybe\" since the overhead will be high\n> > > for small files.\n> >\n> > For my 2c, at least, introducing more shell commands into critical parts\n> > of the system is absolutely the wrong direction to go in.\n> > archive_command continues to be a mess that we refuse to clean up or\n> > even properly document and the project would be much better off by\n> > trying to eliminate it rather than add in new ways for users to end up\n> > with bad or invalid backups.\n> \n> I think the bigger problem with archive_command more comes from how\n> it's defined to work tbh. 
Which leaves a lot of things open.\n> \n> This sounds to me like a much narrower use-case, which makes it a lot\n> more OK. But I agree we have to be careful not to get back into that\n> whole mess. One thing would be to clearly document such things *from\n> the beginning*, and not try to retrofit it years later like we ended\n> up doing with archive_command.\n\nThis sounds like a much broader use-case to me, not a narrower one. I\nagree that we don't want to try and retrofit things years later.\n\n> And as Robert mentions downthread, the fsync() issue is definitely a\n> real one, but if that is documented clearly ahead of time, that's a\n> reasonable level foot-gun I'd say.\n\nDocumented how..?\n\n> > Further, having a generic shell script approach like this would result\n> > in things like \"well, we don't need to actually add support for X, Y or\n> > Z, because we have this wonderful generic shell script thing and you can\n> > write your own, and therefore we won't accept patches which do add those\n> > capabilities because then we'd have to actually maintain that support.\"\n> \n> In principle, I agree with \"shellscripts suck\".\n> \n> Now, if we were just talking about compression, it would actually be\n> interesting to implement some sort of \"postgres compression API\" if\n> you will, that is implemented by a shared library. This library could\n> then be used from pg_basebackup or from anything else that needs\n> compression. And anybody who wants could then do a \"<compression X>\n> for PostgreSQL\" module, removing the need for us to carry such code\n> upstream.\n\nGetting a bit off-track here, but I actually think we should absolutely\nfigure out a way to support custom compression options in PG. 
I had\nbeen thinking of something along the lines of per-datatype actually,\nwhere each data type could define it's own compression method, since we\nknow that different data has different characteristics and therefore\nmight benefit from different ways of compressing it. Though it's also\ntrue that generically there are tradeoffs between cpu time, memory size,\nresulting size on disk, etc, and having ways to pick between those could\nalso be interesting.\n\n> There's been discussions of that for the backend before IIRC, but I\n> don't recall the conclusions. And in particular, I don't recall if it\n> included the idea of being able to use it in situations like this as\n> well, and with *run-time loading*.\n\nRun-time loading brings in the fun that maybe we aren't able to load the\nlibrary when we need to too, and what then? :)\n\n> And that said, then we'd limit ourselves to compression. We'd still\n> need a way to deal with encryption...\n\nAnd shipping stuff off to some remote server too, at least if we are\ngoing to tell users that they can use this approach to send their\nbackups to s3... (and that reminds me- there's other things to think\nabout there too, like maybe you don't want to ship off 0-byte files to\ns3, or maybe you don't want to ship tiny files, because there's costs\nassociated with these things...).\n\nThanks,\n\nStephen", "msg_date": "Mon, 6 Apr 2020 14:31:50 -0400", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: where should I stick that backup?" }, { "msg_contents": "On Mon, Apr 6, 2020 at 2:23 PM Stephen Frost <sfrost@snowman.net> wrote:\n> So, instead of talking about 'bzip2 > %f.bz2', and then writing into our\n> documentation that that's how this feature can be used, what about\n> proposing something that would actually work reliably with this\n> interface? 
Which properly fsync's everything, has good retry logic for\n> when failures happen, is able to actually detect when a failure\n> happened, how to restore from a backup taken this way, and it'd probably\n> be good to show how pg_verifybackup could be used to make sure the\n> backup is actually correct and valid too.\n\nI don't really understand the problem here. Suppose I do:\n\nmkdir ~/my-brand-new-empty-directory\ncd ~/my-brand-new-empty-directory\npg_basebackup -Ft --pipe-output 'bzip2 > %f.bz2'\ninitdb -S --dont-expect-that-this-is-a-data-directory . # because\nright now it would complain about pg_wal and pg_tblspc being missing\n\nI think if all that works, my backup should be good and durably on\ndisk. If it's not, then either pg_basebackup or bzip2 or initdb didn't\nreport errors that they should have reported. If you're worried about\nthat, say because you suspect those programs are buggy or because you\nthink the kernel may not be reporting errors properly, you can use tar\n-jxvf + pg_validatebackup to check.\n\nWhat *exactly* do you think can go wrong here?\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Mon, 6 Apr 2020 17:09:17 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: where should I stick that backup?" }, { "msg_contents": "On Mon, Apr 6, 2020 at 1:32 PM Magnus Hagander <magnus@hagander.net> wrote:\n> Now, if we were just talking about compression, it would actually be\n> interesting to implement some sort of \"postgres compression API\" if\n> you will, that is implemented by a shared library. This library could\n> then be used from pg_basebackup or from anything else that needs\n> compression. And anybody who wants could then do a \"<compression X>\n> for PostgreSQL\" module, removing the need for us to carry such code\n> upstream.\n\nI think it could be more general than a compression library. 
It could\nbe a store-my-stuff-and-give-it-back-to-me library, which might do\ncompression or encryption or cloud storage or any combination of the\nthree, and probably other stuff too. Imagine that you first call an\ninit function with a namespace that is basically a string provided by\nthe user. Then you open a file either for read or for write (but not\nboth). Then you read or write a series of chunks (depending on the\nfile mode). Then you close the file. Then you can do the same with\nmore files. Finally at the end you close the namespace. You don't\nreally need to care where or how the functions you are calling store\nthe data. You just need them to return proper error indicators if by\nchance they fail.\n\nAs compared with my previous proposal, this would work much better for\npg_basebackup -Fp, because you wouldn't launch a new bzip2 process for\nevery file. You'd just bzopen(), which is presumably quite lightweight\nby comparison. The reasons I didn't propose it are:\n\n1. Running bzip2 on every file in a plain-format backup seems a lot\nsillier than running it on every tar file in a tar-format backup.\n2. I'm not confident that the command specified here actually needs to\nbe anything very complicated (unlike archive_command).\n3. The barrier to entry for a loadable module is a lot higher than for\na shell command.\n4. I think that all of our existing infrastructure for loadable\nmodules is backend-only.\n\nNow all of these are up for discussion. I am sure we can make the\nloadable module stuff work in frontend code; it would just take some\nwork. A C interface for extensibility is very significantly harder to\nuse than a shell interface, but it's still way better than no\ninterface. The idea that this shell command can be something simple is\nmy current belief, but it may turn out to be wrong. 
And I'm sure\nsomebody can propose a good reason to do something with every file in\na plain-format backup rather than using tar format.\n\nAll that being said, I still find it hard to believe that we will want\nto add dependencies for libraries that we'd need to do encryption or\nS3 cloud storage to PostgreSQL itself. So if we go with this more\nintegrated approach we should consider the possibility that, when the\ndust settles, PostgreSQL will only have pg_basebackup\n--output-plugin=lz4 and Aurora will also have pg_basebackup\n--output-plugin=s3. From my point of view, that would be less than\nideal.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Mon, 6 Apr 2020 17:27:24 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: where should I stick that backup?" }, { "msg_contents": "Greetings,\n\n* Robert Haas (robertmhaas@gmail.com) wrote:\n> On Mon, Apr 6, 2020 at 2:23 PM Stephen Frost <sfrost@snowman.net> wrote:\n> > So, instead of talking about 'bzip2 > %f.bz2', and then writing into our\n> > documentation that that's how this feature can be used, what about\n> > proposing something that would actually work reliably with this\n> > interface? Which properly fsync's everything, has good retry logic for\n> > when failures happen, is able to actually detect when a failure\n> > happened, how to restore from a backup taken this way, and it'd probably\n> > be good to show how pg_verifybackup could be used to make sure the\n> > backup is actually correct and valid too.\n> \n> I don't really understand the problem here. Suppose I do:\n> \n> mkdir ~/my-brand-new-empty-directory\n> cd ~/my-brand-new-empty-directory\n> pg_basebackup -Ft --pipe-output 'bzip2 > %f.bz2'\n> initdb -S --dont-expect-that-this-is-a-data-directory . 
# because\n> right now it would complain about pg_wal and pg_tblspc being missing\n> \n> I think if all that works, my backup should be good and durably on\n> disk. If it's not, then either pg_basebackup or bzip2 or initdb didn't\n> report errors that they should have reported. If you're worried about\n> that, say because you suspect those programs are buggy or because you\n> think the kernel may not be reporting errors properly, you can use tar\n> -jxvf + pg_validatebackup to check.\n> \n> What *exactly* do you think can go wrong here?\n\nWhat if %f.bz2 already exists? How about if %f has a space in it? What\nabout if I'd like to verify that the backup looks reasonably valid\nwithout having to find space to store it entirely decompressed?\n\nAlso, this argument feels disingenuous to me. That isn't the only thing\nyou're promoting this feature be used for, as you say below. If the\nonly thing this feature is *actually* intended for is to add bzip2\nsupport, then we should just add bzip2 support directly and call it a\nday, but what you're really talking about here is a generic interface\nthat you'll want to push users to for things like \"how do I back up to\ns3\" or \"how do I back up to GCS\" and so we should be thinking about\nthose cases and not just a relatively simple use case.\n\nThis is the same kind of slippery slope that our archive command is\nbuilt on- sure, if everything \"works\" then it's \"fine\", even with our\ndocumented example, but we know that not everything works in the real\nworld, and just throwing an 'initdb -S' in there isn't a general\nsolution because users want to do things like send WAL to s3 or GCS or\nsuch.\n\nI don't think there's any doubt that there'll be no shortage of shell\nscripts and various other things that'll be used with this that, yes,\nwill be provided by our users and therefore we can blame them for doing\nit wrong, but then they'll complain on our lists and we'll spend time\neducating them as to how to write proper 
software to be used, or\npointing them to a solution that someone writes specifically for this.\nI don't view that as, ultimately, a good solution.\n\n* Robert Haas (robertmhaas@gmail.com) wrote:\n> On Mon, Apr 6, 2020 at 1:32 PM Magnus Hagander <magnus@hagander.net> wrote:\n> > Now, if we were just talking about compression, it would actually be\n> > interesting to implement some sort of \"postgres compression API\" if\n> > you will, that is implemented by a shared library. This library could\n> > then be used from pg_basebackup or from anything else that needs\n> > compression. And anybody who wants could then do a \"<compression X>\n> > for PostgreSQL\" module, removing the need for us to carry such code\n> > upstream.\n> \n> I think it could be more general than a compression library. It could\n> be a store-my-stuff-and-give-it-back-to-me library, which might do\n> compression or encryption or cloud storage or any combination of the\n> three, and probably other stuff too. Imagine that you first call an\n> init function with a namespace that is basically a string provided by\n> the user. Then you open a file either for read or for write (but not\n> both). Then you read or write a series of chunks (depending on the\n> file mode). Then you close the file. Then you can do the same with\n> more files. Finally at the end you close the namespace. You don't\n> really need to care where or how the functions you are calling store\n> the data. You just need them to return proper error indicators if by\n> chance they fail.\n\nYes, having a storage layer makes a lot of sense here, with features\nthat are understood by the core system and which each driver\nunderstands, and then having a filter system which is also pluggable and\ncan support things like compression and hashing for this would also be\ngreat.\n\nI can point you to examples of all of the above, already implemented, in\nC, all OSS. 
Sure seems like a pretty good and reasonable approach to\ntake when other folks are doing it.\n\n> As compared with my previous proposal, this would work much better for\n> pg_basebackup -Fp, because you wouldn't launch a new bzip2 process for\n> every file. You'd just bzopen(), which is presumably quite lightweight\n> by comparison. The reasons I didn't propose it are:\n> \n> 1. Running bzip2 on every file in a plain-format backup seems a lot\n> sillier than running it on every tar file in a tar-format backup.\n\nThis is circular, isn't it? It's silly because you're launching new\nbzip2 processes for every file, but if you were using bzopen() then you\nwouldn't have that issue and therefore compressing every file in a\nplain-format backup would be entirely reasonable.\n\n> 2. I'm not confident that the command specified here actually needs to\n> be anything very complicated (unlike archive_command).\n\nThis.. just doesn't make sense to me. The above talks about pushing\nthings to cloud storage and such, which is definitely much more\ncomplicated than what had really been contemplated when archive_command\nwas introduced.\n\n> 3. The barrier to entry for a loadable module is a lot higher than for\n> a shell command.\n\nSure.\n\n> 4. I think that all of our existing infrastructure for loadable\n> modules is backend-only.\n\nThat certainly doesn't seem a terrible hurdle, but I'm not convinced\nthat we'd actually need or want this to be done through loadable\nmodules- I'd argue that we should, instead, be thinking about building a\nsystem where we could accept patches that add in new drivers and new\nfilters to core, where they're reviewed and well written.\n\n> Now all of these are up for discussion. I am sure we can make the\n> loadable module stuff work in frontend code; it would just take some\n> work. A C interface for extensibility is very significantly harder to\n> use than a shell interface, but it's still way better than no\n> interface. 
The idea that this shell command can be something simple is\n> my current belief, but it may turn out to be wrong. And I'm sure\n> somebody can propose a good reason to do something with every file in\n> a plain-format backup rather than using tar format.\n\nI've already tried to point out that the shell command you're talking\nabout isn't going to be able to just be a simple command if the idea is\nthat it'd be used to send things to s3 or gcs or anything like that.\n*Maybe* it could be simple if the only thing it's used for is a simple\ncompression filter (though we'd have to deal with the whole fsync thing,\nas discussed), but it seems very likely that everyone would be a lot\nhappier if we just built in support for bzip2, lz4, gzip, whatever, and\nthat certainly doesn't strike me as a large ask in terms of code\ncomplexity or level of effort.\n\n> All that being said, I still find it hard to believe that we will want\n> to add dependencies for libraries that we'd need to do encryption or\n> S3 cloud storage to PostgreSQL itself. So if we go with this more\n> integrated approach we should consider the possibility that, when the\n> dust settles, PostgreSQL will only have pg_basebackup\n> --output-plugin=lz4 and Aurora will also have pg_basebackup\n> --output-plugin=s3. From my point of view, that would be less than\n> ideal.\n\nWe already have libraries for encryption and we do not have to add\nlibraries for s3 storage to support it as an option, as I mentioned\nup-thread. I don't find the argument that someone else might extend\npg_basebackup (or whatever) to add on new features to be one that\nconcerns me terribly much, provided we give people the opportunity to\nadd those same features into core if they're willing to put in the\neffort to make it happen. 
I'm quite concerned that using this generic\n\"you can just write a shell script to do it\" approach will be used, over\nand over again, as an argument or at least a deterrent to having\nsomething proper in core and will ultimately result in us not having any\ngood solution in core for the very common use cases that our users have\ntoday.\n\nThat certainly seems like what's happened with archive_command.\n\nThanks,\n\nStephen", "msg_date": "Wed, 8 Apr 2020 13:05:28 -0400", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: where should I stick that backup?" }, { "msg_contents": "On Wed, Apr 8, 2020 at 1:05 PM Stephen Frost <sfrost@snowman.net> wrote:\n> What if %f.bz2 already exists?\n\nThat cannot occur in the scenario I described.\n\n> How about if %f has a space in it?\n\nFor a tar-format backup I don't think that can happen, because the\nfile names will be base.tar and ${tablespace_oid}.tar. For a plain\nformat backup it's a potential issue.\n\n> What\n> about if I'd like to verify that the backup looks reasonably valid\n> without having to find space to store it entirely decompressed?\n\nThen we need to make pg_validatebackup better.\n\n> Also, this argument feels disingenuous to me.\n> [ lots more stuff ]\n\nThis all just sounds like fearmongering to me. \"archive_command\ndoesn't work very well, so maybe your thing won't either.\" Maybe it\nwon't, but the fact that archive_command doesn't isn't a reason.\n\n> Yes, having a storage layer makes a lot of sense here, with features\n> that are understood by the core system and which each driver\n> understands, and then having a filter system which is also pluggable and\n> can support things like compression and hashing for this would also be\n> great.\n\nIt's good to know that you prefer a C interface to one based on shell\nscripting. 
I hope that we will also get some other opinions on that\nquestion, as my own feelings are somewhat divided (but with some bias\ntoward trying to make the shell scripting thing work, because I\nbelieve it gives a lot more practical flexibility).\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Wed, 8 Apr 2020 13:33:32 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: where should I stick that backup?" }, { "msg_contents": "Greetings,\n\n* Robert Haas (robertmhaas@gmail.com) wrote:\n> On Wed, Apr 8, 2020 at 1:05 PM Stephen Frost <sfrost@snowman.net> wrote:\n> > What if %f.bz2 already exists?\n> \n> That cannot occur in the scenario I described.\n\nOf course it can.\n\n> > How about if %f has a space in it?\n> \n> For a tar-format backup I don't think that can happen, because the\n> file names will be base.tar and ${tablespace_oid}.tar. For a plain\n> format backup it's a potential issue.\n\nI agree that it might not be an issue for tar-format.\n\n> > What\n> > about if I'd like to verify that the backup looks reasonably valid\n> > without having to find space to store it entirely decompressed?\n> \n> Then we need to make pg_validatebackup better.\n\nSure- but shouldn't the design be contemplating how these various tools\nwill work together?\n\n> > Also, this argument feels disingenuous to me.\n> > [ lots more stuff ]\n> \n> This all just sounds like fearmongering to me. \"archive_command\n> doesn't work very well, so maybe your thing won't either.\" Maybe it\n> won't, but the fact that archive_command doesn't isn't a reason.\n\nI was trying to explain that we have literally gone down exactly this\npath before and it's not been a good result, hence we should be really\ncareful before going down it again. 
I don't consider that to be\nfearmongering, nor that we should be dismissing that concern out of\nhand.\n\n> > Yes, having a storage layer makes a lot of sense here, with features\n> > that are understood by the core system and which each driver\n> > understands, and then having a filter system which is also pluggable and\n> > can support things like compression and hashing for this would also be\n> > great.\n> \n> It's good to know that you prefer a C interface to one based on shell\n> scripting. I hope that we will also get some other opinions on that\n> question, as my own feelings are somewhat divided (but with some bias\n> toward trying to making the shell scripting thing work, because I\n> believe it gives a lot more practical flexibility).\n\nYes, I do prefer a C interface. One might even say \"been there, done\nthat.\" Hopefully sharing such experience is still useful to do on these\nlists.\n\nThanks,\n\nStephen", "msg_date": "Wed, 8 Apr 2020 14:06:12 -0400", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: where should I stick that backup?" }, { "msg_contents": "On Wed, Apr 8, 2020 at 2:06 PM Stephen Frost <sfrost@snowman.net> wrote:\n> * Robert Haas (robertmhaas@gmail.com) wrote:\n> > On Wed, Apr 8, 2020 at 1:05 PM Stephen Frost <sfrost@snowman.net> wrote:\n> > > What if %f.bz2 already exists?\n> >\n> > That cannot occur in the scenario I described.\n>\n> Of course it can.\n\nNot really. The steps I described involved creating a new directory.\nYeah, in theory, somebody could inject a file into that directory\nafter we created it and before bzip writes any files into it, but\npg_basebackup already has the exact same race condition.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Wed, 8 Apr 2020 15:38:35 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: where should I stick that backup?" 
}, { "msg_contents": "Greetings,\n\n* Robert Haas (robertmhaas@gmail.com) wrote:\n> On Wed, Apr 8, 2020 at 2:06 PM Stephen Frost <sfrost@snowman.net> wrote:\n> > * Robert Haas (robertmhaas@gmail.com) wrote:\n> > > On Wed, Apr 8, 2020 at 1:05 PM Stephen Frost <sfrost@snowman.net> wrote:\n> > > > What if %f.bz2 already exists?\n> > >\n> > > That cannot occur in the scenario I described.\n> >\n> > Of course it can.\n> \n> Not really. The steps I described involved creating a new directory.\n> Yeah, in theory, somebody could inject a file into that directory\n> after we created it and before bzip writes any files into it, but\n> pg_basebackup already has the exact same race condition.\n\nWith pg_basebackup, at least we could reasonably fix that race\ncondition.\n\nThanks,\n\nStephen", "msg_date": "Wed, 8 Apr 2020 15:43:15 -0400", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: where should I stick that backup?" }, { "msg_contents": "On Mon, Apr 6, 2020 at 07:32:45PM +0200, Magnus Hagander wrote:\n> On Mon, Apr 6, 2020 at 4:45 PM Stephen Frost <sfrost@snowman.net> wrote:\n> > For my 2c, at least, introducing more shell commands into critical parts\n> > of the system is absolutely the wrong direction to go in.\n> > archive_command continues to be a mess that we refuse to clean up or\n> > even properly document and the project would be much better off by\n> > trying to eliminate it rather than add in new ways for users to end up\n> > with bad or invalid backups.\n> \n> I think the bigger problem with archive_command more comes from how\n> it's defined to work tbh. Which leaves a lot of things open.\n> \n> This sounds to me like a much narrower use-case, which makes it a lot\n> more OK. But I agree we have to be careful not to get back into that\n> whole mess. 
One thing would be to clearly document such things *from\n> the beginning*, and not try to retrofit it years later like we ended\n> up doing with archive_command.\n> \n> And as Robert mentions downthread, the fsync() issue is definitely a\n> real one, but if that is documented clearly ahead of time, that's a\n> reasonable level foot-gun I'd say.\n\nI think we need to step back and look at the larger issue. The real\nargument goes back to the Unix command-line API vs the VMS/Windows API. \nThe former has discrete parts that can be stitched together, while the\nVMS/Windows API presents a more duplicative but more holistic API for\nevery piece. We have discussed using shell commands for\narchive_command, and even more recently, for the server pass phrase. \n\nTo get more specific, I think we have to understand how the\n_requirements_ of the job match the shell script API, with stdin,\nstdout, stderr, return code, and command-line arguments. Looking at\narchive_command, the command-line arguments allow specification of file\nnames, but quoting can be complex. The error return code and stderr\noutput seem to work fine. There is no clean API for fsync and testing\nif the file exists, so that all that has to be hand done in one\ncommand-line. This is why many users use pre-written archive_command\nshell scripts.\n\nThis brings up a few questions:\n\n* Should we have split apart archive_command into file-exists, copy,\nfsync-file? Should we add that now?\n\n* How well does this backup requirement match with the shell command\nAPI?\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n+ As you are, so once was I. As I am, so you will be. +\n+ Ancient Roman grave inscription +\n\n\n", "msg_date": "Thu, 9 Apr 2020 15:57:00 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: where should I stick that backup?" 
}, { "msg_contents": "Greetings,\n\n* Bruce Momjian (bruce@momjian.us) wrote:\n> I think we need to step back and look at the larger issue. The real\n> argument goes back to the Unix command-line API vs the VMS/Windows API. \n> The former has discrete parts that can be stitched together, while the\n> VMS/Windows API presents a more duplicative but more holistic API for\n> every piece. We have discussed using shell commands for\n> archive_command, and even more recently, for the server pass phrase. \n\nWhen it comes to something like the server pass phrase, it seems much\nmore reasonable to consider using a shell script (though still perhaps\nnot ideal) because it's not involved directly in ensuring that the data\nis reliably stored and it's pretty clear that if it doesn't work the\nworst thing that happens is that the database doesn't start up, but it\nwon't corrupt any data or destroy it or do other bad things.\n\n> To get more specific, I think we have to understand how the\n> _requirements_ of the job match the shell script API, with stdin,\n> stdout, stderr, return code, and command-line arguments. Looking at\n> archive_command, the command-line arguments allow specification of file\n> names, but quoting can be complex. The error return code and stderr\n> output seem to work fine. There is no clean API for fsync and testing\n> if the file exists, so that all that has to be hand done in one\n> command-line. 
This is why many users use pre-written archive_command\n> shell scripts.\n\nWe aren't considering all of the use-cases really though, in specific,\nthings like pushing to s3 or gcs require, at least, good retry logic,\nand that's without starting to think about things like high-rate systems\n(spawning lots of new processes isn't free, particularly if they're\nwritten in shell script but any interpreted language is expensive) and\nwanting to parallelize.\n\n> This brings up a few questions:\n> \n> * Should we have split apart archive_command into file-exists, copy,\n> fsync-file? Should we add that now?\n\nNo.. The right approach to improving on archive command is to add a way\nfor an extension to take over that job, maybe with a complete background\nworker of its own, or perhaps a shared library that can be loaded by the\narchiver process, at least if we're talking about how to allow people to\nextend it.\n\nPotentially a better answer is to just build this stuff into PG- things\nlike \"archive WAL to s3/GCS with these credentials\" are what an awful\nlot of users want. There's then some who want \"archive first to this\nother server, and then archive to s3/GCS\", or more complex options.\n\nI'll also point out that there's not one \"s3\".. there's quite a few\nalternatives, including some which are open source, which talk the s3\nprotocol (sadly, they don't all do it perfectly, which is why we are\ntalking about building a GCS-specific driver for gcs rather than using\ntheir s3 gateway, but still, s3 isn't just 'one thing').\n\n> * How well does this backup requirement match with the shell command\n> API?\n\nFor my part, it's not just a question of an API, but it's a question of\nwho is going to implement a good and reliable solution- PG developers,\nor some admin who is just trying to get PG up and running in their\nenvironment..? One aspect of that is being knowledgable about where all\nthe land mines are- like the whole fsync thing. 
Sure, if you're a PG\ndeveloper or you've been around long enough, you're going to realize\nthat 'cp' isn't going to fsync() the file and therefore it's a pretty\nhigh risk choice for archive_command, and you'll understand just how\nimportant WAL is, but there's certainly an awful lot of folks out there\nwho don't realize that or at least don't think about it when they're\nstanding up a new system and instead they just are following our docs\nwith the expectation that those docs are providing good advice.\n\nThanks,\n\nStephen", "msg_date": "Thu, 9 Apr 2020 16:15:07 -0400", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: where should I stick that backup?" }, { "msg_contents": "On Thu, Apr 9, 2020 at 04:15:07PM -0400, Stephen Frost wrote:\n> Greetings,\n> \n> * Bruce Momjian (bruce@momjian.us) wrote:\n> > I think we need to step back and look at the larger issue. The real\n> > argument goes back to the Unix command-line API vs the VMS/Windows API. \n> > The former has discrete parts that can be stitched together, while the\n> > VMS/Windows API presents a more duplicative but more holistic API for\n> > every piece. We have discussed using shell commands for\n> > archive_command, and even more recently, for the server pass phrase. \n> \n> When it comes to something like the server pass phrase, it seems much\n> more reasonable to consider using a shell script (though still perhaps\n> not ideal) because it's not involved directly in ensuring that the data\n> is reliably stored and it's pretty clear that if it doesn't work the\n> worst thing that happens is that the database doesn't start up, but it\n> won't corrupt any data or destroy it or do other bad things.\n\nWell, the pass phrase relates to security, so it is important too. I\ndon't think the _importance_ of the action is the most determining\nissue. 
Rather, I think it is how well the action fits the shell script\nAPI.\n\n> > To get more specific, I think we have to understand how the\n> > _requirements_ of the job match the shell script API, with stdin,\n> > stdout, stderr, return code, and command-line arguments. Looking at\n> > archive_command, the command-line arguments allow specification of file\n> > names, but quoting can be complex. The error return code and stderr\n> > output seem to work fine. There is no clean API for fsync and testing\n> > if the file exists, so that all that has to be hand done in one\n> > command-line. This is why many users use pre-written archive_command\n> > shell scripts.\n> \n> We aren't considering all of the use-cases really though, in specific,\n> things like pushing to s3 or gcs require, at least, good retry logic,\n> and that's without starting to think about things like high-rate systems\n> (spawning lots of new processes isn't free, particularly if they're\n> written in shell script but any interpreted language is expensive) and\n> wanting to parallelize.\n\nGood point, but if there are multiple APIs, it makes shell script\nflexibility even more useful.\n\n> > This brings up a few questions:\n> > \n> > * Should we have split apart archive_command into file-exists, copy,\n> > fsync-file? Should we add that now?\n> \n> No.. The right approach to improving on archive command is to add a way\n> for an extension to take over that job, maybe with a complete background\n> worker of its own, or perhaps a shared library that can be loaded by the\n> archiver process, at least if we're talking about how to allow people to\n> extend it.\n\nThat seems quite vague, which is the issue we had years ago when\nconsidering doing archive_command as a link to a C library.\n\n> Potentially a better answer is to just build this stuff into PG- things\n> like \"archive WAL to s3/GCS with these credentials\" are what an awful\n> lot of users want. 
There's then some who want \"archive first to this\n> other server, and then archive to s3/GCS\", or more complex options.\n\nYes, we certainly know how to do a file system copy, but what about\ncopying files to other things like S3? I don't know how we would do\nthat and allow users to change things like file paths or URLs.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n+ As you are, so once was I. As I am, so you will be. +\n+ Ancient Roman grave inscription +\n\n\n", "msg_date": "Thu, 9 Apr 2020 18:44:48 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: where should I stick that backup?" }, { "msg_contents": "On Thu, Apr 9, 2020 at 6:44 PM Bruce Momjian <bruce@momjian.us> wrote:\n> Good point, but if there are multiple APIs, it makes shell script\n> flexibility even more useful.\n\nThis is really the key point for me. There are so many existing tools\nthat store a file someplace that we really can't ever hope to support\nthem all in core, or even to have well-written extensions that support\nthem all available on PGXN or wherever. We need to integrate with the\ntools that other people have created, not try to reinvent them all in\nPostgreSQL.\n\nNow what I understand Stephen to be saying is that a lot of those\ntools actually suck, and I think that's a completely valid point. But\nI also think that it's unwise to decide that such problems are our\nproblems rather than problems with those tools. That's a hole with no\nbottom.\n\nOne thing I do think would be realistic would be to invent a set of\ntools that are perform certain local filesystem operations in a\n\"hardened\" way. Maybe a single tool with subcommands and options. So\nyou could say, e.g. 
'pgfile cp SOURCE TARGET' and it would create a\ntemporary file in the target directory, write the contents of the\nsource into that file, fsync the file, rename it into place, and do\nmore fsyncs to make sure it's all durable in case of a crash. You\ncould have a variant of this that instead of using the temporary file\nand rename in place approach, does the thing where you open the target\nfile with O_CREAT|O_EXCL, writes the bytes, and then closes and fsyncs\nit. And you could have other things too, like 'pgfile mkdir DIR' to\ncreate a directory and fsync it for durability. A toolset like this\nwould probably help people write better archive commands - it would\ncertainly be an improvement over what we have now, anyway, and it\ncould also be used with the feature that I proposed upthread.\n\nFor example, if you're concerned that bzip might overwrite an existing\nfile and that it might not fsync, then instead of saying:\n\npg_basebackup -Ft --pipe-output 'bzip > %f.bz2'\n\nYou could instead write:\n\npg_basebackup -Ft --pipe-output 'bzip | pgfile create-exclusive - %f.bz2'\n\nor whatever we pick for actual syntax. And that provides a kind of\nhardening that can be used with any other command line tool that can\nbe used as a filter.\n\nIf you want to compress with bzip, encrypt, and then copy the file to\na remote system, you could do:\n\npg_basebackup -Ft --pipe-output 'bzip | gpg -e | ssh someuser@somehost\npgfile create-exclusive - /backups/tuesday/%f.bz2'\n\nIt is of course not impossible to teach pg_basebackup to do all of\nthat stuff internally, but I have a really difficult time imagining us\never getting it done. There are just too many possibilities, and new\nones arise all the time.\n\nA 'pgfile' utility wouldn't help at all for people who are storing to\nS3 or whatever. 
They could use 'aws s3' as a target for --pipe-output,\nbut if it turns out that said tool is insufficiently robust in terms\nof overwriting files or doing fsyncs or whatever, then they might have\nproblems. Now, Stephen or anyone else could choose to provide\nalternative tools with more robust behavior, and that would be great.\nBut even if he didn't, people could take their chances with what's\nalready out there. To me, that's a good thing. Yeah, maybe they'll do\ndumb things that don't work, but realistically, they can do dumb stuff\nwithout the proposed option too.\n\n> Yes, we certainly know how to do a file system copy, but what about\n> copying files to other things like S3? I don't know how we would do\n> that and allow users to change things like file paths or URLs.\n\nRight. I think it's key that we provide people with tools that are\nhighly flexible and, ideally, also highly composable.\n\n(Incidentally, pg_basebackup already has an option to output the\nentire backup as a tarfile on standard output, and a user can already\npipe that into any tool they like. However, it doesn't work with\ntablespaces. So you could think of this proposal as extending the\nexisting functionality to cover that case.)\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Fri, 10 Apr 2020 09:49:07 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: where should I stick that backup?" }, { "msg_contents": "Greetings,\n\n* Bruce Momjian (bruce@momjian.us) wrote:\n> On Thu, Apr 9, 2020 at 04:15:07PM -0400, Stephen Frost wrote:\n> > * Bruce Momjian (bruce@momjian.us) wrote:\n> > > I think we need to step back and look at the larger issue. The real\n> > > argument goes back to the Unix command-line API vs the VMS/Windows API. 
\n> > > The former has discrete parts that can be stitched together, while the\n> > > VMS/Windows API presents a more duplicative but more holistic API for\n> > > every piece. We have discussed using shell commands for\n> > > archive_command, and even more recently, for the server pass phrase. \n> > \n> > When it comes to something like the server pass phrase, it seems much\n> > more reasonable to consider using a shell script (though still perhaps\n> > not ideal) because it's not involved directly in ensuring that the data\n> > is reliably stored and it's pretty clear that if it doesn't work the\n> > worst thing that happens is that the database doesn't start up, but it\n> > won't corrupt any data or destroy it or do other bad things.\n> \n> Well, the pass phrase relates to security, so it is important too. I\n> don't think the _importance_ of the action is the most determining\n> issue. Rather, I think it is how well the action fits the shell script\n> API.\n\nThere isn't a single 'shell script API' though, and it's possible to\ncraft a 'shell script API' to fit nearly any use-case, but that doesn't\nmake it a good solution. The amount we depend on the external code for\nthe correct operation of the system is relevant, and important to\nconsider.\n\n> > > To get more specific, I think we have to understand how the\n> > > _requirements_ of the job match the shell script API, with stdin,\n> > > stdout, stderr, return code, and command-line arguments. Looking at\n> > > archive_command, the command-line arguments allow specification of file\n> > > names, but quoting can be complex. The error return code and stderr\n> > > output seem to work fine. There is no clean API for fsync and testing\n> > > if the file exists, so that all that has to be hand done in one\n> > > command-line. 
This is why many users use pre-written archive_command\n> > > shell scripts.\n> > \n> > We aren't considering all of the use-cases really though, in specific,\n> > things like pushing to s3 or gcs require, at least, good retry logic,\n> > and that's without starting to think about things like high-rate systems\n> > (spawning lots of new processes isn't free, particularly if they're\n> > written in shell script but any interpreted language is expensive) and\n> > wanting to parallelize.\n> \n> Good point, but if there are multiple APIs, it makes shell script\n> flexibility even more useful.\n\nThis doesn't seem to answer the concerns that I brought up.\n\nTrying to understand it did make me think of another relevant question\nthat was brought up in this discussion- can we really expect users to\nactually implement a C library for this, if we provided a way for them\nto? For that, I'd point to FDWs, where we certainly don't have any\nshortage of external, written in C, solutions. Another would be logical\ndecoding.\n\n> > > This brings up a few questions:\n> > > \n> > > * Should we have split apart archive_command into file-exists, copy,\n> > > fsync-file? Should we add that now?\n> > \n> > No.. 
The right approach to improving on archive command is to add a way\n> > for an extension to take over that job, maybe with a complete background\n> > worker of its own, or perhaps a shared library that can be loaded by the\n> > archiver process, at least if we're talking about how to allow people to\n> > extend it.\n> \n> That seems quite vague, which is the issue we had years ago when\n> considering doing archive_command as a link to a C library.\n\nThat prior discussion isn't really relevant though, as it was before we\nhad extensions, and before we had background workers that can run as part\nof an extension.\n\n> > Potentially a better answer is to just build this stuff into PG- things\n> > like \"archive WAL to s3/GCS with these credentials\" are what an awful\n> > lot of users want. There's then some who want \"archive first to this\n> > other server, and then archive to s3/GCS\", or more complex options.\n> \n> Yes, we certainly know how to do a file system copy, but what about\n> copying files to other things like S3? I don't know how we would do\n> that and allow users to change things like file paths or URLs.\n\nThere's a few different ways we could go about this. The simple answer\nwould be to use GUCs, which would simplify things like dealing with the\nrestore side too. Another option would be to have a concept of\n'repository' objects in the system, not unlike tablespaces, but they'd\nhave more options. To deal with that during recovery though, we'd need\na way to get the relevant information from the catalogs (maybe we write\nthe catalog out to a flat file on update, not unlike what we used to do\nwith pg_shadow), perhaps even in a format that users could modify if\nthey needed to. 
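To make the simple GUC-based variant concrete, a single-repo setup might look something like the fragment below. This is purely illustrative; none of these settings exist today.

```
# Hypothetical postgresql.conf fragment for one built-in repository
archive_repo_type = 's3'              # or 'posix', 'gcs', ...
archive_repo_endpoint = 's3.example.com'
archive_repo_bucket = 'wal-archive'
archive_repo_region = 'us-east-1'
archive_repo_retry_max = 5
```

Each additional repository would need its own copy of every one of those fields, which is exactly where a flat set of GUCs starts to strain.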
The nice thing about having actual objects in the\nsystem is that it'd be a bit cleaner to be able to define multiple ones\nand then have SQL-level functions/commands that work with them.\n\nA good deal of this does involve the question about how to deal with\nrecovery though, since you might want to, or need to, use different\noptions when it comes to recovery. Back to the use-case that I was\nmentioning, you could certainly want something like \"try to get the WAL\nfrom the local archive, and if that doesn't work, try to get it from the\ns3 repo\". What that implies then is that you'd really like a way to\nconfigure multiple repos, which is where we start to see the fragility\nof our GUC system. Pushing that out to something external doesn't\nstrike me as the right answer though, but rather, we should think about\nhow to resolve these issues with the GUC system, or come up with\nsomething better. This isn't the only area where the GUC system isn't\nreally helping us- synchronous standby names is getting to be a pretty\ncomplicated GUC, for example.\n\nOf course, we could start out with just supporting a single repo with\njust a few new GUCs to configure it, that wouldn't be hard and there's\ngood examples out there about what's needed to configure an s3 repo.\n\nThanks,\n\nStephen", "msg_date": "Fri, 10 Apr 2020 09:51:16 -0400", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: where should I stick that backup?" }, { "msg_contents": "Greetings,\n\n* Robert Haas (robertmhaas@gmail.com) wrote:\n> On Thu, Apr 9, 2020 at 6:44 PM Bruce Momjian <bruce@momjian.us> wrote:\n> > Good point, but if there are multiple APIs, it makes shell script\n> > flexibility even more useful.\n> \n> This is really the key point for me. 
There are so many existing tools\n> that store a file someplace that we really can't ever hope to support\n> them all in core, or even to have well-written extensions that support\n> them all available on PGXN or wherever. We need to integrate with the\n> tools that other people have created, not try to reinvent them all in\n> PostgreSQL.\n\nSo, this goes to what I was just mentioning to Bruce independently- you\ncould have made the same argument about FDWs, but it just doesn't\nactually hold any water. Sure, some of the FDWs aren't great, but\nthere's certainly no shortage of them, and the ones that are\nparticularly important (like postgres_fdw) are well written and in core.\n\n> Now what I understand Stephen to be saying is that a lot of those\n> tools actually suck, and I think that's a completely valid point. But\n> I also think that it's unwise to decide that such problems are our\n> problems rather than problems with those tools. That's a hole with no\n> bottom.\n\nI don't really think 'bzip2' sucks as a tool, or that bash does. They\nweren't designed or intended to meet the expectations that we have for\ndata durability though, which is why relying on them for exactly that\nends up being a bad recipe.\n\n> One thing I do think would be realistic would be to invent a set of\n> tools that are perform certain local filesystem operations in a\n> \"hardened\" way. Maybe a single tool with subcommands and options. So\n> you could say, e.g. 'pgfile cp SOURCE TARGET' and it would create a\n> temporary file in the target directory, write the contents of the\n> source into that file, fsync the file, rename it into place, and do\n> more fsyncs to make sure it's all durable in case of a crash. You\n> could have a variant of this that instead of using the temporary file\n> and rename in place approach, does the thing where you open the target\n> file with O_CREAT|O_EXCL, writes the bytes, and then closes and fsyncs\n> it. 
And you could have other things too, like 'pgfile mkdir DIR' to\n> create a directory and fsync it for durability. A toolset like this\n> would probably help people write better archive commands - it would\n> certainly be an improvement over what we have now, anyway, and it\n> could also be used with the feature that I proposed upthread.\n\nThis argument leads in a direction to justify anything as being sensible\nto implement using shell scripts. If we're open to writing the shell\nlevel tools that would be needed, we could reimplement all of our\nindexes that way, or FDWs, or TDE, or just about anything else.\n\nWhat we would end up with though is that we'd have more complications\nchanging those interfaces because people will be using those tools, and\nmaybe those tools don't get updated at the same time as PG does, and\nmaybe there's critical changes that need to be made in back branches and\nwe can't really do that with these interfaces.\n\n> It is of course not impossible to teach pg_basebackup to do all of\n> that stuff internally, but I have a really difficult time imagining us\n> ever getting it done. There are just too many possibilities, and new\n> ones arise all the time.\n\nI agree that it's certainly a fair bit of work, but it can be\naccomplished incrementally and, with a good design, allow for adding in\nnew options in the future with relative ease. 
Now is the time to\ndiscuss what that design looks like, think about how we can implement it\nin a way that all of the tools we have are able to work together, and\nhave them all support and be tested together with these different\noptions.\n\nThe concerns about there being too many possibilities and new ones\ncoming up all the time could be applied equally to FDWs, but rather than\nending up with a dearth of options and external solutions there, what\nwe've actually seen is an explosion of options and externally written\nlibraries for a large variety of options.\n\n> A 'pgfile' utility wouldn't help at all for people who are storing to\n> S3 or whatever. They could use 'aws s3' as a target for --pipe-output,\n> but if it turns out that said tool is insufficiently robust in terms\n> of overwriting files or doing fsyncs or whatever, then they might have\n> problems. Now, Stephen or anyone else could choose to provide\n> alternative tools with more robust behavior, and that would be great.\n> But even if he didn't, people could take their chances with what's\n> already out there. To me, that's a good thing. Yeah, maybe they'll do\n> dumb things that don't work, but realistically, they can do dumb stuff\n> without the proposed option too.\n\nHow does this solution give them a good way to do the right thing\nthough? In a way that will work with large databases and complex\nrequirements? The answer seems to be \"well, everyone will have to write\ntheir own tool to do that\" and that basically means that, at best, we're\nonly providing half of a solution and expecting all of our users to\nprovide the other half, and to always do it correctly and in a well\nwritten way. 
Acknowledging that most users aren't going to actually do\nthat and instead they'll implement half measures that aren't reliable\nshouldn't be seen as an endorsement of this approach.\n\nThanks,\n\nStephen", "msg_date": "Fri, 10 Apr 2020 10:54:10 -0400", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: where should I stick that backup?" }, { "msg_contents": "On Fri, Apr 10, 2020 at 10:54:10AM -0400, Stephen Frost wrote:\n> Greetings,\n> \n> * Robert Haas (robertmhaas@gmail.com) wrote:\n> > On Thu, Apr 9, 2020 at 6:44 PM Bruce Momjian <bruce@momjian.us> wrote:\n> > > Good point, but if there are multiple APIs, it makes shell script\n> > > flexibility even more useful.\n> > \n> > This is really the key point for me. There are so many existing tools\n> > that store a file someplace that we really can't ever hope to support\n> > them all in core, or even to have well-written extensions that support\n> > them all available on PGXN or wherever. We need to integrate with the\n> > tools that other people have created, not try to reinvent them all in\n> > PostgreSQL.\n> \n> So, this goes to what I was just mentioning to Bruce independently- you\n> could have made the same argument about FDWs, but it just doesn't\n> actually hold any water. Sure, some of the FDWs aren't great, but\n> there's certainly no shortage of them, and the ones that are\n> particularly important (like postgres_fdw) are well written and in core.\n\nNo, no one made that argument. It isn't clear how a shell script API\nwould map to relational database queries. The point is how well the\nAPIs match, and then if they are close, does it give us the flexibility\nwe need. You can't just look at flexibility without an API match.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n+ As you are, so once was I. As I am, so you will be. 
+\n+ Ancient Roman grave inscription +\n\n\n", "msg_date": "Fri, 10 Apr 2020 11:48:39 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: where should I stick that backup?" }, { "msg_contents": "On Fri, Apr 10, 2020 at 10:54 AM Stephen Frost <sfrost@snowman.net> wrote:\n> So, this goes to what I was just mentioning to Bruce independently- you\n> could have made the same argument about FDWs, but it just doesn't\n> actually hold any water. Sure, some of the FDWs aren't great, but\n> there's certainly no shortage of them, and the ones that are\n> particularly important (like postgres_fdw) are well written and in core.\n\nThat's a fairly different use case. In the case of the FDW interface:\n\n- The number of interface method calls is very high, at least one per\ntuple and a bunch of extra ones for each query.\n- There is a significant amount of complex state that needs to be\nmaintained across API calls.\n- The return values are often tuples, which are themselves an\nin-memory data structure.\n\nBut here:\n\n- We're only talking about writing a handful of tar files, and that's\nin the context of a full-database backup, which is a much\nheavier-weight operation than a query.\n- There is not really any state that needs to be maintained across calls.\n- The expected result is that a file gets written someplace, which is\nnot an in-memory data structure but something that gets written to a\nplace outside of PostgreSQL.\n\n> The concerns about there being too many possibilities and new ones\n> coming up all the time could be applied equally to FDWs, but rather than\n> ending up with a dearth of options and external solutions there, what\n> we've actually seen is an explosion of options and externally written\n> libraries for a large variety of options.\n\nSure, but a lot of those FDWs are relatively low-quality, and it's\noften hard to find one that does what you want. And even if you do,\nyou don't really know how good it is. 
Unfortunately, in that case\nthere's no real alternative, because implementing something based on\nshell commands couldn't ever have reasonable performance or a halfway\ndecent feature set. That's not the case here.\n\n> How does this solution give them a good way to do the right thing\n> though? In a way that will work with large databases and complex\n> requirements? The answer seems to be \"well, everyone will have to write\n> their own tool to do that\" and that basically means that, at best, we're\n> only providing half of a solution and expecting all of our users to\n> provide the other half, and to always do it correctly and in a well\n> written way. Acknowledging that most users aren't going to actually do\n> that and instead they'll implement half measures that aren't reliable\n> shouldn't be seen as an endorsement of this approach.\n\nI don't acknowledge that. I think it's possible to use tools like the\nproposed option in a perfectly reliable way, and I've already given a\nbunch of examples of how it could be done. Writing a file is not such\na complex operation that every bit of code that writes one reliably\nhas to be written by someone associated with the PostgreSQL project. I\nstrongly suspect that people who use a cloud provider's tools to\nupload their backup files will be quite happy with the results, and if\nthey aren't, I hope they will blame the cloud provider's tool for\neating the data rather than this option for making it easy to give the\ndata to the thing that ate it.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Fri, 10 Apr 2020 12:20:01 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: where should I stick that backup?" 
}, { "msg_contents": "Hi,\n\nOn 2020-04-10 12:20:01 -0400, Robert Haas wrote:\n> - We're only talking about writing a handful of tar files, and that's\n> in the context of a full-database backup, which is a much\n> heavier-weight operation than a query.\n> - There is not really any state that needs to be maintained across calls.\n> - The expected result is that a file gets written someplace, which is\n> not an in-memory data structure but something that gets written to a\n> place outside of PostgreSQL.\n\nWouldn't there be state like a S3/ssh/https/... connection? And perhaps\na 'backup_id' in the backup metadata DB that'd one would want to update\nat the end?\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Fri, 10 Apr 2020 12:38:04 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: where should I stick that backup?" }, { "msg_contents": "On 10/4/20 15:49, Robert Haas wrote:\n> On Thu, Apr 9, 2020 at 6:44 PM Bruce Momjian <bruce@momjian.us> wrote:\n>> Good point, but if there are multiple APIs, it makes shell script\n>> flexibility even more useful.\n> [snip]\n>\n> One thing I do think would be realistic would be to invent a set of\n> tools that are perform certain local filesystem operations in a\n> \"hardened\" way.\n+10\n> Maybe a single tool with subcommands and options. So\n> you could say, e.g. 'pgfile cp SOURCE TARGET' and it would create a\n> temporary file in the target directory, write the contents of the\n> source into that file, fsync the file, rename it into place, and do\n> more fsyncs to make sure it's all durable in case of a crash. 
You\n> could have a variant of this that instead of using the temporary file\n> and rename in place approach, does the thing where you open the target\n> file with O_CREAT|O_EXCL, writes the bytes, and then closes and fsyncs\n> it.\nBehaviour might be decided in the same way as the default for \n'wal_sync_method' gets chosen, as the most appropriate for a particular \nsystem.\n> And you could have other things too, like 'pgfile mkdir DIR' to\n> create a directory and fsync it for durability. A toolset like this\n> would probably help people write better archive commands\n\nDefinitely, \"mkdir\" and \"create-exclusive\" (along with cp) would be a \ngreat addition and simplify doing this kind of task properly (i.e. without \nrisking data loss every time)\n> [excerpted]\n>\n> pg_basebackup -Ft --pipe-output 'bzip | pgfile create-exclusive - %f.bz2'\n>\n> [....]\n>\n> pg_basebackup -Ft --pipe-output 'bzip | gpg -e | ssh someuser@somehost\n> pgfile create-exclusive - /backups/tuesday/%f.bz2'\nYep. Would also fit the case for non-synchronous NFS mounts for backups...\n> It is of course not impossible to teach pg_basebackup to do all of\n> that stuff internally, but I have a really difficult time imagining us\n> ever getting it done. There are just too many possibilities, and new\n> ones arise all the time.\n\nIndeed. The beauty of Unix-like OSs is precisely this.\n\n> A 'pgfile' utility wouldn't help at all for people who are storing to\n> S3 or whatever. They could use 'aws s3' as a target for --pipe-output,\n> [snip]\n> (Incidentally, pg_basebackup already has an option to output the\n> entire backup as a tarfile on standard output, and a user can already\n> pipe that into any tool they like. However, it doesn't work with\n> tablespaces. 
So you could think of this proposal as extending the\n> existing functionality to cover that case.)\n\nBeen there already :S  Having pg_basebackup output multiple tarballs \n(one per tablespace), ideally separated via something so that splitting \ncan be trivially done on the receiving end.\n\n...but that's probably a matter for another thread.\n\n\nThanks,\n\n     / J.L.\n\n\n\n\n", "msg_date": "Sat, 11 Apr 2020 18:24:13 +0200", "msg_from": "Jose Luis Tallon <jltallon@adv-solutions.net>", "msg_from_op": false, "msg_subject": "Re: where should I stick that backup?" }, { "msg_contents": "On Fri, Apr 10, 2020 at 3:38 PM Andres Freund <andres@anarazel.de> wrote:\n> Wouldn't there be state like a S3/ssh/https/... connection? And perhaps\n> a 'backup_id' in the backup metadata DB that'd one would want to update\n> at the end?\n\nGood question. I don't know that there would be but, uh, maybe? It's\nnot obvious to me why all of that would need to be done using the same\nconnection, but if it is, the idea I proposed isn't going to work very\nnicely.\n\nMore generally, can you think of any ideas for how to structure an API\nhere that are easier to use than \"write some C code\"? Or do you think\nwe should tell people to write some C code if they want to\ncompress/encrypt/relocate their backup in some non-standard way?\n\nFor the record, I'm not against eventually having more than one way to\ndo this, maybe a shell-script interface for simpler things and some\nkind of API for more complex needs (e.g. NetBackup integration,\nperhaps). And I did wonder if there was some other way we could do\nthis. For instance, we could add an option --tar-everything that\nsticks all the things that would have been returned by the backup\ninside another level of tar file and sends the result to stdout. Then\nyou can pipe it into a single command that gets invoked only once for\nall the data, rather than once per tablespace. That might be better,\nbut I'm not sure it's better. 
It's better if you want to do\ncomplicated things that involve steps that happen before and after and\npersistent connections and so on, but it seems worse for simple things\nlike piping through a non-default compressor.\n\nLarry Wall somewhat famously commented that a good programming\nlanguage should (and I paraphrase) make simple things simple and\ncomplex things possible. My hesitation in going straight to a C API is\nthat it does not make simple things simple; and I'd like to be really\nsure that there is no way of achieving that valuable goal before we\ngive up on it. However, there is no doubt that a C API is potentially\nmore powerful.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Sat, 11 Apr 2020 16:22:09 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: where should I stick that backup?" }, { "msg_contents": "On 10/4/20 21:38, Andres Freund wrote:\n> Hi,\n>\n> On 2020-04-10 12:20:01 -0400, Robert Haas wrote:\n>> - We're only talking about writing a handful of tar files, and that's\n>> in the context of a full-database backup, which is a much\n>> heavier-weight operation than a query.\n>> - There is not really any state that needs to be maintained across calls.\n>> - The expected result is that a file gets written someplace, which is\n>> not an in-memory data structure but something that gets written to a\n>> place outside of PostgreSQL.\n> Wouldn't there be state like a S3/ssh/https/... connection?\n...to try and save opening a new connection in the context of a \n(potentially) multi-TB backup? :S\n> And perhaps\n> a 'backup_id' in the backup metadata DB that'd one would want to update\n> at the end?\n\nThis is, indeed, material for external tools. Each cater for a \nparticular set of end-user requirements.\n\nWe got many examples already, with most even co-authored by this list's \nregulars... 
and IMHO none is suitable for ALL use-cases.\n\n\nBUT I agree that providing better tools with Postgres itself, ready to \nuse --- that is, uncomment the default \"archive_command\" and get going \nfor a very basic starting point --- is a huge advancement in the right \ndirection. More importantly (IMO): including the call to \"pgfile\" or \nequivalent quite clearly signals any inadvertent user that there is more \nto safely archiving WAL segments than just doing \"cp -a\" blindly and \nhoping that the tool magically does all required steps [needed to ensure \ndata safety in this case, rather than the usual behaviour]. It's \nprobably more effective than just ammending the existing comments to \npoint users to a (new?) section within the documentation.\n\n\nThis comment is from experience: I've lost count of how many times I \nhave had to \"fix\" the default command for WAL archiving --- precisely \nbecause it had been blindly copied from the default without further \nthinking of the implications should there happen any \n(deviation-from-expected-behaviour) during WAL archiving .... only to be \nnoticed at (attempted) recovery time :\\\n\n\nHTH.\n\nThanks,\n\n     J.L.\n\n\n\n\n", "msg_date": "Sun, 12 Apr 2020 02:38:23 +0200", "msg_from": "Jose Luis Tallon <jltallon@adv-solutions.net>", "msg_from_op": false, "msg_subject": "Re: where should I stick that backup?" }, { "msg_contents": "On Sat, Apr 11, 2020 at 10:22 PM Robert Haas <robertmhaas@gmail.com> wrote:\n\n> On Fri, Apr 10, 2020 at 3:38 PM Andres Freund <andres@anarazel.de> wrote:\n> > Wouldn't there be state like a S3/ssh/https/... connection? And perhaps\n> > a 'backup_id' in the backup metadata DB that'd one would want to update\n> > at the end?\n>\n> Good question. I don't know that there would be but, uh, maybe? 
It's\n> not obvious to me why all of that would need to be done using the same\n> connection, but if it is, the idea I proposed isn't going to work very\n> nicely.\n>\n\nThere are certainly cases for it. It might not be they have to be the same\nconnection, but still be the same session, meaning before the first time\nyou perform some step of authentication, get a token, and then use that for\nall the files. You'd need somewhere to maintain that state, even if it\ndoesn't happen to be a socket. But there are definitely plenty of cases\nwhere keeping an open socket can be a huge performance gain -- especially\nwhen it comes to not re-negotiating encryption etc.\n\n\nMore generally, can you think of any ideas for how to structure an API\n> here that are easier to use than \"write some C code\"? Or do you think\n> we should tell people to write some C code if they want to\n> compress/encrypt/relocate their backup in some non-standard way?\n>\n\nFor compression and encryption, it could perhaps be as simple as \"the\ncommand has to be pipe on both input and output\" and basically send the\nresponse back to pg_basebackup.\n\nBut that won't help if the target is to relocate things...\n\n\n\nFor the record, I'm not against eventually having more than one way to\n> do this, maybe a shell-script interface for simpler things and some\n> kind of API for more complex needs (e.g. NetBackup integration,\n> perhaps). And I did wonder if there was some other way we could do\n> this. For instance, we could add an option --tar-everything that\n> sticks all the things that would have been returned by the backup\n> inside another level of tar file and sends the result to stdout. Then\n> you can pipe it into a single command that gets invoked only once for\n> all the data, rather than once per tablespace. That might be better,\n> but I'm not sure it's better. 
It's better if you want to do\n> complicated things that involve steps that happen before and after and\n> persistent connections and so on, but it seems worse for simple things\n> like piping through a non-default compressor.\n>\n\n\nThat is one way to go for it -- and in a case like that, I'd suggest the\nshellscript interface would be an implementation of the other API. A number\nof times through the years I've bounced ideas around for what to do with\narchive_command with different people (never quite to the level of \"it's\ntime to write a patch\"), and it's mostly come down to some sort of shlib\napi where in turn we'd ship a backwards compatible implementation that\nwould behave like archive_command. I'd envision something similar here.\n\n\n\nLarry Wall somewhat famously commented that a good programming\n> language should (and I paraphrase) make simple things simple and\n> complex things possible. My hesitation in going straight to a C API is\n> that it does not make simple things simple; and I'd like to be really\n> sure that there is no way of achieving that valuable goal before we\n> give up on it. However, there is no doubt that a C API is potentially\n> more powerful.\n>\n\n\nIs there another language that it would make sense to support in the form\nof \"native plugins\". Assume we had some generic way to say let people write\nsuch plugins in python (we can then bikeshed about which language we should\nuse). That would give them a much higher level language, while also making\nit possible for a \"better\" API.\n\nNote that I'm not suggesting supporting a python script running as a\nregular script -- that could easily be done by anybody making a\nshellscript implementation. It would be an actual API where the postgres\ntool would instantiate the python interpreter in-process and create an\nobject there. This would allow things like keeping state across calls, and\nwould also give access to the extensive library availability of the\nlanguage (e.g. 
you could directly import an S3 compatible library to upload\nfiles etc).\n\nDoing that for just pg_basebackup would probably be overkill, but it might\nbe a generic choice that could extend to other things as well.\n\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/ <http://www.hagander.net/>\n Work: https://www.redpill-linpro.com/ <http://www.redpill-linpro.com/>", "msg_date": "Sun, 12 Apr 2020 16:09:13 +0200", "msg_from": "Magnus Hagander <magnus@hagander.net>", "msg_from_op": false, "msg_subject": "Re: where should I stick that backup?" 
But there are definitely plenty of cases where keeping an open socket can be a huge performance gain -- especially when it comes to not re-negotiating encryption etc.\n\nHmm, OK.\n\n> For compression and encryption, it could perhaps be as simple as \"the command has to be pipe on both input and output\" and basically send the response back to pg_basebackup.\n>\n> But that won't help if the target is to relocate things...\n\nRight. And, also, it forces things to be sequential in a way I'm not\ntoo happy about. Like, if we have some kind of parallel backup, which\nI hope we will, then you can imagine (among other possibilities)\ngetting files for each tablespace concurrently, and piping them\nthrough the output command concurrently. But if we emit the result in\na tarfile, then it has to be sequential; there's just no other choice.\nI think we should try to come up with something that can work in a\nmulti-threaded environment.\n\n> That is one way to go for it -- and in a case like that, I'd suggest the shellscript interface would be an implementation of the other API. A number of times through the years I've bounced ideas around for what to do with archive_command with different people (never quite to the level of \"it's time to write a patch\"), and it's mostly come down to some sort of shlib api where in turn we'd ship a backwards compatible implementation that would behave like archive_command. I'd envision something similar here.\n\nI agree. Let's imagine that there are a conceptually unlimited number\nof \"targets\" and \"filters\". Targets and filters accept data via the\nsame API, but a target is expected to dispose of the data, whereas a\nfilter is expected to pass it, via that same API, to a subsequent\nfilter or target. 
So filters could include things like \"gzip\", \"lz4\",\nand \"encrypt-with-rot13\", whereas targets would include things like\n\"file\" (the thing we have today - write my data into some local\nfiles!), \"shell\" (which writes my data to a shell command, as\noriginally proposed), and maybe eventually things like \"netbackup\" and\n\"s3\". Ideally this will all eventually be via a loadable module\ninterface so that third-party filters and targets can be fully\nsupported, but perhaps we could consider that an optional feature for\nv1. Note that there is quite a bit of work to do here just to\nreorganize the code.\n\nI would expect that we would want to provide a flexible way for a\ntarget or filter to be passed options from the pg_basebackup command\nline. So one might for example write this:\n\npg_basebackup --filter='lz4 -9' --filter='encrypt-with-rot13\nrotations=2' --target='shell ssh rhaas@depository pgfile\ncreate-exclusive - %f.lz4'\n\nThe idea is that the first word of the filter or target identifies\nwhich one should be used, and the rest is just options text in\nwhatever form the provider cares to accept them; but with some\n%<character> substitutions allowed, for things like the file name.\n(The aforementioned escaping problems for things like filenames with\nspaces in them still need to be sorted out, but this is just a sketch,\nso while I think it's quite solvable, I am going to refrain from\nproposing a precise solution here.)\n\nAs to the underlying C API behind this, I propose approximately the\nfollowing set of methods:\n\n1. Begin a session. Returns a pointer to a session handle. Gets the\noptions provided on the command line. In the case of a filter, also\ngets a pointer to the session handle for the next filter, or for the\ntarget (which means we set up the final target first, and then stack\nthe filters on top of it).\n\n2. Begin a file. Gets a session handle and a file name. Returns a\npointer to a file handle.\n\n3. Write data to a file. 
Gets a file handle, a byte count, and some bytes.\n\n4. End a file. Gets a file handle.\n\n5. End a session. Gets a session handle.\n\nIf we get parallelism at some point, then there could be multiple\nfiles in progress at the same time. Maybe some targets, or even\nfilters, won't be able to handle that, so we could have a flag\nsomeplace indicating that a particular target or filter isn't\nparallelism-capable. As an example, writing output to a bunch of files\nin a directory is fine to do in parallel, but if you want the entire\nbackup in one giant tar file, you need each file sequentially.\n\n> Is there another language that it would make sense to support in the form of \"native plugins\". Assume we had some generic way to say let people write such plugins in python (we can then bikeshed about which language we should use). That would give them a much higher level language, while also making it possible for a \"better\" API.\n\nThe idea of using LUA has been floated before, and I imagine that an\ninterface like the above could also be made to have language bindings\nfor the scripting language of your choice - e.g. Python. However, I\nthink we should start by trying to square away the C interface and\nthen anybody who feels motivated can try to put language bindings on\ntop of it. I tend to feel that's a bit of a fringe feature myself,\nsince realistically shell commands are about as much as (and\noccasionally more than) typical users can manage. However, it would\nnot surprise me very much if there are power users out there for whom\nC is too much but Python or LUA or something is just right, and if\nsomebody builds something nifty that caters to that audience, I think\nthat's great.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Sun, 12 Apr 2020 11:04:46 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: where should I stick that backup?" 
}, { "msg_contents": "Hi,\n\nOn 2020-04-11 16:22:09 -0400, Robert Haas wrote:\n> On Fri, Apr 10, 2020 at 3:38 PM Andres Freund <andres@anarazel.de> wrote:\n> > Wouldn't there be state like a S3/ssh/https/... connection? And perhaps\n> > a 'backup_id' in the backup metadata DB that'd one would want to update\n> > at the end?\n> \n> Good question. I don't know that there would be but, uh, maybe? It's\n> not obvious to me why all of that would need to be done using the same\n> connection, but if it is, the idea I proposed isn't going to work very\n> nicely.\n\nWell, it depends on what you want to support. If you're only interested\nin supporting tarball mode ([1]), *maybe* you can get away without\nlonger lived sessions (but I'm doubtful). But if you're interested in\nalso supporting archiving plain files, then the cost of establishing\nsessions, and the latency penalty of having to wait for command\ncompletion would imo be prohibitive. A lot of solutions for storing\nbackups can achieve pretty decent throughput, but have very significant\nlatency. That's of course in addition to network latency itself.\n\n\n[1] I don't think we should restrict it that way. Would make it much\n more complicated to support incremental backup, pg_rewind,\n deduplication, etc.\n\n\n> More generally, can you think of any ideas for how to structure an API\n> here that are easier to use than \"write some C code\"? Or do you think\n> we should tell people to write some C code if they want to\n> compress/encrypt/relocate their backup in some non-standard way?\n\n> For the record, I'm not against eventually having more than one way to\n> do this, maybe a shell-script interface for simpler things and some\n> kind of API for more complex needs (e.g. NetBackup integration,\n> perhaps). And I did wonder if there was some other way we could do\n> this.\n\nI'm doubtful that an API based on string replacement is the way to\ngo. 
It's hard for me to see how that's not either going to substantially\nrestrict the way the \"tasks\" are done, or yield a very complicated\ninterface.\n\nI wonder whether the best approach here could be that pg_basebackup (and\nperhaps other tools) opens pipes to/from a subcommand and over the pipe\nit communicates with the subtask using a textual ([2]) description of\ntasks. Like:\n\nbackup mode=files base_directory=/path/to/data/directory\nbackup_file name=base/14037/16396.14 size=1073741824\nbackup_file name=pg_wal/XXXX size=16777216\nor\nbackup mode=tar\nbase_directory /path/to/data/\nbackup_tar name=dir.tar size=983498875687487\n\nThe obvious problem with that proposal is that we don't want to\nunnecessarily store the incoming data on the system pg_basebackup is\nrunning on, just for the subcommand to get access to them. More on that\nin a second.\n\nA huge advantage of a scheme like this would be that it wouldn't have to\nbe specific to pg_basebackup. It could just as well work directly on the\nserver, avoiding an unnecessary loop through the network. Which\ne.g. could integrate with filesystem snapshots etc. Without needing to\nbuild the 'archive target' once with server libraries, and once with\nclient libraries.\n\nOne reason I think something like this could be advantageous over a C\nAPI is that it's quite feasible to implement it from a number of\ndifferent languages, including shell if really desired, without needing\nto provide a C API via an FFI.\n\nIt'd also make it quite natural to split out compression from\npg_basebackup's main process, which IME currently makes it not really\nfeasible to use pg_basebackup's compression.\n\n\nThere's various ways we could address the issue for how the subcommand\ncan access the file data. The most flexible probably would be to rely on\nexchanging file descriptors between basebackup and the subprocess (these\ndays all supported platforms have that, I think). 
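(For concreteness, a minimal sketch of that fd exchange -- in Python for brevity, whose send_fds()/recv_fds() wrap the sendmsg()/recvmsg() SCM_RIGHTS machinery one would use from C; names and payloads here are invented:)

```python
import os
import socket

# Sketch of passing an open file descriptor over a Unix socket pair,
# as pg_basebackup (parent) might hand a file to a target subcommand.
# Python 3.9+: send_fds()/recv_fds() wrap sendmsg()/recvmsg() with
# SCM_RIGHTS ancillary data; the same technique is available from C.
parent_sock, child_sock = socket.socketpair()

read_fd, write_fd = os.pipe()          # stand-in for a backup file's fd
os.write(write_fd, b"file payload")
os.close(write_fd)

# The sender passes a protocol line plus the descriptor itself.
socket.send_fds(parent_sock, [b"backup_file name=base/1/2"], [read_fd])

# The receiver gets its own duplicate of the descriptor and can read
# the payload directly, without the data flowing through the pipe pair.
msg, fds, flags, addr = socket.recv_fds(child_sock, 1024, 1)
payload = os.read(fds[0], 1024)
```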
Alternatively we\ncould invoke the subcommand before really starting the backup, and ask\nhow many files it'd like to receive in parallel, and restart the\nsubcommand with that number of file descriptors open.\n\nIf we relied on FDs, here's an example of how a trace between\npg_basebackup (BB) and a backup target command (TC) could look:\n\nBB: required_capabilities fd_send files\nBB: provided_capabilities fd_send file_size files tar\nTC: required_capabilities fd_send files file_size\nBB: backup mode=files base_directory=/path/to/data/directory\nBB: backup_file method=fd name=base/14037/16396.1 size=1073741824\nBB: backup_file method=fd name=base/14037/16396.2 size=1073741824\nBB: backup_file method=fd name=base/14037/16396.3 size=1073741824\nTC: fd name=base/14037/16396.1 (contains TC fd 4)\nTC: fd name=base/14037/16396.2 (contains TC fd 5)\nBB: backup_file method=fd name=base/14037/16396.4 size=1073741824\nTC: fd name=base/14037/16396.3 (contains TC fd 6)\nBB: backup_file method=fd name=base/14037/16396.5 size=1073741824\nTC: fd name=base/14037/16396.4 (contains TC fd 4)\nTC: fd name=base/14037/16396.5 (contains TC fd 5)\nBB: done\nTC: done\n\nbackup_file type=fd mode=fd base/14037/16396.4 1073741824\nor\nbackup_features tar\nbackup_mode tar\nbase_directory /path/to/data/\nbackup_tar dir.tar 983498875687487\n\n\n[2] yes, I already hear json. A line-delimited format would have some\nadvantages though.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Sun, 12 Apr 2020 12:17:02 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: where should I stick that backup?" }, { "msg_contents": "Hi,\n\nOn 2020-04-12 11:04:46 -0400, Robert Haas wrote:\n> I would expect that we would want to provide a flexible way for a\n> target or filter to be passed options from the pg_basebackup command\n> line. 
So one might for example write this:\n> \n> pg_basebackup --filter='lz4 -9' --filter='encrypt-with-rot13\n> rotations=2' --target='shell ssh rhaas@depository pgfile\n> create-exclusive - %f.lz4'\n\nMy gut feeling is that this would end up with too complicated\npg_basebackup invocations, resulting in the complexity getting\nreimplemented in the target command. A lot of users don't want to\nfigure out what compression, encryption, ... command makes sense for\nwhich archiving target. And e.g. an s3 target might want to integrate\nwith an AWS HSM etc, making it unattractive to do the encryption outside\nthe target.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Sun, 12 Apr 2020 12:24:24 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: where should I stick that backup?" }, { "msg_contents": "On 4/12/20 11:04 AM, Robert Haas wrote:\n> On Sun, Apr 12, 2020 at 10:09 AM Magnus Hagander <magnus@hagander.net> wrote:\n>> There are certainly cases for it. It might not be they have to be the same connection, but still be the same session, meaning before the first time you perform some step of authentication, get a token, and then use that for all the files. You'd need somewhere to maintain that state, even if it doesn't happen to be a socket. But there are definitely plenty of cases where keeping an open socket can be a huge performance gain -- especially when it comes to not re-negotiating encryption etc.\n> \n> Hmm, OK.\n\nWhen we implemented connection-sharing for S3 in pgBackRest it was a \nsignificant performance boost, even for large files since they must be \nuploaded in parts. The same goes for files transferred over SSH, though \nin this case the overhead is per-file and can be mitigated with control \nmaster.\n\nWe originally (late 2013) implemented everything with command-line \ntools during the POC phase. The idea was to get something viable quickly \nand then improve as needed. 
At the time our config file had entries \nsomething like this:\n\n[global:command]\ncompress=/usr/bin/gzip --stdout %file%\ndecompress=/usr/bin/gzip -dc %file%\nchecksum=/usr/bin/shasum %file% | awk '{print $1}'\nmanifest=/opt/local/bin/gfind %path% -printf \n'%P\\t%y\\t%u\\t%g\\t%m\\t%T@\\t%i\\t%s\\t%l\\n'\npsql=/Library/PostgreSQL/9.3/bin/psql -X %option%\n\n[db]\npsql_options=--cluster=9.3/main\n\n[db:command:option]\npsql=--port=6001\n\nThese appear to be for MacOS, but Linux would be similar.\n\nThis *did* work, but it was really hard to debug when things went wrong, \nthe per-file cost was high, and the slight differences between the \ncommand-line tools on different platforms was maddening. For example, \nlots of versions of 'find' would error if a file disappeared while \nbuilding the manifest, which is a pretty common occurrence in PostgreSQL \n(most newer distros had an option to fix this). I know that doesn't \napply here, but it's an example. Also, debugging was complicated with so \nmany processes, with any degree of parallelism the process list got \npretty crazy, fsync was not happening, etc. It's been a long time but I \ndon't have any good memories of the solution that used all command-line \ntools.\n\nOnce we had a POC that solved our basic problem, i.e. backup up about \n50TB of data reasonably efficiently, we immediately started working on a \nversion that did not rely on command-line tools and we never looked \nback. Currently the only command-line tool we use is ssh.\n\nI'm sure it would be possible to create a solution that worked better \nthan ours, but I'm pretty certain it would still be hard for users to \nmake it work correctly and to prove it worked correctly.\n\n>> For compression and encryption, it could perhaps be as simple as \"the command has to be pipe on both input and output\" and basically send the response back to pg_basebackup.\n>>\n>> But that won't help if the target is to relocate things...\n> \n> Right. 
And, also, it forces things to be sequential in a way I'm not\n> too happy about. Like, if we have some kind of parallel backup, which\n> I hope we will, then you can imagine (among other possibilities)\n> getting files for each tablespace concurrently, and piping them\n> through the output command concurrently. But if we emit the result in\n> a tarfile, then it has to be sequential; there's just no other choice.\n> I think we should try to come up with something that can work in a\n> multi-threaded environment.\n> \n>> That is one way to go for it -- and in a case like that, I'd suggest the shellscript interface would be an implementation of the other API. A number of times through the years I've bounced ideas around for what to do with archive_command with different people (never quite to the level of \"it's time to write a patch\"), and it's mostly come down to some sort of shlib api where in turn we'd ship a backwards compatible implementation that would behave like archive_command. I'd envision something similar here.\n> \n> I agree. Let's imagine that there are a conceptually unlimited number\n> of \"targets\" and \"filters\". Targets and filters accept data via the\n> same API, but a target is expected to dispose of the data, whereas a\n> filter is expected to pass it, via that same API, to a subsequent\n> filter or target. So filters could include things like \"gzip\", \"lz4\",\n> and \"encrypt-with-rot13\", whereas targets would include things like\n> \"file\" (the thing we have today - write my data into some local\n> files!), \"shell\" (which writes my data to a shell command, as\n> originally proposed), and maybe eventually things like \"netbackup\" and\n> \"s3\". Ideally this will all eventually be via a loadable module\n> interface so that third-party filters and targets can be fully\n> supported, but perhaps we could consider that an optional feature for\n> v1. 
Note that there is quite a bit of work to do here just to\n> reorganize the code.\n> \n> I would expect that we would want to provide a flexible way for a\n> target or filter to be passed options from the pg_basebackup command\n> line. So one might for example write this:\n> \n> pg_basebackup --filter='lz4 -9' --filter='encrypt-with-rot13\n> rotations=2' --target='shell ssh rhaas@depository pgfile\n> create-exclusive - %f.lz4'\n> \n> The idea is that the first word of the filter or target identifies\n> which one should be used, and the rest is just options text in\n> whatever form the provider cares to accept them; but with some\n> %<character> substitutions allowed, for things like the file name.\n> (The aforementioned escaping problems for things like filenames with\n> spaces in them still need to be sorted out, but this is just a sketch,\n> so while I think it's quite solvable, I am going to refrain from\n> proposing a precise solution here.)\n\nThis is basically the solution we have landed on after many iterations.\n\nWe implement two types of filters, In and InOut. The In filters process \ndata and produce a result, e.g. SHA1, size, page checksum, etc. The \nInOut filters modify data, e.g. compression, encryption. Yeah, the names \ncould probably be better...\n\nI have attached our filter interface (filter.intern.h) as a concrete \nexample of how this works.\n\nWe call 'targets' storage and have a standard interface for creating \nstorage drivers. I have also attached our storage interface \n(storage.intern.h) as a concrete example of how this works.\n\nNote that for just performing backup this is overkill, but once you \nconsider verify this is pretty much the minimum storage interface \nneeded, according to our experience.\n\nRegards,\n-- \n-David\ndavid@pgmasters.net", "msg_date": "Sun, 12 Apr 2020 17:19:00 -0400", "msg_from": "David Steele <david@pgmasters.net>", "msg_from_op": false, "msg_subject": "Re: where should I stick that backup?" 
}, { "msg_contents": "On 4/12/20 3:17 PM, Andres Freund wrote:\n> \n>> More generally, can you think of any ideas for how to structure an API\n>> here that are easier to use than \"write some C code\"? Or do you think\n>> we should tell people to write some C code if they want to\n>> compress/encrypt/relocate their backup in some non-standard way?\n> \n>> For the record, I'm not against eventually having more than one way to\n>> do this, maybe a shell-script interface for simpler things and some\n>> kind of API for more complex needs (e.g. NetBackup integration,\n>> perhaps). And I did wonder if there was some other way we could do\n>> this.\n> \n> I'm doubtful that an API based on string replacement is the way to\n> go. It's hard for me to see how that's not either going to substantially\n> restrict the way the \"tasks\" are done, or yield a very complicated\n> interface.\n> \n> I wonder whether the best approach here could be that pg_basebackup (and\n> perhaps other tools) opens pipes to/from a subcommand and over the pipe\n> it communicates with the subtask using a textual ([2]) description of\n> tasks. Like:\n> \n> backup mode=files base_directory=/path/to/data/directory\n> backup_file name=base/14037/16396.14 size=1073741824\n> backup_file name=pg_wal/XXXX size=16777216\n> or\n> backup mode=tar\n> base_directory /path/to/data/\n> backup_tar name=dir.tar size=983498875687487\n\nThis is pretty much what pgBackRest does. We call them \"local\" processes \nand they do most of the work during backup/restore/archive-get/archive-push.\n\n> The obvious problem with that proposal is that we don't want to\n> unnecessarily store the incoming data on the system pg_basebackup is\n> running on, just for the subcommand to get access to them. More on that\n> in a second.\n\nWe also implement \"remote\" processes so the local processes can get data \nthat doesn't happen to be local, i.e. 
on a remote PostgreSQL cluster.\n\n> A huge advantage of a scheme like this would be that it wouldn't have to\n> be specific to pg_basebackup. It could just as well work directly on the\n> server, avoiding an unnecesary loop through the network. Which\n> e.g. could integrate with filesystem snapshots etc. Without needing to\n> build the 'archive target' once with server libraries, and once with\n> client libraries.\n\nYes -- needing to store the data locally or stream it through one main \nprocess is a major bottleneck.\n\nWorking on the server is key because it allows you to compress before \ntransferring the data. With parallel processing it is trivial to flood a \nnetwork. We have a recent example from a community user of backing up \n25TB in 4 hours. Compression on the server makes this possible (and a \nfast network, in this case).\n\nFor security reasons, it's also nice to be able to encrypt data before \nit leaves the database server. Calculating checksums/size at the source \nis also ideal.\n\n> One reason I think something like this could be advantageous over a C\n> API is that it's quite feasible to implement it from a number of\n> different language, including shell if really desired, without needing\n> to provide a C API via a FFI.\n\nWe migrated from Perl to C and kept our local/remote protocol the same, \nwhich really helped. So, we had times when the C code was using a Perl \nlocal/remote and vice versa. The idea is certainly workable in our \nexperience.\n\n> It'd also make it quite natural to split out compression from\n> pg_basebackup's main process, which IME currently makes it not really\n> feasible to use pg_basebackup's compression.\n\nThis is a major advantage.\n\n> There's various ways we could address the issue for how the subcommand\n> can access the file data. The most flexible probably would be to rely on\n> exchanging file descriptors between basebackup and the subprocess (these\n> days all supported platforms have that, I think). 
Alternatively we\n> could invoke the subcommand before really starting the backup, and ask\n> how many files it'd like to receive in parallel, and restart the\n> subcommand with that number of file descriptors open.\n\nWe don't exchange FDs. Each local is responsible for getting the data \nfrom PostgreSQL or the repo based on knowing the data source and a path. \nFor pg_basebackup, however, I'd imagine each local would want a \nreplication connection with the ability to request specific files that \nwere passed to it by the main process.\n\n> [2] yes, I already hear json. A line deliminated format would have some\n> advantages though.\n\nWe use JSON, but each protocol request/response is linefeed-delimited. \nSo for example here's what it looks like when the main process requests \na local process to backup a specific file:\n\n{\"{\"cmd\":\"backupFile\",\"param\":[\"base/32768/33001\",true,65536,null,true,0,\"pg_data/base/32768/33001\",false,0,3,\"20200412-213313F\",false,null]}\"}\n\nAnd the local responds with:\n\n{\"{\"out\":[1,65536,65536,\"6bf316f11d28c28914ea9be92c00de9bea6d9a6b\",{\"align\":true,\"error\":[0,[3,5],7],\"valid\":false}]}\"}\n\nWe use arrays for parameters but of course these could be done with \nobjects for more readability.\n\nWe are considering a move to HTTP since lots of services (e.g. S3, GCS, \nAzure, etc.) require it (so we implement it) and we're not sure it makes \nsense to maintain our own protocol format. That said, we'd still prefer \nto use JSON for our payloads (like GCS) rather than XML (as S3 does).\n\nRegards,\n-- \n-David\ndavid@pgmasters.net\n\n\n", "msg_date": "Sun, 12 Apr 2020 17:57:05 -0400", "msg_from": "David Steele <david@pgmasters.net>", "msg_from_op": false, "msg_subject": "Re: where should I stick that backup?" }, { "msg_contents": "Hi,\n\nOn 2020-04-12 17:57:05 -0400, David Steele wrote:\n> On 4/12/20 3:17 PM, Andres Freund wrote:\n> > [proposal outline[\n>\n> This is pretty much what pgBackRest does. 
We call them \"local\" processes and\n> they do most of the work during backup/restore/archive-get/archive-push.\n\nHah. I swear, I didn't look.\n\n\n> > The obvious problem with that proposal is that we don't want to\n> > unnecessarily store the incoming data on the system pg_basebackup is\n> > running on, just for the subcommand to get access to them. More on that\n> > in a second.\n> \n> We also implement \"remote\" processes so the local processes can get data\n> that doesn't happen to be local, i.e. on a remote PostgreSQL cluster.\n\nWhat is the interface between those? I.e. do the files have to be\nspooled as a whole locally?\n\n\n> > There's various ways we could address the issue for how the subcommand\n> > can access the file data. The most flexible probably would be to rely on\n> > exchanging file descriptors between basebackup and the subprocess (these\n> > days all supported platforms have that, I think). Alternatively we\n> > could invoke the subcommand before really starting the backup, and ask\n> > how many files it'd like to receive in parallel, and restart the\n> > subcommand with that number of file descriptors open.\n> \n> We don't exchange FDs. Each local is responsible for getting the data from\n> PostgreSQL or the repo based on knowing the data source and a path. For\n> pg_basebackup, however, I'd imagine each local would want a replication\n> connection with the ability to request specific files that were passed to it\n> by the main process.\n\nI don't like this much. It'll push more complexity into each of the\n\"targets\" and we can't easily share that complexity. And also, needing\nto request individual files will add a lot of back/forth, and thus\nlatency issues. The server would always have to pre-send a list of\nfiles, we'd have to deal with those files vanishing, etc.\n\n\n> > [2] yes, I already hear json. 
A line deliminated format would have some\n> > advantages though.\n> \n> We use JSON, but each protocol request/response is linefeed-delimited. So\n> for example here's what it looks like when the main process requests a local\n> process to backup a specific file:\n> \n> {\"{\"cmd\":\"backupFile\",\"param\":[\"base/32768/33001\",true,65536,null,true,0,\"pg_data/base/32768/33001\",false,0,3,\"20200412-213313F\",false,null]}\"}\n> \n> And the local responds with:\n> \n> {\"{\"out\":[1,65536,65536,\"6bf316f11d28c28914ea9be92c00de9bea6d9a6b\",{\"align\":true,\"error\":[0,[3,5],7],\"valid\":false}]}\"}\n\nAs long as it's line delimited, I don't really care :)\n\n\n> We are considering a move to HTTP since lots of services (e.g. S3, GCS,\n> Azure, etc.) require it (so we implement it) and we're not sure it makes\n> sense to maintain our own protocol format. That said, we'd still prefer to\n> use JSON for our payloads (like GCS) rather than XML (as S3 does).\n\nI'm not quite sure what you mean here? You mean actual requests for each\nof what currently are lines? If so, that sounds *terrible*.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Sun, 12 Apr 2020 15:37:13 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: where should I stick that backup?" }, { "msg_contents": "On 4/12/20 6:37 PM, Andres Freund wrote:\n> Hi,\n> \n> On 2020-04-12 17:57:05 -0400, David Steele wrote:\n>> On 4/12/20 3:17 PM, Andres Freund wrote:\n>>> [proposal outline[\n>>\n>> This is pretty much what pgBackRest does. We call them \"local\" processes and\n>> they do most of the work during backup/restore/archive-get/archive-push.\n> \n> Hah. I swear, I didn't look.\n\nI believe you. 
If you spend enough time thinking about this (and we've \nspent a lot) then I think this is where you arrive.\n\n>>> The obvious problem with that proposal is that we don't want to\n>>> unnecessarily store the incoming data on the system pg_basebackup is\n>>> running on, just for the subcommand to get access to them. More on that\n>>> in a second.\n>>\n>> We also implement \"remote\" processes so the local processes can get data\n>> that doesn't happen to be local, i.e. on a remote PostgreSQL cluster.\n> \n> What is the interface between those? I.e. do the files have to be\n> spooled as a whole locally?\n\nCurrently we use SSH to talk to a remote, but we are planning on using \nour own TLS servers in the future. We don't spool anything -- the file \nis streamed from the PostgreSQL server (via remote protocol if needed) \nto the repo (which could also be remote, e.g. S3) without spooling to \ndisk. We have buffers, of course, which are configurable with the \nbuffer-size option.\n\n>>> There's various ways we could address the issue for how the subcommand\n>>> can access the file data. The most flexible probably would be to rely on\n>>> exchanging file descriptors between basebackup and the subprocess (these\n>>> days all supported platforms have that, I think). Alternatively we\n>>> could invoke the subcommand before really starting the backup, and ask\n>>> how many files it'd like to receive in parallel, and restart the\n>>> subcommand with that number of file descriptors open.\n>>\n>> We don't exchange FDs. Each local is responsible for getting the data from\n>> PostgreSQL or the repo based on knowing the data source and a path. For\n>> pg_basebackup, however, I'd imagine each local would want a replication\n>> connection with the ability to request specific files that were passed to it\n>> by the main process.\n> \n> I don't like this much. It'll push more complexity into each of the\n> \"targets\" and we can't easily share that complexity. 
And also, needing\n> to request individual files will add a lot of back/forth, and thus\n> latency issues. The server would always have to pre-send a list of\n> files, we'd have to deal with those files vanishing, etc.\n\nSure, unless we had a standard interface to \"get a file from the \nPostgreSQL cluster\", which is what pgBackRest has via the storage interface.\n\nAttached is our implementation for \"backupFile\". I think it's pretty \nconcise considering what it does. Most of it is dedicated to checksum \ndeltas and backup resume. The straight copy with filters starts at line 189.\n\n>>> [2] yes, I already hear json. A line deliminated format would have some\n>>> advantages though.\n>>\n>> We use JSON, but each protocol request/response is linefeed-delimited. So\n>> for example here's what it looks like when the main process requests a local\n>> process to backup a specific file:\n>>\n>> {\"{\"cmd\":\"backupFile\",\"param\":[\"base/32768/33001\",true,65536,null,true,0,\"pg_data/base/32768/33001\",false,0,3,\"20200412-213313F\",false,null]}\"}\n>>\n>> And the local responds with:\n>>\n>> {\"{\"out\":[1,65536,65536,\"6bf316f11d28c28914ea9be92c00de9bea6d9a6b\",{\"align\":true,\"error\":[0,[3,5],7],\"valid\":false}]}\"}\n> \n> As long as it's line delimited, I don't really care :)\n\nAgreed.\n\n>> We are considering a move to HTTP since lots of services (e.g. S3, GCS,\n>> Azure, etc.) require it (so we implement it) and we're not sure it makes\n>> sense to maintain our own protocol format. That said, we'd still prefer to\n>> use JSON for our payloads (like GCS) rather than XML (as S3 does).\n> \n> I'm not quite sure what you mean here? You mean actual requests for each\n> of what currently are lines? If so, that sounds *terrible*.\n\nI know it sounds like a lot, but in practice the local (currently) only \nperforms four operations: backup file, restore file, push file to \narchive, get file from archive. 
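(Schematically -- command names and reply shapes invented here, not the actual pgBackRest protocol -- a dispatcher for a command set that small, speaking linefeed-delimited JSON, amounts to something like:)

```python
import json

# Schematic dispatcher for a worker speaking a linefeed-delimited JSON
# protocol: one request per line in, one reply per line out. The
# command names and reply shapes are invented for illustration.
HANDLERS = {
    "backupFile":  lambda params: {"out": ["backed-up", params[0]]},
    "restoreFile": lambda params: {"out": ["restored", params[0]]},
    "archivePush": lambda params: {"out": ["pushed", params[0]]},
    "archiveGet":  lambda params: {"out": ["fetched", params[0]]},
}

def serve_line(line: str) -> str:
    """Decode one request line, dispatch it, encode one reply line."""
    request = json.loads(line)
    reply = HANDLERS[request["cmd"]](request.get("param", []))
    return json.dumps(reply)

reply = serve_line('{"cmd": "backupFile", "param": ["base/32768/33001"]}')
```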
In that context a little protocol \noverhead won't be noticed so if it means removing redundant code I'm all \nfor it. That said, we have not done this yet -- it's just under \nconsideration.\n\nRegards,\n-- \n-David\ndavid@pgmasters.net", "msg_date": "Sun, 12 Apr 2020 19:04:32 -0400", "msg_from": "David Steele <david@pgmasters.net>", "msg_from_op": false, "msg_subject": "Re: where should I stick that backup?" }, { "msg_contents": "On Sun, Apr 12, 2020 at 3:17 PM Andres Freund <andres@anarazel.de> wrote:\n> A huge advantage of a scheme like this would be that it wouldn't have to\n> be specific to pg_basebackup. It could just as well work directly on the\n> server, avoiding an unnecesary loop through the network. Which\n> e.g. could integrate with filesystem snapshots etc. Without needing to\n> build the 'archive target' once with server libraries, and once with\n> client libraries.\n\nThat's quite appealing. One downside - IMHO significant - is that you\nhave to have a separate process to do *anything*. If you want to add a\nfilter that just logs everything it's asked to do, for example, you've\ngotta have a whole process for that, which likely adds a lot of\noverhead even if you can somehow avoid passing all the data through an\nextra set of pipes. The interface I proposed would allow you to inject\nvery lightweight filters at very low cost. This design really doesn't.\n\nNote that you could build this on top of what I proposed, but not the\nother way around.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Sun, 12 Apr 2020 20:02:50 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: where should I stick that backup?" 
}, { "msg_contents": "Greetings,\n\n* David Steele (david@pgmasters.net) wrote:\n> On 4/12/20 6:37 PM, Andres Freund wrote:\n> >On 2020-04-12 17:57:05 -0400, David Steele wrote:\n> >>On 4/12/20 3:17 PM, Andres Freund wrote:\n> >>>There's various ways we could address the issue for how the subcommand\n> >>>can access the file data. The most flexible probably would be to rely on\n> >>>exchanging file descriptors between basebackup and the subprocess (these\n> >>>days all supported platforms have that, I think). Alternatively we\n> >>>could invoke the subcommand before really starting the backup, and ask\n> >>>how many files it'd like to receive in parallel, and restart the\n> >>>subcommand with that number of file descriptors open.\n> >>\n> >>We don't exchange FDs. Each local is responsible for getting the data from\n> >>PostgreSQL or the repo based on knowing the data source and a path. For\n> >>pg_basebackup, however, I'd imagine each local would want a replication\n> >>connection with the ability to request specific files that were passed to it\n> >>by the main process.\n> >\n> >I don't like this much. It'll push more complexity into each of the\n> >\"targets\" and we can't easily share that complexity. And also, needing\n> >to request individual files will add a lot of back/forth, and thus\n> >latency issues. The server would always have to pre-send a list of\n> >files, we'd have to deal with those files vanishing, etc.\n> \n> Sure, unless we had a standard interface to \"get a file from the PostgreSQL\n> cluster\", which is what pgBackRest has via the storage interface.\n\nThere's a couple of other pieces here that I think bear mentioning. The\nfirst is that pgBackRest has an actual 'restore' command- and that works\nwith the filters and works with the storage drivers, so what you're\nlooking at when it comes to these interfaces isn't just \"put a file\" but\nit's also \"get a file\". 
That's actually quite important to have when\nyou start thinking about these more complicated methods of doing\nbackups.\n\nThat then leads into the fact that, with a manifest, you can do things\nlike excluding 0-byte files from going through any of this processing or\nfrom being stored (which costs actual money too, with certain cloud\nstorage options..), or even for just storing *small* files, which we\ntend to have lots of in PG and which also end up costing more and you\nend up 'losing' money because you've got lots of 8K files around.\n\nWe haven't fully optimized for it in pgBackRest, yet, but avoiding\nhaving lots of little files (again, because there's real $$ costs\ninvolved) is something we actively think about and consider and is made\npossible when you've got a 'restore' command. Having a manifest where a\ngiven file might actually be a reference to a *part* of a file (ie:\npgbackrest_smallfiles, offset: 8192, length: 16384) could result in\nsavings when using cloud storage.\n\nThese are the kinds of things we're thinking about today. Maybe there's\nsome way you could implement something like that using shell commands as\nan API, but it sure looks like it'd be pretty hard from here. Even just\nmanaging to get users to use the right shell commands for backup, and\nthen the right ones for restore, seems awful daunting.\n\nI get that I'm probably going to get flak for playing up the 'worst\ncase', but the reality is that far too many people don't fully test\ntheir restore processes and trying to figure out the right shell\ncommands to pass into some 'restore' command, or even just to pull all\nof the data back down from $cloudstorage to perform a restore, when\neverything is down and your boss is breathing down your neck to get it\nall back online as fast as possible, isn't how I want this project to be\nremembered. 
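(To make the small-file bundling idea above concrete, here is a minimal sketch of how a restore might resolve such a manifest reference. The record shape and the pgbackrest_smallfiles name are illustrative assumptions for this thread, not pgBackRest's actual on-disk format:)

```python
def read_bundled_file(manifest_entry):
    """Resolve a hypothetical manifest entry that points at a slice of a
    shared bundle file, e.g.:
    {"bundle": "pgbackrest_smallfiles", "offset": 8192, "length": 16384}
    """
    with open(manifest_entry["bundle"], "rb") as f:
        f.seek(manifest_entry["offset"])
        data = f.read(manifest_entry["length"])
    # A short read means the bundle was truncated; fail loudly rather
    # than silently restoring a partial file.
    if len(data) != manifest_entry["length"]:
        raise ValueError("bundle file truncated")
    return data
```

Restore would then write the bytes out under the file's original path; packing many small files into one object is what avoids the per-object storage costs mentioned above.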
David and I are constantly talking about how to make the\nrestore process as smooth and as fast as possible, because that's where\nthe rubber really meets the road- you've gotta make that part easy and\nfast because that's the high-pressure situation. Taking backups is\nrarely where the real pressure is at- sure, take it today, take it\ntomorrow, let it run for a few hours, it's all fine, but when you need\nsomething restored, you best make that as simple and as fast as\nabsolutely possible because that's the time when your entire business is\npotentially going to be offline and waiting for you to get everything\nback up.\n\nThanks,\n\nStephen", "msg_date": "Sun, 12 Apr 2020 20:27:43 -0400", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: where should I stick that backup?" }, { "msg_contents": "Hi,\n\nOn 2020-04-12 20:02:50 -0400, Robert Haas wrote:\n> On Sun, Apr 12, 2020 at 3:17 PM Andres Freund <andres@anarazel.de> wrote:\n> > A huge advantage of a scheme like this would be that it wouldn't have to\n> > be specific to pg_basebackup. It could just as well work directly on the\n> > server, avoiding an unnecesary loop through the network. Which\n> > e.g. could integrate with filesystem snapshots etc. Without needing to\n> > build the 'archive target' once with server libraries, and once with\n> > client libraries.\n> \n> That's quite appealing. One downside - IMHO significant - is that you\n> have to have a separate process to do *anything*. If you want to add a\n> filter that just logs everything it's asked to do, for example, you've\n> gotta have a whole process for that, which likely adds a lot of\n> overhead even if you can somehow avoid passing all the data through an\n> extra set of pipes. The interface I proposed would allow you to inject\n> very lightweight filters at very low cost. This design really doesn't.\n\nWell, in what you described it'd still be all done inside pg_basebackup,\nor did I misunderstand? 
Once you've fetched it from the server, I can't\nimagine the overhead of filtering it a bit differently would matter.\n\nBut even if it did, the \"target\" could just reply with \"skip\" or such, instead\nof providing an fd.\n\nWhat kind of filtering are you thinking of where this is a problem?\nBesides just logging the filenames? I just can't imagine how that's a\nrelevant overhead compared to having to do things like\n'shell ssh rhaas@depository pgfile create-exclusive - %f.lz4'\n\n\nI really think we want the option to eventually do this server-side. And\nI don't quite see it as viable to go for an API that allows specifying\nshell fragments that are going to be executed server side.\n\n\n> > Note that you could build this on top of what I proposed, but not the\n> > other way around.\n>\n> Why should it not be possible the other way round?\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Sun, 12 Apr 2020 17:27:50 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: where should I stick that backup?" }, { "msg_contents": "Greetings,\n\nAnswering both in one since they're largely the same.\n\n* Bruce Momjian (bruce@momjian.us) wrote:\n> On Fri, Apr 10, 2020 at 10:54:10AM -0400, Stephen Frost wrote:\n> > * Robert Haas (robertmhaas@gmail.com) wrote:\n> > > On Thu, Apr 9, 2020 at 6:44 PM Bruce Momjian <bruce@momjian.us> wrote:\n> > > > Good point, but if there are multiple APIs, it makes shell script\n> > > > flexibility even more useful.\n> > > \n> > > This is really the key point for me. There are so many existing tools\n> > > that store a file someplace that we really can't ever hope to support\n> > > them all in core, or even to have well-written extensions that support\n> > > them all available on PGXN or wherever. 
We need to integrate with the\n> > > tools that other people have created, not try to reinvent them all in\n> > > PostgreSQL.\n> > \n> > So, this goes to what I was just mentioning to Bruce independently- you\n> > could have made the same argument about FDWs, but it just doesn't\n> > actually hold any water. Sure, some of the FDWs aren't great, but\n> > there's certainly no shortage of them, and the ones that are\n> > particularly important (like postgres_fdw) are well written and in core.\n> \n> No, no one made that argument. It isn't clear how a shell script API\n> would map to relational database queries. The point is how well the\n> APIs match, and then if they are close, does it give us the flexibility\n> we need. You can't just look at flexibility without an API match.\n\nIf what we're talking about is the file_fdw, which certainly isn't very\ncomplicated, it's not hard to see how you could use shell scripts for\nit. What happens is that it starts to get harder and require custom\ncode when you want to do something more complex- which is very nearly\nwhat we're talking about here too. Sure, for a simple 'bzip2' filter, a\nshell script might be alright, but it's not going to cut it for the more\ncomplex use-cases that users, today, expect solutions to.\n\n* Robert Haas (robertmhaas@gmail.com) wrote:\n> On Fri, Apr 10, 2020 at 10:54 AM Stephen Frost <sfrost@snowman.net> wrote:\n> > So, this goes to what I was just mentioning to Bruce independently- you\n> > could have made the same argument about FDWs, but it just doesn't\n> > actually hold any water. Sure, some of the FDWs aren't great, but\n> > there's certainly no shortage of them, and the ones that are\n> > particularly important (like postgres_fdw) are well written and in core.\n> \n> That's a fairly different use case. In the case of the FDW interface:\n\nThere's two different questions we're talking about here and I feel like\nthey're being conflated. 
To try and clarify:\n\n- Could you implement FDWs with shell scripts, and custom programs? I'm\n pretty confident that the answer is yes, but the thrust of that\n argument is primarily to show that you *can* implement just about\n anything using a shell script \"API\", so just saying it's possible to\n do doesn't make it necessarily a good solution. The FDW system is\n complicated, and also good, because we made it so and because it's\n possible to do more sophisticated things with a C API, but it could\n have started out with shell scripts that just returned data in much\n the same way that COPY PROGRAM works today. What matters is that\n forward thinking to consider what you're going to want to do tomorrow,\n not just thinking about how you can solve for the simple cases today\n with a shell out to an existing command.\n\n- Does providing a C-library interface deter people from implementing\n solutions that use that interface? Perhaps it does, but it doesn't\n have nearly the dampening effect that is being portrayed here, and we\n can see that pretty clearly from the FDW situation. Sure, not all of\n those are good solutions, but lots and lots of archive command shell\n scripts are also pretty terrible, and there *are* a few good solutions\n out there, including the ones that we ourselves ship. 
At least when\n it comes to FDWs, there's an option there for us to ship a *good*\n answer ourselves for certain (and, in particular, the very very\n common) use-cases.\n\n> - We're only talking about writing a handful of tar files, and that's\n> in the context of a full-database backup, which is a much\n> heavier-weight operation than a query.\n\nThis is true for -Ft, but not -Fp, and I don't think there's enough\nthought being put into this when it comes to parallelism and that you\ndon't want to be limited to one process per tablespace.\n\n> - There is not really any state that needs to be maintained across calls.\n\nAs mentioned elsewhere, this isn't really true.\n\n> > How does this solution give them a good way to do the right thing\n> > though? In a way that will work with large databases and complex\n> > requirements? The answer seems to be \"well, everyone will have to write\n> > their own tool to do that\" and that basically means that, at best, we're\n> > only providing half of a solution and expecting all of our users to\n> > provide the other half, and to always do it correctly and in a well\n> > written way. Acknowledging that most users aren't going to actually do\n> > that and instead they'll implement half measures that aren't reliable\n> > shouldn't be seen as an endorsement of this approach.\n> \n> I don't acknowledge that. I think it's possible to use tools like the\n> proposed option in a perfectly reliable way, and I've already given a\n> bunch of examples of how it could be done. Writing a file is not such\n> a complex operation that every bit of code that writes one reliably\n> has to be written by someone associated with the PostgreSQL project. 
I\n> strongly suspect that people who use a cloud provider's tools to\n> upload their backup files will be quite happy with the results, and if\n> they aren't, I hope they will blame the cloud provider's tool for\n> eating the data rather than this option for making it easy to give the\n> data to the thing that ate it.\n\nThe examples you've given of how this could be done \"right\" involve\nsomeone writing custom code (or having code that's been written by the\nPG project) to be executed from this shell command interface, even just\nto perform a local backup.\n\nAs for where the blame goes, I don't find that to be a particularly\nuseful thing to argue about. In any of this, if we are ultimately\nsaying \"well, it's the user's fault, or the fault of the tools that the\nuser chose to use with our interface\" then it seems like we've lost.\nMaybe that's going too far and maybe we can't hold ourselves to that high\nof a standard, but I like to think of this project, in particular, as\nbeing the one that's trying really hard to go as far in that direction\nas possible.\n\nTo that end, if we contemplate adding support for some cloud vendor's\nstorage, as an example, and discover that the command line tools for it\nsuck or don't meet our expectations, I'd expect us to either refuse to\nsupport it, or to forgo using the command-line tools and instead\nimplement support for talking to the cloud storage interface directly,\nif it works well.\n\nThanks,\n\nStephen", "msg_date": "Sun, 12 Apr 2020 21:18:28 -0400", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: where should I stick that backup?" 
}, { "msg_contents": "On Sun, Apr 12, 2020 at 09:18:28PM -0400, Stephen Frost wrote:\n> * Bruce Momjian (bruce@momjian.us) wrote:\n> > On Fri, Apr 10, 2020 at 10:54:10AM -0400, Stephen Frost wrote:\n> > > * Robert Haas (robertmhaas@gmail.com) wrote:\n> > > > On Thu, Apr 9, 2020 at 6:44 PM Bruce Momjian <bruce@momjian.us> wrote:\n> > > > > Good point, but if there are multiple APIs, it makes shell script\n> > > > > flexibility even more useful.\n> > > > \n> > > > This is really the key point for me. There are so many existing tools\n> > > > that store a file someplace that we really can't ever hope to support\n> > > > them all in core, or even to have well-written extensions that support\n> > > > them all available on PGXN or wherever. We need to integrate with the\n> > > > tools that other people have created, not try to reinvent them all in\n> > > > PostgreSQL.\n> > > \n> > > So, this goes to what I was just mentioning to Bruce independently- you\n> > > could have made the same argument about FDWs, but it just doesn't\n> > > actually hold any water. Sure, some of the FDWs aren't great, but\n> > > there's certainly no shortage of them, and the ones that are\n> > > particularly important (like postgres_fdw) are well written and in core.\n> > \n> > No, no one made that argument. It isn't clear how a shell script API\n> > would map to relational database queries. The point is how well the\n> > APIs match, and then if they are close, does it give us the flexibility\n> > we need. You can't just look at flexibility without an API match.\n> \n> If what we're talking about is the file_fdw, which certainly isn't very\n> complicated, it's not hard to see how you could use shell scripts for\n> it. What happens is that it starts to get harder and require custom\n> code when you want to do something more complex- which is very nearly\n> what we're talking about here too. 
Sure, for a simple 'bzip2' filter, a\n> shell script might be alright, but it's not going to cut it for the more\n> complex use-cases that users, today, expect solutions to.\n\nWell, file_fdw is the simplest FDW, and we might have been able to do\nthat in shell script, but almost all the other FDWs couldn't, so we\nmight as well choose a C API for FDWs and use the same one for file_fdw.\nIt seems like basic engineering that you choose the closest API that\nmeets most of your deployment requirements, and meets all of the\nrequired ones.\n\n> To that end, if we contemplate adding support for some cloud vendor's\n> storage, as an example, and discover that the command line tools for it\n> suck or don't meet our expectations, I'd expect us to either refuse to\n> support it, or to forgo using the command-line tools and instead\n> implement support for talking to the cloud storage interface directly,\n> if it works well.\n\nDo we choose a more inflexible API on a hypothetical risk?\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n+ As you are, so once was I. As I am, so you will be. +\n+ Ancient Roman grave inscription +\n\n\n", "msg_date": "Mon, 13 Apr 2020 09:59:56 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: where should I stick that backup?" }, { "msg_contents": "On Sun, Apr 12, 2020 at 9:18 PM Stephen Frost <sfrost@snowman.net> wrote:\n> There's two different questions we're talking about here and I feel like\n> they're being conflated. To try and clarify:\n>\n> - Could you implement FDWs with shell scripts, and custom programs? I'm\n> pretty confident that the answer is yes, but the thrust of that\n> argument is primarily to show that you *can* implement just about\n> anything using a shell script \"API\", so just saying it's possible to\n> do doesn't make it necessarily a good solution. 
The FDW system is\n> complicated, and also good, because we made it so and because it's\n> possible to do more sophisticated things with a C API, but it could\n> have started out with shell scripts that just returned data in much\n> the same way that COPY PROGRAM works today. What matters is that\n> forward thinking to consider what you're going to want to do tomorrow,\n> not just thinking about how you can solve for the simple cases today\n> with a shell out to an existing command.\n>\n> - Does providing a C-library interface deter people from implementing\n> solutions that use that interface? Perhaps it does, but it doesn't\n> have nearly the dampening effect that is being portrayed here, and we\n> can see that pretty clearly from the FDW situation. Sure, not all of\n> those are good solutions, but lots and lots of archive command shell\n> scripts are also pretty terrible, and there *are* a few good solutions\n> out there, including the ones that we ourselves ship. At least when\n> it comes to FDWs, there's an option there for us to ship a *good*\n> answer ourselves for certain (and, in particular, the very very\n> common) use-cases.\n>\n> > - We're only talking about writing a handful of tar files, and that's\n> > in the context of a full-database backup, which is a much\n> > heavier-weight operation than a query.\n>\n> This is true for -Ft, but not -Fp, and I don't think there's enough\n> thought being put into this when it comes to parallelism and that you\n> don't want to be limited to one process per tablespace.\n>\n> > - There is not really any state that needs to be maintained across calls.\n>\n> As mentioned elsewhere, this isn't really true.\n\nThese are fair points, and my thinking has been somewhat refined by\nthis discussion, so let me try to clarify my (current) position a bit.\nI believe that there are two subtly different questions here.\n\nQuestion #1 is \"Would it be useful to people to be able to pipe the\ntar files that they get from 
pg_basebackup into some other command\nrather than writing them to the filesystem, and should we give them\nthe option to do so?\"\n\nQuestion #2 is \"Is piping the tar files that pg_basebackup would\nproduce into some other program the best possible way of providing\nmore flexibility about where backups get written?\"\n\nI'm prepared to concede that the answer to question #2 is no. I had\nearlier assumed that establishing connections was pretty fast and\nthat, even if not, there were solutions to that problem, like setting\nup an SSH tunnel in advance. Several people have said, well, no,\nestablishing connections is a problem. As I acknowledged from the\nbeginning, plain format backups are a problem. So I think a convincing\nargument has been made that a shell command won't meet everyone's\nneeds, and a more complex API is required for some cases.\n\nBut I still think the answer to question #1 is yes. I disagree\nentirely with any argument to the effect that because some users might\ndo unsafe things with the option, we ought not to provide it.\nPractically speaking, it would work fine for many people even with no\nother changes, and if we add something like pgfile, which I'm willing\nto do, it would work for more people in more situations. It is a\nuseful thing to have, full stop.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Mon, 13 Apr 2020 10:20:03 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: where should I stick that backup?" }, { "msg_contents": "On Sun, Apr 12, 2020 at 8:27 PM Andres Freund <andres@anarazel.de> wrote:\n> > That's quite appealing. One downside - IMHO significant - is that you\n> > have to have a separate process to do *anything*. 
If you want to add a\n> > filter that just logs everything it's asked to do, for example, you've\n> > gotta have a whole process for that, which likely adds a lot of\n> > overhead even if you can somehow avoid passing all the data through an\n> > extra set of pipes. The interface I proposed would allow you to inject\n> > very lightweight filters at very low cost. This design really doesn't.\n>\n> Well, in what you described it'd still be all done inside pg_basebackup,\n> or did I misunderstand? Once you fetched it from the server, I can't\n> imagine the overhead of filtering it a bit differently would matter.\n>\n> But even if, the \"target\" could just reply with \"skip\" or such, instead\n> of providing an fd.\n>\n> What kind of filtering are you thinking of where this is a problem?\n> Besides just logging the filenames? I just can't imagine how that's a\n> relevant overhead compared to having to do things like\n> 'shell ssh rhaas@depository pgfile create-exclusive - %f.lz4'\n\nAnything you want to do in the same process. I mean, right now we have\nbasically one target (filesystem) and one filter (compression).\nNeither of those things spawn a process. It seems logical to imagine\nthat there might be other things that are similar in the future. It\nseems to me that there are definitely things where you will want to\nspawn a process; that's why I like having shell commands as one\noption. But I don't think we should require that you can't have a\nfilter or a target unless you also spawn a process for it.\n\n> I really think we want the option to eventually do this server-side. And\n> I don't quite see it as viable to go for an API that allows to specify\n> shell fragments that are going to be executed server side.\n\nThe server-side thing is a good point, but I think it adds quite a bit\nof complexity, too. I'm worried that this is ballooning to an\nunworkable amount of complexity - and not just code complexity, but\nbikeshedding complexity, too. 
Like, I started with a command-line\noption that could probably have been implemented in a few hundred\nlines of code. Now, we're up to something where you have to build\ncustom processes that speak a novel protocol and work on both the\nclient and the server side. That's at least several thousand lines of\ncode, maybe over ten thousand if the sample binaries that use the new\nprotocol are more than just simple demonstrations of how to code to\nthe interface. More importantly, it means agreeing on the nature of\nthis custom protocol, which seems like something where I could put in\na ton of effort to create something and then have somebody complain\nbecause it's not JSON, or because it is JSON, or because the\ncapability negotiation system isn't right, or whatever. I'm not\nexactly saying that we shouldn't do it; I think it has some appeal.\nBut I'd sure like to find some way of getting started that doesn't\ninvolve having to do everything in one patch, and then getting told to\nchange it all again - possibly with different people wanting\ncontradictory things.\n\n> > Note that you could build this on top of what I proposed, but not the\n> > other way around.\n>\n> Why should it not be possible the other way round?\n\nBecause a C function call API lets you decide to spawn a process, but\nif the framework inherently spawns a process, you can't decide not to\ndo so in a particular case.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Tue, 14 Apr 2020 10:16:15 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: where should I stick that backup?" }, { "msg_contents": "Greetings,\n\n* Robert Haas (robertmhaas@gmail.com) wrote:\n> On Sun, Apr 12, 2020 at 8:27 PM Andres Freund <andres@anarazel.de> wrote:\n> > I really think we want the option to eventually do this server-side. 
And\n> > I don't quite see it as viable to go for an API that allows to specify\n> > shell fragments that are going to be executed server side.\n> \n> The server-side thing is a good point, but I think it adds quite a bit\n> of complexity, too. I'm worried that this is ballooning to an\n> unworkable amount of complexity - and not just code complexity, but\n> bikeshedding complexity, too. Like, I started with a command-line\n> option that could probably have been implemented in a few hundred\n> lines of code. Now, we're up to something where you have to build\n> custom processes that speak a novel protocol and work on both the\n> client and the server side. That's at least several thousand lines of\n> code, maybe over ten thousand if the sample binaries that use the new\n> protocol are more than just simple demonstrations of how to code to\n> the interface. More importantly, it means agreeing on the nature of\n> this custom protocol, which seems like something where I could put in\n> a ton of effort to create something and then have somebody complain\n> because it's not JSON, or because it is JSON, or because the\n> capability negotiation system isn't right, or whatever. 
I'm not\n> exactly saying that we shouldn't do it; I think it has some appeal.\n> But I'd sure like to find some way of getting started that doesn't\n> involve having to do everything in one patch, and then getting told to\n> change it all again - possibly with different people wanting\n> contradictory things.\n\nDoing things incrementally and not all in one patch absolutely makes a\nlot of sense and is a good idea.\n\nWouldn't it make sense, given that we have some idea of what we want\nit to eventually look like, to make progress in that direction though?\n\nThat is- I tend to agree with Andres that having this supported\nserver-side eventually is what we should be thinking about as an\nend-goal (what is the point of pg_basebackup in all of this, after all,\nif the goal is to get a backup of PG from the PG server to s3? Why\ngo through some other program or through the replication protocol?) and\nhaving the server exec'ing out to run shell script fragments to make\nthat happen looks like it would be really awkward and full of potential\nrisks and issues, and there seems to be agreement that it wouldn't be a\ngood fit.\n\nIf, instead, we worked on a C-based interface which includes filters and\nstorage drivers, and was implemented through libpgcommon, we could start\nwith that being all done through pg_basebackup and work to hammer out\nthe complications and issues that we run into there and, once it seems\nreasonably stable and works well, we could potentially pull that into\nthe backend to be run directly without having to have pg_basebackup\ninvolved in the process.\n\nThere's been good progress in the direction of having more done by the\nbackend already, and that's thanks to you and it's good work-\nspecifically that the backend now has the ability to generate a\nmanifest, with checksums included as the backup is being run, which is\ndefinitely an important piece.\n\nThanks,\n\nStephen", "msg_date": "Tue, 14 Apr 2020 11:08:25 -0400", "msg_from": "Stephen Frost 
<sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: where should I stick that backup?" }, { "msg_contents": "On Tue, Apr 14, 2020 at 11:08 AM Stephen Frost <sfrost@snowman.net> wrote:\n> Wouldn't it make sense to, given that we have some idea of what we want\n> it to eventually look like, to make progress in that direction though?\n\nWell, yes. :-)\n\n> That is- I tend to agree with Andres that having this supported\n> server-side eventually is what we should be thinking about as an\n> end-goal (what is the point of pg_basebackup in all of this, after all,\n> if the goal is to get a backup of PG from the PG server to s3..? why\n> go through some other program or through the replication protocol?) and\n> having the server exec'ing out to run shell script fragments to make\n> that happen looks like it would be really awkward and full of potential\n> risks and issues and agreement that it wouldn't be a good fit.\n\nI'm fairly deeply uncomfortable with what Andres is proposing. I see\nthat it's very powerful, and can do a lot of things, and that if\nyou're building something that does sophisticated things with storage,\nyou probably want an API like that. It does a great job making\ncomplicated things possible. However, I feel that it does a lousy job\nmaking simple things simple. Suppose you want to compress using your\nfavorite compression program. Well, you can't. Your favorite\ncompression program doesn't speak the bespoke PostgreSQL protocol\nrequired for backup plugins. Neither does your favorite encryption\nprogram. Either would be perfectly happy to accept a tarfile on stdin\nand dump out a compressed or encrypted version, as the case may be, on\nstdout, but sorry, no such luck. You need a special program that\nspeaks the magic PostgreSQL protocol but otherwise does pretty much\nthe exact same thing as the standard one.\n\nIt's possibly not the exact same thing. 
A special program might, for example,\nuse multiple threads for parallel compression rather than multiple\nprocesses, perhaps gaining a bit of efficiency. But it's doubtful\nwhether all users care about such marginal improvements. All they're\ngoing to see is that they can use gzip and maybe lz4 because we\nprovide the necessary special magic tools to integrate with those, but\nfor some reason we don't have a special magic tool that they can use\nwith their own favorite compressor, and so they can't use it. I think\npeople are going to find that fairly unhelpful.\n\nNow, it's a problem we can work around. We could have a \"shell\ngateway\" program which acts as a plugin, speaks the backup plugin\nprotocol, and internally does fork-and-exec stuff to spin up copies of\nany binary you want to act as a filter. I don't see any real problem\nwith that. I do think it's very significantly more complicated than\njust what Andres called an FFI. It's gonna be way easier to just write\nsomething that spawns shell processes directly than it is to write\nsomething that spawns a process and talks to it using this protocol\nand passes around file descriptors using the various different\nmechanisms that different platforms use for that, and then that\nprocess turns around and spawns some other processes and passes along\nthe file descriptors to them. Now you've added a whole bunch of\nplatform-specific code and a whole bunch of code to generate and parse\nprotocol messages to achieve exactly the same thing that you could've\ndone far more simply with a C API. 
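To illustrate just how small the "spawn shell processes directly" side of that comparison is, here is a rough sketch (Python standing in for the C it would really be; the %f placeholder follows the hypothetical 'shell ... %f.lz4' fragment quoted earlier in the thread, and none of this is a real pg_basebackup interface):

```python
import shlex
import subprocess

def stream_tar_to_shell_target(template, tar_name, chunks):
    """Hypothetical sketch: expand %f to the current tar file's name,
    spawn the user's command, and stream the archive bytes to its stdin."""
    argv = [arg.replace("%f", tar_name) for arg in shlex.split(template)]
    proc = subprocess.Popen(argv, stdin=subprocess.PIPE)
    for chunk in chunks:
        proc.stdin.write(chunk)
    proc.stdin.close()
    if proc.wait() != 0:
        raise RuntimeError("target command failed: %r" % (argv,))
```

That's the whole mechanism - versus a protocol, capability negotiation, and platform-specific file descriptor passing to reach the same place.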
Even accepting as a given the need\nto make the C API work separately on both the client and server side,\nyou've probably at least doubled, and I suspect more like quadrupled,\nthe amount of infrastructure that has to be built.\n\nSo...\n\n> If, instead, we worked on a C-based interface which includes filters and\n> storage drivers, and was implemented through libpgcommon, we could start\n> with that being all done through pg_basebackup and work to hammer out\n> the complications and issues that we run into there and, once it seems\n> reasonably stable and works well, we could potentially pull that into\n> the backend to be run directly without having to have pg_basebackup\n> involved in the process.\n\n...let's do this. Actually, I don't really mind if we target something\nthat can work on both the client and server side initially, but based\non C, not a new wire protocol with file descriptor passing. That new\nwire protocol, and the file descriptor passing infrastructure that\ngoes with it, are things that I *really* think should be pushed off to\nversion 2, because I think they're going to generate a lot of\nadditional work and complexity, and I don't want to deal with all of\nit at once.\n\nAlso, I don't really see what's wrong with the server forking\nprocesses that exec(\"/usr/bin/lz4\") or whatever. We do similar things\nin other places and, while it won't work for cases where you want to\ncompress a shazillion files, that's not really a problem here anyway.\nAt least at the moment, the server-side format is *always* tar, so the\nproblem of needing a separate subprocess for every file in the data\ndirectory does not arise.\n\n> There's been good progress in the direction of having more done by the\n> backend already, and that's thanks to you and it's good work-\n> specifically that the backend now has the ability to generate a\n> manifest, with checksums included as the backup is being run, which is\n> definitely an important piece.\n\nThanks. 
I'm actually pretty pleased about making some of that\ninfrastructure available on the frontend side, and would like to go\nfurther in that direction over time. My only concern is that any given\npatch shouldn't be made to require too much collateral infrastructure\nwork, and any infrastructure work that it will require should be\nagreed, so far as we can, early in the development process, so that\nthere's time to do it at a suitably unhurried pace.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Tue, 14 Apr 2020 11:38:03 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: where should I stick that backup?" }, { "msg_contents": "Hi,\n\nOn 2020-04-14 11:38:03 -0400, Robert Haas wrote:\n> I'm fairly deeply uncomfortable with what Andres is proposing. I see\n> that it's very powerful, and can do a lot of things, and that if\n> you're building something that does sophisticated things with storage,\n> you probably want an API like that. It does a great job making\n> complicated things possible. However, I feel that it does a lousy job\n> making simple things simple.\n\nI think it's pretty much exactly the opposite. Your approach seems to\nmove all the complexity to the user, having to build entire combination\nof commands themselves. Instead of having one or two default commands\nthat do backups in common situations, everyone has to assemble them from\npieces.\n\nMoved from later in your email, since it seems to make more sense to\nhave it here:\n> All they're going to see is that they can use gzip and maybe lz4\n> because we provide the necessary special magic tools to integrate with\n> those, but for some reason we don't have a special magic tool that\n> they can use with their own favorite compressor, and so they can't use\n> it. 
I think people are going to find that fairly unhelpful.\n\nI have no problem with providing people with the opportunity to use\ntheir personal favorite compressor, but forcing them to have to do that,\nand to ensure it's installed etc, strikes me as a spectacularly bad\ndefault situation. Most people don't have the time to research which\ncompression algorithms work the best for which precise situation.\n\nHow do you imagine a default scripted invocation of the new backup stuff\nto look like? Having to specify multiple commandline \"fragments\" for\ncompression, storing files, ... can't be what we want the common case\nto look like. It'll just again lead to everyone copy & pasting\nexamples that all are wrong in different ways. They'll not at all work\nacross platforms (or often not across OS versions).\n\n\nIn general, I think it's good to give expert users the ability to\ncustomize things like backups and archiving. But defaulting to every\nnon-expert user having to do all that expert work (or copying it from bad\nexamples) is one of the most user hostile things in postgres.\n\n\n> Also, I don't really see what's wrong with the server forking\n> processes that exec(\"/usr/bin/lz4\") or whatever. We do similar things\n> in other places and, while it won't work for cases where you want to\n> compress a shazillion files, that's not really a problem here anyway.\n> At least at the moment, the server-side format is *always* tar, so the\n> problem of needing a separate subprocess for every file in the data\n> directory does not arise.\n\nI really really don't understand this. Are you suggesting that for\nserver side compression etc we're going to add the ability to specify\nshell commands as argument to the base backup command? That seems so\nobviously a non-starter? 
A good default for backup configurations\nshould be that the PG user that the backup is done under is only allowed\nto do that, and not that it directly has arbitrary remote command\nexecution.\n\n\n> Suppose you want to compress using your favorite compression\n> program. Well, you can't. Your favorite compression program doesn't\n> speak the bespoke PostgreSQL protocol required for backup\n> plugins. Neither does your favorite encryption program. Either would\n> be perfectly happy to accept a tarfile on stdin and dump out a\n> compressed or encrypted version, as the case may be, on stdout, but\n> sorry, no such luck. You need a special program that speaks the magic\n> PostgreSQL protocol but otherwise does pretty much the exact same\n> thing as the standard one.\n\nBut the tool speaking the protocol can just allow piping through\nwhatever tool? Given that there likely is benefits to either doing\nthings on the client side or on the server side, it seems inevitable\nthat there's multiple places that would make sense to have the\ncapability for?\n\n\n> It's possibly not the exact same thing. A special tool might, for example,\n> use multiple threads for parallel compression rather than multiple\n> processes, perhaps gaining a bit of efficiency. But it's doubtful\n> whether all users care about such marginal improvements.\n\nMarginal improvements? Compression scales decently well with the number\nof cores. pg_basebackup's compression is useless because it's so slow\n(and because it's clientside, but that's IME the lesser issue). I feel I\nmust be misunderstanding what you mean here.\n\ngzip - vs pigz -p $numcores on my machine: 180MB/s vs 2.5GB/s. 
The\nlatter will still sometimes be a bottleneck (it's a bottleneck in pigz,\nnot available compression cycles), but a lot less commonly than 180.\n\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Tue, 14 Apr 2020 18:50:08 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: where should I stick that backup?" }, { "msg_contents": "On Tue, Apr 14, 2020 at 9:50 PM Andres Freund <andres@anarazel.de> wrote:\n> On 2020-04-14 11:38:03 -0400, Robert Haas wrote:\n> > I'm fairly deeply uncomfortable with what Andres is proposing. I see\n> > that it's very powerful, and can do a lot of things, and that if\n> > you're building something that does sophisticated things with storage,\n> > you probably want an API like that. It does a great job making\n> > complicated things possible. However, I feel that it does a lousy job\n> > making simple things simple.\n>\n> I think it's pretty much exactly the opposite. Your approach seems to\n> move all the complexity to the user, having to build entire combination\n> of commands themselves. Instead of having one or two default commands\n> that do backups in common situations, everyone has to assemble them from\n> pieces.\n\nI think we're mostly talking about different things. I was speaking\nmostly about the difficulty of developing it. I agree that a project\nwhich is easier to develop is likely to provide fewer benefits to the\nend user. On the other hand, it might be more likely to get done, and\nprojects that don't get done provide few benefits to users. I strongly\nbelieve we need an incremental approach here.\n\n> In general, I think it's good to give expert users the ability to\n> customize things like backups and archiving. 
But defaulting to every\n> non-expert user having to do all that expert work (or copying it from bad\n> examples) is one of the most user hostile things in postgres.\n\nI'm not against adding more built-in compression algorithms, but I\nalso believe (as I have several times now) that the world moves a lot\nfaster than PostgreSQL, which has not added a single new compression\nalgorithm to pg_basebackup ever. We had 1 compression algorithm in\n2011, and we still have that same 1 algorithm today. So, either nobody\ncares, or adding new algorithms is sufficiently challenging - for\neither technical or political reasons - that nobody's managed to get\nit done. I think having a simple framework in pg_basebackup for\nplugging in new algorithms would make it noticeably simpler to add LZ4\nor whatever your favorite compression algorithm is. And I think having\nthat framework also be able to use shell commands, so that users don't\nhave to wait a decade or more for new choices to show up, is also a\ngood idea.\n\nI don't disagree that the situation around things like archive_command\nis awful, but a good part of that is that every time somebody shows up\nand says \"hey, let's try to make a small improvement,\" between two and\nforty people show up and start explaining why it's still going to be\nterrible. Eventually the pile of requirements gets so large, and/or\nthere are enough contradictory opinions, that the person who made the\nproposal for how to improve things gives up and leaves. So then we\nstill have the documentation suggesting \"cp\". When people - it happens\nto be me in this case, but the problem is much more general - show up\nand propose improvements to difficult areas, we can and should give\nthem good advice on how to improve their proposals. But we should not\ninsist that they have to build something incredibly complex and\ngrandiose and solve every problem in that area. 
We should be happy if\nwe get ANY improvement in a difficult area, not send dozens of angry\nemails complaining that their proposal is imperfect.\n\n> I really really don't understand this. Are you suggesting that for\n> server side compression etc we're going to add the ability to specify\n> shell commands as argument to the base backup command? That seems so\n> obviously a non-starter? A good default for backup configurations\n> should be that the PG user that the backup is done under is only allowed\n> to do that, and not that it directly has arbitrary remote command\n> execution.\n\nI hadn't really considered that aspect, and that's certainly a\nconcern. But I also don't understand why you think it's somehow a big\ndeal. My point is not that clients should have the ability to execute\narbitrary commands on the server. It's that shelling out to an\nexternal binary provided by the operating system is a reasonable thing\nto do, versus having everything have to be done by binaries that we\ncreate. Which I think is what you are also saying right here:\n\n> But the tool speaking the protocol can just allow piping through\n> whatever tool? Given that there likely is benefits to either doing\n> things on the client side or on the server side, it seems inevitable\n> that there's multiple places that would make sense to have the\n> capability for?\n\nUnless I am misunderstanding you, this is exactly what I was\nproposing, and have been proposing since the first email on the\nthread.\n\n> > It's possibly not the exact same thing. A special tool might, for example,\n> > use multiple threads for parallel compression rather than multiple\n> > processes, perhaps gaining a bit of efficiency. But it's doubtful\n> > whether all users care about such marginal improvements.\n>\n> Marginal improvements? Compression scales decently well with the number\n> of cores. 
pg_basebackup's compression is useless because it's so slow\n> (and because it's clientside, but that's IME the lesser issue). I feel I\n> must be misunderstanding what you mean here.\n>\n> gzip - vs pigz -p $numcores on my machine: 180MB/s vs 2.5GB/s. The\n> latter will still sometimes be a bottleneck (it's a bottleneck in pigz,\n> not available compression cycles), but a lot less commonly than 180.\n\nThat's really, really, really not what I was talking about.\n\nI'm quite puzzled by your reading of this email. You seem to have\nmissed my point entirely. I don't know whether that's because I did a\npoor job writing it or because you didn't read it carefully enough or\nwhat. What I'm saying is: I don't immediately wish to undertake the\nproblem of building a new wire protocol that the client and server can\nuse to talk to external binaries. I would prefer to start with a C\nAPI, because I think it will be far less work and still able to meet a\nnumber of important needs. The new wire protocol that can be used to\ntalk to external binaries can be added later.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Wed, 15 Apr 2020 09:23:30 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: where should I stick that backup?" }, { "msg_contents": "Hi,\n\nOn 2020-04-15 09:23:30 -0400, Robert Haas wrote:\n> On Tue, Apr 14, 2020 at 9:50 PM Andres Freund <andres@anarazel.de> wrote:\n> > On 2020-04-14 11:38:03 -0400, Robert Haas wrote:\n> > > I'm fairly deeply uncomfortable with what Andres is proposing. I see\n> > > that it's very powerful, and can do a lot of things, and that if\n> > > you're building something that does sophisticated things with storage,\n> > > you probably want an API like that. It does a great job making\n> > > complicated things possible. 
However, I feel that it does a lousy job\n> > > making simple things simple.\n> >\n> > I think it's pretty much exactly the opposite. Your approach seems to\n> > move all the complexity to the user, having to build entire combination\n> > of commands themselves. Instead of having one or two default commands\n> > that do backups in common situations, everyone has to assemble them from\n> > pieces.\n> \n> I think we're mostly talking about different things.\n\nThat certainly would explain some misunderstandings ;)\n\n\nI mostly still am trying to define where we eventually want to be on a\nmedium to high level. And I don't think we really have agreement on\nthat. My original understanding of your eventual goal is that it's the\nexample invocation of pg_basebackup upthread, namely a bunch of shell\narguments to pg_basebackup. And I don't think that's good enough. IMO\nit only really makes sense to design incremental steps after we have a rough\nagreement on the eventual goal. Otherwise we'll just end up supporting\nthe outcomes of missteps for a long time.\n\n\n> I was speaking mostly about the difficulty of developing it. I agree\n> that a project which is easier to develop is likely to provide fewer\n> benefits to the end user. On the other hand, it might be more likely\n> to get done, and projects that don't get done provide few benefits to\n> users. I strongly believe we need an incremental approach here.\n\nI agree. My concern is just that we should not expose things to the\nuser that will make it much harder to evolve going forward.\n\n\n> I'm not against adding more built-in compression algorithms, but I\n> also believe (as I have several times now) that the world moves a lot\n> faster than PostgreSQL, which has not added a single new compression\n> algorithm to pg_basebackup ever. We had 1 compression algorithm in\n> 2011, and we still have that same 1 algorithm today. 
So, either nobody\n> cares, or adding new algorithms is sufficiently challenging - for\n> either technical or political reasons - that nobody's managed to get\n> it done.\n\nImo most of the discussion has been around toast, and there the\nsituation imo is much more complicated than just about adding the\ncompression algorithm. I don't recall a discussion about adding an\noptional dependency to other compression algorithms to pg_basebackup\nthat didn't go anywhere for either technical or political reasons.\n\n\n> I think having a simple framework in pg_basebackup for plugging in new\n> algorithms would make it noticeably simpler to add LZ4 or whatever\n> your favorite compression algorithm is. And I think having that\n> framework also be able to use shell commands, so that users don't have\n> to wait a decade or more for new choices to show up, is also a good\n> idea.\n\nAs long as there are sensible defaults, so that the user doesn't have\nto specify paths to binaries for the common cases, I'm OK with that. I'm\nnot ok with requiring the user to specify shell fragments for things\nthat should be built in.\n\nIf we think the appropriate way to implement extensible compression is\nby piping to commandline binaries ([1]), I'd imo e.g. be ok if we had a\nbuiltin list of [{fileending, shell-fragment-for-compression}] that is\nfilled with appropriate values detected at build time for a few common\ncases. But then also allowed adding new methods via commandline options.\n\n\nI guess what I perceived to be the fundamental difference, before this\nemail, between our positions is that I (still) think that exposing\ndetailed postprocessing shell fragment style arguments to pg_basebackup,\nespecially as the only option to use the new capabilities, will nail us\ninto a corner - but you don't necessarily think so? 
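A minimal sketch of that builtin [{fileending, shell-fragment-for-compression}] list — the entries below are invented for illustration, not anything pg_basebackup actually ships:

```shell
# Hypothetical table mapping a file ending to the shell fragment that
# produces it; a real list would be filled in at build time, and a
# commandline option could append user-supplied methods.
compressor_for() {
  case "$1" in
    *.gz)  echo 'gzip -c' ;;
    *.lz4) echo 'lz4 -c' ;;
    *.zst) echo 'zstd -c' ;;
    *)     echo 'cat' ;;        # unknown ending: pass through uncompressed
  esac
}
compressor_for base.tar.gz      # prints: gzip -c
compressor_for base.tar         # prints: cat
```

The point of the table is that the default path never asks the user for a shell fragment; only a new, unlisted method would require one.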
Where I had/have no\nproblems with implementing features by *internally* piping through\nexternal binaries, as long as the user doesn't have to always specify\nthem.\n\n\n[1] I am not sure, nor the opposite, that piping is a great idea medium\nterm. One concern is that IIRC windows pipe performance is not great,\nand that there's some other portability problems as well. I think\nthere's also valid concerns about per-file overhead, which might be a\nproblem for some future uses.\n\n\n\n> > I really really don't understand this. Are you suggesting that for\n> > server side compression etc we're going to add the ability to specify\n> > shell commands as argument to the base backup command? That seems so\n> > obviously a non-starter? A good default for backup configurations\n> > should be that the PG user that the backup is done under is only allowed\n> > to do that, and not that it directly has arbitrary remote command\n> > execution.\n> \n> I hadn't really considered that aspect, and that's certainly a\n> concern. But I also don't understand why you think it's somehow a big\n> deal. My point is not that clients should have the ability to execute\n> arbitrary commands on the server. It's that shelling out to an\n> external binary provided by the operating system is a reasonable thing\n> to do, versus having everything have to be done by binaries that we\n> create. Which I think is what you are also saying right here:\n\n> > But the tool speaking the protocol can just allow piping through\n> > whatever tool? Given that there likely is benefits to either doing\n> > things on the client side or on the server side, it seems inevitable\n> > that there's multiple places that would make sense to have the\n> > capability for?\n> \n> Unless I am misunderstanding you, this is exactly what i was\n> proposing, and have been proposing since the first email on the\n> thread.\n\nWell, no and yes. 
As I said above, for me there's a difference between\npiping to commands as an internal implementation detail, and between\nthat being the non-poweruser interface. It may or may not be the right\ntradeoff to implement server side compression by piping the output\nto/from some binary. IMO it's clearly not the right way to implement\nserver side compression by specifying shell fragments as arguments to\nBASE_BACKUP.\n\nNor do I think it's the right thing, albeit a tad more debatable, that\nfor decent client side compression one has to specify a binary whose\npath will differ on various platforms (on windows you can't rely on\nPATH).\n\nIf we were to go for building all this via pipes, utilizing that to\nmake compression etc extensible for powerusers makes sense to me.\n\n\nBut I don't think it makes sense to design a C API without a rough\npicture of how things should eventually look like. If we were, e.g.,\neventually going to do all the work of compressing and transferring data\nin one external binary, then a C API exposing transformations in\npg_basebackup doesn't necessarily make sense. If it turns out that\npipes are too inefficient on windows to implement compression filters,\nthat we need parallel awareness in the API, etc it'll influence the API.\n\n\n> > > It's possibly not the exact same thing. A special tool might, for example,\n> > > use multiple threads for parallel compression rather than multiple\n> > > processes, perhaps gaining a bit of efficiency. But it's doubtful\n> > > whether all users care about such marginal improvements.\n> >\n> > Marginal improvements? Compression scales decently well with the number\n> > of cores. pg_basebackup's compression is useless because it's so slow\n> > (and because it's clientside, but that's IME the lesser issue). I feel I\n> > must be misunderstanding what you mean here.\n> >\n> > gzip - vs pigz -p $numcores on my machine: 180MB/s vs 2.5GB/s. 
The\n> > latter will still sometimes be a bottleneck (it's a bottleneck in pigz,\n> > not available compression cycles), but a lot less commonly than 180.\n> \n> That's really, really, really not what I was talking about.\n\nWhat did you mean with the \"marginal improvements\" paragraph above?\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 15 Apr 2020 15:13:46 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: where should I stick that backup?" }, { "msg_contents": "On Wed, Apr 15, 2020 at 6:13 PM Andres Freund <andres@anarazel.de> wrote:\n> I guess what I perceived to be the fundamental difference, before this\n> email, between our positions is that I (still) think that exposing\n> detailed postprocessing shell fragment style arguments to pg_basebackup,\n> especially as the only option to use the new capabilities, will nail us\n> into a corner - but you don't necessarily think so? Where I had/have no\n> problems with implementing features by *internally* piping through\n> external binaries, as long as the user doesn't have to always specify\n> them.\n\nMy principal concern is actually around having a C API and a flexible\ncommand-line interface. If we rearrange the code and the pg_basebackup\ncommand line syntax so that it's easy to add new \"filters\" and\n\"targets\", then I think that's a very good step forward. It's of less\nconcern to me whether those \"filters\" and \"targets\" are (1) C code\nthat we ship as part of pg_basebackup, (2) C code by extension authors\nthat we dynamically load into pg_basebackup, (3) off-the-shelf\nexternal programs that we invoke, or (4) special external programs\nthat we provide which do special magic. 
However, of those options, I\nlike #4 least, because it seems like a pain in the tail to implement.\nIt may turn out to be the most powerful and flexible, though I'm not\ncompletely sure about that yet.\n\nAs to exactly how far we can get with #3, I think it depends a good\ndeal on the answer to this question you pose in a footnote:\n\n> [1] I am not sure, nor the opposite, that piping is a great idea medium\n> term. One concern is that IIRC windows pipe performance is not great,\n> and that there's some other portability problems as well. I think\n> there's also valid concerns about per-file overhead, which might be a\n> problem for some future uses.\n\nIf piping stuff through shell commands performs well for use cases\nlike compression, then I think we can get pretty far with piping\nthings through shell commands. It means we can use any compression at\nall with no build-time dependency on that compressor. People can\ninstall anything they want, stick it in $PATH, and away they go. I see\nno particular reason to dislike that kind of thing; in fact, I think\nit offers many compelling advantages. On the other hand, if we really\nneed to interact directly with the library to get decent performance,\nbecause, say, pipes are too slow, then the approach of piping things\nthrough arbitrary shell commands is a lot less exciting.\n\nEven then, though, I wonder how many runtime dependencies we're\nseriously willing to add. I imagine we can add one or two more\ncompression algorithms without giving everybody fits, even if it means\nadding optional build-time and run-time dependencies on some external\nlibraries. Any more than that is likely to provoke a backlash. And I\ndoubt whether we're willing to have the postgresql operating system\npackage depend on something like libgcrypt at all. I would expect such\na proposal to meet with vigorous objections. 
But without such a\ndependency, how would we realistically get encrypted backups except by\npiping through a shell command? I don't really see a way, and letting\na user specify a shell fragment to define what happens there seems\npretty reasonable to me. I'm also not very sure that we can assume,\nwith either compression or encryption, that one size fits all. If\nthere are six popular compression libraries and four popular\nencryption libraries, does anyone really believe that it's going to be\nOK for 'yum install postgresql-server' to suck in all of those things?\nOr, even if that were OK or if we could somehow avoid it, what are\nthe chances that we'd actually go to the trouble of building\ninterfaces to all of those things? I'd rate them as slim to none; we\nsuck at that sort of thing. Exhibit A: The work to make PostgreSQL\nsupport more than one SSL library.\n\nI'm becoming fairly uncertain as to how far we can get with shell\ncommands; some of the concerns raised about, for example, connection\nmanagement when talking to stuff like S3 are very worrying. At the\nsame time, I think we need to think pretty seriously about some of the\nupsides of shell commands. The average user cannot write a C library\nthat implements an API. The average user cannot write a C binary that\nspeaks a novel, PostgreSQL-specific protocol. Even the above-average\nuser who is capable of doing those things probably won't have the time\nto actually do it. So if the thing you have to do to make PostgreSQL talk\nto the new sljgsjl compressor is either of those things, then we will\nnot have sljgsjl compression support for probably a decade after it\nbecomes the gold standard that everyone else in the industry is using.\nIf what you have to do is 'yum install sljgsjl' and then pg_basebackup\n--client-filter='shell sljgsjl', people can start using it as soon as\ntheir favorite distro packages it, without anyone who reads this\nmailing list needing to do any work whatsoever. 
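Concretely — and this is entirely hypothetical, with gzip standing in for the made-up sljgsjl compressor and the option name taken from the sentence above rather than any real pg_basebackup flag — a 'shell' client filter of that sort reduces to running the named command over the stream:

```shell
# Sketch only: once the distro has put the named compressor on PATH, the
# client merely streams the backup through it (and through its inverse
# to restore). Both the filter name and the flags are assumptions.
filter=gzip                     # imagine: --client-filter='shell gzip'
printf 'backup stream' | "$filter" -c | "$filter" -dc
# prints: backup stream
```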
If what you have to\ndo is create a 'sljgsjl.json' file in some PostgreSQL install\ndirectory that describes the salient properties of this compressor,\nand then after that you can say pg_basebackup --client-filter=sljgsjl,\nthat's also accessible to a broad swath of users. Now, it may be that\nthere's no practical way to make things that easy. But, to the extent\nthat we can, I think we should. The ability to integrate new\ntechnology without action by PostgreSQL core developers is not the\nonly consideration here, but it's definitely a good thing to have\ninsofar as we reasonably can.\n\n> But I don't think it makes sense to design a C API without a rough\n> picture of how things should eventually look like. If we were, e.g.,\n> eventually going to do all the work of compressing and transferring data\n> in one external binary, then a C API exposing transformations in\n> pg_basebackup doesn't necessarily make sense. If it turns out that\n> pipes are too inefficient on windows to implement compression filters,\n> that we need parallel awareness in the API, etc it'll influence the API.\n\nYeah. I think we really need to understand the performance\ncharacteristics of pipes better. If they're slow, then anything that\nneeds to be fast has to work some other way (but we could still\nprovide a pipe-based slow way for niche uses).\n\n> > That's really, really, really not what I was talking about.\n>\n> What did you mean with the \"marginal improvements\" paragraph above?\n\nI was talking about running one compressor process with multiple\ncompression threads each reading from a separate pipe, vs. running
}, { "msg_contents": "On Wed, Apr 15, 2020 at 7:55 PM Robert Haas <robertmhaas@gmail.com> wrote:\n> Yeah. I think we really need to understand the performance\n> characteristics of pipes better. If they're slow, then anything that\n> needs to be fast has to work some other way (but we could still\n> provide a pipe-based slow way for niche uses).\n\nHmm. Could we learn what we need to know about this by doing something\nas taking a basebackup of a cluster with some data in it (say, created\nby pgbench -i -s 400 or something) and then comparing the speed of cat\n< base.tar | gzip > base.tgz to the speed of gzip < base.tar >\nbase.tgz? It seems like there's no difference between those except\nthat the first one relays through an extra process and an extra pipe.\n\nI don't know exactly how to do the equivalent of this on Windows, but\nI bet somebody does.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Thu, 16 Apr 2020 22:22:38 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: where should I stick that backup?" }, { "msg_contents": "On Thu, Apr 16, 2020 at 10:22 PM Robert Haas <robertmhaas@gmail.com> wrote:\n> Hmm. Could we learn what we need to know about this by doing something\n> as taking a basebackup of a cluster with some data in it (say, created\n> by pgbench -i -s 400 or something) and then comparing the speed of cat\n> < base.tar | gzip > base.tgz to the speed of gzip < base.tar >\n> base.tgz? It seems like there's no difference between those except\n> that the first one relays through an extra process and an extra pipe.\n\nI decided to try this. 
First I experimented on my laptop using a\nbackup of a pristine pgbench database, scale factor 100, ~1.5GB.\n\n[rhaas pgbackup]$ for i in 1 2 3; do echo \"= run number $i = \"; sync;\nsync; time gzip < base.tar > base.tar.gz; rm -f base.tar.gz; sync;\nsync; time cat < base.tar | gzip > base.tar.gz; rm -f base.tar.gz;\nsync; sync; time cat < base.tar | cat | cat | gzip > base.tar.gz; rm\n-f base.tar.gz; done\n\n= run number 1 =\nreal 0m24.011s\nuser 0m23.542s\nsys 0m0.408s\n\nreal 0m23.623s\nuser 0m23.447s\nsys 0m0.908s\n\nreal 0m23.688s\nuser 0m23.847s\nsys 0m2.085s\n= run number 2 =\n\nreal 0m23.704s\nuser 0m23.290s\nsys 0m0.374s\n\nreal 0m23.389s\nuser 0m23.239s\nsys 0m0.879s\n\nreal 0m23.762s\nuser 0m23.888s\nsys 0m2.057s\n= run number 3 =\n\nreal 0m23.567s\nuser 0m23.187s\nsys 0m0.361s\n\nreal 0m23.573s\nuser 0m23.422s\nsys 0m0.903s\n\nreal 0m23.749s\nuser 0m23.884s\nsys 0m2.113s\n\nIt looks like piping everything through an extra copy of 'cat' may\neven be *faster* than having the process read it directly; two out of\nthree runs with the extra \"cat\" finished very slightly quicker than\nthe test where gzip read the file directly. The third set of numbers\nfor each test run is with three copies of \"cat\" interposed. That\nappears to be slower than with no extra pipes, but not very much, and\nit might just be noise.\n\nNext I tried it out on Linux. For this I used 'cthulhu', an older box\nwith lots and lots of memory and cores. Here I took the scale factor\nup to 400, so it's about 5.9GB of data. 
Same command as above produced\nthese results:\n\n= run number 1 =\n\nreal 2m35.797s\nuser 2m30.990s\nsys 0m4.760s\n\nreal 2m35.407s\nuser 2m32.730s\nsys 0m16.714s\n\nreal 2m40.598s\nuser 2m39.054s\nsys 0m37.596s\n= run number 2 =\n\nreal 2m35.529s\nuser 2m30.971s\nsys 0m4.510s\n\nreal 2m33.933s\nuser 2m31.685s\nsys 0m16.003s\n\nreal 2m45.563s\nuser 2m44.042s\nsys 0m40.357s\n= run number 3 =\n\nreal 2m35.876s\nuser 2m31.437s\nsys 0m4.391s\n\nreal 2m33.872s\nuser 2m31.629s\nsys 0m16.266s\n\nreal 2m40.836s\nuser 2m39.359s\nsys 0m38.960s\n\nThese results are pretty similar to the MacOS results. The overall\nperformance was worse, but I think that is probably explained by the\nfact that the MacBook is a Haswell-class processor rather than\nWestmere, and with significantly higher RAM speed. The pattern that\none extra pipe seems to be perhaps slightly faster, and three extra\npipes a tad slower, persists. So at least in this test, the overhead\nadded by each pipe appears to be <1%, which I would classify as good\nenough not to worry too much about.\n\n> I don't know exactly how to do the equivalent of this on Windows, but\n> I bet somebody does.\n\nHowever, I still don't know what the situation is on Windows. I did do\nsome searching around on the Internet to try to find out whether pipes\nbeing slow on Windows is a generally-known phenomenon, and I didn't\nfind anything very compelling, but I don't have an environment set up\nto the test myself.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Fri, 17 Apr 2020 12:19:32 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: where should I stick that backup?" }, { "msg_contents": "Hi,\n\nOn 2020-04-17 12:19:32 -0400, Robert Haas wrote:\n> On Thu, Apr 16, 2020 at 10:22 PM Robert Haas <robertmhaas@gmail.com> wrote:\n> > Hmm. 
Could we learn what we need to know about this by doing something\n> > as taking a basebackup of a cluster with some data in it (say, created\n> > by pgbench -i -s 400 or something) and then comparing the speed of cat\n> > < base.tar | gzip > base.tgz to the speed of gzip < base.tar >\n> > base.tgz? It seems like there's no difference between those except\n> > that the first one relays through an extra process and an extra pipe.\n>\n> I decided to try this. First I experimented on my laptop using a\n> backup of a pristine pgbench database, scale factor 100, ~1.5GB.\n>\n> [rhaas pgbackup]$ for i in 1 2 3; do echo \"= run number $i = \"; sync;\n> sync; time gzip < base.tar > base.tar.gz; rm -f base.tar.gz; sync;\n> sync; time cat < base.tar | gzip > base.tar.gz; rm -f base.tar.gz;\n> sync; sync; time cat < base.tar | cat | cat | gzip > base.tar.gz; rm\n> -f base.tar.gz; done\n\nGiven that gzip is too slow to be practically usable for anything where\ncompression speed matters (like e.g. practical database backups), i'm\nnot sure this measures something useful. The overhead of gzip will\ndominate to a degree that even the slowest possible pipe implementation\nwould be fast enough.\n\nandres@awork3:/tmp/pgbase$ ls -lh\ntotal 7.7G\n-rw------- 1 andres andres 137K Apr 17 13:09 backup_manifest\n-rw------- 1 andres andres 7.7G Apr 17 13:09 base.tar\n-rw------- 1 andres andres 17M Apr 17 13:09 pg_wal.tar\n\nMeasuring with pv base.tar |gzip > /dev/null I can see that the\nperformance varies from somewhere around 20MB/s to about 90MB/s,\naveraging ~60MB/s.\n\nandres@awork3:/tmp/pgbase$ pv base.tar |gzip > /dev/null\n7.62GiB 0:02:09 [60.2MiB/s] [===============================================================================================================>] 100%\n\nWhereas e.g. 
zstd takes a much much shorter time, even in single\nthreaded mode:\n\nandres@awork3:/tmp/pgbase$ pv base.tar |zstd -T1 |wc -c\n7.62GiB 0:00:14 [ 530MiB/s] [===============================================================================================================>] 100%\n448956321\n\nnot to speak of using parallel compression (pigz is parallel gzip):\n\nandres@awork3:/tmp/pgbase$ pv base.tar |pigz -p 20 |wc -c\n7.62GiB 0:00:07 [1.03GiB/s] [===============================================================================================================>] 100%\n571718276\n\nandres@awork3:/tmp/pgbase$ pv base.tar |zstd -T20 |wc -c\n7.62GiB 0:00:04 [1.78GiB/s] [===============================================================================================================>] 100%\n448956321\n\n\nLooking at raw pipe speed, I think it's not too hard to see some\nlimitations:\n\nandres@awork3:/tmp/pgbase$ time (cat base.tar | wc -c )\n8184994304\n\nreal\t0m3.217s\nuser\t0m0.054s\nsys\t0m4.856s\nandres@awork3:/tmp/pgbase$ time (cat base.tar | cat | wc -c )\n8184994304\n\nreal\t0m3.246s\nuser\t0m0.113s\nsys\t0m7.086s\nandres@awork3:/tmp/pgbase$ time (cat base.tar | cat | cat | cat | cat | cat | wc -c )\n8184994304\n\nreal\t0m4.262s\nuser\t0m0.257s\nsys\t0m20.706s\n\nbut I'm not sure how deep pipelines we're thinking would be common.\n\nTo make sure this is still relevant in the compression context:\n\nandres@awork3:/tmp/pgbase$ pv base.tar | zstd -T20 > /dev/null\n7.62GiB 0:00:04 [1.77GiB/s] [===============================================================================================================>] 100%\nandres@awork3:/tmp/pgbase$ pv base.tar | cat | cat | zstd -T20 > /dev/null\n7.62GiB 0:00:05 [1.38GiB/s] [===============================================================================================================>] 100%\n\nIt's much less noticable if the cat's are after the zstd, there's so\nmuch less data as pgbench's data is so compressible.\n\n\nThis does seem 
to suggest that composing features through chains of\npipes wouldn't be a good idea. But not that we shouldn't implement\ncompression via pipes (nor the opposite).\n\n\n> > I don't know exactly how to do the equivalent of this on Windows, but\n> > I bet somebody does.\n>\n> However, I still don't know what the situation is on Windows. I did do\n> some searching around on the Internet to try to find out whether pipes\n> being slow on Windows is a generally-known phenomenon, and I didn't\n> find anything very compelling, but I don't have an environment set up\n> to the test myself.\n\nI tried to measure something. But I'm not a windows person. And it's\njust a kvm VM. I don't know how well that translates into other\nenvironments.\n\nI downloaded gnuwin32 coreutils and zstd and performed some\nmeasurements. The first results were *shockingly* bad:\n\nzstd -T0 < onegbofrandom | wc -c\nlinux host:\t0.467s\nwindows guest:\t0.968s\n\nzstd -T0 < onegbofrandom | cat | wc -c\nlinux host:\t0.479s\nwindows guest:\t6.058s\n\nzstd -T0 < onegbofrandom | cat | cat | wc -c\nlinux host:\t0.516s\nwindows guest:\t7.830s\n\nI think that's because cat reads or writes in too small increments for\nwindows (but damn, that's slow). 
Replacing cat with dd:\n\nzstd -T0 < onegbofrandom | dd bs=512 | wc -c\nlinux host:\t3.091s\nwindows guest:\t5.909s\n\nzstd -T0 < onegbofrandom | dd bs=64k | wc -c\nlinux host:\t0.540s\nwindows guest:\t1.128s\n\nzstd -T0 < onegbofrandom | dd bs=1M | wc -c\nlinux host:\t0.516s\nwindows guest:\t1.043s\n\nzstd -T0 < onegbofrandom | dd bs=1 | wc -c\nlinux host:\t1547s\nwindows guest:\t2607s\n(yes, really, it's this slow)\n\nzstd -T0 < onegbofrandom > NUL\nzstd -T0 < onegbofrandom > /dev/null\nlinux host:\t0.361s\nwindows guest:\t0.602s\n\nzstd -T0 < onegbofrandom | dd bs=1M of=NUL\nzstd -T0 < onegbofrandom | dd bs=1M of=/dev/null\nlinux host:\t0.454s\nwindows guest:\t0.802s\n\nzstd -T0 < onegbofrandom | dd bs=64k | dd bs=64k | dd bs=64k | wc -c\nlinux host:\t0.521s\nwindows guest:\t1.376s\n\n\nThis suggest that pipes do have a considerably higher overhead on\nwindows, but that it's not all that terrible if one takes care to use\nlarge buffers in each pipe element.\n\nIt's notable though that even the simplest use of a pipe does add a\nconsiderable overhead compared to using the files directly.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Fri, 17 Apr 2020 16:44:08 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: where should I stick that backup?" }, { "msg_contents": "On Fri, Apr 17, 2020 at 7:44 PM Andres Freund <andres@anarazel.de> wrote:\n> This suggest that pipes do have a considerably higher overhead on\n> windows, but that it's not all that terrible if one takes care to use\n> large buffers in each pipe element.\n>\n> It's notable though that even the simplest use of a pipe does add a\n> considerable overhead compared to using the files directly.\n\nThanks for these results. 
I think that this shows that it's probably\nnot a great idea to force everything to go through pipes in every\ncase, but on the other hand, there's no reason to be a particularly\nscared of the performance implications of letting some things go\nthrough pipes. For instance, if we decide that LZ4 compression is\ngoing to be a good choice for most users, we might want to do that\nin-process rather than via pipes. However, if somebody wants to pipe\nthrough an external compressor that they prefer, that's going to be a\nlittle slower, but not necessarily to a degree that creates big\nproblems. People with bigger databases will need to be more careful\nabout which options they choose, but that's kind of inevitable.\n\nDo you agree?\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Sat, 18 Apr 2020 11:04:48 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: where should I stick that backup?" }, { "msg_contents": "Fwiw, it was common trick in the Oracle world to create a named pipe\nto gzip and then write your backup to it. I really like that way of\ndoing things but I suppose it's probably too old-fashioned to expect\nto survive. And in practice while it worked for a manual process for a\nsysadmin it's pretty awkward to automate reliably.\n\n\n", "msg_date": "Sat, 18 Apr 2020 18:37:09 -0400", "msg_from": "Greg Stark <stark@mit.edu>", "msg_from_op": false, "msg_subject": "Re: where should I stick that backup?" 
}, { "msg_contents": "On Sat, Apr 18, 2020 at 8:35 PM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> On Fri, Apr 17, 2020 at 7:44 PM Andres Freund <andres@anarazel.de> wrote:\n> > This suggest that pipes do have a considerably higher overhead on\n> > windows, but that it's not all that terrible if one takes care to use\n> > large buffers in each pipe element.\n> >\n> > It's notable though that even the simplest use of a pipe does add a\n> > considerable overhead compared to using the files directly.\n>\n> Thanks for these results. I think that this shows that it's probably\n> not a great idea to force everything to go through pipes in every\n> case, but on the other hand, there's no reason to be a particularly\n> scared of the performance implications of letting some things go\n> through pipes. For instance, if we decide that LZ4 compression is\n> going to be a good choice for most users, we might want to do that\n> in-process rather than via pipes.\n>\n\nHow will the user know how to use this compressed backup? I mean to\nsay if we use some compression algorithm to compress the data then the\nuser should know how to decompress and use the backup. IIUC, if\ncurrently, the user uses tar format to backup, it can simply untar it\nand start the server but will that be possible if we provide some\nin-built compression methods like LZ4?\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 20 Apr 2020 08:18:22 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: where should I stick that backup?" 
}, { "msg_contents": "On Sat, Apr 18, 2020 at 5:14 AM Andres Freund <andres@anarazel.de> wrote:\n>\n> zstd -T0 < onegbofrandom > NUL\n> zstd -T0 < onegbofrandom > /dev/null\n> linux host: 0.361s\n> windows guest: 0.602s\n>\n> zstd -T0 < onegbofrandom | dd bs=1M of=NUL\n> zstd -T0 < onegbofrandom | dd bs=1M of=/dev/null\n> linux host: 0.454s\n> windows guest: 0.802s\n>\n> zstd -T0 < onegbofrandom | dd bs=64k | dd bs=64k | dd bs=64k | wc -c\n> linux host: 0.521s\n> windows guest: 1.376s\n>\n>\n> This suggest that pipes do have a considerably higher overhead on\n> windows, but that it's not all that terrible if one takes care to use\n> large buffers in each pipe element.\n>\n\nI have also done some similar experiments on my Win-7 box and the\nresults are as follows:\n\nzstd -T0 < 16396 > NUL\n\nExecution time: 2.240 s\n\nzstd -T0 < 16396 | dd bs=1M > NUL\n\nExecution time: 4.240 s\n\nzstd -T0 < 16396 | dd bs=64k | dd bs=64k | dd bs=64k | wc -c\n\nExecution time: 5.959 s\n\nIn the above tests, 16396 is a 1GB file generated via pgbench. The\nabove results indicate that adding more pipe chains with dd adds\nsignificant overhead but how can we distinguish what is exact overhead\ndue to pipe?\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 20 Apr 2020 09:45:18 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: where should I stick that backup?" 
}, { "msg_contents": "On Mon, Apr 20, 2020 at 8:18 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Sat, Apr 18, 2020 at 8:35 PM Robert Haas <robertmhaas@gmail.com> wrote:\n> >\n> > On Fri, Apr 17, 2020 at 7:44 PM Andres Freund <andres@anarazel.de> wrote:\n> > > This suggest that pipes do have a considerably higher overhead on\n> > > windows, but that it's not all that terrible if one takes care to use\n> > > large buffers in each pipe element.\n> > >\n> > > It's notable though that even the simplest use of a pipe does add a\n> > > considerable overhead compared to using the files directly.\n> >\n> > Thanks for these results. I think that this shows that it's probably\n> > not a great idea to force everything to go through pipes in every\n> > case, but on the other hand, there's no reason to be a particularly\n> > scared of the performance implications of letting some things go\n> > through pipes. For instance, if we decide that LZ4 compression is\n> > going to be a good choice for most users, we might want to do that\n> > in-process rather than via pipes.\n> >\n>\n> How will the user know how to use this compressed backup? I mean to\n> say if we use some compression algorithm to compress the data then the\n> user should know how to decompress and use the backup. IIUC, if\n> currently, the user uses tar format to backup, it can simply untar it\n> and start the server but will that be possible if we provide some\n> in-built compression methods like LZ4?\n>\n\nOne idea could be that we can write something like BACKUP COMPRESSION:\n<LZ4 or whatever compression we have used> in backup_label file and\nthen probably recovery can take care of decompressing it.\n\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 20 Apr 2020 11:30:09 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: where should I stick that backup?" 
}, { "msg_contents": "On Mon, Apr 13, 2020 at 5:57 AM Stephen Frost <sfrost@snowman.net> wrote:\n>\n> There's a couple of other pieces here that I think bear mentioning. The\n> first is that pgBackRest has an actual 'restore' command- and that works\n> with the filters and works with the storage drivers, so what you're\n> looking at when it comes to these interfaces isn't just \"put a file\" but\n> it's also \"get a file\". That's actually quite important to have when\n> you start thinking about these more complicated methods of doing\n> backups.\n>\n\nI also think it is important to provide a way or interface to restore\nthe data user has backed up using whatever new API we provide as here.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 20 Apr 2020 12:06:03 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: where should I stick that backup?" }, { "msg_contents": "On Thu, Apr 16, 2020 at 3:44 AM Andres Freund <andres@anarazel.de> wrote:\n>\n> > I think having a simple framework in pg_basebackup for plugging in new\n> > algorithms would make it noticeably simpler to add LZ4 or whatever\n> > your favorite compression algorithm is. And I think having that\n> > framework also be able to use shell commands, so that users don't have\n> > to wait a decade or more for new choices to show up, is also a good\n> > idea.\n>\n> As long as here's sensible defaults, and so that the user doesn't have\n> to specify paths to binaries for the common cases, I'm OK with that. 
I'm\n> not ok with requiring the user to specify shell fragments for things\n> that should be built in.\n>\n> If we think the appropriate way to implement extensible compression is\n> by piping to commandline binaries ([1]),\n>\n\nI can see how such a scheme could be useful for backups but how do we\nrestore such a backup?\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 20 Apr 2020 15:14:58 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: where should I stick that backup?" } ]
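The cat-pipeline experiments in the thread above lend themselves to scripting, so the per-pipe overhead can be re-measured on any host. The sketch below is an editorial illustration, not code from the thread; the helper name `run_pipeline` and the payload are invented here, and it assumes a POSIX shell with `cat` and `wc` on PATH:

```python
import subprocess
import time

def run_pipeline(data: bytes, extra_stages: int = 0) -> tuple:
    """Push `data` through `extra_stages` extra copies of cat and count the
    bytes that come out, mirroring the thread's `cat < f | cat | ... | wc -c`
    experiments. Returns (byte_count, elapsed_seconds)."""
    stages = ["cat"] * (extra_stages + 1) + ["wc -c"]
    cmd = " | ".join(stages)
    start = time.monotonic()
    proc = subprocess.run(cmd, shell=True, input=data,
                          capture_output=True, check=True)
    elapsed = time.monotonic() - start
    # wc -c prints the byte count (possibly padded with spaces)
    return int(proc.stdout.split()[0]), elapsed
```

Comparing the elapsed times for `extra_stages=0` versus `extra_stages=3` on a large payload gives a rough per-pipe cost, analogous to the timings quoted in the thread.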
[ { "msg_contents": "Generate backup manifests for base backups, and validate them.\n\nA manifest is a JSON document which includes (1) the file name, size,\nlast modification time, and an optional checksum for each file backed\nup, (2) timelines and LSNs for whatever WAL will need to be replayed\nto make the backup consistent, and (3) a checksum for the manifest\nitself. By default, we use CRC-32C when checksumming data files,\nbecause we are trying to detect corruption and user error, not foil an\nadversary. However, pg_basebackup and the server-side BASE_BACKUP\ncommand now have options to select a different algorithm, so users\nwanting a cryptographic hash function can select SHA-224, SHA-256,\nSHA-384, or SHA-512. Users not wanting file checksums at all can\ndisable them, or disable generating of the backup manifest altogether.\nUsing a cryptographic hash function in place of CRC-32C consumes\nsignificantly more CPU cycles, which may slow down backups in some\ncases.\n\nA new tool called pg_validatebackup can validate a backup against the\nmanifest. If no checksums are present, it can still check that the\nright files exist and that they have the expected sizes. If checksums\nare present, it can also verify that each file has the expected\nchecksum. Additionally, it calls pg_waldump to verify that the\nexpected WAL files are present and parseable. 
Only plain format\nbackups can be validated directly, but tar format backups can be\nvalidated after extracting them.\n\nRobert Haas, with help, ideas, review, and testing from David Steele,\nStephen Frost, Andrew Dunstan, Rushabh Lathia, Suraj Kharage, Tushar\nAhuja, Rajkumar Raghuwanshi, Mark Dilger, Davinder Singh, Jeevan\nChalke, Amit Kapila, Andres Freund, and Noah Misch.\n\nDiscussion: http://postgr.es/m/CA+TgmoZV8dw1H2bzZ9xkKwdrk8+XYa+DC9H=F7heO2zna5T6qg@mail.gmail.com\n\nBranch\n------\nmaster\n\nDetails\n-------\nhttps://git.postgresql.org/pg/commitdiff/0d8c9c1210c44b36ec2efcb223a1dfbe897a3661\n\nModified Files\n--------------\ndoc/src/sgml/protocol.sgml | 37 +-\ndoc/src/sgml/ref/allfiles.sgml | 1 +\ndoc/src/sgml/ref/pg_basebackup.sgml | 64 ++\ndoc/src/sgml/ref/pg_validatebackup.sgml | 291 ++++++++\ndoc/src/sgml/reference.sgml | 1 +\nsrc/backend/access/transam/xlog.c | 3 +-\nsrc/backend/replication/basebackup.c | 537 +++++++++++++-\nsrc/backend/replication/repl_gram.y | 13 +\nsrc/backend/replication/repl_scanner.l | 2 +\nsrc/backend/replication/walsender.c | 30 +\nsrc/bin/Makefile | 1 +\nsrc/bin/pg_basebackup/pg_basebackup.c | 208 +++++-\nsrc/bin/pg_basebackup/t/010_pg_basebackup.pl | 8 +-\nsrc/bin/pg_validatebackup/.gitignore | 2 +\nsrc/bin/pg_validatebackup/Makefile | 39 +\nsrc/bin/pg_validatebackup/parse_manifest.c | 740 +++++++++++++++++++\nsrc/bin/pg_validatebackup/parse_manifest.h | 45 ++\nsrc/bin/pg_validatebackup/pg_validatebackup.c | 905 ++++++++++++++++++++++++\nsrc/bin/pg_validatebackup/t/001_basic.pl | 30 +\nsrc/bin/pg_validatebackup/t/002_algorithm.pl | 58 ++\nsrc/bin/pg_validatebackup/t/003_corruption.pl | 251 +++++++\nsrc/bin/pg_validatebackup/t/004_options.pl | 89 +++\nsrc/bin/pg_validatebackup/t/005_bad_manifest.pl | 201 ++++++\nsrc/bin/pg_validatebackup/t/006_encoding.pl | 27 +\nsrc/bin/pg_validatebackup/t/007_wal.pl | 55 ++\nsrc/include/replication/basebackup.h | 7 +-\nsrc/include/replication/walsender.h | 1 +\n27 files changed, 3614 
insertions(+), 32 deletions(-)", "msg_date": "Fri, 03 Apr 2020 19:07:08 +0000", "msg_from": "Robert Haas <rhaas@postgresql.org>", "msg_from_op": true, "msg_subject": "pgsql: Generate backup manifests for base backups, and validate them." }, { "msg_contents": "On 2020-04-03 21:07, Robert Haas wrote:\n> A new tool called pg_validatebackup can validate a backup against the\n> manifest.\n\nIn software engineering, \"verify\" and \"validate\" have standardized \ndistinct meanings. I'm not going to try to explain them here, but you \ncan easily find them online. I haven't formed an opinion on which one \nof them this tool is doing, but I notice that both the man page and the \nmessages produced by the tool use the two terms seemingly \ninterchangeably. We should try to pick the correct term and use it \nconsistently.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Tue, 7 Apr 2020 11:51:32 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: pgsql: Generate backup manifests for base backups, and validate\n them." }, { "msg_contents": "On Tue, Apr 7, 2020 at 5:51 AM Peter Eisentraut\n<peter.eisentraut@2ndquadrant.com> wrote:\n> On 2020-04-03 21:07, Robert Haas wrote:\n> > A new tool called pg_validatebackup can validate a backup against the\n> > manifest.\n>\n> In software engineering, \"verify\" and \"validate\" have standardized\n> distinct meanings. I'm not going to try to explain them here, but you\n> can easily find them online. I haven't formed an opinion on which one\n> of them this tool is doing, but I notice that both the man page and the\n> messages produced by the tool use the two terms seemingly\n> interchangeably. 
We should try to pick the correct term and use it\n> consistently.\n\nThe tool is trying to make sure that we have the same backup that\nwe're supposed to have, and that the associated WAL is present and\nsane. Looking at\nhttps://en.wikipedia.org/wiki/Verification_and_validation, that sounds\nmore like verification than validation, but I confess that this\ndistinction is new to me.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Tue, 7 Apr 2020 12:44:26 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pgsql: Generate backup manifests for base backups,\n and validate them." }, { "msg_contents": "On 4/7/20 12:44 PM, Robert Haas wrote:\n> On Tue, Apr 7, 2020 at 5:51 AM Peter Eisentraut\n> <peter.eisentraut@2ndquadrant.com> wrote:\n>> On 2020-04-03 21:07, Robert Haas wrote:\n>>> A new tool called pg_validatebackup can validate a backup against the\n>>> manifest.\n>>\n>> In software engineering, \"verify\" and \"validate\" have standardized\n>> distinct meanings. I'm not going to try to explain them here, but you\n>> can easily find them online. I haven't formed an opinion on which one\n>> of them this tool is doing, but I notice that both the man page and the\n>> messages produced by the tool use the two terms seemingly\n>> interchangeably. We should try to pick the correct term and use it\n>> consistently.\n> \n> The tool is trying to make sure that we have the same backup that\n> we're supposed to have, and that the associated WAL is present and\n> sane. Looking at\n> https://en.wikipedia.org/wiki/Verification_and_validation, that sounds\n> more like verification than validation, but I confess that this\n> distinction is new to me.\n\nWhen I searched I found a two different definitions for validation and \nverification. 
One for software development (as in the link above and \nwhat I think Peter meant) and another for data (see \nhttps://en.wikipedia.org/wiki/Data_validation, \nhttps://en.wikipedia.org/wiki/Data_verification, \nhttps://www.differencebetween.com/difference-between-data-validation-and-vs-data-verification/)\n\nIt seems that validation vs. verify as defined in PMBOK (the former \nsense) does not really apply here, though. That leaves only the latter \nsense which appears less well-documented but points to \"verify\" as the \nbetter option.\n\nRegards,\n-- \n-David\ndavid@pgmasters.net\n\n\n", "msg_date": "Tue, 7 Apr 2020 13:13:24 -0400", "msg_from": "David Steele <david@pgmasters.net>", "msg_from_op": false, "msg_subject": "Re: pgsql: Generate backup manifests for base backups, and validate\n them." } ]
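The commit message above describes a manifest as per-file entries (name, size, mtime, optional checksum) plus WAL ranges and a manifest checksum. As a toy illustration of the size-and-checksum part of that verification (not pg_validatebackup's actual format or code, and using SHA-256 only because Python's hashlib has no CRC-32C), one might write:

```python
import hashlib
import os

def build_manifest(root: str) -> dict:
    """Record the relative path, size, and SHA-256 digest of every file
    under `root` (a stand-in for the real manifest's per-file entries)."""
    entries = []
    for dirpath, _, names in os.walk(root):
        for name in sorted(names):
            path = os.path.join(dirpath, name)
            with open(path, "rb") as f:
                digest = hashlib.sha256(f.read()).hexdigest()
            entries.append({"path": os.path.relpath(path, root),
                            "size": os.path.getsize(path),
                            "sha256": digest})
    return {"files": entries}

def verify_manifest(root: str, manifest: dict) -> list:
    """Return human-readable problems: missing files first, then size
    mismatches (the cheap check), then checksum mismatches."""
    problems = []
    for entry in manifest["files"]:
        path = os.path.join(root, entry["path"])
        if not os.path.isfile(path):
            problems.append("missing: " + entry["path"])
            continue
        if os.path.getsize(path) != entry["size"]:
            problems.append("size mismatch: " + entry["path"])
            continue
        with open(path, "rb") as f:
            if hashlib.sha256(f.read()).hexdigest() != entry["sha256"]:
                problems.append("checksum mismatch: " + entry["path"])
    return problems
```

A real verifier also has to cope with files the manifest does not mention, and with checking that the required WAL is present and parseable; the commit above delegates that last part to pg_waldump.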
[ { "msg_contents": "Hi,\n\nGetOldestXmin() applies vacuum_defer_cleanup_age only when\n!RecoveryInProgress(). In contrast to that GetSnapshotData() applies it\nunconditionally.\n\nI'm not actually clear whether including vacuum_defer_cleanup_age on a\nreplica is meaningful. But it strikes me as odd to have that behavioural\ndifference between GetOldestXmin() and GetSnapshotData() - without any\nneed, as far as I can tell?\n\n\nThe difference seems to have been introduced in\n\ncommit bca8b7f16a3e720794cb0afbdb3733be4f8d9c2c\nAuthor: Simon Riggs <simon@2ndQuadrant.com>\nDate: 2011-02-16 19:29:37 +0000\n\n Hot Standby feedback for avoidance of cleanup conflicts on standby.\n Standby optionally sends back information about oldestXmin of queries\n which is then checked and applied to the WALSender's proc->xmin.\n GetOldestXmin() is modified slightly to agree with GetSnapshotData(),\n so that all backends on primary include WALSender within their snapshots.\n Note this does nothing to change the snapshot xmin on either master or\n standby. Feedback piggybacks on the standby reply message.\n vacuum_defer_cleanup_age is no longer used on standby, though parameter\n still exists on primary, since some use cases still exist.\n\n Simon Riggs, review comments from Fujii Masao, Heikki Linnakangas, Robert Haas\n\n\nwithout, as far as I can tell, explaining why \"vacuum_defer_cleanup_age\nis no longer used on standby\" shouldn't also apply to\nGetSnapshotData().\n\nI suspect it doesn't hurt all that much to unnecessarily apply\nvacuum_defer_cleanup_age on a replica. 
The only thing I see where it\nmatters is that it makes get_actual_variable_endpoint() less accurate,\nwhich we probably would like to avoid...\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Fri, 3 Apr 2020 15:53:40 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "vacuum_defer_cleanup_age inconsistently applied on replicas" }, { "msg_contents": "On Fri, Apr 3, 2020 at 3:53 PM Andres Freund <andres@anarazel.de> wrote:\n> GetOldestXmin() applies vacuum_defer_cleanup_age only when\n> !RecoveryInProgress(). In contrast to that GetSnapshotData() applies it\n> unconditionally.\n>\n> I'm not actually clear whether including vacuum_defer_cleanup_age on a\n> replica is meaningful. But it strikes me as odd to have that behavioural\n> difference between GetOldestXmin() and GetSnapshotData() - without any\n> need, as far as I can tell?\n\nDid you notice the comments added by Tom in b4a0223d008, which repeat\nthe claim that it isn't used on standbys? I think that this is\nprobably just an oversight in bca8b7f1, as you suggested. It's not\nthat hard to imagine how this oversight might have happened: Hot\nstandby feedback was introduced, and nobody cared about\nvacuum_defer_cleanup_age anymore. 
It was always very difficult to\ntune.\n\nOTOH, I wonder if it's possible that vacuum_defer_cleanup_age was\ndeliberately intended to affect the behavior of\nXLogWalRcvSendHSFeedback(), which is probably one of the most common\nreasons why GetOldestXmin() is called on standbys.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Fri, 3 Apr 2020 16:18:32 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: vacuum_defer_cleanup_age inconsistently applied on replicas" }, { "msg_contents": "On Fri, Apr 3, 2020 at 4:18 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> OTOH, I wonder if it's possible that vacuum_defer_cleanup_age was\n> deliberately intended to affect the behavior of\n> XLogWalRcvSendHSFeedback(), which is probably one of the most common\n> reasons why GetOldestXmin() is called on standbys.\n\nPressed \"send\" too soon. vacuum_defer_cleanup_age *doesn't* get\napplied when recovery is in progress, so that definitely can't be\ntrue.\n\nAnother hint that vacuum_defer_cleanup_age is only really supposed to\nbe used on the primary is the fact that it appears under \"18.6.1.\nMaster Server\" in the 9.1 docs.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Fri, 3 Apr 2020 16:25:55 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: vacuum_defer_cleanup_age inconsistently applied on replicas" } ]
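For readers following the defer-cleanup discussion above: vacuum_defer_cleanup_age retreats an XID horizon in 32-bit modulo arithmetic, and the three reserved XIDs (invalid, bootstrap, frozen) are clamped up to the first normal XID. The following models one reading of that adjustment (an unsigned subtraction followed by a normality clamp); it is an editorial sketch for illustration, not PostgreSQL source code:

```python
XID_MODULUS = 2 ** 32
FIRST_NORMAL_XID = 3  # 0 = invalid, 1 = bootstrap, 2 = frozen

def apply_defer_cleanup_age(xmin: int, defer_age: int) -> int:
    """Retreat a 32-bit xmin horizon by `defer_age` transactions.

    The subtraction wraps modulo 2^32, as C unsigned arithmetic does; if
    the result lands on one of the three reserved XIDs it is bumped up to
    the first normal XID, mimicking a TransactionIdIsNormal() clamp.
    """
    result = (xmin - defer_age) % XID_MODULUS
    if result < FIRST_NORMAL_XID:
        result = FIRST_NORMAL_XID
    return result
```

This also shows why raw-XID arithmetic is easy to get wrong: once the subtraction wraps, a horizon that is logically "in the past" compares numerically larger than current XIDs.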
[ { "msg_contents": "Hi,\n\nRight now we prevent dead rows in non-shared catalog tables from being\nremoved whenever there is a logical slot with an older catalog\nxmin. Obviously that's required when the slot is in the current\ndatabase, but it's not when the slot is in a different database.\n\nI don't think we can afford to iterate through the slots to determine\nthe \"current database horizon\" in GetSnapshotData(), but it should not\nbe a problem to do so in GetOldestXmin(). The latter should normally not\nbe called at a very high frequency.\n\nI think in busy clusters this could help even when only one database is\nin active use, because it allows relfrozenxid in template0/1 to be\nadvanced more aggressively.\n\n\nI'm writing this down as an idea, I don't plan to work on this anytime\nsoon.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Sat, 4 Apr 2020 12:58:23 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "idea: reduce logical slot induced bloat when multiple databases are\n used" } ]
[ { "msg_contents": "Hi,\n\nRunning sqlsmith on master i got an assertion failure on parse_coerce.c:2049\n\nThis is a minimal query to reproduce in an empty database, i also\nattached the stack trace\n\n \"\"\"\nselect\n pg_catalog.array_in(\n cast(pg_catalog.regoperatorout(\n cast(cast(null as regoperator) as regoperator)) as cstring),\n cast((select pronamespace from pg_catalog.pg_proc limit 1 offset 1)\n as oid),\n cast(subq_1.pid as int4)) as c0\nfrom pg_catalog.pg_stat_progress_analyze as subq_1\n \"\"\"\n\n-- \nJaime Casanova www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Sat, 4 Apr 2020 16:03:52 -0500", "msg_from": "Jaime Casanova <jaime.casanova@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Failed Assertion about PolymorphicType" }, { "msg_contents": "Jaime Casanova <jaime.casanova@2ndquadrant.com> writes:\n> Running sqlsmith on master i got an assertion failure on parse_coerce.c:2049\n\nHmph, or more simply:\n\nregression=# select array_in('{1,2,3}',23,-1);\nserver closed the connection unexpectedly\n\nwhich is a case that worked before. The core of the problem is\nthat array_in() violates the assumption that a polymorphic result\nrequires a polymorphic argument:\n\nregression=# \\df array_in\n List of functions\n Schema | Name | Result data type | Argument data types | Type \n------------+----------+------------------+-----------------------+------\n pg_catalog | array_in | anyarray | cstring, oid, integer | func\n(1 row)\n\nI see that enforce_generic_type_consistency did not use to assert\nthat it'd resolved every polymorphic rettype. So I think we should just\nremove that assertion (and fix the incorrect comment that led to\nadding it).\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 04 Apr 2020 17:21:07 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Failed Assertion about PolymorphicType" } ]
[ { "msg_contents": "Hi,\n\nvacuum_rel() has the following comment:\n\t/*\n\t * Functions in indexes may want a snapshot set. Also, setting a snapshot\n\t * ensures that RecentGlobalXmin is kept truly recent.\n\t */\n\tPushActiveSnapshot(GetTransactionSnapshot());\n\nwhich was added quite a while ago in\n\ncommit d53a56687f3d4772d17ffa0013a33231b7163731\nAuthor: Alvaro Herrera <alvherre@alvh.no-ip.org>\nDate: 2008-09-11 14:01:10 +0000\n\n\nBut to me that's understating the issue. Don't we e.g. need a snapshot\nto ensure that pg_subtrans won't be truncated away? I thought xlog.c\ndoesn't pass PROCARRAY_FLAGS_VACUUM to GetOldestXmin when truncating?\n\t\tTruncateSUBTRANS(GetOldestXmin(NULL, PROCARRAY_FLAGS_DEFAULT));\n\nIt's fine for rows that vacuum could see according to its xmin to be\npruned away, since that won't happen while it has a page locked. But we\ncan't just have a pg_subtrans access error out, and there's no page\nlevel interlock against that.\n\nAlso, without an xmin set, it seems possible that vacuum could see some\nof the transaction ids it uses (in local memory) wrap around. While not\nlikely, it doesn't seem that unlikely either, since autovacuum will be\nrunning full throttle if there's a 2 billion xid old transaction hanging\naround. And if that super old transaction finishes, e.g. vacuum's\nOldestXmin value could end up being in the future in the middle of\nvacuuming the table (if that table has a new relfrozenxid).\n\n\nHow about replacing it with something like\n\t/*\n\t * Need to acquire a snapshot to prevent pg_subtrans from being truncated,\n\t * cutoff xids in local memory wrapping around, and to have updated xmin\n\t * horizons.\n\t */\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Sun, 5 Apr 2020 00:18:36 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Comment explaining why vacuum needs to push snapshot seems\n insufficient." 
}, { "msg_contents": "On 2020-Apr-05, Andres Freund wrote:\n\n> vacuum_rel() has the following comment:\n> \t/*\n> \t * Functions in indexes may want a snapshot set. Also, setting a snapshot\n> \t * ensures that RecentGlobalXmin is kept truly recent.\n> \t */\n> \tPushActiveSnapshot(GetTransactionSnapshot());\n> \n> which was added quite a while ago in\n> \n> commit d53a56687f3d4772d17ffa0013a33231b7163731\n\nNote that what that commit did was change the snapshot acquisition from\noccurring solely during vacuum full to being acquired always -- and\nadded a small additional bit of knowledge:\n\n- if (vacstmt->full)\n- {\n- /* functions in indexes may want a snapshot set */\n- PushActiveSnapshot(GetTransactionSnapshot());\n- }\n- else\n+ /*\n+ * Functions in indexes may want a snapshot set. Also, setting\n+ * a snapshot ensures that RecentGlobalXmin is kept truly recent.\n+ */\n+ PushActiveSnapshot(GetTransactionSnapshot());\n\nso I wouldn't blame that commit for failing to understand all the side\neffects of acquiring a snapshot there and then.\n\n> But to me that's understating the issue. Don't we e.g. need a snapshot\n> to ensure that pg_subtrans won't be truncated away? I thought xlog.c\n> doesn't pass PROCARRAY_FLAGS_VACUUM to GetOldestXmin when truncating?\n> \t\tTruncateSUBTRANS(GetOldestXmin(NULL, PROCARRAY_FLAGS_DEFAULT));\n\nNice find. 
This bug would probably be orders-of-magnitude easier to hit\nnow than it was in 2008 -- given both the hardware advances and the\nincreased transaction rates.\n\n> Also, without an xmin set, it seems possible that vacuum could see some\n> of the transaction ids it uses (in local memory) wrap around.\n\nDitto.\n\n> How about replacing it with something like\n> \t/*\n> \t * Need to acquire a snapshot to prevent pg_subtrans from being truncated,\n> \t * cutoff xids in local memory wrapping around, and to have updated xmin\n> \t * horizons.\n> \t */\n\n\"While we don't typically need a snapshot, we need the side effects of\nacquiring one: having an xmin prevents concurrent pg_subtrans truncation\nand prevents our cutoff Xids from becoming wrapped-around; this also\nupdates our Xmin horizons. Lastly, functions in indexes may want a\nsnapshot set.\"\n\n(You omitted that last point without explanation -- maybe on purpose?\nis it no longer needed?)\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Wed, 15 Apr 2020 17:56:58 -0400", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Comment explaining why vacuum needs to push snapshot seems\n insufficient." }, { "msg_contents": "Hi,\n\nOn 2020-04-15 17:56:58 -0400, Alvaro Herrera wrote:\n> On 2020-Apr-05, Andres Freund wrote:\n> \n> > vacuum_rel() has the following comment:\n> > \t/*\n> > \t * Functions in indexes may want a snapshot set. 
Also, setting a snapshot\n> > \t * ensures that RecentGlobalXmin is kept truly recent.\n> > \t */\n> > \tPushActiveSnapshot(GetTransactionSnapshot());\n> > \n> > which was added quite a while ago in\n> > \n> > commit d53a56687f3d4772d17ffa0013a33231b7163731\n> \n> Note that what that commit did was change the snapshot acquisition from\n> occurring solely during vacuum full to being acquired always -- and\n> added a small additional bit of knowledge:\n> \n> - if (vacstmt->full)\n> - {\n> - /* functions in indexes may want a snapshot set */\n> - PushActiveSnapshot(GetTransactionSnapshot());\n> - }\n> - else\n> + /*\n> + * Functions in indexes may want a snapshot set. Also, setting\n> + * a snapshot ensures that RecentGlobalXmin is kept truly recent.\n> + */\n> + PushActiveSnapshot(GetTransactionSnapshot());\n> \n> so I wouldn't blame that commit for failing to understand all the side\n> effects of acquiring a snapshot there and then.\n\nFair enough. I just looked far enough to find where the comment was\nintroduced. But I'm not sure the logic leading to that is correct - see\nbelow.\n\n\n> > But to me that's understating the issue. Don't we e.g. need a snapshot\n> > to ensure that pg_subtrans won't be truncated away? I thought xlog.c\n> > doesn't pass PROCARRAY_FLAGS_VACUUM to GetOldestXmin when truncating?\n> > \t\tTruncateSUBTRANS(GetOldestXmin(NULL, PROCARRAY_FLAGS_DEFAULT));\n> \n> Nice find. This bug would probably be orders-of-magnitude easier to hit\n> now than it was in 2008 -- given both the hardware advances and the\n> increased transaction rates.\n\nYea. I think there's a lot of very old sloppiness around xmin horizons\nthat we just avoided hitting frequently due to what you say. 
Although\nI'm fairly sure that we've declared some of the resulting bugs\nOS/hardware level issues that actually were horizon related...\n\n\n> > How about replacing it with something like\n> > \t/*\n> > \t * Need to acquire a snapshot to prevent pg_subtrans from being truncated,\n> > \t * cutoff xids in local memory wrapping around, and to have updated xmin\n> > \t * horizons.\n> > \t */\n> \n> \"While we don't typically need a snapshot, we need the side effects of\n> acquiring one: having an xmin prevents concurrent pg_subtrans truncation\n> and prevents our cutoff Xids from becoming wrapped-around; this also\n> updates our Xmin horizons. Lastly, functions in indexes may want a\n> snapshot set.\"\n> \n> (You omitted that last point without explanation -- maybe on purpose?\n> is it no longer needed?)\n\nI left out the \"typically need a snapshot\" part because it doesn't\nreally seem to say something meaningful. To me it's adding confusion\nwithout removing any. I guess we could reformulate it to explain that\nwhile we don't use the snapshot to make mvcc visibility determinations\n(since only definitely dead rows matter to vacuum), nor do we use it to\nprevent concurrent removal of dead tuples (since we're fine with that\nhappening), we still need to ensure that resources like pg_subtrans stay\naround.\n\nI left out the \"in indexes\" part because I didn't see it\nsaying/guaranteeing much. Thinking more about it, that's probably not\nquite right. In fact, I wonder if PushActiveSnapshot() does anything\nmeaningful for the case of expression indexes. If a snapshot is needed\nfor some indexes, I assume that would be because visibility tests are\ndone. But because we set PROC_IN_VACUUM it'd not at all be safe to\nactually do visibility tests - rows that are still visible could get\nremoved.\n\nIt's not clear to me which functions this is talking about. 
For vacuum\nfull, where the comment originated from, it's obvious (new index being\nbuilt) that we need to evaluate arbitrary functions, in particular for\nexpression indexes. But there's no different behaviour for expression\nindexes during normal vacuums, all the expression related work was done\nduring index insertion. And if the index operators themselves needed a\nvalid snapshot, it'd not be safe to set PROC_IN_VACUUM.\n\nI think, at least for btree, it's presumably ok that we invoke btree\noperators without a valid snapshot. Because we need to be able to\ncompare tuples on inner pages during normal operation, which might long\nago may have been removed, btree operators need to be safe against\nunderlying data vanishing anyway (which is why e.g. enum values are hard\nto remove).\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 15 Apr 2020 16:13:09 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: Comment explaining why vacuum needs to push snapshot seems\n insufficient." } ]
[ { "msg_contents": "Hi,\n\nAnother one caught by sqlsmith, on the regression database run this\nquery (using any non-partitioned table works fine):\n\n\"\"\"\nselect currtid('pagg_tab'::regclass::oid, '(0,156)'::tid) >= '(1,158)'::tid;\n\"\"\"\n\nThis works on 11 (well it gives an error because the file doesn't\nexists) but crash the server on 12+\n\nattached the stack trace from master\n\n-- \nJaime Casanova www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Sun, 5 Apr 2020 03:18:50 -0500", "msg_from": "Jaime Casanova <jaime.casanova@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "segmentation fault using currtid and partitioned tables" }, { "msg_contents": "Jaime Casanova <jaime.casanova@2ndquadrant.com> writes:\n> Another one caught by sqlsmith, on the regression database run this\n> query (using any non-partitioned table works fine):\n> select currtid('pagg_tab'::regclass::oid, '(0,156)'::tid) >= '(1,158)'::tid;\n\nHm, so\n\n(1) currtid_byreloid and currtid_byrelname lack any check to see\nif they're dealing with a relkind that lacks storage.\n\n(2) The proximate cause of the crash is that rd_tableam is zero,\nso that the interface functions in tableam.h just crash hard.\nThis seems like a pretty bad thing; does anyone want to promise\nthat there are no other oversights of the same ilk elsewhere,\nand never will be?\n\nI think it might be a good idea to make relations-without-storage\nset up rd_tableam as a vector of dummy functions that will throw\nsome suitable complaint about \"relation lacks storage\". 
NULL is\na horrible default for this.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sun, 05 Apr 2020 12:51:56 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: segmentation fault using currtid and partitioned tables" }, { "msg_contents": "On Sun, Apr 05, 2020 at 12:51:56PM -0400, Tom Lane wrote:\n> I think it might be a good idea to make relations-without-storage\n> set up rd_tableam as a vector of dummy functions that will throw\n> some suitable complaint about \"relation lacks storage\". NULL is\n> a horrible default for this.\n\nYeah, that's not good, but I am not really comfortable with the\nconcept of implying that (pg_class.relam == InvalidOid) maps to a\ndummy AM callback set instead of NULL for rd_tableam. That feels less\nnatural. As mentioned upthread, the error that we get in ~11 is\nconfusing as well when using a relation that has no storage:\nERROR: 58P01: could not open file \"base/16384/16385\": No such file or directory\n\nI have been looking at the tree and the use of the table AM APIs, and\nthose TID lookups are really a particular case compared to the other\ncallers of the table AM callbacks. Anyway, I have not spotted similar\nproblems, so I find very tempting the option to just add some\nRELKIND_HAS_STORAGE() to tid.c where it matters and call it a day.\n\nAndres, what do you think?\n--\nMichael", "msg_date": "Wed, 8 Apr 2020 16:13:31 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: segmentation fault using currtid and partitioned tables" }, { "msg_contents": "On Wed, Apr 08, 2020 at 04:13:31PM +0900, Michael Paquier wrote:\n> I have been looking at the tree and the use of the table AM APIs, and\n> those TID lookups are really a particular case compared to the other\n> callers of the table AM callbacks. 
Anyway, I have not spotted similar\n> problems, so I find very tempting the option to just add some\n> RELKIND_HAS_STORAGE() to tid.c where it matters and call it a day.\n\nPlaying more with this stuff, it happens that we have zero code\ncoverage for currtid() and currtid2(), and the main user of those\nfunctions I can find around is the ODBC driver:\nhttps://coverage.postgresql.org/src/backend/utils/adt/tid.c.gcov.html\n\nThere are multiple cases to consider, particularly for views:\n- Case of a view with ctid as attribute taken from table.\n- Case of a view with ctid as attribute with incorrect attribute\ntype.\nIt is worth noting that all those code paths can trigger various\nelog() errors, which is not something that a user should be able to do\nusing a SQL-callable function. There are also two code paths for\ncases where a view has no or more-than-one SELECT rules, which cannot\nnormally be reached.\n\nAll in that, I propose something like the attached to patch the\nsurroundings with tests to cover everything I could think of, which I\nguess had better be backpatched? While on it, I have noticed that we\nlack coverage for max(tid) and min(tid), so I have included a bonus\ntest.\n\nAnother issue is that we don't have any documentation for those\nfunctions, in which case the best fit is a subsection for TID\noperators under \"Functions and Operators\"?\n\nFor now, I am adding a patch to next CF so as we don't forget about\nthis set of issues. 
Any thoughts?\n--\nMichael", "msg_date": "Thu, 9 Apr 2020 15:22:52 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: segmentation fault using currtid and partitioned tables" }, { "msg_contents": "On 2020-Apr-09, Michael Paquier wrote:\n\n> Playing more with this stuff, it happens that we have zero code\n> coverage for currtid() and currtid2(), and the main user of those\n> functions I can find around is the ODBC driver:\n> https://coverage.postgresql.org/src/backend/utils/adt/tid.c.gcov.html\n\nYeah, they're there solely for ODBC as far as I know.\n\n> There are multiple cases to consider, particularly for views:\n> - Case of a view with ctid as attribute taken from table.\n> - Case of a view with ctid as attribute with incorrect attribute\n> type.\n> It is worth noting that all those code paths can trigger various\n> elog() errors, which is not something that a user should be able to do\n> using a SQL-callable function. There are also two code paths for\n> cases where a view has no or more-than-one SELECT rules, which cannot\n> normally be reached.\n\n> All in that, I propose something like the attached to patch the\n> surroundings with tests to cover everything I could think of, which I\n> guess had better be backpatched?\n\nI don't know, but this stuff is so unused that your patch seems\nexcessive ... and I think we'd rather not backpatch something so large.\nI propose we do something less invasive in the backbranches, like just\nthrow elog() errors (nothing fancy) where necessary to avoid the\ncrashes. 
Even for pg12 it seems that that should be sufficient.\n\nFor pg13 and beyond, I liked Tom's idea of installing dummy functions\nfor tables without storage -- that seems safer.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Fri, 22 May 2020 19:32:57 -0400", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: segmentation fault using currtid and partitioned tables" }, { "msg_contents": "On Fri, May 22, 2020 at 07:32:57PM -0400, Alvaro Herrera wrote:\n> I don't know, but this stuff is so unused that your patch seems\n> excessive ... and I think we'd rather not backpatch something so large.\n> I propose we do something less invasive in the backbranches, like just\n> throw elog() errors (nothing fancy) where necessary to avoid the\n> crashes. Even for pg12 it seems that that should be sufficient.\n\nEven knowing that those trigger a bunch of elog()s which are not\nsomething that should be user-triggerable? :)\n\nPerhaps you are right though, and that we don't need to spend this\nmuch energy into improving the error messages so I am fine to discard\nthis part. At the end, in order to remove the crashes, you just need\nto keep around the two RELKIND_HAS_STORAGE() checks. But I would\nrather keep these two to use ereport(ERRCODE_FEATURE_NOT_SUPPORTED)\ninstead of elog(), and keep the test coverage of the previous patch\n(including the tests for the aggregates I noticed were missing).\nWould you be fine with that?\n\n> For pg13 and beyond, I liked Tom's idea of installing dummy functions\n> for tables without storage -- that seems safer.\n\nNot sure about that for v13. 
That would be invasive post-beta.\n--\nMichael", "msg_date": "Mon, 25 May 2020 18:29:10 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: segmentation fault using currtid and partitioned tables" }, { "msg_contents": "On Mon, May 25, 2020 at 06:29:10PM +0900, Michael Paquier wrote:\n> Perhaps you are right though, and that we don't need to spend this\n> much energy into improving the error messages so I am fine to discard\n> this part. At the end, in order to remove the crashes, you just need\n> to keep around the two RELKIND_HAS_STORAGE() checks. But I would\n> rather keep these two to use ereport(ERRCODE_FEATURE_NOT_SUPPORTED)\n> instead of elog(), and keep the test coverage of the previous patch\n> (including the tests for the aggregates I noticed were missing).\n> Would you be fine with that?\n\nAnd this means the attached. Thoughts are welcome.\n--\nMichael", "msg_date": "Tue, 26 May 2020 12:00:57 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: segmentation fault using currtid and partitioned tables" }, { "msg_contents": "On Mon, 25 May 2020 at 22:01, Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Mon, May 25, 2020 at 06:29:10PM +0900, Michael Paquier wrote:\n> > Perhaps you are right though, and that we don't need to spend this\n> > much energy into improving the error messages so I am fine to discard\n> > this part. At the end, in order to remove the crashes, you just need\n> > to keep around the two RELKIND_HAS_STORAGE() checks. But I would\n> > rather keep these two to use ereport(ERRCODE_FEATURE_NOT_SUPPORTED)\n> > instead of elog(), and keep the test coverage of the previous patch\n> > (including the tests for the aggregates I noticed were missing).\n> > Would you be fine with that?\n>\n> And this means the attached. Thoughts are welcome.\n\nso, currently the patch just installs protections on both currtid_*\nfunctions and adds some tests... 
therefore we can consider it as a bug\nfix and let it go in 13? actually also backpatch in 12...\n\npatch works, server doesn't crash anymore\n\nonly point to mention is a typo (a missing \"l\") in an added comment:\n\n+ * currtid_byrename\n\n-- \nJaime Casanova www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Wed, 27 May 2020 00:29:39 -0500", "msg_from": "Jaime Casanova <jaime.casanova@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: segmentation fault using currtid and partitioned tables" }, { "msg_contents": "On Wed, May 27, 2020 at 12:29:39AM -0500, Jaime Casanova wrote:\n> so, currently the patch just installs protections on both currtid_*\n> functions and adds some tests... therefore we can consider it as a bug\n> fix and let it go in 13? actually also backpatch in 12...\n\nYes, and it has the advantage to be simple.\n\n> only point to mention is a typo (a missing \"l\") in an added comment:\n> \n> + * currtid_byrename\n\nOops, thanks.\n--\nMichael", "msg_date": "Wed, 27 May 2020 15:03:46 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: segmentation fault using currtid and partitioned tables" }, { "msg_contents": "On 2020-May-26, Michael Paquier wrote:\n\n> On Mon, May 25, 2020 at 06:29:10PM +0900, Michael Paquier wrote:\n> > Perhaps you are right though, and that we don't need to spend this\n> > much energy into improving the error messages so I am fine to discard\n> > this part. At the end, in order to remove the crashes, you just need\n> > to keep around the two RELKIND_HAS_STORAGE() checks. But I would\n> > rather keep these two to use ereport(ERRCODE_FEATURE_NOT_SUPPORTED)\n> > instead of elog(), and keep the test coverage of the previous patch\n> > (including the tests for the aggregates I noticed were missing).\n> > Would you be fine with that?\n> \n> And this means the attached. 
Thoughts are welcome.\n\nYeah, this looks good to me. I would have used elog() instead, but\nI don't care enough ... as a translator, I can come up with a message as\nundecipherable as the original without worrying too much, since I\nsuspect nobody will ever see it in practice.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Wed, 27 May 2020 12:53:23 -0400", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: segmentation fault using currtid and partitioned tables" }, { "msg_contents": "Hi,\n\nOn 2020-05-22 19:32:57 -0400, Alvaro Herrera wrote:\n> On 2020-Apr-09, Michael Paquier wrote:\n> \n> > Playing more with this stuff, it happens that we have zero code\n> > coverage for currtid() and currtid2(), and the main user of those\n> > functions I can find around is the ODBC driver:\n> > https://coverage.postgresql.org/src/backend/utils/adt/tid.c.gcov.html\n> \n> Yeah, they're there solely for ODBC as far as I know.\n\nAnd there only for very old servers (< 8.2), according to Hiroshi\nInoue. Found that out post 12 freeze. I was planning to drop them for\n13, but I unfortunately didn't get around to do so :(\n\nI guess we could decide to make a freeze exception to remove them now,\nalthough I'm not sure the reasons for doing so are strong enough.\n\n\n> > There are multiple cases to consider, particularly for views:\n> > - Case of a view with ctid as attribute taken from table.\n> > - Case of a view with ctid as attribute with incorrect attribute\n> > type.\n> > It is worth noting that all those code paths can trigger various\n> > elog() errors, which is not something that a user should be able to do\n> > using a SQL-callable function. 
There are also two code paths for\n> > cases where a view has no or more-than-one SELECT rules, which cannot\n> > normally be reached.\n> \n> > All in that, I propose something like the attached to patch the\n> > surroundings with tests to cover everything I could think of, which I\n> > guess had better be backpatched?\n> \n> I don't know, but this stuff is so unused that your patch seems\n> excessive ... and I think we'd rather not backpatch something so large.\n> I propose we do something less invasive in the backbranches, like just\n> throw elog() errors (nothing fancy) where necessary to avoid the\n> crashes. Even for pg12 it seems that that should be sufficient.\n> \n> For pg13 and beyond, I liked Tom's idea of installing dummy functions\n\nI concur that it seems unnecessary to make these translatable, even with\nthe reduced scope from\nhttps://www.postgresql.org/message-id/20200526025959.GE6155%40paquier.xyz\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Thu, 28 May 2020 17:55:59 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: segmentation fault using currtid and partitioned tables" }, { "msg_contents": "Hi,\n\nOn 2020-04-05 12:51:56 -0400, Tom Lane wrote:\n> (2) The proximate cause of the crash is that rd_tableam is zero,\n> so that the interface functions in tableam.h just crash hard.\n> This seems like a pretty bad thing; does anyone want to promise\n> that there are no other oversights of the same ilk elsewhere,\n> and never will be?\n> \n> I think it might be a good idea to make relations-without-storage\n> set up rd_tableam as a vector of dummy functions that will throw\n> some suitable complaint about \"relation lacks storage\". NULL is\n> a horrible default for this.\n\nI don't have particularly strong views here. I can see a benefit to such\na pseudo AM. I can even imagine that there might some cases where we\nwould actually introduce some tableam functions for e.g. 
partitioned or\nviews tables, to centralize their handling more, instead of having such\nconsiderations more distributed. Clearly not worth actively trying to\ndo that for all existing code dealing with such relkinds, but there\nmight be cases where it's worthwhile.\n\nOTOH, it's kinda annoying having to maintain a not insignificant number\nof functions that needs to be updated whenever the tableam interface\nevolves. I guess we could partially hack our way through that by having\none such function, and just assigning it to all the mandatory callbacks\nby way of a void cast. But that'd be mighty ugly.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Thu, 28 May 2020 18:05:06 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: segmentation fault using currtid and partitioned tables" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2020-04-05 12:51:56 -0400, Tom Lane wrote:\n>> I think it might be a good idea to make relations-without-storage\n>> set up rd_tableam as a vector of dummy functions that will throw\n>> some suitable complaint about \"relation lacks storage\". NULL is\n>> a horrible default for this.\n\n> OTOH, it's kinda annoying having to maintain a not insignificant number\n> of functions that needs to be updated whenever the tableam interface\n> evolves.\n\nThat argument sounds pretty weak. If you're making breaking changes\nin the tableam API, updating the signatures (not even any code) of\nsome dummy functions seems like by far the easiest part.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 28 May 2020 21:11:50 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: segmentation fault using currtid and partitioned tables" }, { "msg_contents": "On Thu, May 28, 2020 at 05:55:59PM -0700, Andres Freund wrote:\n> And there only for very old servers (< 8.2), according to Hiroshi\n> Inoue. Found that out post 12 freeze. 
I was planning to drop them for\n> 13, but I unfortunately didn't get around to do so :(\n\n[... digging ...]\nAh, I think I see your point from the code. That's related to the use\nof RETURNING for ctids.\n\n> I guess we could decide to make a freeze exception to remove them now,\n> although I'm not sure the reasons for doing so are strong enough.\n\nNot sure that's a good thing to do after beta1 for 13, but there is an\nargument for that in 14. FWIW, my company is a huge user of the ODBC\ndriver (perhaps the biggest one?), and we have nothing even close to\n8.2.\n\n> I concur that it seems unnecessary to make these translatable, even with\n> the reduced scope from\n> https://www.postgresql.org/message-id/20200526025959.GE6155%40paquier.xyz\n\nOkay, I have switched the patch to do that. Any comments or\nobjections?\n--\nMichael", "msg_date": "Fri, 29 May 2020 15:48:40 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: segmentation fault using currtid and partitioned tables" }, { "msg_contents": "On Fri, May 29, 2020 at 03:48:40PM +0900, Michael Paquier wrote:\n> Okay, I have switched the patch to do that. Any comments or\n> objections?\n\nApplied this one then. I also got to check the ODBC driver in more\ndetails, and I am indeed not seeing those functions getting used.\nOne extra thing to know is that the ODBC driver requires libpq from at\nleast 9.2, which may give one more argument to just remove them.\n\nNB: prion has been failing, just looking into it.\n--\nMichael", "msg_date": "Mon, 1 Jun 2020 10:57:29 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: segmentation fault using currtid and partitioned tables" }, { "msg_contents": "On Mon, Jun 01, 2020 at 10:57:29AM +0900, Michael Paquier wrote:\n> Applied this one then. 
I also got to check the ODBC driver in more\n> details, and I am indeed not seeing those functions getting used.\n> One extra thing to know is that the ODBC driver requires libpq from at\n> least 9.2, which may give one more argument to just remove them.\n> \n> NB: prion has been failing, just looking into it.\n\nWoah. This one is old, good catch from -DRELCACHE_FORCE_RELEASE. It\nhappens that since its introduction in a3519a2 from 2002,\ncurrtid_for_view() in tid.c closes the view and then looks at a RTE\nfrom it. I have reproduced the issue and the patch attached takes\ncare of the problem. Would it be better to backpatch all the way down\nor is that not worth caring about?\n--\nMichael", "msg_date": "Mon, 1 Jun 2020 11:20:55 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: segmentation fault using currtid and partitioned tables" }, { "msg_contents": "Michael Paquier <michael@paquier.xyz> writes:\n> Woah. This one is old, good catch from -DRELCACHE_FORCE_RELEASE. It\n> happens that since its introduction in a3519a2 from 2002,\n> currtid_for_view() in tid.c closes the view and then looks at a RTE\n> from it. I have reproduced the issue and the patch attached takes\n> care of the problem. Would it be better to backpatch all the way down\n> or is that not worth caring about?\n\nUgh. Aside from the stale-pointer-deref problem, once we drop the lock\nwe can't even be sure the table still exists. +1 for back-patch.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sun, 31 May 2020 22:26:54 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: segmentation fault using currtid and partitioned tables" }, { "msg_contents": "On Sun, May 31, 2020 at 10:26:54PM -0400, Tom Lane wrote:\n> Ugh. Aside from the stale-pointer-deref problem, once we drop the lock\n> we can't even be sure the table still exists. +1 for back-patch.\n\nThanks. 
Fixed down to 9.5 then to make prion happier.\n--\nMichael", "msg_date": "Mon, 1 Jun 2020 14:55:27 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: segmentation fault using currtid and partitioned tables" } ]
[ { "msg_contents": "Over in the thread at [1] we were wondering how come buildfarm member\nhyrax has suddenly started to fail like this:\n\ndiff -U3 /home/buildfarm/buildroot/HEAD/pgsql.build/src/test/regress/expected/errors.out /home/buildfarm/buildroot/HEAD/pgsql.build/src/test/regress/results/errors.out\n--- /home/buildfarm/buildroot/HEAD/pgsql.build/src/test/regress/expected/errors.out\t2018-11-21 13:48:48.340000000 -0500\n+++ /home/buildfarm/buildroot/HEAD/pgsql.build/src/test/regress/results/errors.out\t2020-04-04 04:48:16.704699045 -0400\n@@ -446,4 +446,4 @@\n 'select infinite_recurse()' language sql;\n \\set VERBOSITY terse\n select infinite_recurse();\n-ERROR: stack depth limit exceeded\n+ERROR: stack depth limit exceeded at character 8\n\nI've now looked into this, and found that it's not at all hard to\nduplicate; compile HEAD with -DCLOBBER_CACHE_ALWAYS, and run \"select\ninfinite_recurse()\", and you'll likely get the changed error message.\n(The lack of other buildfarm failures is probably just because we\nhave so few animals doing CLOBBER_CACHE_ALWAYS builds frequently.)\n\nThe issue seems indeed to have been triggered by 8f59f6b9c0, because\nthat inserted an equal() call into recomputeNamespacePath(). equal()\nincludes a check_stack_depth() call. 
We get the observed message if\nthis call is the one where the stack limit is hit, and it is invoked\ninside ParseFuncOrColumn(), which has set up a parser error callback\nto point at the infinite_recurse() call that it's trying to resolve.\nThat callback's been there a long time of course, so we may conclude\nthat no other code path reached from ParseFuncOrColumn contains a\nstack depth check, or we'd likely have seen this before.\n\nIt's a bit surprising perhaps that we run out of stack here and not\nsomewhere else; but ParseFuncOrColumn and its subroutines consume\nquite a lot of stack, because of FUNC_MAX_ARGS-sized local arrays,\nso it's not *that* surprising.\n\nSo, what to do to re-stabilize the regression tests? Even if\nwe wanted to revert 8f59f6b9c0, that'd be kind of a band-aid fix,\nbecause there are lots of other possible ways that a parser error\ncallback could be active at the point of the error. A few other\npossibilities are:\n\n1. Change the test to do \"\\set VERBOSITY sqlstate\" so that all that\nis printed is\n\tERROR: 54001\nERRCODE_STATEMENT_TOO_COMPLEX is used in few enough places that\nthis wouldn't be too much of a loss of specificity. (Or we could\ngive stack overflow its very own ERRCODE.)\n\n2. Hack pcb_error_callback so that it suppresses the error position\nreport for ERRCODE_STATEMENT_TOO_COMPLEX, as it already does\nfor ERRCODE_QUERY_CANCELED. That seems pretty unpleasant though.\n\n3. Create a separate expected-file to match the variant output.\nThis would be a maintenance problem, but we could ameliorate that\nby moving the test to its own regression script, which was something\nthat'd already been proposed to get around the PPC64 Linux kernel\nsignal-handling bug that's been causing intermittent failures on\nmost of the PPC64 buildfarm animals [2].\n\nOn the whole I find #1 the least unpleasant, as well as the most\nlikely to forestall future variants of this issue. 
It won't dodge\nthe PPC64 problem, but I'm willing to keep living with that one\nfor now.\n\nThoughts?\n\n\t\t\tregards, tom lane\n\n[1] https://www.postgresql.org/message-id/CA%2BTgmoaUOS5X64nKgFxNV7JHN4sRkNAJYW2gHz-LMb0Ej4xHig%40mail.gmail.com\n[2] https://www.postgresql.org/message-id/27924.1571068231%40sss.pgh.pa.us\n\n\n", "msg_date": "Sun, 05 Apr 2020 14:33:19 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "CLOBBER_CACHE_ALWAYS regression instability" }, { "msg_contents": "Hi,\n\nOn 2020-04-05 14:33:19 -0400, Tom Lane wrote:\n> It's a bit surprising perhaps that we run out of stack here and not\n> somewhere else; but ParseFuncOrColumn and its subroutines consume\n> quite a lot of stack, because of FUNC_MAX_ARGS-sized local arrays,\n> so it's not *that* surprising.\n> \n> So, what to do to re-stabilize the regression tests? Even if\n> we wanted to revert 8f59f6b9c0, that'd be kind of a band-aid fix,\n> because there are lots of other possible ways that a parser error\n> callback could be active at the point of the error. A few other\n> possibilities are:\n\n> \n> 1. Change the test to do \"\\set VERBOSITY sqlstate\" so that all that\n> is printed is\n> \tERROR: 54001\n> ERRCODE_STATEMENT_TOO_COMPLEX is used in few enough places that\n> this wouldn't be too much of a loss of specificity. (Or we could\n> give stack overflow its very own ERRCODE.)\n\nWe could print the error using :LAST_ERROR_MESSAGE after removing a\npotential trailing \"at character ...\" if we're worried about the loss of\nspecificity.\n\n\n> On the whole I find #1 the least unpleasant, as well as the most\n> likely to forestall future variants of this issue. It won't dodge\n> the PPC64 problem, but I'm willing to keep living with that one\n> for now.\n\nAnother avenue could be to make ParseFuncOrColumn et al use less stack,\nand hope that it avoids the problem. 
It's a bit insane that we use this\nmuch.\n\n\nWe don't have to go there in this case, but I've before wondered about\nadding helpers that use an on-stack var for small allocations, and falls\nback to palloc otherwise. Something boiling down to:\n\n Oid actual_arg_types_stack[3];\n Oid *actual_arg_types;\n\n if (list_length(fargs) <= lengthof(actual_arg_types_stack))\n actual_arg_types = actual_arg_types_stack;\n else\n actual_arg_types = palloc(sizeof(*actual_arg_types) * list_length(fargs))\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Sun, 5 Apr 2020 11:57:49 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: CLOBBER_CACHE_ALWAYS regression instability" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> Another avenue could be to make ParseFuncOrColumn et al use less stack,\n> and hope that it avoids the problem. It's a bit insane that we use this\n> much.\n\nThat would only reduce the chance of getting a stack overflow there,\nand not by that much, especially not for a CLOBBER_CACHE_ALWAYS animal\nwhich is going to be doing catalog accesses inside there too.\n\n> We don't have to go there in this case, but I've before wondered about\n> adding helpers that use an on-stack var for small allocations, and falls\n> back to palloc otherwise. Something boiling down to:\n\nSeems like that adds a lot of potential for memory leakage?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sun, 05 Apr 2020 15:04:30 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: CLOBBER_CACHE_ALWAYS regression instability" }, { "msg_contents": "Hi,\n\nOn 2020-04-05 15:04:30 -0400, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > Another avenue could be to make ParseFuncOrColumn et al use less stack,\n> > and hope that it avoids the problem. 
It's a bit insane that we use this\n> > much.\n> \n> That would only reduce the chance of getting a stack overflow there,\n> and not by that much, especially not for a CLOBBER_CACHE_ALWAYS animal\n> which is going to be doing catalog accesses inside there too.\n\nIt'd certainly not be bullet proof. But I don't think we ever were? If I\nunderstood you correctly we were just not noticing the stack overflow\ndanger before? We did catalog accesses from within there before too,\nthat's not changed by the addition of equal(), no?\n\n\nReminds me: I'll try to dust up my patch to make cache invalidation\nprocessing non-recursive for 14 (I wrote an initial version as part of a\nbugfix that we ended up fixing differently). Besides making\nCLOBBER_CACHE_ALWAYS vastly less expensive, it also reduces the cost of\nlogical decoding substantially.\n\n\n> > We don't have to go there in this case, but I've before wondered about\n> > adding helpers that use an on-stack var for small allocations, and falls\n> > back to palloc otherwise. Something boiling down to:\n> \n> Seems like that adds a lot of potential for memory leakage?\n\nDepends on the case, I'd say. Certainly might be useful to add a helper\nfor a corresponding conditional free.\n\nFor parsing cases like this it could be better to bulk free at the\nend. Compared to the memory needed for all the transformed arguments etc\nit'd probably not matter in the short term (especially if only done for\n4+ args).\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Sun, 5 Apr 2020 12:21:30 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: CLOBBER_CACHE_ALWAYS regression instability" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2020-04-05 14:33:19 -0400, Tom Lane wrote:\n>> 1. 
Change the test to do \"\\set VERBOSITY sqlstate\" so that all that\n>> is printed is\n>> ERROR: 54001\n>> ERRCODE_STATEMENT_TOO_COMPLEX is used in few enough places that\n>> this wouldn't be too much of a loss of specificity. (Or we could\n>> give stack overflow its very own ERRCODE.)\n\n> We could print the error using :LAST_ERROR_MESSAGE after removing a\n> potential trailing \"at character ...\" if we're worried about the loss of\n> specificity.\n\nOh, actually it seems that :LAST_ERROR_MESSAGE is already just the\nprimary message, without any \"at character N\" addon, so this would be\na very easy way to ameliorate that complaint. (\"at character N\" is\nadded by libpq's pqBuildErrorMessage3 in TERSE mode, but psql does\nnot use that when filling LAST_ERROR_MESSAGE.)\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sun, 05 Apr 2020 15:38:29 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: CLOBBER_CACHE_ALWAYS regression instability" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2020-04-05 15:04:30 -0400, Tom Lane wrote:\n>> That would only reduce the chance of getting a stack overflow there,\n>> and not by that much, especially not for a CLOBBER_CACHE_ALWAYS animal\n>> which is going to be doing catalog accesses inside there too.\n\n> It'd certainly not be bullet proof. But I don't think we ever were? If I\n> understood you correctly we were just not noticing the stack overflow\n> danger before? We did catalog accesses from within there before too,\n> that's not changed by the addition of equal(), no?\n\nAh, you're right, the CCA aspect is not such a problem as long as there\nare not check_stack_depth() calls inside the code that's run to load a\nsyscache or relcache entry. Which there probably aren't, at least not\nfor system catalogs. The reason we're seeing this on a CCA animal is\nsimply that a cache flush has occurred to force recomputeNamespacePath\nto do some work. 
(In theory it could happen on a non-CCA animal, given\nunlucky timing of an sinval overrun.)\n\nMy point here though is that it's basically been blind luck that we've\nnot seen this before. There's certainly no good reason to assume that\na check_stack_depth() call shouldn't happen while parsing a function\ncall, or within some other chunk of the parser that happens to set up\na transient error-position callback. And it's only going to get more\nlikely in future, seeing for example my ambitions to extend the\nexecutor so that run-time expression failures can also report error\ncursors. So I think that we should be looking for a permanent fix,\nnot a reduce-the-odds band-aid.\n\n>> Seems like that adds a lot of potential for memory leakage?\n\n> Depends on the case, I'd say. Certainly might be useful to add a helper\n> for a corresponding conditional free.\n> For parsing cases like this it could be better to bulk free at the\n> end. Compared to the memory needed for all the transformed arguments etc\n> it'd probably not matter in the short term (especially if only done for\n> 4+ args).\n\nWhat I wish we had was alloca(), so you don't need a FUNC_MAX_ARGS-sized\narray to parse a two-argument function call. Too bad C99 didn't add\nthat. (But some sniffing around suggests that an awful lot of systems\nhave it anyway ... even MSVC. 
Hmmm.)\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sun, 05 Apr 2020 15:52:35 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: CLOBBER_CACHE_ALWAYS regression instability" }, { "msg_contents": "On 2020-04-05 15:38:29 -0400, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > We could print the error using :LAST_ERROR_MESSAGE after removing a\n> > potential trailing \"at character ...\" if we're worried about the loss of\n> > specificity.\n> \n> Oh, actually it seems that :LAST_ERROR_MESSAGE is already just the\n> primary message, without any \"at character N\" addon, so this would be\n> a very easy way to ameliorate that complaint. (\"at character N\" is\n> added by libpq's pqBuildErrorMessage3 in TERSE mode, but psql does\n> not use that when filling LAST_ERROR_MESSAGE.)\n\nHeh. I though it worked differently because I just had typed some\ngibberish and got an error in :LAST_ERROR_MESSAGE that ended in \"at or\nnear ...\". But that's scan.l adding that explicitly...\n\n\n", "msg_date": "Sun, 5 Apr 2020 13:44:04 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: CLOBBER_CACHE_ALWAYS regression instability" }, { "msg_contents": "On 2020-Apr-05, Tom Lane wrote:\n\n> What I wish we had was alloca(), so you don't need a FUNC_MAX_ARGS-sized\n> array to parse a two-argument function call. Too bad C99 didn't add\n> that. (But some sniffing around suggests that an awful lot of systems\n> have it anyway ... even MSVC. Hmmm.)\n\nIsn't it the case that you can create an inner block with a constant\nwhose size is determined by a containing block's variable? 
I mean as in\nthe attached, which refuses to compile because of our -Werror=vla -- but\nif I remove it, it compiles fine and works in my system.\n\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Sun, 5 Apr 2020 19:54:19 -0400", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: CLOBBER_CACHE_ALWAYS regression instability" }, { "msg_contents": "Hi,\n\nOn 2020-04-05 19:54:19 -0400, Alvaro Herrera wrote:\n> Isn't it the case that you can create an inner block with a constant\n> whose size is determined by a containing block's variable? I mean as in\n> the attached, which refuses to compile because of our -Werror=vla -- but\n> if I remove it, it compiles fine and works in my system.\n\nIIRC msvc doesn't support VLAs. And there's generally a slow push\ntowards deprecating them (they've e.g. been moved to optional in C11).\n\nThey don't tend to make a lot of sense for sizes that aren't tightly\nbound. In contrast to palloc etc, there's no good way to catch\nallocation errors. Most of the time you'll just get a SIGBUS or such,\nbut sometimes you'll just end up overwriting data (if the allocation is\nlarge enough to not touch the guard pages).\n\nBoth alloca/vlas also add some per-call overhead.\n\nAllocating the common size on-stack, and the uncommon ones on heap\nshould be cheaper, and handles the cases of large allocations much\nbetter.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Sun, 5 Apr 2020 17:07:15 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: CLOBBER_CACHE_ALWAYS regression instability" } ]
[ { "msg_contents": "The following bug has been logged on the website:\n\nBug reference: 16345\nLogged by: Augustinas Jokubauskas\nEmail address: doitsimplefy@gmail.com\nPostgreSQL version: 12.0\nOperating system: Ubuntu 18.04.3\nDescription: \n\nWhen query:\r\n\r\nselect ts_headline(\r\n $$Lorem ipsum urna. Nullam nullam ullamcorper urna.$$,\r\n to_tsquery('Lorem') && phraseto_tsquery('ullamcorper urna'),\r\n 'StartSel=#$#, StopSel=#$#, FragmentDelimiter=$#$, MaxFragments=100,\r\nMaxWords=100, MinWords=1'\r\n);\r\n\r\nis ran, a fragment of\r\n> Lorem ipsum urna. Nullam nullam ullamcorper urna.\r\nshould be returned, however, the result is a single word of #$#Lorem#$# is\r\nreturned, meaning that ts_headline did not find the queried string.", "msg_date": "Sun, 05 Apr 2020 21:49:23 +0000", "msg_from": "PG Bug reporting form <noreply@postgresql.org>", "msg_from_op": true, "msg_subject": "BUG #16345: ts_headline does not find phrase matches correctly" }, { "msg_contents": "PG Bug reporting form <noreply@postgresql.org> writes:\n> When query:\n> select ts_headline(\n> $$Lorem ipsum urna. Nullam nullam ullamcorper urna.$$,\n> to_tsquery('Lorem') && phraseto_tsquery('ullamcorper urna'),\n> 'StartSel=#$#, StopSel=#$#, FragmentDelimiter=$#$, MaxFragments=100,\n> MaxWords=100, MinWords=1'\n> );\n> is ran, a fragment of\n>> Lorem ipsum urna. Nullam nullam ullamcorper urna.\n> should be returned, however, the result is a single word of #$#Lorem#$# is\n> returned, meaning that ts_headline did not find the queried string.\n\nYeah. I spent some time digging into this, and was reminded of how\nmiserably baroque and undercommented almost all of the text-search code\nis. Anyway, the bottom line is that wparser_def.c's hlCover() is\nfailing here. It is looking for a minimal cover, that is a substring\nsatisfying the given tsquery, and obviously with this query the only\nsuch substring is the whole sentence. 
(Well, not the trailing period.)\nHowever, it looks like hlCover() wasn't updated when we added phrase\nsearch, because that made word position significant and invalidated\na rather fundamental assumption that hlCover() seems to be making:\nit figures that any substring including all words of the query ought\nto be good enough. Since \"urna\" also appears as the third word of the\ntext, hlCover() doesn't try any substrings longer than \"Lorem ipsum\nurna. Nullam nullam ullamcorper\", thus it never finds one that\nactually satisfies the query, and we end up failing.\n\nAlthough utterly undocumented, the algorithm it is using seems to be\n\"find the latest first occurrence of any query word (here,\n'ullamcorper'), then find the earliest last occurrence of any query\nword before that (hence, 'Lorem'), and then see if the substring\nbetween those points satisfies the query (oops, nope). If not,\nstart over from a point one past the previous try.\" But all the\ntries after that will omit 'Lorem', so they all fail to match the\nquery, even though it'll eventually try substrings that include\nthe later occurrence of 'urna'.\n\nI've not spent a huge amount of time thinking about it, but this might\nbe all right as a way to find a shortest-possible cover for queries\ninvolving only AND and OR operators. (It'd fall down on NOT operators\nof course, but it already cheats on that by telling TS_execute to\nignore NOT subtrees.) However, it's blatantly wrong as soon as a\nphrase operator is involved, because then only some occurrences of a\nparticular word in the string might meet the query's requirements.\n\nSo I set out to rewrite hlCover to make fewer assumptions about\nwhat a valid cover could be. In the new version appearing below,\nit just tries every substring that begins and ends with some query\nword, preferring earlier and shorter substrings. This should\ncertainly find the desired cover ... but it didn't work, plus it\nbroke a number of existing regression test cases. 
I was thus forced\nto the realization that its immediate caller mark_hl_words is *also*\nbroken, because it's rejecting good headlines in favor of bad ones,\nor even in favor of headlines that contain no cover at all. Which\nwas a bit daunting, because that's an even larger and uglier chunk of\nundocumented code.\n\nI ended up with the following stepwise approach to improving the\nsituation.\n\n0001 below adds the problematic test case, with the wrong output\nthat HEAD produces. This was basically just to track which changes\naffected that.\n\n0002 makes a bunch of purely-cosmetic changes to mark_hl_words\nand its siblings, in hopes of making it less unintelligible to\nthe next hacker. I added comments, used macros to make some of\nthe hairier if-tests more readable, and changed a couple of\nsmall things for more readability (though they can be proven\nnot to affect the behavior). As expected, regression outputs\ndon't change here.\n\n0003 fixes a couple of fairly obvious bugs. One is that there's\nan early-exit optimization that tries to reject a possible headline\nbefore having fully defined its boundaries. This is not really\nnecessary, but worse it's wrong because the figure of merit might\nchange by the time we've chosen the actual boundaries. Deleting\nit doesn't change any regression test cases, but I'm sure it'd be\npossible to invent a scenario where it does the wrong thing.\nThe other bug is that the loop that tries to shorten a maximum-length\nheadline until it has a good endpoint has an order-of-operations issue:\nit can exit after making adjustments to curlen and poslen that discount\nthe i'th word from the headline, but without changing pose to actually\nexclude that word. 
So we end up with a poslen figure-of-merit that\ndoes not describe the actual headline from posb to pose.\n\nUnfortunately, the one regression test output change caused by 0003\nis clearly for the worse: hlCover successfully finds the cover '1 3'\nfor the query, but now mark_hl_words discards '3' and only highlights\n'1'. What is happening is that '3' fails the \"short word\" test and\nis thereby excluded from the headline. This behavior is clearly what\nthe code intends, but it was accidentally masked before by the\norder-of-operations bug.\n\nI argue that the problem here is that excluding an actual query term\nfrom the headline on the basis of its being short is just stupid.\nThere is much-earlier processing that is charged with excluding\nstop words from tsqueries, and this code has no business second-\nguessing that. So the short-word test should only be applied to\ntext words that did not appear in the tsquery.\n\nHence, 0004 rejiggers the new BADENDPOINT() macro so that query\nterms are never considered bad endpoints. That fixes the '1 <-> 3'\ntest case broken by 0003, but now there's a new diff: matching\n'1 2 3 1 3' to '1 & 3' now selects only '1 2 3' as the headline\nnot '1 2 3 1'. I do not think this is a bug though. The reason\nwe got '1 2 3 1' before is that hlCover selected the cover '1 2 3'\nbut then mark_hl_words decided '3' was a bad endpoint and extended\nthe headline by one word. (It would've extended more, because '1'\nis also a bad endpoint, except MaxWords=4 stopped it.) 
Again,\ntreating '3' as a bad endpoint is just silly, so I think this change\nis acceptable.\n\nNext, 0005 rearranges the preference order for different possible\nheadlines so that the first preference item is whether or not the\nheadline includes the cover string initially found by hlCover.\n(It might not, if the cover string was longer than MaxWords.)\nIt seems to me to be dumb to prefer headlines that don't meet that\nrequirement to ones that do, because shorter headlines might not\nsatisfy the user's query, which surely fails to satisfy the principle\nof least astonishment. (While I'm not entirely sure what can be\nsaid about the old implementation of hlCover, with my rewrite it\nis *certain* that substrings not including the full cover won't\nsatisfy the query.) 0005 also gets rid of what seems to me to\nbe a corner-case bug in the old preference logic, which is that\nit will take a headline with fewer query words over one with more,\nif the former has a \"good\" endpoint and the latter doesn't. That\nmakes the actual preference order close to unexplainable.\n\n0005 doesn't in itself change any regression results, but it's\nnecessary to prevent problems from appearing with the next patch.\nThe real situation here, as I've come to understand it, is that\nthe existing hlCover code frequently produces only one possible cover\nand thus it doesn't matter how silly are mark_hl_words's rules for\npreferring one over another. The rewrite causes hlCover to produce\nmore candidate covers and so it becomes more important for\nmark_hl_words to make sane decisions.\n\nLastly, 0006 introduces the new hlCover code. This at last fixes\nthe test case for the bug at hand. It also introduces two new diffs.\nOne is this change in one Rime of the Ancient Mariner example:\n\n- <b>painted</b> <b>Ocean</b>. 
+\n- Water, water, every where +\n- And all the boards did shrink;+\n- Water, water, every\n\n+ <b>painted</b> Ship +\n+ Upon a <b>painted</b> <b>Ocean</b>.+\n+ Water, water, every where +\n+ And all the boards did shrink\n\nI don't see any way that that's not an improved match, given that\nthe query is 'painted <-> Ocean'; including another match to one\nof the query words is surely better than not doing so. The other\nchange is that matching '1 2 3 1 3' to '1 <-> 3' now selects '3 1 3'\nnot just '1 3'. These are basically both the same change. The\nreason for it is that the old hlCover would *only* find the cover\n'painted Ocean' (or '1 3'). The new hlCover finds that, but it\nalso finds 'painted ... painted Ocean' (or '3 1 3'), and then the\npreference metric for more query words likes this option better.\n\nSo my feeling is that these changes are for the better and we shouldn't\ncomplain. We could perhaps make them go away if we changed the\npreference rules some more, for example by preferring headlines that\nuse shorter covers instead of (or at least ahead of) those having more\nquery words. But ISTM that would actually be a bigger change from the\ncurrent behavior, so likely it would create new changes in other query\nresults. Besides, there's already a preference for shorter covers in\nhlCover, so I don't feel like we need another one at the calling level.\n\nIn short then, I propose applying 0001-0006. I'm not quite sure\nif we should back-patch, or just be content to fix this in HEAD.\nBut there's definitely an argument that this has been broken since\nwe added phrase search (in 9.6) and deserves to be back-patched.\n\n(BTW, I wonder if we could now undo the hack to ignore NOT\nrestrictions while finding covers. 
I haven't tried it though.)\n\n\t\t\tregards, tom lane", "msg_date": "Wed, 08 Apr 2020 23:02:21 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: BUG #16345: ts_headline does not find phrase matches correctly" }, { "msg_contents": "redirected to hackers.\n\nOn Wed, Apr 8, 2020 at 11:02 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n>\n> In short then, I propose applying 0001-0006. I'm not quite sure\n> if we should back-patch, or just be content to fix this in HEAD.\n> But there's definitely an argument that this has been broken since\n> we added phrase search (in 9.6) and deserves to be back-patched.\n>\n>\nThanks for fixing this.\n\nI am getting a compiler warning, both with and without --enable-cassert.\n\nwparser_def.c: In function 'prsd_headline':\nwparser_def.c:2530:2: warning: 'pose' may be used uninitialized in this\nfunction [-Wmaybe-uninitialized]\n mark_fragment(prs, highlightall, bestb, beste);\n ^\nwparser_def.c:2384:6: note: 'pose' was declared here\n int pose,\n\n\nIt makes no sense to me that pose could be used uninitialized on a line\nthat doesn't use pose at all, so maybe it is a compiler bug or something.\n\nPostgreSQL 13devel-c9b0c67 on x86_64-pc-linux-gnu, compiled by gcc (Ubuntu\n5.4.0-6ubuntu1~16.04.12) 5.4.0 20160609, 64-bit\n\nCheers,\n\nJeff\n", "msg_date": "Thu, 9 Apr 2020 14:39:41 -0400", "msg_from": "Jeff Janes <jeff.janes@gmail.com>", "msg_from_op": false, "msg_subject": "Re: BUG #16345: ts_headline does not find phrase matches correctly" }, { "msg_contents": "Jeff Janes <jeff.janes@gmail.com> writes:\n> I am getting a compiler warning, both with and without --enable-cassert.\n\n> wparser_def.c: In function 'prsd_headline':\n> wparser_def.c:2530:2: warning: 'pose' may be used uninitialized in this\n> function [-Wmaybe-uninitialized]\n> mark_fragment(prs, highlightall, bestb, beste);\n> ^\n> wparser_def.c:2384:6: note: 'pose' was declared here\n> int pose,\n\nI see it too, now that I try a different compiler version. Will fix.\n\n> It makes no sense to me that pose could be used uninitialized on a line\n> that doesn't use pose at all, so maybe it is a compiler bug or something.\n\nIt looks like the compiler is doing aggressive inlining, which might\nhave something to do with the crummy error report placement. Notice\nthat this isn't inside 'prsd_headline' at all, so far as the source code\nis concerned.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 09 Apr 2020 15:29:49 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: BUG #16345: ts_headline does not find phrase matches correctly" } ]
[ { "msg_contents": "Hi,\n\nWhen starting with on a data directory with an older WAL page magic we\ncurrently make that hard to debug. E.g.:\n\n2020-04-05 15:31:04.314 PDT [1896669][:0] LOG: database system was shut down at 2020-04-05 15:24:56 PDT\n2020-04-05 15:31:04.314 PDT [1896669][:0] LOG: invalid primary checkpoint record\n2020-04-05 15:31:04.314 PDT [1896669][:0] PANIC: could not locate a valid checkpoint record\n2020-04-05 15:31:04.315 PDT [1896668][:0] LOG: startup process (PID 1896669) was terminated by signal 6: Aborted\n2020-04-05 15:31:04.315 PDT [1896668][:0] LOG: aborting startup due to startup process failure\n2020-04-05 15:31:04.316 PDT [1896668][:0] LOG: database system is shut down\n\nAs far as I can tell this is not just the case for a wrong page magic,\nbut for all page level validation errors.\n\nI think this largely originates in:\n\ncommit 0668719801838aa6a8bda330ff9b3d20097ea844\nAuthor: Heikki Linnakangas <heikki.linnakangas@iki.fi>\nDate: 2018-05-05 01:34:53 +0300\n\n Fix scenario where streaming standby gets stuck at a continuation record.\n\n If a continuation record is split so that its first half has already been\n removed from the master, and is only present in pg_wal, and there is a\n recycled WAL segment in the standby server that looks like it would\n contain the second half, recovery would get stuck. The code in\n XLogPageRead() incorrectly started streaming at the beginning of the\n WAL record, even if we had already read the first page.\n\n Backpatch to 9.4. In principle, older versions have the same problem, but\n without replication slots, there was no straightforward mechanism to\n prevent the master from recycling old WAL that was still needed by standby.\n Without such a mechanism, I think it's reasonable to assume that there's\n enough slack in how many old segments are kept around to not run into this,\n or you have a WAL archive.\n\n Reported by Jonathon Nelson. 
Analysis and patch by Kyotaro HORIGUCHI, with\n some extra comments by me.\n\n Discussion: https://www.postgresql.org/message-id/CACJqAM3xVz0JY1XFDKPP%2BJoJAjoGx%3DGNuOAshEDWCext7BFvCQ%40mail.gmail.com\n\nwhich added the following to XLogPageRead():\n\n+ /*\n+ * Check the page header immediately, so that we can retry immediately if\n+ * it's not valid. This may seem unnecessary, because XLogReadRecord()\n+ * validates the page header anyway, and would propagate the failure up to\n+ * ReadRecord(), which would retry. However, there's a corner case with\n+ * continuation records, if a record is split across two pages such that\n+ * we would need to read the two pages from different sources. For\n+ * example, imagine a scenario where a streaming replica is started up,\n+ * and replay reaches a record that's split across two WAL segments. The\n+ * first page is only available locally, in pg_wal, because it's already\n+ * been recycled in the master. The second page, however, is not present\n+ * in pg_wal, and we should stream it from the master. There is a recycled\n+ * WAL segment present in pg_wal, with garbage contents, however. We would\n+ * read the first page from the local WAL segment, but when reading the\n+ * second page, we would read the bogus, recycled, WAL segment. If we\n+ * didn't catch that case here, we would never recover, because\n+ * ReadRecord() would retry reading the whole record from the beginning.\n+ *\n+ * Of course, this only catches errors in the page header, which is what\n+ * happens in the case of a recycled WAL segment. Other kinds of errors or\n+ * corruption still has the same problem. 
But this at least fixes the\n+ * common case, which can happen as part of normal operation.\n+ *\n+ * Validating the page header is cheap enough that doing it twice\n+ * shouldn't be a big deal from a performance point of view.\n+ */\n+ if (!XLogReaderValidatePageHeader(xlogreader, targetPagePtr, readBuf))\n+ {\n+ /* reset any error XLogReaderValidatePageHeader() might have set */\n+ xlogreader->errormsg_buf[0] = '\\0';\n+ goto next_record_is_invalid;\n+ }\n+\n\nI really can't follow the logic of just intentionally and silently\nthrowing the error message away here. Isn't this basically hiding *all*\npage level error messages?\n\nAnd even in the scenarios where this were the right thing, I feel like\nnot even outputting a debugging message makes debugging situations in\nwhich this is encountered unnecessarily hard.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Sun, 5 Apr 2020 15:49:16 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "WAL page magic errors (and plenty others) got hard to debug." }, { "msg_contents": "Hi,\n\nOn 2020-04-05 15:49:16 -0700, Andres Freund wrote:\n> When starting with on a data directory with an older WAL page magic we\n> currently make that hard to debug. 
E.g.:\n> \n> 2020-04-05 15:31:04.314 PDT [1896669][:0] LOG: database system was shut down at 2020-04-05 15:24:56 PDT\n> 2020-04-05 15:31:04.314 PDT [1896669][:0] LOG: invalid primary checkpoint record\n> 2020-04-05 15:31:04.314 PDT [1896669][:0] PANIC: could not locate a valid checkpoint record\n> 2020-04-05 15:31:04.315 PDT [1896668][:0] LOG: startup process (PID 1896669) was terminated by signal 6: Aborted\n> 2020-04-05 15:31:04.315 PDT [1896668][:0] LOG: aborting startup due to startup process failure\n> 2020-04-05 15:31:04.316 PDT [1896668][:0] LOG: database system is shut down\n> \n> As far as I can tell this is not just the case for a wrong page magic,\n> but for all page level validation errors.\n> \n> I think this largely originates in:\n> \n> commit 0668719801838aa6a8bda330ff9b3d20097ea844\n> Author: Heikki Linnakangas <heikki.linnakangas@iki.fi>\n> Date: 2018-05-05 01:34:53 +0300\n> \n> Fix scenario where streaming standby gets stuck at a continuation record.\n\nHeikki, Kyotaro, it'd be good if you could comment on what motivated\nthis approach. Because it sure as hell hides a lot of useful information\nwhen there's a problem with WAL. Or well, all information.\n\n- Andres\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Tue, 21 Apr 2020 01:08:31 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: WAL page magic errors (and plenty others) got hard to debug." } ]
[ { "msg_contents": "Hello, hackers!\n\nI`m investigating a complains from our clients about archive recovery \nspeed been very slow, and I`ve noticed a really strange and, I think, a \nvery dangerous recovery behavior.\n\nWhen running multi-timeline archive recovery, for every requested segno \nstartup process iterates through every timeline in restore target \ntimeline history, starting from highest timeline and ending in current, \nand tries to fetch the segno in question from this timeline.\n\nConsider the following example.\nTimelines:\nARCHIVE INSTANCE 'node'\n================================================================================================================================ \n\n  TLI  Parent TLI  Switchpoint  Min Segno                 Max \nSegno                 N segments  Size   Zratio  N backups  Status\n================================================================================================================================ \n\n  3    2           0/AEFFEDE0   0000000300000000000000AE \n0000000300000000000000D5  40          41MB   15.47   0 OK\n  2    1           0/A08768D0   0000000200000000000000A0 \n0000000200000000000000AE  15          14MB   17.24   0 OK\n  1    0           0/0          000000010000000000000001 \n0000000100000000000000BB  187         159MB  18.77   1          OK\n\n\nBackup:\n================================================================================================================================ \n\n  Instance  Version  ID      Recovery Time           Mode  WAL Mode  \nTLI  Time  Data   WAL  Zratio  Start LSN  Stop LSN   Status\n================================================================================================================================ \n\n  node      11       Q8C8IH  2020-04-06 02:13:31+03  FULL ARCHIVE 1/0    \n3s  23MB  16MB    1.00  0/2000028  0/30000B8  OK\n\n\nSo when we are trying to restore this backup, located on Timeline 1, to \nthe restore target on Timeline 3, we are getting 
this in the PostgreSQL \nlog:\n....\n2020-04-05 23:24:36 GMT [28508]: [5-1] LOG:  restored log file \n\"00000003.history\" from archive\nINFO: PID [28511]: pg_probackup archive-get WAL file: \n000000030000000000000002, remote: none, threads: 1/1, batch: 20\nERROR: PID [28511]: pg_probackup archive-get failed to deliver WAL file \n000000030000000000000002, prefetched: 0/20, time elapsed: 0ms\nINFO: PID [28512]: pg_probackup archive-get WAL file: \n000000020000000000000002, remote: none, threads: 1/1, batch: 20\nERROR: PID [28512]: pg_probackup archive-get failed to deliver WAL file \n000000020000000000000002, prefetched: 0/20, time elapsed: 0ms\nINFO: PID [28513]: pg_probackup archive-get WAL file: \n000000010000000000000002, remote: none, threads: 1/1, batch: 20\nINFO: PID [28513]: pg_probackup archive-get copied WAL file \n000000010000000000000002\n2020-04-05 23:24:36 GMT [28508]: [6-1] LOG:  restored log file \n\"000000010000000000000002\" from archive\n...\n\nBefore requesting 000000010000000000000002 recovery tries to fetch \n000000030000000000000002 and 000000020000000000000002 and that goes for \nevery segment, restored from the archive.\nThis tremendously slows down recovery speed, especially if archive is \nlocated on remote machine with high latency network.\nAnd it also may lead to feeding recovery with wrong WAL segment, located \non the next timeline.\n\nIs there a reason behind this behavior?\n\nAlso I`ve  attached a patch, which fixed this issue for me, but I`m not \nsure, that chosen approach is sound and didn`t break something.\n\n-- \nGrigory Smolkin\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company", "msg_date": "Mon, 6 Apr 2020 03:02:37 +0300", "msg_from": "Grigory Smolkin <g.smolkin@postgrespro.ru>", "msg_from_op": true, "msg_subject": "archive recovery fetching wrong segments" }, { "msg_contents": "Hi Grigory,\n\nOn 4/5/20 8:02 PM, Grigory Smolkin wrote:\n> Hello, hackers!\n> \n> I`m investigating a complains from 
our clients about archive recovery \n> speed been very slow, and I`ve noticed a really strange and, I think, a \n> very dangerous recovery behavior.\n> \n> When running multi-timeline archive recovery, for every requested segno \n> startup process iterates through every timeline in restore target \n> timeline history, starting from highest timeline and ending in current, \n> and tries to fetch the segno in question from this timeline.\n\n<snip>\n\n> Is there a reason behind this behavior?\n> \n> Also I`ve  attached a patch, which fixed this issue for me, but I`m not \n> sure, that chosen approach is sound and didn`t break something.\n\nThis sure looks like [1] which has a completed patch nearly ready to \ncommit. Can you confirm and see if the proposed patch looks good?\n\n-- \n-David\ndavid@pgmasters.net\n\n[1] \nhttps://www.postgresql.org/message-id/flat/792ea085-95c4-bca0-ae82-47fdc80e146d%40oss.nttdata.com#800f005e01af6cb3bfcd70c53007a2db\n\n\n", "msg_date": "Mon, 6 Apr 2020 14:17:52 -0400", "msg_from": "David Steele <david@pgmasters.net>", "msg_from_op": false, "msg_subject": "Re: archive recovery fetching wrong segments" }, { "msg_contents": "\nOn 4/6/20 9:17 PM, David Steele wrote:\n> Hi Grigory,\n\nHello!\n>\n> On 4/5/20 8:02 PM, Grigory Smolkin wrote:\n>> Hello, hackers!\n>>\n>> I`m investigating a complains from our clients about archive recovery \n>> speed been very slow, and I`ve noticed a really strange and, I think, \n>> a very dangerous recovery behavior.\n>>\n>> When running multi-timeline archive recovery, for every requested \n>> segno startup process iterates through every timeline in restore \n>> target timeline history, starting from highest timeline and ending in \n>> current, and tries to fetch the segno in question from this timeline.\n>\n> <snip>\n>\n>> Is there a reason behind this behavior?\n>>\n>> Also I`ve  attached a patch, which fixed this issue for me, but I`m \n>> not sure, that chosen approach is sound and didn`t break 
something.\n>\n> This sure looks like [1] which has a completed patch nearly ready to \n> commit. Can you confirm and see if the proposed patch looks good?\n\nWell I`ve been testing it all day and so far nothing is broken.\n\n\nBut this foreach(xlog.c:3777) loop looks very strange to me, it is not \nrobust, we are blindly going over timelines and feeding recovery some \nfiles, hoping they are the right ones. I think we can do better, because:\n1. we know whether or not we are running multi-timeline recovery\n2. we know next timeline ID and can calculate switchpoint segment\n3. make an informed decision about from what timeline we must requesting \nfiles now.\n\nI will work on it.\n\n-- \nGrigory Smolkin\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company\n\n\n\n", "msg_date": "Mon, 6 Apr 2020 22:23:55 +0300", "msg_from": "Grigory Smolkin <g.smolkin@postgrespro.ru>", "msg_from_op": true, "msg_subject": "Re: archive recovery fetching wrong segments" }, { "msg_contents": "On 4/6/20 3:23 PM, Grigory Smolkin wrote:\n> \n> On 4/6/20 9:17 PM, David Steele wrote:\n>> Hi Grigory,\n> \n> Hello!\n>>\n>> On 4/5/20 8:02 PM, Grigory Smolkin wrote:\n>>> Hello, hackers!\n>>>\n>>> I`m investigating a complains from our clients about archive recovery \n>>> speed been very slow, and I`ve noticed a really strange and, I think, \n>>> a very dangerous recovery behavior.\n>>>\n>>> When running multi-timeline archive recovery, for every requested \n>>> segno startup process iterates through every timeline in restore \n>>> target timeline history, starting from highest timeline and ending in \n>>> current, and tries to fetch the segno in question from this timeline.\n>>\n>> <snip>\n>>\n>>> Is there a reason behind this behavior?\n>>>\n>>> Also I`ve  attached a patch, which fixed this issue for me, but I`m \n>>> not sure, that chosen approach is sound and didn`t break something.\n>>\n>> This sure looks like [1] which has a completed patch nearly ready to \n>> 
commit. Can you confirm and see if the proposed patch looks good?\n> \n> Well I`ve been testing it all day and so far nothing is broken.\n\nPerhaps I wasn't clear. There is a patch in this thread:\n\nhttps://www.postgresql.org/message-id/flat/792ea085-95c4-bca0-ae82-47fdc80e146d%40oss.nttdata.com#800f005e01af6cb3bfcd70c53007a2db\n\nwhich seems to address the same issue and is ready to be committed.\n\nI'd suggest you have a look at that patch and see if it fixes your issue.\n\nRegards,\n-- \n-David\ndavid@pgmasters.net\n\n\n", "msg_date": "Mon, 6 Apr 2020 15:51:45 -0400", "msg_from": "David Steele <david@pgmasters.net>", "msg_from_op": false, "msg_subject": "Re: archive recovery fetching wrong segments" }, { "msg_contents": "\nOn 4/6/20 10:51 PM, David Steele wrote:\n> On 4/6/20 3:23 PM, Grigory Smolkin wrote:\n>>\n>> On 4/6/20 9:17 PM, David Steele wrote:\n>>> Hi Grigory,\n>>\n>> Hello!\n>>>\n>>> On 4/5/20 8:02 PM, Grigory Smolkin wrote:\n>>>> Hello, hackers!\n>>>>\n>>>> I`m investigating a complains from our clients about archive \n>>>> recovery speed been very slow, and I`ve noticed a really strange \n>>>> and, I think, a very dangerous recovery behavior.\n>>>>\n>>>> When running multi-timeline archive recovery, for every requested \n>>>> segno startup process iterates through every timeline in restore \n>>>> target timeline history, starting from highest timeline and ending \n>>>> in current, and tries to fetch the segno in question from this \n>>>> timeline.\n>>>\n>>> <snip>\n>>>\n>>>> Is there a reason behind this behavior?\n>>>>\n>>>> Also I`ve  attached a patch, which fixed this issue for me, but I`m \n>>>> not sure, that chosen approach is sound and didn`t break something.\n>>>\n>>> This sure looks like [1] which has a completed patch nearly ready to \n>>> commit. Can you confirm and see if the proposed patch looks good?\n>>\n>> Well I`ve been testing it all day and so far nothing is broken.\n>\n> Perhaps I wasn't clear. 
There is a patch in this thread:\n>\n> https://www.postgresql.org/message-id/flat/792ea085-95c4-bca0-ae82-47fdc80e146d%40oss.nttdata.com#800f005e01af6cb3bfcd70c53007a2db \n>\n>\n> which seems to address the same issue and is ready to be committed.\n>\n> I'd suggest you have a look at that patch and see if it fixes your issue.\n\nOps, I`ve missed it.\nThank you, I will look into it.\n\n\n>\n> Regards,\n\n-- \nGrigory Smolkin\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company\n\n\n\n", "msg_date": "Mon, 6 Apr 2020 23:46:01 +0300", "msg_from": "Grigory Smolkin <g.smolkin@postgrespro.ru>", "msg_from_op": true, "msg_subject": "Re: archive recovery fetching wrong segments" } ]
[ { "msg_contents": "Hi,\n\nUsing 2PC with master very quickly leads to:\n\n2020-04-05 19:42:18.368 PDT [2298126][5/2009:0] LOG: out of file descriptors: Too many open files; release and retry\n2020-04-05 19:42:18.368 PDT [2298126][5/2009:0] STATEMENT: COMMIT PREPARED 'ptx_2';\n\nThis started with:\n\ncommit 0dc8ead46363fec6f621a12c7e1f889ba73b55a9 (HEAD -> master)\nAuthor: Alvaro Herrera <alvherre@alvh.no-ip.org>\nDate: 2019-11-25 15:04:54 -0300\n\n Refactor WAL file-reading code into WALRead()\n\n\nI found this while trying to benchmark the effect of my snapshot changes\non 2pc. I just used the attached pgbench file.\n\nandres@awork3:~/build/postgres/dev-assert/vpath$ pgbench -n -s 500 -c 4 -j 4 -T 100000 -P1 -f ~/tmp/pgbench-write-2pc.sql\nprogress: 1.0 s, 3723.8 tps, lat 1.068 ms stddev 0.305\nclient 2 script 0 aborted in command 8 query 0: ERROR: could not seek to end of file \"base/14036/16396\": Too many open files\nclient 1 script 0 aborted in command 8 query 0: ERROR: could not seek to end of file \"base/14036/16396\": Too many open files\nclient 3 script 0 aborted in command 8 query 0: ERROR: could not seek to end of file \"base/14036/16396\": Too many open files\nclient 0 script 0 aborted in command 8 query 0: ERROR: could not seek to end of file \"base/14036/16396\": Too many open files\ntransaction type: /home/andres/tmp/pgbench-write-2pc.sql\n\nI've not yet reviewed the change sufficiently to pinpoint the issue.\n\n\nIt's a bit sad that nobody has hit this in the last few months :(.\n\nGreetings,\n\nAndres Freund", "msg_date": "Sun, 5 Apr 2020 19:56:51 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "2pc leaks fds" }, { "msg_contents": "On Sun, Apr 05, 2020 at 07:56:51PM -0700, Andres Freund wrote:\n> I found this while trying to benchmark the effect of my snapshot changes\n> on 2pc. 
I just used the attached pgbench file.\n> \n> I've not yet reviewed the change sufficiently to pinpoint the issue.\n\nIndeed. It takes seconds to show up.\n\n> It's a bit sad that nobody has hit this in the last few months :(.\n\n2PC shines with the code of xlogreader.c in this case because it keeps\nopening and closing XLogReaderState for a short amount of time. So it\nis not surprising to me to see this error only months after the fact\nbecause recovery or pg_waldump just use one XLogReaderState. From\nwhat I can see, the error is that the code only bothers closing\nWALOpenSegment->seg when switching to a new segment, but we need also\nto close it when finishing the business in XLogReaderFree().\n\nI am adding an open item, and attached is a patch to take care of the\nproblem. Thoughts?\n--\nMichael", "msg_date": "Mon, 6 Apr 2020 14:26:48 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: 2pc leaks fds" }, { "msg_contents": "Hi,\n\nOn 2020-04-06 14:26:48 +0900, Michael Paquier wrote:\n> 2PC shines with the code of xlogreader.c in this case because it keeps\n> opening and closing XLogReaderState for a short amount of time. 
So it\n> is not surprising to me to see this error only months after the fact\n> because recovery or pg_waldump just use one XLogReaderState.\n\nWell, it doesn't exactly signal that people (including me, up to just\nnow) are testing their changes all that carefully...\n\n\n> From what I can see, the error is that the code only bothers closing\n> WALOpenSegment->seg when switching to a new segment, but we need also\n> to close it when finishing the business in XLogReaderFree().\n\nYea, I came to the same conclusion and locally fixed it the same way\n(except having the close a bit earlier in XLogReaderFree()).\n\n\n> diff --git a/src/backend/access/transam/xlogreader.c b/src/backend/access/transam/xlogreader.c\n> index f3fea5132f..7e25e2050a 100644\n> --- a/src/backend/access/transam/xlogreader.c\n> +++ b/src/backend/access/transam/xlogreader.c\n> @@ -144,6 +144,9 @@ XLogReaderFree(XLogReaderState *state)\n> \tif (state->main_data)\n> \t\tpfree(state->main_data);\n> \n> +\tif (state->seg.ws_file >= 0)\n> +\t\tclose(state->seg.ws_file);\n> +\n> \tpfree(state->errormsg_buf);\n> \tif (state->readRecordBuf)\n> \t\tpfree(state->readRecordBuf);\n\nBut I'm not sure it's quite the right idea. I'm not sure I fully\nunderstand the design of 0dc8ead46, but it looks to me like it's\nintended to allow users of the interface to have different ways of\nopening files. If we just close() the fd that'd be a bit more limited.\n\nOTOH, I think all but one (XLogPageRead()) of the current users of\nXLogReader use WALRead(), which also close()s the fd (before calling the\nWALSegmentOpen callback).\n\n\nThe XLogReader code flow has gotten quite complicated\n:(. 
XLogReaderReadRecord()-> state->read_page() ->\nlogical_read_xlog_page etc -> WALRead() -> wal_segment_open callback etc.\n\nThere's been a fair bit of change, making the interface more generic /\npowerful / reducing duplication, but not a lot of added / adapted\ncomments in the header...\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Sun, 5 Apr 2020 22:44:49 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: 2pc leaks fds" }, { "msg_contents": "Andres Freund <andres@anarazel.de> wrote:\n\n> > From what I can see, the error is that the code only bothers closing\n> > WALOpenSegment->seg when switching to a new segment, but we need also\n> > to close it when finishing the business in XLogReaderFree().\n> \n> Yea, I came to the same conclusion and locally fixed it the same way\n> (except having the close a bit earlier in XLogReaderFree()).\n\nIt's still not quite clear to me why the problem starts to appear after\n0dc8ead46. This patch does not remove any close() call from XLogReaderFree().\n\n> But I'm not sure it's quite the right idea. I'm not sure I fully\n> understand the design of 0dc8ead46, but it looks to me like it's\n> intended to allow users of the interface to have different ways of\n> opening files. If we just close() the fd that'd be a bit more limited.\n\nIt should have allowed users to have different ways to *locate the segment*\nfile. 
The WALSegmentOpen callback could actually return file path instead of\nthe file descriptor and let WALRead() perform the opening/closing, but then\nthe WALRead function would need to be aware whether it is executing in backend\nor in frontend (so it can use the correct function to open/close the file).\n\nI was aware of the problem that the correct function should be used to open\nthe file and that's why this comment was added (although \"mandatory\" would be\nmore suitable than \"preferred\"):\n\n * BasicOpenFile() is the preferred way to open the segment file in backend\n * code, whereas open(2) should be used in frontend.\n */\ntypedef int (*WALSegmentOpen) (XLogSegNo nextSegNo, WALSegmentContext *segcxt,\n\t\t\t\t\t\t\t TimeLineID *tli_p);\n\n-- \nAntonin Houska\nWeb: https://www.cybertec-postgresql.com\n\n\n", "msg_date": "Mon, 06 Apr 2020 09:12:32 +0200", "msg_from": "Antonin Houska <ah@cybertec.at>", "msg_from_op": false, "msg_subject": "Re: 2pc leaks fds" }, { "msg_contents": "Hi,\n\nOn 2020-04-06 09:12:32 +0200, Antonin Houska wrote:\n> Andres Freund <andres@anarazel.de> wrote:\n> \n> > > From what I can see, the error is that the code only bothers closing\n> > > WALOpenSegment->seg when switching to a new segment, but we need also\n> > > to close it when finishing the business in XLogReaderFree().\n> > \n> > Yea, I came to the same conclusion and locally fixed it the same way\n> > (except having the close a bit earlier in XLogReaderFree()).\n> \n> It's still not quite clear to me why the problem starts to appear after\n> 0dc8ead46. 
This patch does not remove any close() call from XLogReaderFree().\n\nBefore that change the file was also kind of leaked, but would use the\nsame static variable to store the fd and thus close it.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 6 Apr 2020 00:16:10 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: 2pc leaks fds" }, { "msg_contents": "Hi,\n\nI pushed a fix. While it might not be the best medium/long term fix, it\nunbreaks 2PC. Perhaps we should add an open item to track whether we\nwant to fix this differently?\n\n\nOn 2020-04-06 09:12:32 +0200, Antonin Houska wrote:\n> Andres Freund <andres@anarazel.de> wrote:\n> It should have allowed users to have different ways to *locate the segment*\n> file. The WALSegmentOpen callback could actually return file path instead of\n> the file descriptor and let WALRead() perform the opening/closing, but then\n> the WALRead function would need to be aware whether it is executing in backend\n> or in frontend (so it can use the correct function to open/close the file).\n> \n> I was aware of the problem that the correct function should be used to open\n> the file and that's why this comment was added (although \"mandatory\" would be\n> more suitable than \"preferred\"):\n> \n> * BasicOpenFile() is the preferred way to open the segment file in backend\n> * code, whereas open(2) should be used in frontend.\n> */\n> typedef int (*WALSegmentOpen) (XLogSegNo nextSegNo, WALSegmentContext *segcxt,\n> \t\t\t\t\t\t\t TimeLineID *tli_p);\n\nI don't think that BasicOpenFile() really solves anything here? If\nanything it *exascerbates* the problem, because it will trigger all of\nthe \"virtual file descriptors\" for already opened Files to close() the\nunderlying OS FDs. 
So not even a fully cached table can be seqscanned,\nbecause that tries to check the file size...\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Tue, 7 Apr 2020 17:12:49 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: 2pc leaks fds" }, { "msg_contents": "On Tue, Apr 07, 2020 at 05:12:49PM -0700, Andres Freund wrote:\n> I pushed a fix. While it might not be the best medium/long term fix, it\n> unbreaks 2PC. Perhaps we should add an open item to track whether we\n> want to fix this differently?\n\nSounds fine to me. I have updated the open item that we have now by\nadding a comment that the leak has been fixed by 91c4054, but that\nwe should revisit the refactoring.\n--\nMichael", "msg_date": "Wed, 8 Apr 2020 15:29:59 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: 2pc leaks fds" }, { "msg_contents": "Antonin Houska <ah@cybertec.at> wrote:\n\n> Andres Freund <andres@anarazel.de> wrote:\n> > But I'm not sure it's quite the right idea. I'm not sure I fully\n> > understand the design of 0dc8ead46, but it looks to me like it's\n> > intended to allow users of the interface to have different ways of\n> > opening files. If we just close() the fd that'd be a bit more limited.\n> \n> It should have allowed users to have different ways to *locate the segment*\n> file. The WALSegmentOpen callback could actually return file path instead of\n> the file descriptor and let WALRead() perform the opening/closing, but then\n> the WALRead function would need to be aware whether it is executing in backend\n> or in frontend (so it can use the correct function to open/close the file).\n\nWell, #ifdef FRONTEND can be used to distinguish the caller of\nWALRead(). However now that I tried to adjust the API, I see that\npg_waldump.c:WALDumpOpenSegment uses specific logic to open the file. 
So if\nthe callback only returned the file name, there would be no suitable place for\nthe things that WALDumpOpenSegment does.\n\n-- \nAntonin Houska\nWeb: https://www.cybertec-postgresql.com\n\n\n", "msg_date": "Wed, 08 Apr 2020 09:26:37 +0200", "msg_from": "Antonin Houska <ah@cybertec.at>", "msg_from_op": false, "msg_subject": "Re: 2pc leaks fds" }, { "msg_contents": "Andres Freund <andres@anarazel.de> wrote:\n\n> On 2020-04-06 09:12:32 +0200, Antonin Houska wrote:\n> > Andres Freund <andres@anarazel.de> wrote:\n> > It should have allowed users to have different ways to *locate the segment*\n> > file. The WALSegmentOpen callback could actually return file path instead of\n> > the file descriptor and let WALRead() perform the opening/closing, but then\n> > the WALRead function would need to be aware whether it is executing in backend\n> > or in frontend (so it can use the correct function to open/close the file).\n> > \n> > I was aware of the problem that the correct function should be used to open\n> > the file and that's why this comment was added (although \"mandatory\" would be\n> > more suitable than \"preferred\"):\n> > \n> > * BasicOpenFile() is the preferred way to open the segment file in backend\n> > * code, whereas open(2) should be used in frontend.\n> > */\n> > typedef int (*WALSegmentOpen) (XLogSegNo nextSegNo, WALSegmentContext *segcxt,\n> > \t\t\t\t\t\t\t TimeLineID *tli_p);\n> \n> I don't think that BasicOpenFile() really solves anything here? If\n> anything it *exascerbates* the problem, because it will trigger all of\n> the \"virtual file descriptors\" for already opened Files to close() the\n> underlying OS FDs. So not even a fully cached table can be seqscanned,\n> because that tries to check the file size...\n\nSpecifically for 2PC, isn't it better to close some OS-level FD of an\nunrelated table scan and then succeed than to ERROR immediately? 
Anyway,\n0dc8ead46 hasn't changed this.\n\nI at least admit that the comment should not recommend particular function,\nand that WALRead() should call the appropriate function to close the file,\nrather than always calling close().\n\nCan we just pass the existing FD to the callback as an additional argument,\nand let the callback close it?\n\n-- \nAntonin Houska\nWeb: https://www.cybertec-postgresql.com\n\n\n", "msg_date": "Wed, 08 Apr 2020 10:00:21 +0200", "msg_from": "Antonin Houska <ah@cybertec.at>", "msg_from_op": false, "msg_subject": "Re: 2pc leaks fds" }, { "msg_contents": "I have tested with and without the commit from Andres using the pgbench\nscript (below) provided in the initial email.\n\npgbench -n -s 500 -c 4 -j 4 -T 100000 -P1 -f pgbench-write-2pc.sql\n\nI am not getting the leak anymore, it seems to be holding up pretty well.\n\n\nOn Wed, Apr 8, 2020 at 12:59 PM Antonin Houska <ah@cybertec.at> wrote:\n\n> Andres Freund <andres@anarazel.de> wrote:\n>\n> > On 2020-04-06 09:12:32 +0200, Antonin Houska wrote:\n> > > Andres Freund <andres@anarazel.de> wrote:\n> > > It should have allowed users to have different ways to *locate the\n> segment*\n> > > file. 
The WALSegmentOpen callback could actually return file path\n> instead of\n> > > the file descriptor and let WALRead() perform the opening/closing, but\n> then\n> > > the WALRead function would need to be aware whether it is executing in\n> backend\n> > > or in frontend (so it can use the correct function to open/close the\n> file).\n> > >\n> > > I was aware of the problem that the correct function should be used to\n> open\n> > > the file and that's why this comment was added (although \"mandatory\"\n> would be\n> > > more suitable than \"preferred\"):\n> > >\n> > > * BasicOpenFile() is the preferred way to open the segment file in\n> backend\n> > > * code, whereas open(2) should be used in frontend.\n> > > */\n> > > typedef int (*WALSegmentOpen) (XLogSegNo nextSegNo, WALSegmentContext\n> *segcxt,\n> > > TimeLineID\n> *tli_p);\n> >\n> > I don't think that BasicOpenFile() really solves anything here? If\n> > anything it *exascerbates* the problem, because it will trigger all of\n> > the \"virtual file descriptors\" for already opened Files to close() the\n> > underlying OS FDs. So not even a fully cached table can be seqscanned,\n> > because that tries to check the file size...\n>\n> Specifically for 2PC, isn't it better to close some OS-level FD of an\n> unrelated table scan and then succeed than to ERROR immediately? 
Anyway,\n> 0dc8ead46 hasn't changed this.\n>\n> I at least admit that the comment should not recommend particular function,\n> and that WALRead() should call the appropriate function to close the file,\n> rather than always calling close().\n>\n> Can we just pass the existing FD to the callback as an additional argument,\n> and let the callback close it?\n>\n> --\n> Antonin Houska\n> Web: https://www.cybertec-postgresql.com\n>\n>\n>\n\n-- \nHighgo Software (Canada/China/Pakistan)\nURL : http://www.highgo.ca\nADDR: 10318 WHALLEY BLVD, Surrey, BC\nEMAIL: mailto: ahsan.hadi@highgo.ca\n", "msg_date": "Wed, 8 Apr 2020 14:49:38 +0500", "msg_from": "Ahsan Hadi <ahsan.hadi@gmail.com>", "msg_from_op": false, "msg_subject": "Re: 2pc leaks fds" }, { "msg_contents": "On 2020-Apr-08, Antonin Houska wrote:\n\n> Specifically for 2PC, isn't it better to close some OS-level FD of an\n> unrelated table scan and then succeed than to ERROR immediately? Anyway,\n> 0dc8ead46 hasn't changed this.\n\nI think for full generality of the interface, we pass a \\\"close\\\" callback\nin addition to the \\\"open\\\" callback. But if we were to pass it only for\nWALRead, then there would be no way to call it during XLogReaderFree.\n\nI think the fix Andres applied is okay as far as it goes, but for the\nlong term we may want to change the interface even further -- maybe by\nhaving these functions be part of the XLogReader state struct. 
I have\n this code paged out of my head ATM, but maybe tomorrow I can give it a\n little think.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Tue, 21 Apr 2020 19:54:22 -0400", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: 2pc leaks fds" }, { "msg_contents": "Concretely, I propose to have a new struct like\n\ntypedef struct xlogReaderFuncs\n{\n\tXLogPageReadCB read_page;\n\tXLogSegmentOpenCB open_segment;\n\tXLogSegmentCloseCB close_segment;\n} xlogReaderFuncs;\n\n#define XLOGREADER_FUNCS(...) 
&(xlogReaderFuncs){__VA_ARGS__}\n\nNot sure I quite see the point of that helper macro...\n\n> and then invoke it something like\n> \n> xlogreader = XLogReaderAllocate(wal_segment_size, NULL,\n> XLOGREADER_FUNCS(.readpage = &read_local_xlog_page,\n> .opensegment = &wal_segment_open,\n> .closesegment = &wal_segment_close),\n> NULL);\n> \n> (with suitable definitions for XLogSegmentOpenCB etc) so that the\n> support functions are all available at the xlogreader level, instead of\n> \"open\" being buried at the read-page level. Any additional support\n> functions can be added easily.\n> \n> This would give xlogreader a simpler interface.\n\nMy first reaction was that this looks like it'd make it harder to read\nWAL from memory. But that's not really a problem, since\nopensegment/closesegment don't have to do anything.\n\nI think reducing the levels of indirection around xlogreader would be a\ngood idea. The control flow currently is *really* complicated: With the\npage read callback at the xlogreader level, as well as separate\ncallbacks set from within the page read callback and passed to\nWALRead(). And even though the WALOpenSegment, WALSegmentContext are\nreally private to WALRead, not XLogReader as a whole, they are members\nof XLogReaderState. I think the PG13 changes made it considerably\nharder to understand xlogreader / xlogreader-using code.\n\nNote that the WALOpenSegment callback currently does not have access to\nXLogReaderState->private_data, which I think is a pretty significant new\nrestriction. Afaict it's not nicely possible anymore to have two\nxlogreaders inside the same process that read from different data\ndirectories or other cases where opening the segment requires context\ninformation.\n\n> If people like this, I could make this change for pg13 and avoid\n> changing the API again in pg14.\n\nI'm in favor of doing so. 
Not necessarily primarily to avoid repeated\nAPI changes, but because I don't think the v13 changes went in quite the\nright direction.\n\nISTM that we should:\n- have the three callbacks you mention above\n- change WALSegmentOpen to also get the XLogReaderState\n- add private state to WALOpenSegment, so it can be used even when not\n  accessing data in files / when one needs more information to close the\n  file.\n- disambiguate between WALOpenSegment (struct describing an open\n  segment) and WALSegmentOpen (callback to open a segment) (note that\n  the read page callback uses a *CB naming, why not follow?)\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 22 Apr 2020 11:30:31 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: 2pc leaks fds" }, { "msg_contents": "On 2020-Apr-22, Andres Freund wrote:\n\n> On 2020-04-22 13:57:54 -0400, Alvaro Herrera wrote:\n> > Concretely, I propose to have a new struct like\n> > \n> > typedef struct xlogReaderFuncs\n> > {\n> > \tXLogPageReadCB read_page;\n> > \tXLogSegmentOpenCB open_segment;\n> > \tXLogSegmentCloseCB close_segment;\n> > } xlogReaderFuncs;\n> > \n> > #define XLOGREADER_FUNCS(...) 
&(xlogReaderFuncs){__VA_ARGS__}\n> \n> Not sure I quite see the point of that helper macro...\n\nAvoid the ugly cast -- same discussion we had for ARCHIVE_OPTS in\npg_dump code in commit f831d4accda0.\n\n\n> ISTM that we should:\n> - have the three callbacks you mention above\n> - change WALSegmentOpen to also get the XLogReaderState\n> - add private state to WALOpenSegment, so it can be used even when not\n> accessing data in files / when one needs more information to close the\n> file.\n> - disambiguate between WALOpenSegment (struct describing an open\n> segment) and WALSegmentOpen (callback to open a segment) (note that\n> the read page callback uses a *CB naming, why not follow?)\n\nSounds good.\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Wed, 22 Apr 2020 15:07:38 -0400", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: 2pc leaks fds" }, { "msg_contents": "On 2020-Apr-22, Andres Freund wrote:\n\n> I'm in favor of doing so. Not necessarily primarily to avoid repeated\n> API changes, but because I don't think the v13 changes went in the quite\n> right direction.\n> \n> ISTM that we should:\n> - have the three callbacks you mention above\n> - change WALSegmentOpen to also get the XLogReaderState\n> - add private state to WALOpenSegment, so it can be used even when not\n> accessing data in files / when one needs more information to close the\n> file.\n> - disambiguate between WALOpenSegment (struct describing an open\n> segment) and WALSegmentOpen (callback to open a segment) (note that\n> the read page callback uses a *CB naming, why not follow?)\n\nHere's a first attempt at that. The segment_open/close callbacks are\nnow given at XLogReaderAllocate time, and are passed the XLogReaderState\npointer. 
I wrote a comment to explain that the page_read callback can\nuse WALRead() if it wishes to do so; but if it does, then segment_open\nhas to be provided. segment_close is mandatory (since we call it at\nXLogReaderFree).\n\nOf the half a dozen cases that exist, three are slightly weird:\n\n* Physical walsender does not use a xlogreader at all. I think we could\n beat that code up so that it does. But for the moment I just cons up\n a fake xlogreader, which only has the segment_open pointer set up, so\n that it can call WALRead.\n\n* main xlog.c uses an xlogreader with XLogPageRead(), which does not use\n WALRead. Therefore it does not pass open_segment. It does not use\n xlogreader->seg.ws_file either. Eventually we may want to beat this\n one up also.\n\n* pg_rewind has its own page read callback, SimpleXLogPageRead, which\n does all the required opening and closing. I don't think it'd be an\n improvement to force this to use segment_open. Oddly enough, it calls\n itself \"simple\" but is unique in having the ability to read files from\n the wal archive.\n\nAll tests are passing for me.\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Thu, 23 Apr 2020 19:16:03 -0400", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: 2pc leaks fds" }, { "msg_contents": "At Thu, 23 Apr 2020 19:16:03 -0400, Alvaro Herrera <alvherre@2ndquadrant.com> wrote in \n> On 2020-Apr-22, Andres Freund wrote:\n> \n> > I'm in favor of doing so. 
Not necessarily primarily to avoid repeated\n> > API changes, but because I don't think the v13 changes went in the quite\n> > right direction.\n> > \n> > ISTM that we should:\n> > - have the three callbacks you mention above\n> > - change WALSegmentOpen to also get the XLogReaderState\n> > - add private state to WALOpenSegment, so it can be used even when not\n> > accessing data in files / when one needs more information to close the\n> > file.\n> > - disambiguate between WALOpenSegment (struct describing an open\n> > segment) and WALSegmentOpen (callback to open a segment) (note that\n> > the read page callback uses a *CB naming, why not follow?)\n> \n> Here's a first attempt at that. The segment_open/close callbacks are\n> now given at XLogReaderAllocate time, and are passed the XLogReaderState\n> pointer. I wrote a comment to explain that the page_read callback can\n> use WALRead() if it wishes to do so; but if it does, then segment_open\n> has to be provided. segment_close is mandatory (since we call it at\n> XLogReaderFree).\n> \n> Of the half a dozen cases that exist, three are slightly weird:\n> \n> * Physical walsender does not use a xlogreader at all. I think we could\n> beat that code up so that it does. But for the moment I just cons up\n> a fake xlogreader, which only has the segment_open pointer set up, so\n> that it can call WALRead.\n> \n> * main xlog.c uses an xlogreader with XLogPageRead(), which does not use\n> WALRead. Therefore it does not pass open_segment. It does not use\n> xlogreader->seg.ws_file either. Eventually we may want to beat this\n> one up also.\n> \n> * pg_rewind has its own page read callback, SimpleXLogPageRead, which\n> does all the required opening and closing. I don't think it'd be an\n> improvement to force this to use segment_open. 
Oddly enough, it calls\n> itself \"simple\" but is unique in having the ability to read files from\n> the wal archive.\n> \n> All tests are passing for me.\n\nI modestly object to so many callback functions. FWIW I'm writing\nthis with [1] in my mind.\n\nAn open-callback is bound to a read-callback. A close-callback is\nbound to the way the read-callback opens a segment (or the\nopen-callback). I'm afraid that only adding a \"cleanup\" callback might\nbe sufficient.\n\n[1] https://www.postgresql.org/message-id/20200422.101246.331162888498679491.horikyota.ntt%40gmail.com\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Fri, 24 Apr 2020 15:36:51 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: 2pc leaks fds" }, { "msg_contents": "On 2020-Apr-24, Kyotaro Horiguchi wrote:\n\n> At Thu, 23 Apr 2020 19:16:03 -0400, Alvaro Herrera <alvherre@2ndquadrant.com> wrote in \n\n> > Here's a first attempt at that. The segment_open/close callbacks are\n> > now given at XLogReaderAllocate time, and are passed the XLogReaderState\n> > pointer. I wrote a comment to explain that the page_read callback can\n> > use WALRead() if it wishes to do so; but if it does, then segment_open\n> > has to be provided. segment_close is mandatory (since we call it at\n> > XLogReaderFree).\n\n> I modestly object to so many callback functions. FWIW I'm writing\n> this with [1] in my mind.\n> [1] https://www.postgresql.org/message-id/20200422.101246.331162888498679491.horikyota.ntt%40gmail.com\n\nHmm. 
Looking at your 0001, I think there's nothing in that patch that's\nnot compatible with my proposed API change.\n\n0002 is a completely different story of course; but that patch is a\nradical change of spirit for xlogreader, in the sense that it's no\nlonger a \"reader\", but rather just an interpreter of bytes from a WAL\nbyte sequence into WAL records; and shifts the responsibility of the\nactual reading to the caller. That's why xlogreader no longer receives\nthe page_read callback (and why it doesn't need the segment_open,\nsegment_close callbacks).\n\nI have to admit that until today I hadn't realized that that's what your\npatch series was doing. I'm not familiar with how you intend to\nimplement WAL encryption on top of this, but on first blush I'm not\nliking this proposed design too much.\n\n> An open-callback is bound to a read-callback. A close-callback is\n> bound to the way the read-callback opens a segment (or the\n> open-callback). I'm afraid that only adding \"cleanup\" callback might\n> be sufficient.\n\nWell, the complaint is that the current layering is weird, in that there\nare two levels at which we pass callbacks: one is XLogReaderAllocate,\nwhere you specify the page_read callback; and the other layer is inside\nthe page_read callback, if that layer uses the WALRead auxiliary\nfunction. The thing that my patch is doing is pass all three callbacks\nat the XLogReaderAllocate level. So when xlogreader drills down to\nread_page, xlogreader already has the segment_open callback handy if it\nneeds it. Conceptually, this seems more sensible.\n\nI think a \"cleanup\" callback might also be sensible in general terms,\nbut we have a problem with the specifics -- namely that the \"state\" that\nwe need to clean up (the file descriptor of the open segment) is part of\nxlogreader's state. 
And we obviously cannot remove the FD from\nXLogReaderState, because we need the FD to do things with it to\nobtain data from the file.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Fri, 24 Apr 2020 11:48:46 -0400", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: 2pc leaks fds" }, { "msg_contents": "At Fri, 24 Apr 2020 11:48:46 -0400, Alvaro Herrera <alvherre@2ndquadrant.com> wrote in \n> On 2020-Apr-24, Kyotaro Horiguchi wrote:\n> \n> > At Thu, 23 Apr 2020 19:16:03 -0400, Alvaro Herrera <alvherre@2ndquadrant.com> wrote in \n> \n> > > Here's a first attempt at that. The segment_open/close callbacks are\n> > > now given at XLogReaderAllocate time, and are passed the XLogReaderState\n> > > pointer. 
That's why xlogreader no longer receives\n> the page_read callback (and why it doesn't need the segment_open,\n> segment_close callbacks).\n\nSorry for the ambiguity, I didn't mean that I mind that this conflicts\nwith my patch, or that I don't want this to be committed. It is easily\nrebased on this patch. What I was anxious about is that the new\ncallback struct might be more flexible than required. So I \"mildly\"\nobjected, and I won't be disappointed if this patch is committed.\n\n> I have to admit that until today I hadn't realized that that's what your\n> patch series was doing. I'm not familiar with how you intend to\n> implement WAL encryption on top of this, but on first blush I'm not\n> liking this proposed design too much.\n\nRight. I might be going into too much detail, but it simplifies the call\ntree. Anyway that is another discussion, though:)\n\n> > An open-callback is bound to a read-callback. A close-callback is\n> > bound to the way the read-callback opens a segment (or the\n> > open-callback). I'm afraid that only adding a \"cleanup\" callback might\n> > be sufficient.\n> \n> Well, the complaint is that the current layering is weird, in that there\n> are two levels at which we pass callbacks: one is XLogReaderAllocate,\n> where you specify the page_read callback; and the other layer is inside\n> the page_read callback, if that layer uses the WALRead auxiliary\n> function. The thing that my patch is doing is pass all three callbacks\n> at the XLogReaderAllocate level. So when xlogreader drills down to\n> read_page, xlogreader already has the segment_open callback handy if it\n> needs it. Conceptually, this seems more sensible.\n\nIt looks as if the open/read/close-callbacks are generic and on\nthe same interface layer, but actually the open-callback is dedicated to\nWALRead and it is useless when the read-callback doesn't use\nWALRead. 
What I was anxious about is that the open-callback is\nuselessly exposing the secret of the read-callback.\n\n> I think a \"cleanup\" callback might also be sensible in general terms,\n> but we have a problem with the specifics -- namely that the \"state\" that\n> we need to clean up (the file descriptor of the open segment) is part of\n> xlogreader's state. And we obviously cannot remove the FD from\n> XLogReaderState, because we need the FD to do things with it to\n> obtain data from the file.\n\nI meant concretely that we only have read- and cleanup- callbacks in\nxlogreader state. The caller of XLogReaderAllocate specifies the\ncleanup-callback that is to be used to clean up what the\nreader-callback left behind, in the same manner as this patch does.\nThe only reason it is not named close-callback is that it is used only\nwhen the xlogreader-state is destroyed. So I'm fine with\nread/close-callbacks.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Mon, 27 Apr 2020 14:11:06 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: 2pc leaks fds" }, { "msg_contents": "On 2020-Apr-27, Kyotaro Horiguchi wrote:\n\n> At Fri, 24 Apr 2020 11:48:46 -0400, Alvaro Herrera <alvherre@2ndquadrant.com> wrote in \n\n> Sorry for the ambiguity, I didn't mean that I mind that this conflicts\n> with my patch, or that I don't want this to be committed. It is easily\n> rebased on this patch. What I was anxious about is that the new\n> callback struct might be more flexible than required. So I \"mildly\"\n> objected, and I won't be disappointed if this patch is committed.\n\n... well, yeah, maybe it is too flexible. 
But right now, and thinking from the point\nof view of going into postgres 13 beta shortly, it seems to me that\nXLogReader is just a very leaky abstraction since both itself and its\nusers are aware of the fact that there is a file descriptor.\n\nMaybe with your rework for encryption you'll want to remove the FD from\nXLogReader at all, and move it elsewhere. Or maybe not. But it seems\nto me that my suggested approach is sensible, and better than the\ncurrent situation. (Let's keep in mind that the primary concern here is\nthat the callstack is way too complicated -- you ask XlogReader for\ndata, it calls your Read callback, that one calls WALRead passing your\nopenSegment callback and stuffs the FD in XLogReaderState ... a sieve it\nis, the way it leaks, not an abstraction.)\n\n> > I have to admit that until today I hadn't realized that that's what your\n> > patch series was doing. I'm not familiar with how you intend to\n> > implement WAL encryption on top of this, but on first blush I'm not\n> > liking this proposed design too much.\n> \n> Right. I might be too much in detail, but it simplifies the call\n> tree. Anyway that is another discussion, though:)\n\nOkay. We can discuss further changes later, of course.\n\n> It looks like as if the open/read/close-callbacks are generic and on\n> the same interface layer, but actually open-callback is dedicate to\n> WALRead and it is useless when the read-callback doesn't use\n> WALRead. What I was anxious about is that the open-callback is\n> uselessly exposing the secret of the read-callback.\n\nWell, I don't think we care about that. WALRead can be thought of as\njust a helper function that you may use to write your read callback.\nThe comments I added explain this.\n\n> I meant concretely that we only have read- and cleanup- callbacks in\n> xlogreader state. 
The caller of XLogReaderAllocate specifies the\n> cleanup-callback that is to be used to clean up what the\n> reader-callback left behind, in the same manner as this patch does.\n> The only reason it is not named close-callback is that it is used only\n> when the xlogreader-state is destroyed. So I'm fine with\n> read/close-callbacks.\n\nWe can revisit the current design in the future. For example for\nencryption we might decide to remove the current-open-segment FD from\nXLogReaderState and then things might be different. (I think the\ncurrent design is based a lot on historical code, rather than being\noptimal.)\n\nSince your objection isn't strong, I propose to commit the same patch as\nbefore, only rebased as it conflicted with cd123234404e, and with this comment\nprologuing WALRead:\n\n/*\n * Helper function to ease writing of XLogRoutine->page_read callbacks.\n * If this function is used, caller must supply an open_segment callback in\n * 'state', as that is used here.\n [... rest is same as before ...]\n\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Thu, 7 May 2020 19:28:55 -0400", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: 2pc leaks fds" }, { "msg_contents": "At Thu, 7 May 2020 19:28:55 -0400, Alvaro Herrera <alvherre@2ndquadrant.com> wrote in \n> On 2020-Apr-27, Kyotaro Horiguchi wrote:\n> \n> > At Fri, 24 Apr 2020 11:48:46 -0400, Alvaro Herrera <alvherre@2ndquadrant.com> wrote in \n> \n> > Sorry for the ambiguity, I didn't mean that I mind that this conflicts\n> > with my patch, or that I don't want this to be committed. It is easily\n> > rebased on this patch. What I was anxious about is that the new\n> > callback struct might be more flexible than required. So I \"mildly\"\n> > objected, and I won't be disappointed if this patch is committed.\n> \n> ... well, yeah, maybe it is too flexible. 
And perhaps we could further\n> tweak this interface so that the file descriptor is not part of\n> XLogReader at all -- with such a change, it would make more sense to\n> worry about the \"close\" callback not being \"close\" but something like\n> \"cleanup\", as you suggest. But right now, and thinking from the point\n> of view of going into postgres 13 beta shortly, it seems to me that\n> XLogReader is just a very leaky abstraction since both itself and its\n> users are aware of the fact that there is a file descriptor.\n\nAgreed.\n\n> Maybe with your rework for encryption you'll want to remove the FD from\n> XLogReader at all, and move it elsewhere. Or maybe not. But it seems\n> to me that my suggested approach is sensible, and better than the\n> current situation. (Let's keep in mind that the primary concern here is\n> that the callstack is way too complicated -- you ask XlogReader for\n> data, it calls your Read callback, that one calls WALRead passing your\n> openSegment callback and stuffs the FD in XLogReaderState ... a sieve it\n> is, the way it leaks, not an abstraction.)\n\nI agree that the new callback functions are most sensible for getting into\n13, of course.\n\n> > > I have to admit that until today I hadn't realized that that's what your\n> > > patch series was doing. I'm not familiar with how you intend to\n> > > implement WAL encryption on top of this, but on first blush I'm not\n> > > liking this proposed design too much.\n> > \n> > Right. I might be going into too much detail, but it simplifies the call\n> > tree. Anyway that is another discussion, though:)\n> \n> Okay. We can discuss further changes later, of course.\n> \n> > It looks as if the open/read/close-callbacks are generic and on\n> > the same interface layer, but actually the open-callback is dedicated to\n> > WALRead and it is useless when the read-callback doesn't use\n> > WALRead. 
What I was anxious about is that the open-callback is\n> > uselessly exposing the secret of the read-callback.\n> \n> Well, I don't think we care about that. WALRead can be thought of as\n> just a helper function that you may use to write your read callback.\n> The comments I added explain this.\n\nThanks.\n\n> > I meant concretely that we only have read- and cleanup- callbacks in\n> > xlogreader state. The caller of XLogReaderAllocate specifies the\n> > cleanup-callback that is to be used to clean up what the\n> > reader-callback left behind, in the same manner with this patch does.\n> > The only reason it is not named close-callback is that it is used only\n> > when the xlogreader-state is destroyed. So I'm fine with\n> > read/close-callbacks.\n> \n> We can revisit the current design in the future. For example for\n> encryption we might decide to remove the current-open-segment FD from\n> XLogReaderState and then things might be different. (I think the\n> current design is based a lot on historical code, rather than being\n> optimal.)\n> \n> Since your objection isn't strong, I propose to commit same patch as\n> before, only rebased as conflicted with cd123234404e and this comment\n> prologuing WALRead:\n> \n> /*\n> * Helper function to ease writing of XLogRoutine->page_read callbacks.\n> * If this function is used, caller must supply an open_segment callback in\n> * 'state', as that is used here.\n> [... rest is same as before ...]\n\nI agree to the direction of this patch. Thanks for the explanation.\nThe patch looks good to me except the two points below.\n\n\n+\t/* XXX for xlogreader use, we'd call XLogBeginRead+XLogReadRecord here */\n+\tif (!WALRead(&fake_xlogreader,\n+\t\t\t\t &output_message.data[output_message.len],\n\nI'm not sure the point of the XXX comment, but I think WALRead here is\nthe right thing and we aren't going to use\nXLogBeginRead+XLogReadRecord here. 
So it seems to me the comment is\nmisleading and instead we need such a comment for fake_xlogreader like\nthis.\n\n+\tstatic XLogReaderState fake_xlogreader =\n+\t{\n+\t\t/* fake reader state only to let WALRead use the callbacks */\n\n\nwal_segment_close(XLogReaderState *state) is setting\nstate->seg.ws_file to -1. 
On the other hand wal_segment_open(state,..)\ndoesn't update ws_file and the caller sets the returned value to\n(eventually) the same field.\n\n+\t\t\tseg->ws_file = state->routine.segment_open(state, nextSegNo,\n+\t\t\t\t\t\t\t\t\t\t\t\t\t segcxt, &tli);\n\nIf you are willing to do so, I think it is better to make the callback\nfunctions responsible for updating seg.ws_file so that the callers\ndon't have to care.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Fri, 08 May 2020 11:42:28 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: 2pc leaks fds" }, { "msg_contents": "On 2020-May-08, Kyotaro Horiguchi wrote:\n\n> I agree to the direction of this patch. Thanks for the explanation.\n> The patch looks good to me except the two points below.\n\nThanks! I pushed the patch. I fixed the walsender commentary as you\nsuggested, but I'm still of the opinion that we might want to use the\nXLogReader abstraction in physical walsender rather than work without it; if\nnothing else, that would simplify WALRead's API.\n\nI didn't change this one though:\n\n> wal_segment_close(XLogReaderState *state) is setting\n> state->seg.ws_file to -1. 
(This\nincorrect comment was actually introduced by that commit.)\n\n-- \nAntonin Houska\nWeb: https://www.cybertec-postgresql.com\n\n\n", "msg_date": "Mon, 11 May 2020 10:44:14 +0200", "msg_from": "Antonin Houska <ah@cybertec.at>", "msg_from_op": false, "msg_subject": "Re: 2pc leaks fds" }, { "msg_contents": "Hi Antonin, thanks for the review.\n\n\nOn 2020-May-11, Antonin Houska wrote:\n\n> While looking at the changes, I've noticed a small comment issue in the\n> XLogReaderRoutine structure definition:\n> \n> \t* \"tli_p\" is an input/output argument. XLogRead() uses it to pass the\n> \n> The XLogRead() function has been renamed to WALRead() in 0dc8ead4. (This\n> incorrect comment was actually introduced by that commit.)\n\nAh. I'll fix this, thanks for pointing it out.\n\n(It might be that the TLI situation can be improved with some callback,\ntoo.)\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Mon, 11 May 2020 12:25:54 -0400", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: 2pc leaks fds" } ]
[ { "msg_contents": "Hi,\n\nDue to the change below, when using the default postgres configuration\nof synchronous_commit = on, max_wal_senders = 10, commits will now acquire a new\nexclusive lwlock after writing a commit record.\n\ncommit 48c9f4926562278a2fd2b85e7486c6d11705f177\nAuthor: Simon Riggs <simon@2ndQuadrant.com>\nDate: 2017-12-29 14:30:33 +0000\n\n    Fix race condition when changing synchronous_standby_names\n\n    A momentary window exists when synchronous_standby_names\n    changes that allows commands issued after the change to\n    continue to act as async until the change becomes visible.\n    Remove the race by using more appropriate test in syncrep.c\n\n    Author: Asim Rama Praveen and Ashwin Agrawal\n    Reported-by: Xin Zhang, Ashwin Agrawal, and Asim Rama Praveen\n    Reviewed-by: Michael Paquier, Masahiko Sawada\n\nAs far as I can tell there was no discussion about the added contention\ndue to this change in the relevant thread [1].\n\nThe default configuration has an empty synchronous_standby_names. Before\nthis change we'd fall out of SyncRepWaitForLSN() before acquiring\nSyncRepLock in exclusive mode. 
Now we don't anymore.\n\n\nI'm really not ok with unnecessarily adding an exclusive lock\nacquisition to such a crucial path.\n\nGreetings,\n\nAndres Freund\n\n[1] https://postgr.es/m/CABrsG8j3kPD%2Bkbbsx_isEpFvAgaOBNGyGpsqSjQ6L8vwVUaZAQ%40mail.gmail.com\n\n\n", "msg_date": "Sun, 5 Apr 2020 22:03:32 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "SyncRepLock acquired exclusively in default configuration" }, { "msg_contents": "On Mon, 6 Apr 2020 at 14:04, Andres Freund <andres@anarazel.de> wrote:\n>\n> Hi,\n>\n> Due to the change below, when using the default postgres configuration\n> of synchronous_commit = on, max_wal_senders = 10, commits will now acquire a new\n> exclusive lwlock after writing a commit record.\n\nIndeed.\n\n>\n> commit 48c9f4926562278a2fd2b85e7486c6d11705f177\n> Author: Simon Riggs <simon@2ndQuadrant.com>\n> Date: 2017-12-29 14:30:33 +0000\n>\n>     Fix race condition when changing synchronous_standby_names\n>\n>     A momentary window exists when synchronous_standby_names\n>     changes that allows commands issued after the change to\n>     continue to act as async until the change becomes visible.\n>     Remove the race by using more appropriate test in syncrep.c\n>\n>     Author: Asim Rama Praveen and Ashwin Agrawal\n>     Reported-by: Xin Zhang, Ashwin Agrawal, and Asim Rama Praveen\n>     Reviewed-by: Michael Paquier, Masahiko Sawada\n>\n> As far as I can tell there was no discussion about the added contention\n> due to this change in the relevant thread [1].\n>\n> The default configuration has an empty synchronous_standby_names. Before\n> this change we'd fall out of SyncRepWaitForLSN() before acquiring\n> SyncRepLock in exclusive mode. 
Now we don't anymore.\n>\n>\n> I'm really not ok with unneccessarily adding an exclusive lock\n> acquisition to such a crucial path.\n>\n\nI think we can acquire SyncRepLock in share mode once to check\nWalSndCtl->sync_standbys_defined and if it's true then check it again\nafter acquiring it in exclusive mode. But it in turn ends up with\nadding one extra LWLockAcquire and LWLockRelease in sync rep path.\n\nRegards,\n\n--\nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Mon, 6 Apr 2020 17:51:42 +0900", "msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: SyncRepLock acquired exclusively in default configuration" }, { "msg_contents": "On Mon, Apr 6, 2020 at 1:52 AM Masahiko Sawada <\nmasahiko.sawada@2ndquadrant.com> wrote:\n\n> On Mon, 6 Apr 2020 at 14:04, Andres Freund <andres@anarazel.de> wrote:\n> >\n> > commit 48c9f4926562278a2fd2b85e7486c6d11705f177\n> > Author: Simon Riggs <simon@2ndQuadrant.com>\n> > Date: 2017-12-29 14:30:33 +0000\n> >\n> > Fix race condition when changing synchronous_standby_names\n> >\n> > A momentary window exists when synchronous_standby_names\n> > changes that allows commands issued after the change to\n> > continue to act as async until the change becomes visible.\n> > Remove the race by using more appropriate test in syncrep.c\n> >\n> > Author: Asim Rama Praveen and Ashwin Agrawal\n> > Reported-by: Xin Zhang, Ashwin Agrawal, and Asim Rama Praveen\n> > Reviewed-by: Michael Paquier, Masahiko Sawada\n> >\n> > As far as I can tell there was no discussion about the added contention\n> > due this change in the relevant thread [1].\n> >\n> > The default configuration has an empty synchronous_standby_names. Before\n> > this change we'd fall out of SyncRepWaitForLSN() before acquiring\n> > SyncRepLock in exlusive mode. 
Now we don't anymore.\n> >\n> >\n> > I'm really not ok with unneccessarily adding an exclusive lock\n> > acquisition to such a crucial path.\n> >\n>\n> I think we can acquire SyncRepLock in share mode once to check\n> WalSndCtl->sync_standbys_defined and if it's true then check it again\n> after acquiring it in exclusive mode. But it in turn ends up with\n> adding one extra LWLockAcquire and LWLockRelease in sync rep path.\n>\n\nHow about we change it to this ?\n\ndiff --git a/src/backend/replication/syncrep.c\nb/src/backend/replication/syncrep.c\nindex ffd5b31eb2..cdb82a8b28 100644\n--- a/src/backend/replication/syncrep.c\n+++ b/src/backend/replication/syncrep.c\n@@ -165,8 +165,11 @@ SyncRepWaitForLSN(XLogRecPtr lsn, bool commit)\n /*\n * Fast exit if user has not requested sync replication.\n */\n- if (!SyncRepRequested())\n- return;\n+ if (!SyncRepRequested() || !SyncStandbysDefined())\n+ {\n+ if (!WalSndCtl->sync_standbys_defined)\n+ return;\n+ }\n\n Assert(SHMQueueIsDetached(&(MyProc->syncRepLinks)));\n Assert(WalSndCtl != NULL);\n\nBring back the check which existed based on GUC but instead of just blindly\nreturning based on just GUC not being set, check\nWalSndCtl->sync_standbys_defined. 
Thoughts?", "msg_date": "Mon, 6 Apr 2020 11:51:06 -0700", "msg_from": "Ashwin Agrawal <aagrawal@pivotal.io>", "msg_from_op": false, "msg_subject": "Re: SyncRepLock acquired exclusively in default configuration" }, { "msg_contents": "Hi,\n\nOn 2020-04-06 11:51:06 -0700, Ashwin Agrawal wrote:\n> On Mon, Apr 6, 2020 at 1:52 AM Masahiko Sawada <\n> masahiko.sawada@2ndquadrant.com> wrote:\n> > On Mon, 6 Apr 2020 at 14:04, Andres Freund <andres@anarazel.de> wrote:\n> > > I'm really not ok with unneccessarily adding an exclusive lock\n> > > acquisition to such a crucial path.\n> > >\n> >\n> > I think we can acquire SyncRepLock in share mode once to check\n> > WalSndCtl->sync_standbys_defined and if it's true then check it again\n> > after acquiring it in exclusive mode. But it in turn ends up with\n> > adding one extra LWLockAcquire and LWLockRelease in sync rep path.\n\nThat's still too much. Adding another lwlock acquisition, where the same\nlock is acquired by all backends (contrasting e.g. to buffer locks), to\nthe commit path, for the benefit of a feature that the vast majority of\npeople aren't going to use, isn't good.\n\n\n> How about we change it to this ?\n\nHm. Better. But I think it might need at least a compiler barrier /\nvolatile memory load? 
Unlikely here, but otherwise the compiler could\ntheoretically just stash the variable somewhere locally (it's not likely\nto be a problem because it'd not be long ago that we acquired an lwlock,\nwhich is a full barrier).\n\n\n> Bring back the check which existed based on GUC but instead of just blindly\n> returning based on just GUC not being set, check\n> WalSndCtl->sync_standbys_defined. Thoughts?\n\nHm. Is there any reason not to just check\nWalSndCtl->sync_standbys_defined? rather than both !SyncStandbysDefined()\nand WalSndCtl->sync_standbys_defined?\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 6 Apr 2020 14:14:01 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: SyncRepLock acquired exclusively in default configuration" }, { "msg_contents": "On Tue, 7 Apr 2020 at 06:14, Andres Freund <andres@anarazel.de> wrote:\n>\n> Hi,\n>\n> On 2020-04-06 11:51:06 -0700, Ashwin Agrawal wrote:\n> > On Mon, Apr 6, 2020 at 1:52 AM Masahiko Sawada <\n> > masahiko.sawada@2ndquadrant.com> wrote:\n> > > On Mon, 6 Apr 2020 at 14:04, Andres Freund <andres@anarazel.de> wrote:\n> > > > I'm really not ok with unneccessarily adding an exclusive lock\n> > > > acquisition to such a crucial path.\n> > > >\n> > >\n> > > I think we can acquire SyncRepLock in share mode once to check\n> > > WalSndCtl->sync_standbys_defined and if it's true then check it again\n> > > after acquiring it in exclusive mode. But it in turn ends up with\n> > > adding one extra LWLockAcquire and LWLockRelease in sync rep path.\n>\n> That's still too much. Adding another lwlock acquisition, where the same\n> lock is acquired by all backends (contrasting e.g. 
to buffer locks), to\n> the commit path, for the benefit of a feature that the vast majority of\n> people aren't going to use, isn't good.\n\nAgreed.\n\nIn this case it seems okay to read WalSndCtl->sync_standbys_defined\nwithout SyncRepLock before we acquire SyncRepLock in exclusive mode.\nWhile changing WalSndCtl->sync_standbys_defined to true, in the\ncurrent code a backend who reached SyncRepWaitForLSN() waits on\nSyncRepLock, see sync_standbys_defined is true and enqueue itself.\nWith this change, since we don't acquire SyncRepLock to read\nWalSndCtl->sync_standbys_defined these backends return without waiting\nfor the change of WalSndCtl->sync_standbys_defined but it would not be\na problem. Similarly, I've considered the case where changing to\nfalse, but I think there is no problem.\n\nRegards,\n\n--\nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Tue, 7 Apr 2020 15:48:07 +0900", "msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: SyncRepLock acquired exclusively in default configuration" }, { "msg_contents": "On Mon, Apr 6, 2020 at 2:14 PM Andres Freund <andres@anarazel.de> wrote:\n\n> > How about we change it to this ?\n>\n> Hm. Better. But I think it might need at least a compiler barrier /\n> volatile memory load? Unlikely here, but otherwise the compiler could\n> theoretically just stash the variable somewhere locally (it's not likely\n> to be a problem because it'd not be long ago that we acquired an lwlock,\n> which is a full barrier).\n>\n\nThat's the part, I am not fully sure about. But reading the comment above\nSyncRepUpdateSyncStandbysDefined(), it seems fine.\n\n> Bring back the check which existed based on GUC but instead of just\n> blindly\n> > returning based on just GUC not being set, check\n> > WalSndCtl->sync_standbys_defined. Thoughts?\n>\n> Hm. 
Is there any reason not to just check\n> WalSndCtl->sync_standbys_defined? rather than both !SyncStandbysDefined()\n> and WalSndCtl->sync_standbys_defined?\n>\n\nAgree, just checking for WalSndCtl->sync_standbys_defined seems fine.\n\nI wasn't fully thinking there, as I got distracted by if lock will be\nrequired or not for reading the same. If lock was required then checking\nfor guc first would have been better, but seems not required.", "msg_date": "Tue, 7 Apr 2020 11:01:55 -0700", "msg_from": "Ashwin Agrawal <aagrawal@pivotal.io>", "msg_from_op": false, "msg_subject": "Re: SyncRepLock acquired exclusively in default configuration" }, { "msg_contents": "On 2020/04/08 3:01, Ashwin Agrawal wrote:\n> \n> On Mon, Apr 6, 2020 at 2:14 PM Andres Freund <andres@anarazel.de <mailto:andres@anarazel.de>> wrote:\n> \n> > How about we change it to this ?\n> \n> Hm. Better. But I think it might need at least a compiler barrier /\n> volatile memory load?  Unlikely here, but otherwise the compiler could\n> theoretically just stash the variable somewhere locally (it's not likely\n> to be a problem because it'd not be long ago that we acquired an lwlock,\n> which is a full barrier).\n> \n> \n> That's the part, I am not fully sure about. But reading the comment above SyncRepUpdateSyncStandbysDefined(), it seems fine.\n> \n> > Bring back the check which existed based on GUC but instead of just blindly\n> > returning based on just GUC not being set, check\n> > WalSndCtl->sync_standbys_defined. Thoughts?\n> \n> Hm. Is there any reason not to just check\n> WalSndCtl->sync_standbys_defined? rather than both !SyncStandbysDefined()\n> and WalSndCtl->sync_standbys_defined?\n> \n> \n> Agree, just checking for WalSndCtl->sync_standbys_defined seems fine.\n\nSo the consensus is something like the following? 
Patch attached.\n\n /*\n- * Fast exit if user has not requested sync replication.\n+ * Fast exit if user has not requested sync replication, or there are no\n+ * sync replication standby names defined.\n */\n- if (!SyncRepRequested())\n+ if (!SyncRepRequested() ||\n+ !((volatile WalSndCtlData *) WalSndCtl)->sync_standbys_defined)\n return;\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION", "msg_date": "Fri, 10 Apr 2020 13:20:48 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: SyncRepLock acquired exclusively in default configuration" }, { "msg_contents": "On Fri, 10 Apr 2020 at 13:20, Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>\n>\n>\n> On 2020/04/08 3:01, Ashwin Agrawal wrote:\n> >\n> > On Mon, Apr 6, 2020 at 2:14 PM Andres Freund <andres@anarazel.de <mailto:andres@anarazel.de>> wrote:\n> >\n> > > How about we change it to this ?\n> >\n> > Hm. Better. But I think it might need at least a compiler barrier /\n> > volatile memory load? Unlikely here, but otherwise the compiler could\n> > theoretically just stash the variable somewhere locally (it's not likely\n> > to be a problem because it'd not be long ago that we acquired an lwlock,\n> > which is a full barrier).\n> >\n> >\n> > That's the part, I am not fully sure about. But reading the comment above SyncRepUpdateSyncStandbysDefined(), it seems fine.\n> >\n> > > Bring back the check which existed based on GUC but instead of just blindly\n> > > returning based on just GUC not being set, check\n> > > WalSndCtl->sync_standbys_defined. Thoughts?\n> >\n> > Hm. Is there any reason not to just check\n> > WalSndCtl->sync_standbys_defined? rather than both !SyncStandbysDefined()\n> > and WalSndCtl->sync_standbys_defined?\n> >\n> >\n> > Agree, just checking for WalSndCtl->sync_standbys_defined seems fine.\n>\n> So the consensus is something like the following? 
Patch attached.\n>\n>           /*\n> -         * Fast exit if user has not requested sync replication.\n> +         * Fast exit if user has not requested sync replication, or there are no\n> +         * sync replication standby names defined.\n>           */\n> -        if (!SyncRepRequested())\n> +        if (!SyncRepRequested() ||\n> +            !((volatile WalSndCtlData *) WalSndCtl)->sync_standbys_defined)\n>                  return;\n>\n\nI think we need more comments describing why checking\nsync_standbys_defined without SyncRepLock is safe here. For example:\n\nThis routine gets called every commit time. So, to check whether\nsynchronous standbys are defined as quickly as possible, we check\nWalSndCtl->sync_standbys_defined without acquiring SyncRepLock. Since\nwe make this test unlocked, there's a chance we might fail to notice\nthat it has been turned off and continue processing. But since the\nsubsequent check will check it again while holding SyncRepLock, it's\nno problem. Similarly even if we fail to notice that it has been\nturned on, it's okay to return quickly since all backends consistently\nbehave so.\n\nRegards,\n\n-- \nMasahiko Sawada            http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Fri, 10 Apr 2020 14:11:48 +0900", "msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: SyncRepLock acquired exclusively in default configuration" }, { "msg_contents": "\n\nOn 2020/04/10 14:11, Masahiko Sawada wrote:\n> On Fri, 10 Apr 2020 at 13:20, Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>>\n>>\n>>\n>> On 2020/04/08 3:01, Ashwin Agrawal wrote:\n>>>\n>>> On Mon, Apr 6, 2020 at 2:14 PM Andres Freund <andres@anarazel.de <mailto:andres@anarazel.de>> wrote:\n>>>\n>>>      > How about we change it to this ?\n>>>\n>>>     Hm. Better. But I think it might need at least a compiler barrier /\n>>>     volatile memory load? 
Unlikely here, but otherwise the compiler could\n>>> theoretically just stash the variable somewhere locally (it's not likely\n>>> to be a problem because it'd not be long ago that we acquired an lwlock,\n>>> which is a full barrier).\n>>>\n>>>\n>>> That's the part, I am not fully sure about. But reading the comment above SyncRepUpdateSyncStandbysDefined(), it seems fine.\n>>>\n>>> > Bring back the check which existed based on GUC but instead of just blindly\n>>> > returning based on just GUC not being set, check\n>>> > WalSndCtl->sync_standbys_defined. Thoughts?\n>>>\n>>> Hm. Is there any reason not to just check\n>>> WalSndCtl->sync_standbys_defined? rather than both !SyncStandbysDefined()\n>>> and WalSndCtl->sync_standbys_defined?\n>>>\n>>>\n>>> Agree, just checking for WalSndCtl->sync_standbys_defined seems fine.\n>>\n>> So the consensus is something like the following? Patch attached.\n>>\n>> /*\n>> - * Fast exit if user has not requested sync replication.\n>> + * Fast exit if user has not requested sync replication, or there are no\n>> + * sync replication standby names defined.\n>> */\n>> - if (!SyncRepRequested())\n>> + if (!SyncRepRequested() ||\n>> + !((volatile WalSndCtlData *) WalSndCtl)->sync_standbys_defined)\n>> return;\n>>\n> \n> I think we need more comments describing why checking\n> sync_standby_defined without SyncRepLock is safe here. For example:\n\nYep, agreed!\n\n> This routine gets called every commit time. So, to check if the\n> synchronous standbys is defined as quick as possible we check\n> WalSndCtl->sync_standbys_defined without acquiring SyncRepLock. Since\n> we make this test unlocked, there's a change we might fail to notice\n> that it has been turned off and continue processing.\n\nDoes this really happen? I was thinking that the problem by not taking\nthe lock here is that SyncRepWaitForLSN() can see that shared flag after\nSyncRepUpdateSyncStandbysDefined() wakes up all the waiters and\nbefore it sets the flag to false. 
Then if SyncRepWaitForLSN() adds itself\ninto the wait queue becaues the flag was true, without lock, it may keep\nsleeping infinitely.\n\n> But since the\n> subsequent check will check it again while holding SyncRepLock, it's\n> no problem. Similarly even if we fail to notice that it has been\n> turned on\nIs this true? ISTM that after SyncRepUpdateSyncStandbysDefined()\nsets the flag to true, SyncRepWaitForLSN() basically doesn't seem\nto fail to notice that. No?\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Fri, 10 Apr 2020 18:57:12 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: SyncRepLock acquired exclusively in default configuration" }, { "msg_contents": "On Fri, 10 Apr 2020 at 18:57, Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>\n>\n>\n> On 2020/04/10 14:11, Masahiko Sawada wrote:\n> > On Fri, 10 Apr 2020 at 13:20, Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n> >>\n> >>\n> >>\n> >> On 2020/04/08 3:01, Ashwin Agrawal wrote:\n> >>>\n> >>> On Mon, Apr 6, 2020 at 2:14 PM Andres Freund <andres@anarazel.de <mailto:andres@anarazel.de>> wrote:\n> >>>\n> >>> > How about we change it to this ?\n> >>>\n> >>> Hm. Better. But I think it might need at least a compiler barrier /\n> >>> volatile memory load? Unlikely here, but otherwise the compiler could\n> >>> theoretically just stash the variable somewhere locally (it's not likely\n> >>> to be a problem because it'd not be long ago that we acquired an lwlock,\n> >>> which is a full barrier).\n> >>>\n> >>>\n> >>> That's the part, I am not fully sure about. But reading the comment above SyncRepUpdateSyncStandbysDefined(), it seems fine.\n> >>>\n> >>> > Bring back the check which existed based on GUC but instead of just blindly\n> >>> > returning based on just GUC not being set, check\n> >>> > WalSndCtl->sync_standbys_defined. 
Thoughts?\n> >>>\n> >>> Hm. Is there any reason not to just check\n> >>> WalSndCtl->sync_standbys_defined? rather than both !SyncStandbysDefined()\n> >>> and WalSndCtl->sync_standbys_defined?\n> >>>\n> >>>\n> >>> Agree, just checking for WalSndCtl->sync_standbys_defined seems fine.\n> >>\n> >> So the consensus is something like the following? Patch attached.\n> >>\n> >> /*\n> >> - * Fast exit if user has not requested sync replication.\n> >> + * Fast exit if user has not requested sync replication, or there are no\n> >> + * sync replication standby names defined.\n> >> */\n> >> - if (!SyncRepRequested())\n> >> + if (!SyncRepRequested() ||\n> >> + !((volatile WalSndCtlData *) WalSndCtl)->sync_standbys_defined)\n> >> return;\n> >>\n> >\n> > I think we need more comments describing why checking\n> > sync_standby_defined without SyncRepLock is safe here. For example:\n>\n> Yep, agreed!\n>\n> > This routine gets called every commit time. So, to check if the\n> > synchronous standbys is defined as quick as possible we check\n> > WalSndCtl->sync_standbys_defined without acquiring SyncRepLock. Since\n> > we make this test unlocked, there's a change we might fail to notice\n> > that it has been turned off and continue processing.\n>\n> Does this really happen? I was thinking that the problem by not taking\n> the lock here is that SyncRepWaitForLSN() can see that shared flag after\n> SyncRepUpdateSyncStandbysDefined() wakes up all the waiters and\n> before it sets the flag to false. Then if SyncRepWaitForLSN() adds itself\n> into the wait queue becaues the flag was true, without lock, it may keep\n> sleeping infinitely.\n\nI think that because a backend does the following check after\nacquiring SyncRepLock, in that case, once the backend has taken\nSyncRepLock it can see that sync_standbys_defined is false and return.\nBut you meant that we do both checks without SyncRepLock?\n\n /*\n * We don't wait for sync rep if WalSndCtl->sync_standbys_defined is not\n * set. 
See SyncRepUpdateSyncStandbysDefined.\n *\n * Also check that the standby hasn't already replied. Unlikely race\n * condition but we'll be fetching that cache line anyway so it's likely\n * to be a low cost check.\n */\n if (!WalSndCtl->sync_standbys_defined ||\n lsn <= WalSndCtl->lsn[mode])\n {\n LWLockRelease(SyncRepLock);\n return;\n }\n\n>\n> > But since the\n> > subsequent check will check it again while holding SyncRepLock, it's\n> > no problem. Similarly even if we fail to notice that it has been\n> > turned on\n> Is this true? ISTM that after SyncRepUpdateSyncStandbysDefined()\n> sets the flag to true, SyncRepWaitForLSN() basically doesn't seem\n> to fail to notice that. No?\n\nWhat I wanted to say is, in the current code, while the checkpointer\nprocess is holding SyncRepLock to turn off sync_standbys_defined,\nbackends who reach SyncRepWaitForLSN() wait for the lock. Then, after\nthe checkpointer process releases SyncRepLock these backend can\nenqueue themselves to the wait queue because they can see that\nsync_standbys_defined is turned on. On the other hand if we do the\ncheck without SyncRepLock, backends who reach SyncRepWaitForLSN() will\nreturn instead of waiting, in spite of checkpointer process being\nturning on sync_standbys_defined. 
Which means these backends are\nfailing to notice that it has been turned on, I thought.\n\nRegards,\n\n-- \nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Fri, 10 Apr 2020 20:56:57 +0900", "msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: SyncRepLock acquired exclusively in default configuration" }, { "msg_contents": "\n\nOn 2020/04/10 20:56, Masahiko Sawada wrote:\n> On Fri, 10 Apr 2020 at 18:57, Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>>\n>>\n>>\n>> On 2020/04/10 14:11, Masahiko Sawada wrote:\n>>> On Fri, 10 Apr 2020 at 13:20, Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>>>>\n>>>>\n>>>>\n>>>> On 2020/04/08 3:01, Ashwin Agrawal wrote:\n>>>>>\n>>>>> On Mon, Apr 6, 2020 at 2:14 PM Andres Freund <andres@anarazel.de <mailto:andres@anarazel.de>> wrote:\n>>>>>\n>>>>> > How about we change it to this ?\n>>>>>\n>>>>> Hm. Better. But I think it might need at least a compiler barrier /\n>>>>> volatile memory load? Unlikely here, but otherwise the compiler could\n>>>>> theoretically just stash the variable somewhere locally (it's not likely\n>>>>> to be a problem because it'd not be long ago that we acquired an lwlock,\n>>>>> which is a full barrier).\n>>>>>\n>>>>>\n>>>>> That's the part, I am not fully sure about. But reading the comment above SyncRepUpdateSyncStandbysDefined(), it seems fine.\n>>>>>\n>>>>> > Bring back the check which existed based on GUC but instead of just blindly\n>>>>> > returning based on just GUC not being set, check\n>>>>> > WalSndCtl->sync_standbys_defined. Thoughts?\n>>>>>\n>>>>> Hm. Is there any reason not to just check\n>>>>> WalSndCtl->sync_standbys_defined? 
rather than both !SyncStandbysDefined()\n>>>>> and WalSndCtl->sync_standbys_defined?\n>>>>>\n>>>>>\n>>>>> Agree, just checking for WalSndCtl->sync_standbys_defined seems fine.\n>>>>\n>>>> So the consensus is something like the following? Patch attached.\n>>>>\n>>>> /*\n>>>> - * Fast exit if user has not requested sync replication.\n>>>> + * Fast exit if user has not requested sync replication, or there are no\n>>>> + * sync replication standby names defined.\n>>>> */\n>>>> - if (!SyncRepRequested())\n>>>> + if (!SyncRepRequested() ||\n>>>> + !((volatile WalSndCtlData *) WalSndCtl)->sync_standbys_defined)\n>>>> return;\n>>>>\n>>>\n>>> I think we need more comments describing why checking\n>>> sync_standby_defined without SyncRepLock is safe here. For example:\n>>\n>> Yep, agreed!\n>>\n>>> This routine gets called every commit time. So, to check if the\n>>> synchronous standbys is defined as quick as possible we check\n>>> WalSndCtl->sync_standbys_defined without acquiring SyncRepLock. Since\n>>> we make this test unlocked, there's a change we might fail to notice\n>>> that it has been turned off and continue processing.\n>>\n>> Does this really happen? I was thinking that the problem by not taking\n>> the lock here is that SyncRepWaitForLSN() can see that shared flag after\n>> SyncRepUpdateSyncStandbysDefined() wakes up all the waiters and\n>> before it sets the flag to false. Then if SyncRepWaitForLSN() adds itself\n>> into the wait queue becaues the flag was true, without lock, it may keep\n>> sleeping infinitely.\n> \n> I think that because a backend does the following check after\n> acquiring SyncRepLock, in that case, once the backend has taken\n> SyncRepLock it can see that sync_standbys_defined is false and return.\n\nYes, but the backend can see that sync_standby_defined indicates false\nwhether holding SyncRepLock or not, after the checkpointer sets it to false.\n\n> But you meant that we do both checks without SyncRepLock?\n\nMaybe No. 
The change that the latest patch provides should be applied, I think.\nThat is, sync_standbys_defined should be checked without lock at first, then\nonly if it's true, it should be checked again with lock.\n\nISTM that basically SyncRepLock is used in SyncRepWaitForLSN() and\nSyncRepUpdateSyncStandbysDefined() to make operations on the queue\nand updates of sync_standbys_defined atomic. Without lock, the issue that\nthe comment in SyncRepUpdateSyncStandbysDefined() explains would\nhappen. That is, the backend may keep waiting infinitely as follows.\n\n1. checkpointer calls SyncRepUpdateSyncStandbysDefined()\n2. checkpointer sees that the flag indicates true but the config indicates false\n3. checkpointer takes lock and wakes up all the waiters\n4. backend calls SyncRepWaitForLSN() and sees that the flag indicates true\n5. checkpointer sets the flag to false and releases the lock\n6. backend adds itself to the queue and waits until it's woken up, but that will not happen immediately\n\nSo after the backend sees that the flag indicates true without lock,\nit must check the flag again with lock immediately without operating\nthe queue. If my understanding is right, I was thinking that\nthe comment should mention these things.\n\n> /*\n> * We don't wait for sync rep if WalSndCtl->sync_standbys_defined is not\n> * set. See SyncRepUpdateSyncStandbysDefined.\n> *\n> * Also check that the standby hasn't already replied. Unlikely race\n> * condition but we'll be fetching that cache line anyway so it's likely\n> * to be a low cost check.\n> */\n> if (!WalSndCtl->sync_standbys_defined ||\n> lsn <= WalSndCtl->lsn[mode])\n> {\n> LWLockRelease(SyncRepLock);\n> return;\n> }\n> \n>>\n>>> But since the\n>>> subsequent check will check it again while holding SyncRepLock, it's\n>>> no problem. Similarly even if we fail to notice that it has been\n>>> turned on\n>> Is this true? 
ISTM that after SyncRepUpdateSyncStandbysDefined()\n>> sets the flag to true, SyncRepWaitForLSN() basically doesn't seem\n>> to fail to notice that. No?\n> \n> What I wanted to say is, in the current code, while the checkpointer\n> process is holding SyncRepLock to turn off sync_standbys_defined,\n> backends who reach SyncRepWaitForLSN() wait for the lock. Then, after\n> the checkpointer process releases SyncRepLock these backend can\n> enqueue themselves to the wait queue because they can see that\n> sync_standbys_defined is turned on.\n\nIn this case, since the checkpointer turned the flag off while holding\nthe lock, the backend sees that the flag is turned off, and doesn't\nenqueue itself. No?\n\n> On the other hand if we do the\n> check without SyncRepLock, backends who reach SyncRepWaitForLSN() will\n> return instead of waiting, in spite of checkpointer process being\n> turning on sync_standbys_defined. Which means these backends are\n> failing to notice that it has been turned on, I thought.\n\nNo. Or I'm missing something... In this case, the backend sees that\nthe flag is turned on without lock since checkpointer turned it on.\nSo you're thinking the following. Right?\n\n1. sync_standbys_defined flag is false\n2. checkpointer takes the lock and turns the flag on\n3. backend sees the flag\n4. checkpointer releases the lock\n\nIn #3, the flag indicates true, I think. 
But you think it's false?\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Fri, 10 Apr 2020 21:52:03 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: SyncRepLock acquired exclusively in default configuration" }, { "msg_contents": "On Fri, 10 Apr 2020 at 21:52, Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>\n>\n>\n> On 2020/04/10 20:56, Masahiko Sawada wrote:\n> > On Fri, 10 Apr 2020 at 18:57, Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n> >>\n> >>\n> >>\n> >> On 2020/04/10 14:11, Masahiko Sawada wrote:\n> >>> On Fri, 10 Apr 2020 at 13:20, Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n> >>>>\n> >>>>\n> >>>>\n> >>>> On 2020/04/08 3:01, Ashwin Agrawal wrote:\n> >>>>>\n> >>>>> On Mon, Apr 6, 2020 at 2:14 PM Andres Freund <andres@anarazel.de <mailto:andres@anarazel.de>> wrote:\n> >>>>>\n> >>>>> > How about we change it to this ?\n> >>>>>\n> >>>>> Hm. Better. But I think it might need at least a compiler barrier /\n> >>>>> volatile memory load? Unlikely here, but otherwise the compiler could\n> >>>>> theoretically just stash the variable somewhere locally (it's not likely\n> >>>>> to be a problem because it'd not be long ago that we acquired an lwlock,\n> >>>>> which is a full barrier).\n> >>>>>\n> >>>>>\n> >>>>> That's the part, I am not fully sure about. But reading the comment above SyncRepUpdateSyncStandbysDefined(), it seems fine.\n> >>>>>\n> >>>>> > Bring back the check which existed based on GUC but instead of just blindly\n> >>>>> > returning based on just GUC not being set, check\n> >>>>> > WalSndCtl->sync_standbys_defined. Thoughts?\n> >>>>>\n> >>>>> Hm. Is there any reason not to just check\n> >>>>> WalSndCtl->sync_standbys_defined? 
rather than both !SyncStandbysDefined()\n> >>>>> and WalSndCtl->sync_standbys_defined?\n> >>>>>\n> >>>>>\n> >>>>> Agree, just checking for WalSndCtl->sync_standbys_defined seems fine.\n> >>>>\n> >>>> So the consensus is something like the following? Patch attached.\n> >>>>\n> >>>> /*\n> >>>> - * Fast exit if user has not requested sync replication.\n> >>>> + * Fast exit if user has not requested sync replication, or there are no\n> >>>> + * sync replication standby names defined.\n> >>>> */\n> >>>> - if (!SyncRepRequested())\n> >>>> + if (!SyncRepRequested() ||\n> >>>> + !((volatile WalSndCtlData *) WalSndCtl)->sync_standbys_defined)\n> >>>> return;\n> >>>>\n> >>>\n> >>> I think we need more comments describing why checking\n> >>> sync_standby_defined without SyncRepLock is safe here. For example:\n> >>\n> >> Yep, agreed!\n> >>\n> >>> This routine gets called every commit time. So, to check if the\n> >>> synchronous standbys is defined as quick as possible we check\n> >>> WalSndCtl->sync_standbys_defined without acquiring SyncRepLock. Since\n> >>> we make this test unlocked, there's a chance we might fail to notice\n> >>> that it has been turned off and continue processing.\n> >>\n> >> Does this really happen? I was thinking that the problem by not taking\n> >> the lock here is that SyncRepWaitForLSN() can see that shared flag after\n> >> SyncRepUpdateSyncStandbysDefined() wakes up all the waiters and\n> >> before it sets the flag to false. 
Then if SyncRepWaitForLSN() adds itself\n> >> into the wait queue because the flag was true, without lock, it may keep\n> >> sleeping infinitely.\n> >\n> > I think that because a backend does the following check after\n> > acquiring SyncRepLock, in that case, once the backend has taken\n> > SyncRepLock it can see that sync_standbys_defined is false and return.\n>\n> Yes, but the backend can see that sync_standby_defined indicates false\n> whether holding SyncRepLock or not, after the checkpointer sets it to false.\n>\n> > But you meant that we do both checks without SyncRepLock?\n>\n> Maybe No. The change that the latest patch provides should be applied, I think.\n> That is, sync_standbys_defined should be checked without lock at first, then\n> only if it's true, it should be checked again with lock.\n\nYes. My understanding is the same.\n\nAfter applying your patch, SyncRepWaitForLSN() is going to become\nsomething like:\n\n /*\n * Fast exit if user has not requested sync replication, or there are no\n * sync replication standby names defined.\n */\n if (!SyncRepRequested() ||\n !((volatile WalSndCtlData *) WalSndCtl)->sync_standbys_defined)\n return;\n\n Assert(SHMQueueIsDetached(&(MyProc->syncRepLinks)));\n Assert(WalSndCtl != NULL);\n\n LWLockAcquire(SyncRepLock, LW_EXCLUSIVE);\n Assert(MyProc->syncRepState == SYNC_REP_NOT_WAITING);\n\n /*\n * We don't wait for sync rep if WalSndCtl->sync_standbys_defined is not\n * set. See SyncRepUpdateSyncStandbysDefined.\n *\n * Also check that the standby hasn't already replied. 
Unlikely race\n * condition but we'll be fetching that cache line anyway so it's likely\n * to be a low cost check.\n */\n if (!WalSndCtl->sync_standbys_defined ||\n lsn <= WalSndCtl->lsn[mode])\n {\n LWLockRelease(SyncRepLock);\n return;\n }\n\n /*\n * Set our waitLSN so WALSender will know when to wake us, and add\n * ourselves to the queue.\n */\n MyProc->waitLSN = lsn;\n MyProc->syncRepState = SYNC_REP_WAITING;\n SyncRepQueueInsert(mode);\n Assert(SyncRepQueueIsOrderedByLSN(mode));\n LWLockRelease(SyncRepLock);\n\nThere are two checks of sync_standbys_defined. The first check is\nperformed without SyncRepLock and the second check is performed with\nSyncRepLock. That's what you and I are expecting. Right?\n\n>\n> ISTM that basically SyncRepLock is used in SyncRepWaitForLSN() and\n> SyncRepUpdateSyncStandbysDefined() to make operation on the queue\n> and enabling sync_standbys_defined atomic. Without lock, the issue that\n> the comment in SyncRepUpdateSyncStandbysDefined() explains would\n> happen. That is, the backend may keep waiting infinitely as follows.\n>\n\nLet me think through the following sequence after applying your changes:\n\n> 1. checkpointer calls SyncRepUpdateSyncStandbysDefined()\n> 2. checkpointer sees that the flag indicates true but the config indicates false\n> 3. checkpointer takes lock and wakes up all the waiters\n> 4. backend calls SyncRepWaitForLSN() can see that the flag indicates true\n\nYes, I suppose this is the first check of sync_standbys_defined.\n\nAnd before the second check, backend tries to acquire SyncRepLock but\nsince the lock is already being held by checkpointer it must wait.\n\n> 5. checkpointer sets the flag to false and releases the lock\n\nAfter checkpointer releases the lock, the backend is woken up.\n\n> 6. 
backend adds itself to the queue and waits until it's woken up, but will not happen immediately\n\nThe backend sees that the flag has been false at the second check, so it returns.\n\nIf we didn't acquire SyncRepLock even for the second check I think the\nbackend would keep waiting infinitely as you mentioned.\n\n>\n> So after the backend sees that the flag indicates true without lock,\n> it must check the flag again with lock immediately without operating\n> the queue. If this my understanding is right, I was thinking that\n> the comment should mention these things.\n\nI think that's right. I was going to describe why we do the first\ncheck without SyncRepLock and why it is safe but it seems to me that\nthese things you mentioned are related to the second check, if I'm not\nmissing something.\n\n>\n> > /*\n> > * We don't wait for sync rep if WalSndCtl->sync_standbys_defined is not\n> > * set. See SyncRepUpdateSyncStandbysDefined.\n> > *\n> > * Also check that the standby hasn't already replied. Unlikely race\n> > * condition but we'll be fetching that cache line anyway so it's likely\n> > * to be a low cost check.\n> > */\n> > if (!WalSndCtl->sync_standbys_defined ||\n> > lsn <= WalSndCtl->lsn[mode])\n> > {\n> > LWLockRelease(SyncRepLock);\n> > return;\n> > }\n> >\n> >>\n> >>> But since the\n> >>> subsequent check will check it again while holding SyncRepLock, it's\n> >>> no problem. Similarly even if we fail to notice that it has been\n> >>> turned on\n> >> Is this true? ISTM that after SyncRepUpdateSyncStandbysDefined()\n> >> sets the flag to true, SyncRepWaitForLSN() basically doesn't seem\n> >> to fail to notice that. No?\n> >\n> > What I wanted to say is, in the current code, while the checkpointer\n> > process is holding SyncRepLock to turn off sync_standbys_defined,\n> > backends who reach SyncRepWaitForLSN() wait for the lock. 
Then, after\n> > the checkpointer process releases SyncRepLock these backend can\n> > enqueue themselves to the wait queue because they can see that\n> > sync_standbys_defined is turned on.\n>\n> In this case, since the checkpointer turned the flag off while holding\n> the lock, the backend sees that the flag is turned off, and doesn't\n> enqueue itself. No?\n\nOops, I made a mistake here. It should be \"while the checkpointer process\nis holding SyncRepLock to turn *on* sync_standbys_defined, ...\".\n\n>\n> > On the other hand if we do the\n> > check without SyncRepLock, backends who reach SyncRepWaitForLSN() will\n> > return instead of waiting, in spite of checkpointer process being\n> > turning on sync_standbys_defined. Which means these backends are\n> > failing to notice that it has been turned on, I thought.\n>\n> No. Or I'm missing something... In this case, the backend sees that\n> the flag is turned on without lock since checkpointer turned it on.\n> So you're thinking the following. Right?\n>\n> 1. sync_standbys_defined flag is false\n> 2. checkpointer takes the lock and turns the flag on\n> 3. backend sees the flag\n> 4. checkpointer releases the lock\n>\n> In #3, the flag indicates true, I think. But you think it's false?\n\nI meant the backends who reach SyncRepWaitForLSN() while the checkpointer\nhas acquired the lock but has not yet turned the flag on, described at\nstep #3 in the following sequence. Such backends will wait for the\nlock in the current code, but after applying the patch they return\nquickly. So what I'm thinking is:\n\n1. sync_standbys_defined flag is false\n2. checkpointer takes the lock\n3. backend sees the flag, and returns as it's still false\n4. checkpointer turns the flag on\n5. 
checkpointer releases the lock\n\nIf a backend reaches SyncRepWaitForLSN() between #4 and #5 it will\nwait for the lock and then enqueue itself after acquiring the lock.\nBut such behavior is not changed before and after applying the patch.\n\nRegards,\n\n-- \nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Sat, 11 Apr 2020 09:30:30 +0900", "msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: SyncRepLock acquired exclusively in default configuration" }, { "msg_contents": "On Sat, 11 Apr 2020 at 09:30, Masahiko Sawada\n<masahiko.sawada@2ndquadrant.com> wrote:\n>\n> On Fri, 10 Apr 2020 at 21:52, Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n> >\n> >\n> >\n> > On 2020/04/10 20:56, Masahiko Sawada wrote:\n> > > On Fri, 10 Apr 2020 at 18:57, Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n> > >>\n> > >>\n> > >>\n> > >> On 2020/04/10 14:11, Masahiko Sawada wrote:\n> > >>> On Fri, 10 Apr 2020 at 13:20, Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n> > >>>>\n> > >>>>\n> > >>>>\n> > >>>> On 2020/04/08 3:01, Ashwin Agrawal wrote:\n> > >>>>>\n> > >>>>> On Mon, Apr 6, 2020 at 2:14 PM Andres Freund <andres@anarazel.de <mailto:andres@anarazel.de>> wrote:\n> > >>>>>\n> > >>>>> > How about we change it to this ?\n> > >>>>>\n> > >>>>> Hm. Better. But I think it might need at least a compiler barrier /\n> > >>>>> volatile memory load? Unlikely here, but otherwise the compiler could\n> > >>>>> theoretically just stash the variable somewhere locally (it's not likely\n> > >>>>> to be a problem because it'd not be long ago that we acquired an lwlock,\n> > >>>>> which is a full barrier).\n> > >>>>>\n> > >>>>>\n> > >>>>> That's the part, I am not fully sure about. 
But reading the comment above SyncRepUpdateSyncStandbysDefined(), it seems fine.\n> > >>>>>\n> > >>>>> > Bring back the check which existed based on GUC but instead of just blindly\n> > >>>>> > returning based on just GUC not being set, check\n> > >>>>> > WalSndCtl->sync_standbys_defined. Thoughts?\n> > >>>>>\n> > >>>>> Hm. Is there any reason not to just check\n> > >>>>> WalSndCtl->sync_standbys_defined? rather than both !SyncStandbysDefined()\n> > >>>>> and WalSndCtl->sync_standbys_defined?\n> > >>>>>\n> > >>>>>\n> > >>>>> Agree, just checking for WalSndCtl->sync_standbys_defined seems fine.\n> > >>>>\n> > >>>> So the consensus is something like the following? Patch attached.\n> > >>>>\n> > >>>> /*\n> > >>>> - * Fast exit if user has not requested sync replication.\n> > >>>> + * Fast exit if user has not requested sync replication, or there are no\n> > >>>> + * sync replication standby names defined.\n> > >>>> */\n> > >>>> - if (!SyncRepRequested())\n> > >>>> + if (!SyncRepRequested() ||\n> > >>>> + !((volatile WalSndCtlData *) WalSndCtl)->sync_standbys_defined)\n> > >>>> return;\n> > >>>>\n> > >>>\n> > >>> I think we need more comments describing why checking\n> > >>> sync_standby_defined without SyncRepLock is safe here. For example:\n> > >>\n> > >> Yep, agreed!\n> > >>\n> > >>> This routine gets called every commit time. So, to check if the\n> > >>> synchronous standbys is defined as quick as possible we check\n> > >>> WalSndCtl->sync_standbys_defined without acquiring SyncRepLock. Since\n> > >>> we make this test unlocked, there's a change we might fail to notice\n> > >>> that it has been turned off and continue processing.\n> > >>\n> > >> Does this really happen? I was thinking that the problem by not taking\n> > >> the lock here is that SyncRepWaitForLSN() can see that shared flag after\n> > >> SyncRepUpdateSyncStandbysDefined() wakes up all the waiters and\n> > >> before it sets the flag to false. 
Then if SyncRepWaitForLSN() adds itself\n> > >> into the wait queue becaues the flag was true, without lock, it may keep\n> > >> sleeping infinitely.\n> > >\n> > > I think that because a backend does the following check after\n> > > acquiring SyncRepLock, in that case, once the backend has taken\n> > > SyncRepLock it can see that sync_standbys_defined is false and return.\n> >\n> > Yes, but the backend can see that sync_standby_defined indicates false\n> > whether holding SyncRepLock or not, after the checkpointer sets it to false.\n> >\n> > > But you meant that we do both checks without SyncRepLock?\n> >\n> > Maybe No. The change that the latest patch provides should be applied, I think.\n> > That is, sync_standbys_defined should be check without lock at first, then\n> > only if it's true, it should be checked again with lock.\n>\n> Yes. My understanding is the same.\n>\n> After applying your patch, SyncRepWaitForLSN() is going to become\n> something like:\n>\n> /*\n> * Fast exit if user has not requested sync replication, or there are no\n> * sync replication standby names defined.\n> */\n> if (!SyncRepRequested() ||\n> !((volatile WalSndCtlData *) WalSndCtl)->sync_standbys_defined)\n> return;\n>\n> Assert(SHMQueueIsDetached(&(MyProc->syncRepLinks)));\n> Assert(WalSndCtl != NULL);\n>\n> LWLockAcquire(SyncRepLock, LW_EXCLUSIVE);\n> Assert(MyProc->syncRepState == SYNC_REP_NOT_WAITING);\n>\n> /*\n> * We don't wait for sync rep if WalSndCtl->sync_standbys_defined is not\n> * set. See SyncRepUpdateSyncStandbysDefined.\n> *\n> * Also check that the standby hasn't already replied. 
Unlikely race\n> * condition but we'll be fetching that cache line anyway so it's likely\n> * to be a low cost check.\n> */\n> if (!WalSndCtl->sync_standbys_defined ||\n> lsn <= WalSndCtl->lsn[mode])\n> {\n> LWLockRelease(SyncRepLock);\n> return;\n> }\n>\n> /*\n> * Set our waitLSN so WALSender will know when to wake us, and add\n> * ourselves to the queue.\n> */\n> MyProc->waitLSN = lsn;\n> MyProc->syncRepState = SYNC_REP_WAITING;\n> SyncRepQueueInsert(mode);\n> Assert(SyncRepQueueIsOrderedByLSN(mode));\n> LWLockRelease(SyncRepLock);\n>\n> There are two checks of sync_standbys_defined. The first check is\n> performed without SyncRepLock and the second check is performed with\n> SyncRepLock. That's what you and I are expecting. Right?\n>\n> >\n> > ISTM that basically SyncRepLock is used in SyncRepWaitForLSN() and\n> > SyncRepUpdateSyncStandbysDefined() to make operation on the queue\n> > and enabling sync_standbys_defined atomic. Without lock, the issue that\n> > the comment in SyncRepUpdateSyncStandbysDefined() explains would\n> > happen. That is, the backend may keep waiting infinitely as follows.\n> >\n>\n> Let me think the following sequence after applying your changes:\n>\n> > 1. checkpointer calls SyncRepUpdateSyncStandbysDefined()\n> > 2. checkpointer sees that the flag indicates true but the config indicates false\n> > 3. checkpointer takes lock and wakes up all the waiters\n> > 4. backend calls SyncRepWaitForLSN() can see that the flag indicates true\n>\n> Yes, I suppose this is the first check of sync_standbys_defined.\n>\n> And before the second check, backend tries to acquire SyncRepLock but\n> since the lock is already being held by checkohpointer it must wait.\n>\n> > 5. checkpointer sets the flag to false and releases the lock\n>\n> After checkpointer release the lock, the backend is woken up.\n>\n> > 6. 
backend adds itself to the queue and wait until it's waken up, but will not happen immediately\n>\n> The backend sees that the flag has been false at the second check, so return.\n>\n> If we didn't acquire SyncRepLock even for the second check I think the\n> backend would keep waiting infinitely as you mentioned.\n>\n> >\n> > So after the backend sees that the flag indicates true without lock,\n> > it must check the flag again with lock immediately without operating\n> > the queue. If this my understanding is right, I was thinking that\n> > the comment should mention these things.\n>\n> I think that's right. I was going to describe why we do the first\n> check without SyncRepLock and why it is safe but it seems to me that\n> these things you mentioned are related to the second check, if I'm not\n> missing something.\n>\n> >\n> > > /*\n> > > * We don't wait for sync rep if WalSndCtl->sync_standbys_defined is not\n> > > * set. See SyncRepUpdateSyncStandbysDefined.\n> > > *\n> > > * Also check that the standby hasn't already replied. Unlikely race\n> > > * condition but we'll be fetching that cache line anyway so it's likely\n> > > * to be a low cost check.\n> > > */\n> > > if (!WalSndCtl->sync_standbys_defined ||\n> > > lsn <= WalSndCtl->lsn[mode])\n> > > {\n> > > LWLockRelease(SyncRepLock);\n> > > return;\n> > > }\n> > >\n> > >>\n> > >>> But since the\n> > >>> subsequent check will check it again while holding SyncRepLock, it's\n> > >>> no problem. Similarly even if we fail to notice that it has been\n> > >>> turned on\n> > >> Is this true? ISTM that after SyncRepUpdateSyncStandbysDefined()\n> > >> sets the flag to true, SyncRepWaitForLSN() basically doesn't seem\n> > >> to fail to notice that. No?\n> > >\n> > > What I wanted to say is, in the current code, while the checkpointer\n> > > process is holding SyncRepLock to turn off sync_standbys_defined,\n> > > backends who reach SyncRepWaitForLSN() wait for the lock. 
Then, after\n> > > the checkpointer process releases SyncRepLock these backend can\n> > > enqueue themselves to the wait queue because they can see that\n> > > sync_standbys_defined is turned on.\n> >\n> > In this case, since the checkpointer turned the flag off while holding\n> > the lock, the backend sees that the flag is turned off, and doesn't\n> > enqueue itself. No?\n>\n> Oops, I had mistake here. It should be \"while the checkpointer process\n> is holding SyncRepLock to turn *on* sync_standbys_defined, ...\".\n>\n> >\n> > > On the other hand if we do the\n> > > check without SyncRepLock, backends who reach SyncRepWaitForLSN() will\n> > > return instead of waiting, in spite of checkpointer process being\n> > > turning on sync_standbys_defined. Which means these backends are\n> > > failing to notice that it has been turned on, I thought.\n> >\n> > No. Or I'm missing something... In this case, the backend sees that\n> > the flag is turned on without lock since checkpointer turned it on.\n> > So you're thinking the following. Right?\n> >\n> > 1. sync_standbys_defined flag is false\n> > 2. checkpointer takes the lock and turns the flag on\n> > 3. backend sees the flag\n> > 4. checkpointer releases the lock\n> >\n> > In #3, the flag indicates true, I think. But you think it's false?\n>\n> I meant the backends who reach SyncRepLock() while checkpointer is at\n> after acquiring the lock but before turning the flag on, described at\n> #3 step in the following sequence. Such backends will wait for the\n> lock in the current code, but after applying the patch they return\n> quickly. So what I'm thinking is:\n>\n> 1. sync_standbys_defined flag is false\n> 2. checkpointer takes the lock\n> 3. backend sees the flag, and return as it's still false\n> 4. checkpointer turns the flag on\n> 5. 
checkpointer releases the lock\n>\n> If a backend reaches SyncRepWaitForLSN() between #4 and #5 it will\n> wait for the lock and then enqueue itself after acquiring the lock.\n> But such behavior is not changed before and after applying the patch.\n>\n\nFujii-san, I think we agree on how to fix this issue and on the patch\nyou proposed so please add your comments.\n\nThis item is for PG14, right? If so I'd like to add this item to the\nnext commit fest.\n\nRegards,\n\n--\nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Tue, 19 May 2020 11:41:12 +0900", "msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: SyncRepLock acquired exclusively in default configuration" }, { "msg_contents": "On Mon, May 18, 2020 at 7:41 PM Masahiko Sawada <\nmasahiko.sawada@2ndquadrant.com> wrote:\n\n> This item is for PG14, right? If so I'd like to add this item to the\n> next commit fest.\n>\n\nSure, add it to commit fest.\nSeems though it should be backpatched to relevant branches as well.\n", "msg_date": "Tue, 19 May 2020 08:56:13 -0700", "msg_from": "Ashwin Agrawal <aagrawal@pivotal.io>", "msg_from_op": false, "msg_subject": "Re: SyncRepLock acquired exclusively in default configuration" }, { "msg_contents": "On Tue, May 19, 2020 at 08:56:13AM -0700, Ashwin Agrawal wrote:\n> Sure, add it to commit fest.\n> Seems though it should be backpatched to relevant branches as well.\n\nIt does not seem to be listed yet. 
Are you planning to add it under\nthe section for bug fixes?\n--\nMichael", "msg_date": "Wed, 20 May 2020 14:38:38 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: SyncRepLock acquired exclusively in default configuration" }, { "msg_contents": "\n\nOn 2020/05/19 11:41, Masahiko Sawada wrote:\n> On Sat, 11 Apr 2020 at 09:30, Masahiko Sawada\n> <masahiko.sawada@2ndquadrant.com> wrote:\n>>\n>> On Fri, 10 Apr 2020 at 21:52, Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>>>\n>>>\n>>>\n>>> On 2020/04/10 20:56, Masahiko Sawada wrote:\n>>>> On Fri, 10 Apr 2020 at 18:57, Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>>>>>\n>>>>>\n>>>>>\n>>>>> On 2020/04/10 14:11, Masahiko Sawada wrote:\n>>>>>> On Fri, 10 Apr 2020 at 13:20, Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>>>>>>>\n>>>>>>>\n>>>>>>>\n>>>>>>> On 2020/04/08 3:01, Ashwin Agrawal wrote:\n>>>>>>>>\n>>>>>>>> On Mon, Apr 6, 2020 at 2:14 PM Andres Freund <andres@anarazel.de <mailto:andres@anarazel.de>> wrote:\n>>>>>>>>\n>>>>>>>> > How about we change it to this ?\n>>>>>>>>\n>>>>>>>> Hm. Better. But I think it might need at least a compiler barrier /\n>>>>>>>> volatile memory load? Unlikely here, but otherwise the compiler could\n>>>>>>>> theoretically just stash the variable somewhere locally (it's not likely\n>>>>>>>> to be a problem because it'd not be long ago that we acquired an lwlock,\n>>>>>>>> which is a full barrier).\n>>>>>>>>\n>>>>>>>>\n>>>>>>>> That's the part, I am not fully sure about. But reading the comment above SyncRepUpdateSyncStandbysDefined(), it seems fine.\n>>>>>>>>\n>>>>>>>> > Bring back the check which existed based on GUC but instead of just blindly\n>>>>>>>> > returning based on just GUC not being set, check\n>>>>>>>> > WalSndCtl->sync_standbys_defined. Thoughts?\n>>>>>>>>\n>>>>>>>> Hm. Is there any reason not to just check\n>>>>>>>> WalSndCtl->sync_standbys_defined? 
rather than both !SyncStandbysDefined()\n>>>>>>>> and WalSndCtl->sync_standbys_defined?\n>>>>>>>>\n>>>>>>>>\n>>>>>>>> Agree, just checking for WalSndCtl->sync_standbys_defined seems fine.\n>>>>>>>\n>>>>>>> So the consensus is something like the following? Patch attached.\n>>>>>>>\n>>>>>>> /*\n>>>>>>> - * Fast exit if user has not requested sync replication.\n>>>>>>> + * Fast exit if user has not requested sync replication, or there are no\n>>>>>>> + * sync replication standby names defined.\n>>>>>>> */\n>>>>>>> - if (!SyncRepRequested())\n>>>>>>> + if (!SyncRepRequested() ||\n>>>>>>> + !((volatile WalSndCtlData *) WalSndCtl)->sync_standbys_defined)\n>>>>>>> return;\n>>>>>>>\n>>>>>>\n>>>>>> I think we need more comments describing why checking\n>>>>>> sync_standby_defined without SyncRepLock is safe here. For example:\n>>>>>\n>>>>> Yep, agreed!\n>>>>>\n>>>>>> This routine gets called every commit time. So, to check if the\n>>>>>> synchronous standbys is defined as quick as possible we check\n>>>>>> WalSndCtl->sync_standbys_defined without acquiring SyncRepLock. Since\n>>>>>> we make this test unlocked, there's a change we might fail to notice\n>>>>>> that it has been turned off and continue processing.\n>>>>>\n>>>>> Does this really happen? I was thinking that the problem by not taking\n>>>>> the lock here is that SyncRepWaitForLSN() can see that shared flag after\n>>>>> SyncRepUpdateSyncStandbysDefined() wakes up all the waiters and\n>>>>> before it sets the flag to false. 
Then if SyncRepWaitForLSN() adds itself\n>>>>> into the wait queue becaues the flag was true, without lock, it may keep\n>>>>> sleeping infinitely.\n>>>>\n>>>> I think that because a backend does the following check after\n>>>> acquiring SyncRepLock, in that case, once the backend has taken\n>>>> SyncRepLock it can see that sync_standbys_defined is false and return.\n>>>\n>>> Yes, but the backend can see that sync_standby_defined indicates false\n>>> whether holding SyncRepLock or not, after the checkpointer sets it to false.\n>>>\n>>>> But you meant that we do both checks without SyncRepLock?\n>>>\n>>> Maybe No. The change that the latest patch provides should be applied, I think.\n>>> That is, sync_standbys_defined should be check without lock at first, then\n>>> only if it's true, it should be checked again with lock.\n>>\n>> Yes. My understanding is the same.\n>>\n>> After applying your patch, SyncRepWaitForLSN() is going to become\n>> something like:\n>>\n>> /*\n>> * Fast exit if user has not requested sync replication, or there are no\n>> * sync replication standby names defined.\n>> */\n>> if (!SyncRepRequested() ||\n>> !((volatile WalSndCtlData *) WalSndCtl)->sync_standbys_defined)\n>> return;\n>>\n>> Assert(SHMQueueIsDetached(&(MyProc->syncRepLinks)));\n>> Assert(WalSndCtl != NULL);\n>>\n>> LWLockAcquire(SyncRepLock, LW_EXCLUSIVE);\n>> Assert(MyProc->syncRepState == SYNC_REP_NOT_WAITING);\n>>\n>> /*\n>> * We don't wait for sync rep if WalSndCtl->sync_standbys_defined is not\n>> * set. See SyncRepUpdateSyncStandbysDefined.\n>> *\n>> * Also check that the standby hasn't already replied. 
Unlikely race\n>> * condition but we'll be fetching that cache line anyway so it's likely\n>> * to be a low cost check.\n>> */\n>> if (!WalSndCtl->sync_standbys_defined ||\n>> lsn <= WalSndCtl->lsn[mode])\n>> {\n>> LWLockRelease(SyncRepLock);\n>> return;\n>> }\n>>\n>> /*\n>> * Set our waitLSN so WALSender will know when to wake us, and add\n>> * ourselves to the queue.\n>> */\n>> MyProc->waitLSN = lsn;\n>> MyProc->syncRepState = SYNC_REP_WAITING;\n>> SyncRepQueueInsert(mode);\n>> Assert(SyncRepQueueIsOrderedByLSN(mode));\n>> LWLockRelease(SyncRepLock);\n>>\n>> There are two checks of sync_standbys_defined. The first check is\n>> performed without SyncRepLock and the second check is performed with\n>> SyncRepLock. That's what you and I are expecting. Right?\n>>\n>>>\n>>> ISTM that basically SyncRepLock is used in SyncRepWaitForLSN() and\n>>> SyncRepUpdateSyncStandbysDefined() to make operation on the queue\n>>> and enabling sync_standbys_defined atomic. Without lock, the issue that\n>>> the comment in SyncRepUpdateSyncStandbysDefined() explains would\n>>> happen. That is, the backend may keep waiting infinitely as follows.\n>>>\n>>\n>> Let me think the following sequence after applying your changes:\n>>\n>>> 1. checkpointer calls SyncRepUpdateSyncStandbysDefined()\n>>> 2. checkpointer sees that the flag indicates true but the config indicates false\n>>> 3. checkpointer takes lock and wakes up all the waiters\n>>> 4. backend calls SyncRepWaitForLSN() can see that the flag indicates true\n>>\n>> Yes, I suppose this is the first check of sync_standbys_defined.\n>>\n>> And before the second check, backend tries to acquire SyncRepLock but\n>> since the lock is already being held by checkohpointer it must wait.\n>>\n>>> 5. checkpointer sets the flag to false and releases the lock\n>>\n>> After checkpointer release the lock, the backend is woken up.\n>>\n>>> 6. 
backend adds itself to the queue and wait until it's waken up, but will not happen immediately\n>>\n>> The backend sees that the flag has been false at the second check, so return.\n>>\n>> If we didn't acquire SyncRepLock even for the second check I think the\n>> backend would keep waiting infinitely as you mentioned.\n>>\n>>>\n>>> So after the backend sees that the flag indicates true without lock,\n>>> it must check the flag again with lock immediately without operating\n>>> the queue. If this my understanding is right, I was thinking that\n>>> the comment should mention these things.\n>>\n>> I think that's right. I was going to describe why we do the first\n>> check without SyncRepLock and why it is safe but it seems to me that\n>> these things you mentioned are related to the second check, if I'm not\n>> missing something.\n>>\n>>>\n>>>> /*\n>>>> * We don't wait for sync rep if WalSndCtl->sync_standbys_defined is not\n>>>> * set. See SyncRepUpdateSyncStandbysDefined.\n>>>> *\n>>>> * Also check that the standby hasn't already replied. Unlikely race\n>>>> * condition but we'll be fetching that cache line anyway so it's likely\n>>>> * to be a low cost check.\n>>>> */\n>>>> if (!WalSndCtl->sync_standbys_defined ||\n>>>> lsn <= WalSndCtl->lsn[mode])\n>>>> {\n>>>> LWLockRelease(SyncRepLock);\n>>>> return;\n>>>> }\n>>>>\n>>>>>\n>>>>>> But since the\n>>>>>> subsequent check will check it again while holding SyncRepLock, it's\n>>>>>> no problem. Similarly even if we fail to notice that it has been\n>>>>>> turned on\n>>>>> Is this true? ISTM that after SyncRepUpdateSyncStandbysDefined()\n>>>>> sets the flag to true, SyncRepWaitForLSN() basically doesn't seem\n>>>>> to fail to notice that. No?\n>>>>\n>>>> What I wanted to say is, in the current code, while the checkpointer\n>>>> process is holding SyncRepLock to turn off sync_standbys_defined,\n>>>> backends who reach SyncRepWaitForLSN() wait for the lock. 
Then, after\n>>>> the checkpointer process releases SyncRepLock these backend can\n>>>> enqueue themselves to the wait queue because they can see that\n>>>> sync_standbys_defined is turned on.\n>>>\n>>> In this case, since the checkpointer turned the flag off while holding\n>>> the lock, the backend sees that the flag is turned off, and doesn't\n>>> enqueue itself. No?\n>>\n>> Oops, I had mistake here. It should be \"while the checkpointer process\n>> is holding SyncRepLock to turn *on* sync_standbys_defined, ...\".\n>>\n>>>\n>>>> On the other hand if we do the\n>>>> check without SyncRepLock, backends who reach SyncRepWaitForLSN() will\n>>>> return instead of waiting, in spite of checkpointer process being\n>>>> turning on sync_standbys_defined. Which means these backends are\n>>>> failing to notice that it has been turned on, I thought.\n>>>\n>>> No. Or I'm missing something... In this case, the backend sees that\n>>> the flag is turned on without lock since checkpointer turned it on.\n>>> So you're thinking the following. Right?\n>>>\n>>> 1. sync_standbys_defined flag is false\n>>> 2. checkpointer takes the lock and turns the flag on\n>>> 3. backend sees the flag\n>>> 4. checkpointer releases the lock\n>>>\n>>> In #3, the flag indicates true, I think. But you think it's false?\n>>\n>> I meant the backends who reach SyncRepLock() while checkpointer is at\n>> after acquiring the lock but before turning the flag on, described at\n>> #3 step in the following sequence. Such backends will wait for the\n>> lock in the current code, but after applying the patch they return\n>> quickly. So what I'm thinking is:\n>>\n>> 1. sync_standbys_defined flag is false\n>> 2. checkpointer takes the lock\n>> 3. backend sees the flag, and return as it's still false\n>> 4. checkpointer turns the flag on\n>> 5. 
checkpointer releases the lock\n>>\n>> If a backend reaches SyncRepWaitForLSN() between #4 and #5 it will\n>> wait for the lock and then enqueue itself after acquiring the lock.\n>> But such behavior is not changed before and after applying the patch.\n>>\n> \n> Fujii-san, I think we agree on how to fix this issue and on the patch\n> you proposed so please add your comments.\n\nSorry for the late reply...\n\nRegarding how to fix, don't we need a memory barrier when reading\nsync_standbys_defined? Without that, after SyncRepUpdateSyncStandbysDefined()\nupdates it to true, SyncRepWaitForLSN() can see the previous value,\ni.e., false, and then exit out of the function. Is this right?\nIf this is right, we need a memory barrier to avoid this issue?\n\n\n> This item is for PG14, right?\n\nYes!\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Thu, 9 Jul 2020 19:52:01 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: SyncRepLock acquired exclusively in default configuration" }, { "msg_contents": "The proposed fix looks good, it resolves the lock contention problem as intended. +1 from my side.\r\n\r\n> On 09-Jul-2020, at 4:22 PM, Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\r\n> \r\n> \r\n> Regarding how to fix, don't we need a memory barrier when reading\r\n> sync_standbys_defined? Without that, after SyncRepUpdateSyncStandbysDefined()\r\n> updates it to true, SyncRepWaitForLSN() can see the previous value,\r\n> i.e., false, and then exit out of the function. Is this right?\r\n> If this is right, we need a memory barrier to avoid this issue?\r\n> \r\n\r\nThere is no out-of-order execution hazard in the scenario you are describing, memory barriers don’t seem to fit. Using locks to synchronise checkpointer process and a committing backend process is the right way. 
We have made a conscious decision to bypass the lock, which looks correct in this case.\r\n\r\nAs an aside, there is a small (?) window where a change to synchronous_standby_names GUC is partially propagated among committing backends, checkpointer and walsender. Such a window may result in walsender declaring a standby as synchronous while a commit backend fails to wait for it in SyncRepWaitForLSN. The root cause is walsender uses sync_standby_priority, a per-walsender variable to tell if a standby is synchronous. It is updated when walsender processes a config change. Whereas sync_standbys_defined, a variable updated by checkpointer, is used by committing backends to determine if they need to wait. If checkpointer is busy flushing buffers, it may take longer than walsender to reflect a change in sync_standbys_defined. This is a low impact problem, should be ok to live with it.\r\n\r\nAsim\r\n\r\n", "msg_date": "Tue, 11 Aug 2020 11:55:05 +0000", "msg_from": "Asim Praveen <pasim@vmware.com>", "msg_from_op": false, "msg_subject": "Re: SyncRepLock acquired exclusively in default configuration" }, { "msg_contents": "On Tue, Aug 11, 2020 at 7:55 AM Asim Praveen <pasim@vmware.com> wrote:\n> There is no out-of-order execution hazard in the scenario you are describing, memory barriers don’t seem to fit. Using locks to synchronise checkpointer process and a committing backend process is the right way. We have made a conscious decision to bypass the lock, which looks correct in this case.\n\nYeah, I am not immediately seeing why a memory barrier would help anything here.\n\n> As an aside, there is a small (?) window where a change to synchronous_standby_names GUC is partially propagated among committing backends, checkpointer and walsender. Such a window may result in walsender declaring a standby as synchronous while a commit backend fails to wait for it in SyncRepWaitForLSN. 
The root cause is walsender uses sync_standby_priority, a per-walsender variable to tell if a standby is synchronous. It is updated when walsender processes a config change. Whereas sync_standbys_defined, a variable updated by checkpointer, is used by committing backends to determine if they need to wait. If checkpointer is busy flushing buffers, it may take longer than walsender to reflect a change in sync_standbys_defined. This is a low impact problem, should be ok to live with it.\n\nI think this gets to the root of the issue. If we check the flag\nwithout a lock, we might see a slightly stale value. But, considering\nthat there's no particular amount of time within which configuration\nchanges are guaranteed to take effect, maybe that's OK. However, there\nis one potential gotcha here: if the walsender declares the standby to\nbe synchronous, a user can see that, right? So maybe there's this\nproblem: a user sees that the standby is synchronous and expects a\ntransaction committing afterward to provoke a wait, but really it\ndoesn't. Now the user is unhappy, feeling that the system didn't\nperform according to expectations.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Tue, 11 Aug 2020 11:27:16 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: SyncRepLock acquired exclusively in default configuration" }, { "msg_contents": "\r\n\r\n> On 11-Aug-2020, at 8:57 PM, Robert Haas <robertmhaas@gmail.com> wrote:\r\n> \r\n> I think this gets to the root of the issue. If we check the flag\r\n> without a lock, we might see a slightly stale value. But, considering\r\n> that there's no particular amount of time within which configuration\r\n> changes are guaranteed to take effect, maybe that's OK. However, there\r\n> is one potential gotcha here: if the walsender declares the standby to\r\n> be synchronous, a user can see that, right? 
So maybe there's this\r\n> problem: a user sees that the standby is synchronous and expects a\r\n> transaction committing afterward to provoke a wait, but really it\r\n> doesn't. Now the user is unhappy, feeling that the system didn't\r\n> perform according to expectations.\r\n\r\nYes, pg_stat_replication reports a standby in sync as soon as walsender updates priority of the standby to something other than 0.\r\n\r\nThe potential gotcha referred above doesn’t seem too severe. What is the likelihood of someone setting synchronous_standby_names GUC with either “*” or a standby name and then immediately promoting that standby? If the standby is promoted before the checkpointer on master gets a chance to update sync_standbys_defined in shared memory, commits made during this interval on master may not make it to standby. Upon promotion, those commits may be lost.\r\n\r\nAsim", "msg_date": "Wed, 12 Aug 2020 05:06:43 +0000", "msg_from": "Asim Praveen <pasim@vmware.com>", "msg_from_op": false, "msg_subject": "Re: SyncRepLock acquired exclusively in default configuration" }, { "msg_contents": "On Wed, 12 Aug 2020 at 14:06, Asim Praveen <pasim@vmware.com> wrote:\n>\n>\n>\n> > On 11-Aug-2020, at 8:57 PM, Robert Haas <robertmhaas@gmail.com> wrote:\n> >\n> > I think this gets to the root of the issue. If we check the flag\n> > without a lock, we might see a slightly stale value. But, considering\n> > that there's no particular amount of time within which configuration\n> > changes are guaranteed to take effect, maybe that's OK. However, there\n> > is one potential gotcha here: if the walsender declares the standby to\n> > be synchronous, a user can see that, right? So maybe there's this\n> > problem: a user sees that the standby is synchronous and expects a\n> > transaction committing afterward to provoke a wait, but really it\n> > doesn't. 
Now the user is unhappy, feeling that the system didn't\n> > perform according to expectations.\n>\n> Yes, pg_stat_replication reports a standby in sync as soon as walsender updates priority of the standby to something other than 0.\n>\n> The potential gotcha referred above doesn’t seem too severe. What is the likelihood of someone setting synchronous_standby_names GUC with either “*” or a standby name and then immediately promoting that standby? If the standby is promoted before the checkpointer on master gets a chance to update sync_standbys_defined in shared memory, commits made during this interval on master may not make it to standby. Upon promotion, those commits may be lost.\n\nI think that if the standby is quite behind the primary and in case of\nthe primary crashes, the likelihood of losing commits might get\nhigher. The user can see the standby became synchronous standby via\npg_stat_replication but commit completes without a wait because the\ncheckpointer doesn't update sync_standbys_defined yet. If the primary\ncrashes before standby catching up and the user does failover, the\ncommitted transaction will be lost, even though the user expects that\ntransaction commit has been replicated to the standby synchronously.\nAnd this can happen even without the patch, right?\n\nRegards,\n\n-- \nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Wed, 12 Aug 2020 15:32:39 +0900", "msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: SyncRepLock acquired exclusively in default configuration" }, { "msg_contents": "\n\nOn 2020/08/12 15:32, Masahiko Sawada wrote:\n> On Wed, 12 Aug 2020 at 14:06, Asim Praveen <pasim@vmware.com> wrote:\n>>\n>>\n>>\n>>> On 11-Aug-2020, at 8:57 PM, Robert Haas <robertmhaas@gmail.com> wrote:\n>>>\n>>> I think this gets to the root of the issue. 
If we check the flag\n>>> without a lock, we might see a slightly stale value. But, considering\n>>> that there's no particular amount of time within which configuration\n>>> changes are guaranteed to take effect, maybe that's OK. However, there\n>>> is one potential gotcha here: if the walsender declares the standby to\n>>> be synchronous, a user can see that, right? So maybe there's this\n>>> problem: a user sees that the standby is synchronous and expects a\n>>> transaction committing afterward to provoke a wait, but really it\n>>> doesn't. Now the user is unhappy, feeling that the system didn't\n>>> perform according to expectations.\n>>\n>> Yes, pg_stat_replication reports a standby in sync as soon as walsender updates priority of the standby to something other than 0.\n>>\n>> The potential gotcha referred above doesn’t seem too severe. What is the likelihood of someone setting synchronous_standby_names GUC with either “*” or a standby name and then immediately promoting that standby? If the standby is promoted before the checkpointer on master gets a chance to update sync_standbys_defined in shared memory, commits made during this interval on master may not make it to standby. Upon promotion, those commits may be lost.\n> \n> I think that if the standby is quite behind the primary and in case of\n> the primary crashes, the likelihood of losing commits might get\n> higher. The user can see the standby became synchronous standby via\n> pg_stat_replication but commit completes without a wait because the\n> checkpointer doesn't update sync_standbys_defined yet. If the primary\n> crashes before standby catching up and the user does failover, the\n> committed transaction will be lost, even though the user expects that\n> transaction commit has been replicated to the standby synchronously.\n> And this can happen even without the patch, right?\n\nI think you're right. 
This issue can happen even without the patch.\n\nMaybe we should not mark the standby as \"sync\" whenever sync_standbys_defined\nis false even if synchronous_standby_names is actually set and walsenders have\nalready detected that? Or we need a more aggressive approach;\nmake the checkpointer update sync_standby_priority values of\nall the walsenders? ISTM that the latter looks overkill...\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Wed, 19 Aug 2020 21:41:03 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: SyncRepLock acquired exclusively in default configuration" }, { "msg_contents": "\r\n\r\n> On 12-Aug-2020, at 12:02 PM, Masahiko Sawada <masahiko.sawada@2ndquadrant.com> wrote:\r\n> \r\n> On Wed, 12 Aug 2020 at 14:06, Asim Praveen <pasim@vmware.com> wrote:\r\n>> \r\n>> \r\n>> \r\n>>> On 11-Aug-2020, at 8:57 PM, Robert Haas <robertmhaas@gmail.com> wrote:\r\n>>> \r\n>>> I think this gets to the root of the issue. If we check the flag\r\n>>> without a lock, we might see a slightly stale value. But, considering\r\n>>> that there's no particular amount of time within which configuration\r\n>>> changes are guaranteed to take effect, maybe that's OK. However, there\r\n>>> is one potential gotcha here: if the walsender declares the standby to\r\n>>> be synchronous, a user can see that, right? So maybe there's this\r\n>>> problem: a user sees that the standby is synchronous and expects a\r\n>>> transaction committing afterward to provoke a wait, but really it\r\n>>> doesn't. Now the user is unhappy, feeling that the system didn't\r\n>>> perform according to expectations.\r\n>> \r\n>> Yes, pg_stat_replication reports a standby in sync as soon as walsender updates priority of the standby to something other than 0.\r\n>> \r\n>> The potential gotcha referred above doesn’t seem too severe. 
What is the likelihood of someone setting synchronous_standby_names GUC with either “*” or a standby name and then immediately promoting that standby? If the standby is promoted before the checkpointer on master gets a chance to update sync_standbys_defined in shared memory, commits made during this interval on master may not make it to standby. Upon promotion, those commits may be lost.\r\n> \r\n> I think that if the standby is quite behind the primary and in case of\r\n> the primary crashes, the likelihood of losing commits might get\r\n> higher. The user can see the standby became synchronous standby via\r\n> pg_stat_replication but commit completes without a wait because the\r\n> checkpointer doesn't update sync_standbys_defined yet. If the primary\r\n> crashes before standby catching up and the user does failover, the\r\n> committed transaction will be lost, even though the user expects that\r\n> transaction commit has been replicated to the standby synchronously.\r\n> And this can happen even without the patch, right?\r\n> \r\n\r\nIt is correct that the issue is orthogonal to the patch upthread and I’ve marked\r\nthe commitfest entry as ready-for-committer.\r\n\r\nRegarding the issue described above, the amount by which the standby is lagging\r\nbehind the primary does not affect the severity. A standby’s state will be\r\nreported as “sync” to the user only after the standby has caught up (state ==\r\nWALSNDSTATE_STREAMING). The time it takes for the checkpointer to update the\r\nsync_standbys_defined flag in shared memory is the important factor. 
Once\r\ncheckpointer sets this flag, commits start waiting for the standby (as long as\r\nit is in-sync).\r\n\r\nAsim\r\n\r\n", "msg_date": "Wed, 19 Aug 2020 13:20:46 +0000", "msg_from": "Asim Praveen <pasim@vmware.com>", "msg_from_op": false, "msg_subject": "Re: SyncRepLock acquired exclusively in default configuration" }, { "msg_contents": "At Wed, 19 Aug 2020 13:20:46 +0000, Asim Praveen <pasim@vmware.com> wrote in \n> \n> \n> > On 12-Aug-2020, at 12:02 PM, Masahiko Sawada <masahiko.sawada@2ndquadrant.com> wrote:\n> > \n> > On Wed, 12 Aug 2020 at 14:06, Asim Praveen <pasim@vmware.com> wrote:\n> >> \n> >> \n> >> \n> >>> On 11-Aug-2020, at 8:57 PM, Robert Haas <robertmhaas@gmail.com> wrote:\n> >>> \n> >>> I think this gets to the root of the issue. If we check the flag\n> >>> without a lock, we might see a slightly stale value. But, considering\n> >>> that there's no particular amount of time within which configuration\n> >>> changes are guaranteed to take effect, maybe that's OK. However, there\n> >>> is one potential gotcha here: if the walsender declares the standby to\n> >>> be synchronous, a user can see that, right? So maybe there's this\n> >>> problem: a user sees that the standby is synchronous and expects a\n> >>> transaction committing afterward to provoke a wait, but really it\n> >>> doesn't. Now the user is unhappy, feeling that the system didn't\n> >>> perform according to expectations.\n> >> \n> >> Yes, pg_stat_replication reports a standby in sync as soon as walsender updates priority of the standby to something other than 0.\n> >> \n> >> The potential gotcha referred above doesn’t seem too severe. What is the likelihood of someone setting synchronous_standby_names GUC with either “*” or a standby name and then immediately promoting that standby? If the standby is promoted before the checkpointer on master gets a chance to update sync_standbys_defined in shared memory, commits made during this interval on master may not make it to standby. 
Upon promotion, those commits may be lost.\n> > \n> > I think that if the standby is quite behind the primary and in case of\n> > the primary crashes, the likelihood of losing commits might get\n> > higher. The user can see the standby became synchronous standby via\n> > pg_stat_replication but commit completes without a wait because the\n> > checkpointer doesn't update sync_standbys_defined yet. If the primary\n> > crashes before standby catching up and the user does failover, the\n> > committed transaction will be lost, even though the user expects that\n> > transaction commit has been replicated to the standby synchronously.\n> > And this can happen even without the patch, right?\n> > \n> \n> It is correct that the issue is orthogonal to the patch upthread and I’ve marked\n> the commitfest entry as ready-for-committer.\n\nI find the name of SyncStandbysDefined macro is very confusing with\nthe struct member sync_standbys_defined, but that might be another\nissue..\n\n-\t * Fast exit if user has not requested sync replication.\n+\t * Fast exit if user has not requested sync replication, or there are no\n+\t * sync replication standby names defined.\n\nThis comment sounds like we just do that twice. The reason for the\ncheck is to avoid wasteful exclusive locks on SyncRepLock, or to form\ndouble-checked locking on the variable. I think we should explain that\nhere.\n\n> Regarding the issue described above, the amount by which the standby is lagging\n> behind the primary does not affect the severity. A standby’s state will be\n> reported as “sync” to the user only after the standby has caught up (state ==\n> WALSNDSTATE_STREAMING). The time it takes for the checkpointer to update the\n> sync_standbys_defined flag in shared memory is the important factor. Once\n> checkpointer sets this flag, commits start waiting for the standby (as long as\n> it is in-sync).\n\nCATCHUP state is only entered at replication startup. 
It stays at\nSTREAMING when sync_sby_names is switched from '' to a valid name,\nthus sync_state shows 'sync' even if checkpointer hasn't changed\nsync_standbys_defined. If the standby being switched had a big lag,\nthe chance of losing commits gets larger (up to a certain extent) for\nthe same extent of checkpointer lag.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Thu, 20 Aug 2020 11:29:46 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: SyncRepLock acquired exclusively in default configuration" }, { "msg_contents": "At Wed, 19 Aug 2020 21:41:03 +0900, Fujii Masao <masao.fujii@oss.nttdata.com> wrote in \n> \n> \n> On 2020/08/12 15:32, Masahiko Sawada wrote:\n> > On Wed, 12 Aug 2020 at 14:06, Asim Praveen <pasim@vmware.com> wrote:\n> >>\n> >>\n> >>\n> >>> On 11-Aug-2020, at 8:57 PM, Robert Haas <robertmhaas@gmail.com> wrote:\n> >>>\n> >>> I think this gets to the root of the issue. 
What is\n> >> the likelihood of someone setting synchronous_standby_names GUC with\n> >> either “*” or a standby name and then immediately promoting that\n> >> standby? If the standby is promoted before the checkpointer on master\n> >> gets a chance to update sync_standbys_defined in shared memory,\n> >> commits made during this interval on master may not make it to\n> >> standby. Upon promotion, those commits may be lost.\n> > I think that if the standby is quite behind the primary and in case of\n> > the primary crashes, the likelihood of losing commits might get\n> > higher. The user can see the standby became synchronous standby via\n> > pg_stat_replication but commit completes without a wait because the\n> > checkpointer doesn't update sync_standbys_defined yet. If the primary\n> > crashes before standby catching up and the user does failover, the\n> > committed transaction will be lost, even though the user expects that\n> > transaction commit has been replicated to the standby synchronously.\n> > And this can happen even without the patch, right?\n> \n> I think you're right. This issue can happen even without the patch.\n> \n> Maybe we should not mark the standby as \"sync\" whenever\n> sync_standbys_defined\n> is false even if synchronous_standby_names is actually set and\n> walsenders have\n> already detect that? Or we need more aggressive approach;\n> make the checkpointer update sync_standby_priority values of\n> all the walsenders? ISTM that the latter looks overkill...\n\nIt seems to me that the issue here is what\npg_stat_replication.sync_status doens't show \"the working state of the\nwalsdner\", but \"the state the walsender is commanded\". Non-zero\nWalSnd.sync_standby_priority is immediately considered as \"I am in\nsync\" but actually it is \"I am going to sync from async or am already\nin sync\". And it is precisely \"..., or am already in sync if\ncheckpointer already notices any sync walsender exists\".\n\n1. 
if a walsender changes its state from async to sync, it should once\n change its state back to \"CATCHUP\" or something like that.\n\n2. pg_stat_replication.sync_status need to consider WalSnd.state\n and WalSndCtlData.sync_standbys_defined.\n\nWe might be able to let SyncRepUpdateSyncStandbysDefined postpone\nchanging sync_standbys_defined until any sync standby actually comes,\nbut it would be complex.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Thu, 20 Aug 2020 12:12:50 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: SyncRepLock acquired exclusively in default configuration" }, { "msg_contents": "On Wed, 19 Aug 2020 at 21:41, Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>\n>\n>\n> On 2020/08/12 15:32, Masahiko Sawada wrote:\n> > On Wed, 12 Aug 2020 at 14:06, Asim Praveen <pasim@vmware.com> wrote:\n> >>\n> >>\n> >>\n> >>> On 11-Aug-2020, at 8:57 PM, Robert Haas <robertmhaas@gmail.com> wrote:\n> >>>\n> >>> I think this gets to the root of the issue. If we check the flag\n> >>> without a lock, we might see a slightly stale value. But, considering\n> >>> that there's no particular amount of time within which configuration\n> >>> changes are guaranteed to take effect, maybe that's OK. However, there\n> >>> is one potential gotcha here: if the walsender declares the standby to\n> >>> be synchronous, a user can see that, right? So maybe there's this\n> >>> problem: a user sees that the standby is synchronous and expects a\n> >>> transaction committing afterward to provoke a wait, but really it\n> >>> doesn't. Now the user is unhappy, feeling that the system didn't\n> >>> perform according to expectations.\n> >>\n> >> Yes, pg_stat_replication reports a standby in sync as soon as walsender updates priority of the standby to something other than 0.\n> >>\n> >> The potential gotcha referred above doesn’t seem too severe. 
What is the likelihood of someone setting synchronous_standby_names GUC with either “*” or a standby name and then immediately promoting that standby? If the standby is promoted before the checkpointer on master gets a chance to update sync_standbys_defined in shared memory, commits made during this interval on master may not make it to standby. Upon promotion, those commits may be lost.\n> >\n> > I think that if the standby is quite behind the primary and in case of\n> > the primary crashes, the likelihood of losing commits might get\n> > higher. The user can see the standby became synchronous standby via\n> > pg_stat_replication but commit completes without a wait because the\n> > checkpointer doesn't update sync_standbys_defined yet. If the primary\n> > crashes before standby catching up and the user does failover, the\n> > committed transaction will be lost, even though the user expects that\n> > transaction commit has been replicated to the standby synchronously.\n> > And this can happen even without the patch, right?\n>\n> I think you're right. This issue can happen even without the patch.\n>\n> Maybe we should not mark the standby as \"sync\" whenever sync_standbys_defined\n> is false even if synchronous_standby_names is actually set and walsenders have\n> already detect that?\n\nIt seems good. I guess that we can set 'async' to sync_status and 0 to\nsync_priority when sync_standbys_defined is not true regardless of\nwalsender's actual priority value. We print the message \"standby\n\\\"%s\\\" now has synchronous standby priority %u\" in SyncRepInitConfig()\nregardless of sync_standbys_defined but maybe it's fine as the message\nisn't incorrect and it's DEBUG1 message.\n\n> Or we need more aggressive approach;\n> make the checkpointer update sync_standby_priority values of\n> all the walsenders? 
ISTM that the latter looks overkill...\n\nI think so too.\n\nRegards,\n\n-- \nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Fri, 21 Aug 2020 10:20:09 +0900", "msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: SyncRepLock acquired exclusively in default configuration" }, { "msg_contents": "On 2020/08/20 11:29, Kyotaro Horiguchi wrote:\n> At Wed, 19 Aug 2020 13:20:46 +0000, Asim Praveen <pasim@vmware.com> wrote in\n>>\n>>\n>>> On 12-Aug-2020, at 12:02 PM, Masahiko Sawada <masahiko.sawada@2ndquadrant.com> wrote:\n>>>\n>>> On Wed, 12 Aug 2020 at 14:06, Asim Praveen <pasim@vmware.com> wrote:\n>>>>\n>>>>\n>>>>\n>>>>> On 11-Aug-2020, at 8:57 PM, Robert Haas <robertmhaas@gmail.com> wrote:\n>>>>>\n>>>>> I think this gets to the root of the issue. If we check the flag\n>>>>> without a lock, we might see a slightly stale value. But, considering\n>>>>> that there's no particular amount of time within which configuration\n>>>>> changes are guaranteed to take effect, maybe that's OK. However, there\n>>>>> is one potential gotcha here: if the walsender declares the standby to\n>>>>> be synchronous, a user can see that, right? So maybe there's this\n>>>>> problem: a user sees that the standby is synchronous and expects a\n>>>>> transaction committing afterward to provoke a wait, but really it\n>>>>> doesn't. Now the user is unhappy, feeling that the system didn't\n>>>>> perform according to expectations.\n>>>>\n>>>> Yes, pg_stat_replication reports a standby in sync as soon as walsender updates priority of the standby to something other than 0.\n>>>>\n>>>> The potential gotcha referred above doesn’t seem too severe. What is the likelihood of someone setting synchronous_standby_names GUC with either “*” or a standby name and then immediately promoting that standby? 
If the standby is promoted before the checkpointer on master gets a chance to update sync_standbys_defined in shared memory, commits made during this interval on master may not make it to standby. Upon promotion, those commits may be lost.\n>>>\n>>> I think that if the standby is quite behind the primary and in case of\n>>> the primary crashes, the likelihood of losing commits might get\n>>> higher. The user can see the standby became synchronous standby via\n>>> pg_stat_replication but commit completes without a wait because the\n>>> checkpointer doesn't update sync_standbys_defined yet. If the primary\n>>> crashes before standby catching up and the user does failover, the\n>>> committed transaction will be lost, even though the user expects that\n>>> transaction commit has been replicated to the standby synchronously.\n>>> And this can happen even without the patch, right?\n>>>\n>>\n>> It is correct that the issue is orthogonal to the patch upthread and I\u001b$B!G\u001b(Bve marked\n>> the commitfest entry as ready-for-committer.\n\nYes, thanks for the review!\n\n\n> I find the name of SyncStandbysDefined macro is very confusing with\n> the struct member sync_standbys_defined, but that might be another\n> issue..\n> \n> -\t * Fast exit if user has not requested sync replication.\n> +\t * Fast exit if user has not requested sync replication, or there are no\n> +\t * sync replication standby names defined.\n> \n> This comment sounds like we just do that twice. The reason for the\n> check is to avoid wasteful exclusive locks on SyncRepLock, or to form\n> double-checked locking on the variable. I think we should explain that\n> here.\n\nI added the following comments based on the suggestion by Sawada-san upthread. Thought?\n\n+\t * Since this routine gets called every commit time, it's important to\n+\t * exit quickly if sync replication is not requested. So we check\n+\t * WalSndCtl->sync_standbys_define without the lock and exit\n+\t * immediately if it's false. 
If it's true, we check it again later\n+\t * while holding the lock, to avoid the race condition described\n+\t * in SyncRepUpdateSyncStandbysDefined().\n\n\nAttached is the updated version of the patch. I didn't change how to\nfix the issue. But I changed the check for fast exit so that it's called\nbefore setting the \"mode\", to avoid a few cycles.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION", "msg_date": "Thu, 27 Aug 2020 02:40:29 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: SyncRepLock acquired exclusively in default configuration" }, { "msg_contents": "\r\n> On 26-Aug-2020, at 11:10 PM, Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\r\n> \r\n> I added the following comments based on the suggestion by Sawada-san upthread. Thought?\r\n> \r\n> +\t * Since this routine gets called every commit time, it's important to\r\n> +\t * exit quickly if sync replication is not requested. So we check\r\n> +\t * WalSndCtl->sync_standbys_define without the lock and exit\r\n> +\t * immediately if it's false. If it's true, we check it again later\r\n> +\t * while holding the lock, to avoid the race condition described\r\n> +\t * in SyncRepUpdateSyncStandbysDefined().\r\n> \r\n\r\n+1. May I suggest the following addition to the above comment (feel free to\r\nrephrase / reject)?\r\n\r\n\"If sync_standbys_defined was being set from false to true and we observe it as\r\nfalse, it is ok to skip the wait. Replication was async and it is in the process\r\nof being changed to sync, due to user request. Subsequent commits will observe\r\nthe change and start waiting.”\r\n\r\n> \r\n> Attached is the updated version of the patch. I didn't change how to\r\n> fix the issue. But I changed the check for fast exit so that it's called\r\n> before setting the \"mode\", to avoid a few cycle.\r\n> \r\n\r\n\r\nLooks good to me. 
There is a typo in the comment:\r\n\r\n s/sync_standbys_define/sync_standbys_defined/\r\n\r\nAsim", "msg_date": "Thu, 27 Aug 2020 06:59:13 +0000", "msg_from": "Asim Praveen <pasim@vmware.com>", "msg_from_op": false, "msg_subject": "Re: SyncRepLock acquired exclusively in default configuration" }, { "msg_contents": "On 2020/08/27 15:59, Asim Praveen wrote:\n> \n>> On 26-Aug-2020, at 11:10 PM, Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>>\n>> I added the following comments based on the suggestion by Sawada-san upthread. Thought?\n>>\n>> +\t * Since this routine gets called every commit time, it's important to\n>> +\t * exit quickly if sync replication is not requested. So we check\n>> +\t * WalSndCtl->sync_standbys_define without the lock and exit\n>> +\t * immediately if it's false. If it's true, we check it again later\n>> +\t * while holding the lock, to avoid the race condition described\n>> +\t * in SyncRepUpdateSyncStandbysDefined().\n>>\n> \n> +1. May I suggest the following addition to the above comment (feel free to\n> rephrase / reject)?\n> \n> \"If sync_standbys_defined was being set from false to true and we observe it as\n> false, it ok to skip the wait. Replication was async and it is in the process\n> of being changed to sync, due to user request. Subsequent commits will observe\n> the change and start waiting.”\n\nThanks for the suggestion! I'm not sure if it's worth adding this because\nit seems like an obvious thing. But maybe you imply that we need to comment\nwhy the lock is not necessary when sync_standbys_defined is false. 
If it's true, we need to check it again later\n+\t * while holding the lock, to check the flag and operate the sync rep\n+\t * queue atomically. This is necessary to avoid the race condition\n+\t * described in SyncRepUpdateSyncStandbysDefined(). On the other\n+\t * hand, if it's false, the lock is not necessary because we don't touch\n+\t * the queue.\n\n> \n>>\n>> Attached is the updated version of the patch. I didn't change how to\n>> fix the issue. But I changed the check for fast exit so that it's called\n>> before setting the \"mode\", to avoid a few cycle.\n>>\n> \n> \n> Looks good to me. There is a typo in the comment:\n> \n> s/sync_standbys_define/sync_standbys_defined/\n\nFixed. Thanks!\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION", "msg_date": "Fri, 28 Aug 2020 10:33:45 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: SyncRepLock acquired exclusively in default configuration" }, { "msg_contents": "\r\n\r\n> On 28-Aug-2020, at 7:03 AM, Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\r\n> \r\n> On 2020/08/27 15:59, Asim Praveen wrote:\r\n>> \r\n>> +1. May I suggest the following addition to the above comment (feel free to\r\n>> rephrase / reject)?\r\n>> \"If sync_standbys_defined was being set from false to true and we observe it as\r\n>> false, it ok to skip the wait. Replication was async and it is in the process\r\n>> of being changed to sync, due to user request. Subsequent commits will observe\r\n>> the change and start waiting.”\r\n> \r\n> Thanks for the suggestion! I'm not sure if it's worth adding this because\r\n> it seems obvious thing. But maybe you imply that we need to comment\r\n> why the lock is not necessary when sync_standbys_defined is false. 
Right?\r\n> If so, what about updating the comments as follows?\r\n> \r\n> +\t * Since this routine gets called every commit time, it's important to\r\n> +\t * exit quickly if sync replication is not requested. So we check\r\n> +\t * WalSndCtl->sync_standbys_defined flag without the lock and exit\r\n> +\t * immediately if it's false. If it's true, we need to check it again later\r\n> +\t * while holding the lock, to check the flag and operate the sync rep\r\n> +\t * queue atomically. This is necessary to avoid the race condition\r\n> +\t * described in SyncRepUpdateSyncStandbysDefined(). On the other\r\n> +\t * hand, if it's false, the lock is not necessary because we don't touch\r\n> +\t * the queue.\r\n\r\nThank you for updating the comment. This looks better.\r\n\r\nAsim", "msg_date": "Fri, 28 Aug 2020 11:06:09 +0000", "msg_from": "Asim Praveen <pasim@vmware.com>", "msg_from_op": false, "msg_subject": "Re: SyncRepLock acquired exclusively in default configuration" }, { "msg_contents": "On Fri, 28 Aug 2020 at 10:33, Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>\n>\n>\n> On 2020/08/27 15:59, Asim Praveen wrote:\n> >\n> >> On 26-Aug-2020, at 11:10 PM, Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n> >>\n> >> I added the following comments based on the suggestion by Sawada-san upthread. Thought?\n> >>\n> >> + * Since this routine gets called every commit time, it's important to\n> >> + * exit quickly if sync replication is not requested. So we check\n> >> + * WalSndCtl->sync_standbys_define without the lock and exit\n> >> + * immediately if it's false. If it's true, we check it again later\n> >> + * while holding the lock, to avoid the race condition described\n> >> + * in SyncRepUpdateSyncStandbysDefined().\n> >>\n> >\n> > +1. May I suggest the following addition to the above comment (feel free to\n> > rephrase / reject)?\n> >\n> > \"If sync_standbys_defined was being set from false to true and we observe it as\n> > false, it ok to skip the wait. 
Replication was async and it is in the process\n> > of being changed to sync, due to user request. Subsequent commits will observe\n> > the change and start waiting.”\n>\n> Thanks for the suggestion! I'm not sure if it's worth adding this because\n> it seems obvious thing. But maybe you imply that we need to comment\n> why the lock is not necessary when sync_standbys_defined is false. Right?\n> If so, what about updating the comments as follows?\n>\n> + * Since this routine gets called every commit time, it's important to\n> + * exit quickly if sync replication is not requested. So we check\n> + * WalSndCtl->sync_standbys_defined flag without the lock and exit\n> + * immediately if it's false. If it's true, we need to check it again later\n> + * while holding the lock, to check the flag and operate the sync rep\n> + * queue atomically. This is necessary to avoid the race condition\n> + * described in SyncRepUpdateSyncStandbysDefined(). On the other\n> + * hand, if it's false, the lock is not necessary because we don't touch\n> + * the queue.\n>\n> >\n> >>\n> >> Attached is the updated version of the patch. I didn't change how to\n> >> fix the issue. But I changed the check for fast exit so that it's called\n> >> before setting the \"mode\", to avoid a few cycle.\n> >>\n> >\n> >\n> > Looks good to me. There is a typo in the comment:\n> >\n> > s/sync_standbys_define/sync_standbys_defined/\n>\n> Fixed. Thanks!\n>\n\nBoth v2 and v3 look good to me too. 
IMO I'm okay with and without the\nlast sentence.\n\n\nRegards,\n\n--\nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Fri, 28 Aug 2020 21:20:01 +0900", "msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: SyncRepLock acquired exclusively in default configuration" }, { "msg_contents": "\n\nOn 2020/08/28 21:20, Masahiko Sawada wrote:\n> On Fri, 28 Aug 2020 at 10:33, Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>>\n>>\n>>\n>> On 2020/08/27 15:59, Asim Praveen wrote:\n>>>\n>>>> On 26-Aug-2020, at 11:10 PM, Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>>>>\n>>>> I added the following comments based on the suggestion by Sawada-san upthread. Thought?\n>>>>\n>>>> + * Since this routine gets called every commit time, it's important to\n>>>> + * exit quickly if sync replication is not requested. So we check\n>>>> + * WalSndCtl->sync_standbys_define without the lock and exit\n>>>> + * immediately if it's false. If it's true, we check it again later\n>>>> + * while holding the lock, to avoid the race condition described\n>>>> + * in SyncRepUpdateSyncStandbysDefined().\n>>>>\n>>>\n>>> +1. May I suggest the following addition to the above comment (feel free to\n>>> rephrase / reject)?\n>>>\n>>> \"If sync_standbys_defined was being set from false to true and we observe it as\n>>> false, it ok to skip the wait. Replication was async and it is in the process\n>>> of being changed to sync, due to user request. Subsequent commits will observe\n>>> the change and start waiting.”\n>>\n>> Thanks for the suggestion! I'm not sure if it's worth adding this because\n>> it seems obvious thing. But maybe you imply that we need to comment\n>> why the lock is not necessary when sync_standbys_defined is false. 
Right?\n>> If so, what about updating the comments as follows?\n>>\n>> + * Since this routine gets called every commit time, it's important to\n>> + * exit quickly if sync replication is not requested. So we check\n>> + * WalSndCtl->sync_standbys_defined flag without the lock and exit\n>> + * immediately if it's false. If it's true, we need to check it again later\n>> + * while holding the lock, to check the flag and operate the sync rep\n>> + * queue atomically. This is necessary to avoid the race condition\n>> + * described in SyncRepUpdateSyncStandbysDefined(). On the other\n>> + * hand, if it's false, the lock is not necessary because we don't touch\n>> + * the queue.\n>>\n>>>\n>>>>\n>>>> Attached is the updated version of the patch. I didn't change how to\n>>>> fix the issue. But I changed the check for fast exit so that it's called\n>>>> before setting the \"mode\", to avoid a few cycle.\n>>>>\n>>>\n>>>\n>>> Looks good to me. There is a typo in the comment:\n>>>\n>>> s/sync_standbys_define/sync_standbys_defined/\n>>\n>> Fixed. Thanks!\n>>\n> \n> Both v2 and v3 look good to me too. IMO I'm okay with and without the\n> last sentence.\n\nAsim and Sawada-san, thanks for the review! I pushed the patch.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Wed, 2 Sep 2020 10:58:58 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: SyncRepLock acquired exclusively in default configuration" }, { "msg_contents": "On 2020-09-02 10:58:58 +0900, Fujii Masao wrote:\n> Asim and Sawada-san, thanks for the review! I pushed the patch.\n\nThanks for all your combined work!\n\n\n", "msg_date": "Tue, 1 Sep 2020 19:12:05 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: SyncRepLock acquired exclusively in default configuration" } ]
[ { "msg_contents": "Executive summary: the \"MyWalSnd->write < sentPtr\" in WalSndWaitForWal() is\nimportant for promptly updating pg_stat_replication. When caught up, we\nshould impose that logic before every sleep. The one-line fix is to sleep in\nWalSndLoop() only when pq_is_send_pending(), not when caught up.\n\n\nOn my regular development machine, src/test/subscription/t/001_rep_changes.pl\nstalls for ~10s at this wait_for_catchup:\n\n $node_publisher->safe_psql('postgres', \"DELETE FROM tab_rep\");\n\n # Restart the publisher and check the state of the subscriber which\n # should be in a streaming state after catching up.\n $node_publisher->stop('fast');\n $node_publisher->start;\n\n $node_publisher->wait_for_catchup('tap_sub');\n\nThat snippet emits three notable physical WAL records. There's a\nTransaction/COMMIT at the end of the DELETE, an XLOG/CHECKPOINT_SHUTDOWN, and\nan XLOG/FPI_FOR_HINT.\n\nThe buildfarm has stalled there, but it happens probably less than half the\ntime. 
Examples[1] showing the stall:\n\nhttps://buildfarm.postgresql.org/cgi-bin/show_stage_log.pl?nm=mandrill&dt=2020-03-20%2017%3A09%3A53&stg=subscription-check\nhttps://buildfarm.postgresql.org/cgi-bin/show_stage_log.pl?nm=thorntail&dt=2020-03-22%2019%3A51%3A38&stg=subscription-check\nhttps://buildfarm.postgresql.org/cgi-bin/show_stage_log.pl?nm=hoverfly&dt=2020-03-19%2003%3A35%3A01&stg=subscription-check\nhttps://buildfarm.postgresql.org/cgi-bin/show_stage_log.pl?nm=hoverfly&dt=2020-03-20%2015%3A15%3A01&stg=subscription-check\n\nHere's the most-relevant walsender call tree:\n\nWalSndLoop\n XLogSendLogical (caller invokes once per loop iteration, via send_data callback)\n XLogReadRecord (caller invokes once)\n ReadPageInternal (caller invokes twice in this test; more calls are possible)\n logical_read_xlog_page (caller skips when page is same as last call, else invokes 1-2 times via state->read_page() callback, registered in StartLogicalReplication)\n WalSndWaitForWal (caller invokes once; has fast path)\n\n\nThe cause is a race involving the flow of reply messages (send_feedback()\nmessages) from logical apply worker to walsender. Here are two sequencing\npatterns; the more-indented parts are what differ. 
Stalling pattern:\n\n sender reads Transaction/COMMIT and sends the changes\n receiver applies the changes\n receiver send_feedback() reports progress up to Transaction/COMMIT\n sender accepts the report\n sender reads XLOG/CHECKPOINT_SHUTDOWN and/or XLOG/FPI_FOR_HINT, which are no-ops for logical rep\n sender WalSndCaughtUp becomes true; sender sleeps in WalSndLoop()\n receiver wal_receiver_status_interval elapses; receiver reports progress up to Transaction/COMMIT\n sender wakes up, accepts the report\n sender calls WalSndWaitForWal(), which sends a keepalive due to \"MyWalSnd->write < sentPtr\"\n receiver gets keepalive, send_feedback() reports progress up to XLOG/FPI_FOR_HINT\n\nNon-stalling pattern (more prevalent with lower machine performance):\n\n sender reads Transaction/COMMIT and sends the changes\n sender reads XLOG/CHECKPOINT_SHUTDOWN and/or XLOG/FPI_FOR_HINT, which are no-ops for logical rep\n sender WalSndCaughtUp becomes true; sender sleeps in WalSndLoop()\n receiver applies the changes\n receiver send_feedback() reports progress up to Transaction/COMMIT\n sender wakes up, accepts the report\n sender calls WalSndWaitForWal(), which sends a keepalive due to \"MyWalSnd->write < sentPtr\"\n receiver gets keepalive, send_feedback() reports progress up to XLOG/FPI_FOR_HINT\n\n\nThe fix is to test \"MyWalSnd->write < sentPtr\" before more sleeps. The test\nis unnecessary when sleeping due to pq_is_send_pending(); in that case, the\nreceiver is not idle and will reply before idling. I changed WalSndLoop() to\nsleep only for pq_is_send_pending(). For all other sleep reasons, the sleep\nwill happen in WalSndWaitForWal(). Attached. I don't know whether this is\nimportant outside of testing scenarios. 
I lean against back-patching, but I\nwill back-patch if someone thinks this qualifies as a performance bug.\n\nThanks,\nnm\n\n[1] I spot-checked only my animals, since I wanted to experiment on an\naffected animal.", "msg_date": "Sun, 5 Apr 2020 23:36:49 -0700", "msg_from": "Noah Misch <noah@leadboat.com>", "msg_from_op": true, "msg_subject": "001_rep_changes.pl stalls" }, { "msg_contents": "On Sun, Apr 05, 2020 at 11:36:49PM -0700, Noah Misch wrote:\n> Executive summary: the \"MyWalSnd->write < sentPtr\" in WalSndWaitForWal() is\n> important for promptly updating pg_stat_replication. When caught up, we\n> should impose that logic before every sleep. The one-line fix is to sleep in\n> WalSndLoop() only when pq_is_send_pending(), not when caught up.\n\nThis seems to have made the following race condition easier to hit:\nhttps://www.postgresql.org/message-id/flat/20200206074552.GB3326097%40rfd.leadboat.com\nhttps://www.postgresql.org/message-id/flat/21519.1585272409%40sss.pgh.pa.us\n\nNow it happened eight times in three days, all on BSD machines:\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=curculio&dt=2020-04-11%2018%3A30%3A21\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=sidewinder&dt=2020-04-11%2018%3A45%3A39\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=curculio&dt=2020-04-11%2020%3A30%3A26\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=sidewinder&dt=2020-04-11%2021%3A45%3A48\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=sidewinder&dt=2020-04-13%2010%3A45%3A35\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=conchuela&dt=2020-04-13%2016%3A00%3A18\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=sidewinder&dt=2020-04-13%2018%3A45%3A34\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=sidewinder&dt=2020-04-13%2023%3A45%3A22\n\nWhile I don't think that indicates anything wrong with the fix for $SUBJECT,\ncreating more buildfarm noise is itself bad. 
I am inclined to revert the fix\nafter a week. Not immediately, in case it uncovers lower-probability bugs.\nI'd then re-commit it after one of those threads fixes the other bug. Would\nanyone like to argue for a revert earlier, later, or not at all?\n\n\nThere was a novel buildfarm failure, in 010_logical_decoding_timelines.pl:\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=hoverfly&dt=2020-04-13%2008%3A35%3A05\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=hoverfly&dt=2020-04-13%2017%3A15%3A01\n\nMost-relevant lines of the test script:\n\n\t$node_master->safe_psql('postgres',\n\t\t\"INSERT INTO decoding(blah) VALUES ('afterbb');\");\n\t$node_master->safe_psql('postgres', 'CHECKPOINT');\n\t$node_master->stop('immediate');\n\nThe failure suggested the INSERT was not replicated before the immediate stop.\nI can reproduce that consistently, before or after the fix for $SUBJECT, by\nmodifying walsender to delay 0.2s before sending WAL:\n\n--- a/src/backend/replication/walsender.c\n+++ b/src/backend/replication/walsender.c\n@@ -65,2 +65,3 @@\n #include \"libpq/pqformat.h\"\n+#include \"libpq/pqsignal.h\"\n #include \"miscadmin.h\"\n@@ -2781,2 +2782,5 @@ retry:\n \n+\tPG_SETMASK(&BlockSig);\n+\tpg_usleep(200 * 1000);\n+\tPG_SETMASK(&UnBlockSig);\n \tpq_putmessage_noblock('d', output_message.data, output_message.len);\n\nI will shortly push a fix adding a wait_for_catchup to the test. 
I don't know\nif/how fixing $SUBJECT made this 010_logical_decoding_timelines.pl race\ncondition easier to hit.\n\n\n", "msg_date": "Mon, 13 Apr 2020 18:38:49 -0700", "msg_from": "Noah Misch <noah@leadboat.com>", "msg_from_op": true, "msg_subject": "Re: 001_rep_changes.pl stalls" }, { "msg_contents": "Noah Misch <noah@leadboat.com> writes:\n> This seems to have made the following race condition easier to hit:\n> https://www.postgresql.org/message-id/flat/20200206074552.GB3326097%40rfd.leadboat.com\n> https://www.postgresql.org/message-id/flat/21519.1585272409%40sss.pgh.pa.us\n\nYeah, I just came to the same guess in the other thread.\n\n> While I don't think that indicates anything wrong with the fix for $SUBJECT,\n> creating more buildfarm noise is itself bad. I am inclined to revert the fix\n> after a week. Not immediately, in case it uncovers lower-probability bugs.\n> I'd then re-commit it after one of those threads fixes the other bug. Would\n> anyone like to argue for a revert earlier, later, or not at all?\n\nI don't think you should revert. Those failures are (just) often enough\nto be annoying but I do not think that a proper fix is very far away.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 13 Apr 2020 21:45:16 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: 001_rep_changes.pl stalls" }, { "msg_contents": "On Mon, Apr 13, 2020 at 09:45:16PM -0400, Tom Lane wrote:\n> Noah Misch <noah@leadboat.com> writes:\n> > This seems to have made the following race condition easier to hit:\n> > https://www.postgresql.org/message-id/flat/20200206074552.GB3326097%40rfd.leadboat.com\n> > https://www.postgresql.org/message-id/flat/21519.1585272409%40sss.pgh.pa.us\n> \n> Yeah, I just came to the same guess in the other thread.\n> \n> > While I don't think that indicates anything wrong with the fix for $SUBJECT,\n> > creating more buildfarm noise is itself bad. I am inclined to revert the fix\n> > after a week. 
Not immediately, in case it uncovers lower-probability bugs.\n> > I'd then re-commit it after one of those threads fixes the other bug. Would\n> > anyone like to argue for a revert earlier, later, or not at all?\n> \n> I don't think you should revert. Those failures are (just) often enough\n> to be annoying but I do not think that a proper fix is very far away.\n\nThat works for me, but an actual defect may trigger a revert. Fujii Masao\nreported high walsender CPU usage after this patch. The patch caused idle\nphysical walsenders to use 100% CPU. When caught up, the\nWalSndSendDataCallback for logical replication, XLogSendLogical(), sleeps\nuntil more WAL is available. XLogSendPhysical() just returns when caught up.\nNo amount of WAL is too small for physical replication to dispatch, but\nlogical replication needs the full xl_tot_len of a record. Some options:\n\n1. Make XLogSendPhysical() more like XLogSendLogical(), calling\n WalSndWaitForWal() when no WAL is available. A quick version of this\n passes tests, but I'll need to audit WalSndWaitForWal() for things that are\n wrong for physical replication.\n\n2. Make XLogSendLogical() more like XLogSendPhysical(), returning when\n insufficient WAL is available. This complicates the xlogreader.h API to\n pass back \"wait for this XLogRecPtr\", and we'd then persist enough state to\n resume decoding. This doesn't have any advantages to make up for those.\n\n3. Don't avoid waiting in WalSndLoop(); instead, fix the stall by copying the\n WalSndKeepalive() call from WalSndWaitForWal() to WalSndLoop(). This risks\n further drift between the two wait sites; on the other hand, one could\n refactor later to help avoid that.\n\n4. Keep the WalSndLoop() wait, but condition it on !logical. This is the\n minimal fix, but it crudely punches through the abstraction between\n WalSndLoop() and its WalSndSendDataCallback.\n\nI'm favoring (1). 
Other preferences?\n\n\n", "msg_date": "Thu, 16 Apr 2020 22:41:46 -0700", "msg_from": "Noah Misch <noah@leadboat.com>", "msg_from_op": true, "msg_subject": "Re: 001_rep_changes.pl stalls" }, { "msg_contents": "At Thu, 16 Apr 2020 22:41:46 -0700, Noah Misch <noah@leadboat.com> wrote in \n> On Mon, Apr 13, 2020 at 09:45:16PM -0400, Tom Lane wrote:\n> > Noah Misch <noah@leadboat.com> writes:\n> > > This seems to have made the following race condition easier to hit:\n> > > https://www.postgresql.org/message-id/flat/20200206074552.GB3326097%40rfd.leadboat.com\n> > > https://www.postgresql.org/message-id/flat/21519.1585272409%40sss.pgh.pa.us\n> > \n> > Yeah, I just came to the same guess in the other thread.\n> > \n> > > While I don't think that indicates anything wrong with the fix for $SUBJECT,\n> > > creating more buildfarm noise is itself bad. I am inclined to revert the fix\n> > > after a week. Not immediately, in case it uncovers lower-probability bugs.\n> > > I'd then re-commit it after one of those threads fixes the other bug. Would\n> > > anyone like to argue for a revert earlier, later, or not at all?\n> > \n> > I don't think you should revert. Those failures are (just) often enough\n> > to be annoying but I do not think that a proper fix is very far away.\n> \n> That works for me, but an actual defect may trigger a revert. Fujii Masao\n> reported high walsender CPU usage after this patch. The patch caused idle\n> physical walsenders to use 100% CPU. When caught up, the\n> WalSndSendDataCallback for logical replication, XLogSendLogical(), sleeps\n> until more WAL is available. XLogSendPhysical() just returns when caught up.\n> No amount of WAL is too small for physical replication to dispatch, but\n> logical replication needs the full xl_tot_len of a record. Some options:\n> \n> 1. Make XLogSendPhysical() more like XLogSendLogical(), calling\n> WalSndWaitForWal() when no WAL is available. 
A quick version of this\n> passes tests, but I'll need to audit WalSndWaitForWal() for things that are\n> wrong for physical replication.\n> \n> 2. Make XLogSendLogical() more like XLogSendPhysical(), returning when\n> insufficient WAL is available. This complicates the xlogreader.h API to\n> pass back \"wait for this XLogRecPtr\", and we'd then persist enough state to\n> resume decoding. This doesn't have any advantages to make up for those.\n> \n> 3. Don't avoid waiting in WalSndLoop(); instead, fix the stall by copying the\n> WalSndKeepalive() call from WalSndWaitForWal() to WalSndLoop(). This risks\n> further drift between the two wait sites; on the other hand, one could\n> refactor later to help avoid that.\n> \n> 4. Keep the WalSndLoop() wait, but condition it on !logical. This is the\n> minimal fix, but it crudely punches through the abstraction between\n> WalSndLoop() and its WalSndSendDataCallback.\n> \n> I'm favoring (1). Other preferences?\n\nStarting from the current shape, I think 1 is preferable, since that\nwaiting logic is no longer shared between logical and physical\nreplications. But I'm not sure I like calling WalSndWaitForWal()\n(maybe with previous location + 1?), because the function seems to do\ntoo-much.\n\nBy the way, if latch is consumed in WalSndLoop, succeeding call to\nWalSndWaitForWal cannot be woke-up by the latch-set. Doesn't that\ncause missing wakeups? (in other words, overlooking of wakeup latch).\nSince the only source other than timeout of walsender wakeup is latch,\nwe should avoid wasteful consuming of latch. 
(It is the same issue\nwith [1]).\n\nIf wakeup signal is not remembered on walsender (like\nInterruptPending), WalSndPhysical cannot enter a sleep with\nconfidence.\n\n\n[1] https://www.postgresql.org/message-id/20200408.164605.1874250940847340108.horikyota.ntt@gmail.com\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Fri, 17 Apr 2020 17:00:15 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: 001_rep_changes.pl stalls" }, { "msg_contents": "Sorry , I wrote something wrong.\n\nAt Fri, 17 Apr 2020 17:00:15 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in \n> At Thu, 16 Apr 2020 22:41:46 -0700, Noah Misch <noah@leadboat.com> wrote in \n> > On Mon, Apr 13, 2020 at 09:45:16PM -0400, Tom Lane wrote:\n> > > Noah Misch <noah@leadboat.com> writes:\n> > > > This seems to have made the following race condition easier to hit:\n> > > > https://www.postgresql.org/message-id/flat/20200206074552.GB3326097%40rfd.leadboat.com\n> > > > https://www.postgresql.org/message-id/flat/21519.1585272409%40sss.pgh.pa.us\n> > > \n> > > Yeah, I just came to the same guess in the other thread.\n> > > \n> > > > While I don't think that indicates anything wrong with the fix for $SUBJECT,\n> > > > creating more buildfarm noise is itself bad. I am inclined to revert the fix\n> > > > after a week. Not immediately, in case it uncovers lower-probability bugs.\n> > > > I'd then re-commit it after one of those threads fixes the other bug. Would\n> > > > anyone like to argue for a revert earlier, later, or not at all?\n> > > \n> > > I don't think you should revert. Those failures are (just) often enough\n> > > to be annoying but I do not think that a proper fix is very far away.\n> > \n> > That works for me, but an actual defect may trigger a revert. Fujii Masao\n> > reported high walsender CPU usage after this patch. The patch caused idle\n> > physical walsenders to use 100% CPU. 
When caught up, the\n> > WalSndSendDataCallback for logical replication, XLogSendLogical(), sleeps\n> > until more WAL is available. XLogSendPhysical() just returns when caught up.\n> > No amount of WAL is too small for physical replication to dispatch, but\n> > logical replication needs the full xl_tot_len of a record. Some options:\n> > \n> > 1. Make XLogSendPhysical() more like XLogSendLogical(), calling\n> > WalSndWaitForWal() when no WAL is available. A quick version of this\n> > passes tests, but I'll need to audit WalSndWaitForWal() for things that are\n> > wrong for physical replication.\n> > \n> > 2. Make XLogSendLogical() more like XLogSendPhysical(), returning when\n> > insufficient WAL is available. This complicates the xlogreader.h API to\n> > pass back \"wait for this XLogRecPtr\", and we'd then persist enough state to\n> > resume decoding. This doesn't have any advantages to make up for those.\n> > \n> > 3. Don't avoid waiting in WalSndLoop(); instead, fix the stall by copying the\n> > WalSndKeepalive() call from WalSndWaitForWal() to WalSndLoop(). This risks\n> > further drift between the two wait sites; on the other hand, one could\n> > refactor later to help avoid that.\n> > \n> > 4. Keep the WalSndLoop() wait, but condition it on !logical. This is the\n> > minimal fix, but it crudely punches through the abstraction between\n> > WalSndLoop() and its WalSndSendDataCallback.\n> > \n> > I'm favoring (1). Other preferences?\n> \n> Starting from the current shape, I think 1 is preferable, since that\n> waiting logic is no longer shared between logical and physical\n> replications. But I'm not sure I like calling WalSndWaitForWal()\n> (maybe with previous location + 1?), because the function seems to do\n> too-much.\n> \n> By the way, if latch is consumed in WalSndLoop, succeeding call to\n> WalSndWaitForWal cannot be woke-up by the latch-set. Doesn't that\n> cause missing wakeups? 
(in other words, overlooking of wakeup latch).\n\n- Since the only source other than timeout of walsender wakeup is latch,\n- we should avoid wasteful consuming of latch. (It is the same issue\n- with [1]).\n\n+ Since walsender is wokeup by LSN advancement via latch, we should\n+ avoid wasteful consuming of latch. (It is the same issue with [1]).\n\n\n> If wakeup signal is not remembered on walsender (like\n> InterruptPending), WalSndPhysical cannot enter a sleep with\n> confidence.\n> \n> \n> [1] https://www.postgresql.org/message-id/20200408.164605.1874250940847340108.horikyota.ntt@gmail.com\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Fri, 17 Apr 2020 17:06:29 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: 001_rep_changes.pl stalls" }, { "msg_contents": "Kyotaro Horiguchi <horikyota.ntt@gmail.com> writes:\n> At Thu, 16 Apr 2020 22:41:46 -0700, Noah Misch <noah@leadboat.com> wrote in \n>> I'm favoring (1). Other preferences?\n\n> Starting from the current shape, I think 1 is preferable, since that\n> waiting logic is no longer shared between logical and physical\n> replications. But I'm not sure I like calling WalSndWaitForWal()\n> (maybe with previous location + 1?), because the function seems to do\n> too-much.\n\nI'm far from an expert on this code, but it does look like there's\na lot of stuff in WalSndWaitForWal that is specific to logical rep,\nso I'm not sure that (1) is workable. At the very least there'd\nhave to be a bunch more conditionals in that function than there are\nnow. 
In the end, a separate copy for physical rep might be better.\n\n(BTW, I think this code is in desperate need of a renaming\ncampaign to make it clearer which functions are for logical rep,\nphysical rep, or both.)\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 17 Apr 2020 09:59:25 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: 001_rep_changes.pl stalls" }, { "msg_contents": "\n\nOn 2020/04/17 14:41, Noah Misch wrote:\n> On Mon, Apr 13, 2020 at 09:45:16PM -0400, Tom Lane wrote:\n>> Noah Misch <noah@leadboat.com> writes:\n>>> This seems to have made the following race condition easier to hit:\n>>> https://www.postgresql.org/message-id/flat/20200206074552.GB3326097%40rfd.leadboat.com\n>>> https://www.postgresql.org/message-id/flat/21519.1585272409%40sss.pgh.pa.us\n>>\n>> Yeah, I just came to the same guess in the other thread.\n>>\n>>> While I don't think that indicates anything wrong with the fix for $SUBJECT,\n>>> creating more buildfarm noise is itself bad. I am inclined to revert the fix\n>>> after a week. Not immediately, in case it uncovers lower-probability bugs.\n>>> I'd then re-commit it after one of those threads fixes the other bug. Would\n>>> anyone like to argue for a revert earlier, later, or not at all?\n>>\n>> I don't think you should revert. Those failures are (just) often enough\n>> to be annoying but I do not think that a proper fix is very far away.\n> \n> That works for me, but an actual defect may trigger a revert. Fujii Masao\n> reported high walsender CPU usage after this patch. The patch caused idle\n> physical walsenders to use 100% CPU. When caught up, the\n> WalSndSendDataCallback for logical replication, XLogSendLogical(), sleeps\n> until more WAL is available. XLogSendPhysical() just returns when caught up.\n> No amount of WAL is too small for physical replication to dispatch, but\n> logical replication needs the full xl_tot_len of a record. Some options:\n> \n> 1. 
Make XLogSendPhysical() more like XLogSendLogical(), calling\n> WalSndWaitForWal() when no WAL is available. A quick version of this\n> passes tests, but I'll need to audit WalSndWaitForWal() for things that are\n> wrong for physical replication.\n\n(1) makes even physical replication walsender sleep in two places,\nwhich seems to make the code for physical replication complicated\nmore than necessary. I'd like to avoid (1) if possible.\n\n> \n> 2. Make XLogSendLogical() more like XLogSendPhysical(), returning when\n> insufficient WAL is available. This complicates the xlogreader.h API to\n> pass back \"wait for this XLogRecPtr\", and we'd then persist enough state to\n> resume decoding. This doesn't have any advantages to make up for those.\n> \n> 3. Don't avoid waiting in WalSndLoop(); instead, fix the stall by copying the\n> WalSndKeepalive() call from WalSndWaitForWal() to WalSndLoop(). This risks\n> further drift between the two wait sites; on the other hand, one could\n> refactor later to help avoid that.\n\nSince the additional call of WalSndKeepalive() is necessary only for\nlogical replication, it should be copied to, e.g., XLogSendLogical(),\ninstead of WalSndLoop()? For example, when XLogSendLogical() sets\nWalSndCaughtUp to true, it should call WalSndKeepalive()?\n\nThe root problem seems to be that when a WAL record that's a no-op for\nlogical rep is processed, a keepalive message is not sent immediately,\neven though we want pg_stat_replication to be updated promptly.\n(3) seems to address this problem directly and looks better to me.\n\n> 4. Keep the WalSndLoop() wait, but condition it on !logical. 
This is the\n> minimal fix, but it crudely punches through the abstraction between\n> WalSndLoop() and its WalSndSendDataCallback.\n\n(4) also looks good because it's simple, if we can redesign those\nfunctions in good shape.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Sat, 18 Apr 2020 00:29:58 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: 001_rep_changes.pl stalls" }, { "msg_contents": "On Fri, Apr 17, 2020 at 05:06:29PM +0900, Kyotaro Horiguchi wrote:\n> At Fri, 17 Apr 2020 17:00:15 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in \n> > By the way, if latch is consumed in WalSndLoop, succeeding call to\n> > WalSndWaitForWal cannot be woke-up by the latch-set. Doesn't that\n> > cause missing wakeups? (in other words, overlooking of wakeup latch).\n> \n> - Since the only source other than timeout of walsender wakeup is latch,\n> - we should avoid wasteful consuming of latch. (It is the same issue\n> - with [1]).\n> \n> + Since walsender is wokeup by LSN advancement via latch, we should\n> + avoid wasteful consuming of latch. (It is the same issue with [1]).\n> \n> \n> > If wakeup signal is not remembered on walsender (like\n> > InterruptPending), WalSndPhysical cannot enter a sleep with\n> > confidence.\n\nNo; per latch.h, \"What must be avoided is placing any checks for asynchronous\nevents after WaitLatch and before ResetLatch, as that creates a race\ncondition.\" In other words, the thing to avoid is calling ResetLatch()\nwithout next examining all pending work that a latch would signal. Each\nwalsender.c WaitLatch call does follow the rules.\n\nOn Sat, Apr 18, 2020 at 12:29:58AM +0900, Fujii Masao wrote:\n> On 2020/04/17 14:41, Noah Misch wrote:\n> >1. Make XLogSendPhysical() more like XLogSendLogical(), calling\n> > WalSndWaitForWal() when no WAL is available. 
A quick version of this\n> > passes tests, but I'll need to audit WalSndWaitForWal() for things that are\n> > wrong for physical replication.\n> \n> (1) makes even physical replication walsender sleep in two places and\n> which seems to make the code for physical replication complicated\n> more than necessary. I'd like to avoid (1) if possible.\n\nGood point.\n\n> >2. Make XLogSendLogical() more like XLogSendPhysical(), returning when\n> > insufficient WAL is available. This complicates the xlogreader.h API to\n> > pass back \"wait for this XLogRecPtr\", and we'd then persist enough state to\n> > resume decoding. This doesn't have any advantages to make up for those.\n> >\n> >3. Don't avoid waiting in WalSndLoop(); instead, fix the stall by copying the\n> > WalSndKeepalive() call from WalSndWaitForWal() to WalSndLoop(). This risks\n> > further drift between the two wait sites; on the other hand, one could\n> > refactor later to help avoid that.\n> \n> Since the additional call of WalSndKeepalive() is necessary only for\n> logical replication, it should be copied to, e.g., XLogSendLogical(),\n> instead of WalSndLoop()? For example, when XLogSendLogical() sets\n> WalSndCaughtUp to true, it should call WalSndKeepalive()?\n\nWe'd send a keepalive even when pq_flush_if_writable() can't empty the output\nbuffer. That could be acceptable, but it's not ideal.\n\n> The root problem seems that when WAL record that's no-opes for\n> logical rep is processed, keep alive message has not sent immediately,\n> in spite of that we want pg_stat_replication to be updated promptly.\n\nThe degree of promptness should be predictable, at least. If we removed the\nWalSndKeepalive() from WalSndWaitForWal(), pg_stat_replication updates would\nnot be prompt, but they would be predictable. I do, however, think prompt\nupdates are worthwhile.\n\n> (3) seems to try to address this problem straightly and looks better to me.\n> \n> >4. Keep the WalSndLoop() wait, but condition it on !logical. 
This is the\n> > minimal fix, but it crudely punches through the abstraction between\n> > WalSndLoop() and its WalSndSendDataCallback.\n> \n> (4) also looks good because it's simple, if we can redesign those\n> functions in good shape.\n\nLet's do that. I'm attaching the replacement implementation and the revert of\nv1.", "msg_date": "Sat, 18 Apr 2020 00:01:42 -0700", "msg_from": "Noah Misch <noah@leadboat.com>", "msg_from_op": true, "msg_subject": "Re: 001_rep_changes.pl stalls" }, { "msg_contents": "\n\nOn 2020/04/18 16:01, Noah Misch wrote:\n> On Fri, Apr 17, 2020 at 05:06:29PM +0900, Kyotaro Horiguchi wrote:\n>> At Fri, 17 Apr 2020 17:00:15 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in\n>>> By the way, if latch is consumed in WalSndLoop, succeeding call to\n>>> WalSndWaitForWal cannot be woke-up by the latch-set. Doesn't that\n>>> cause missing wakeups? (in other words, overlooking of wakeup latch).\n>>\n>> - Since the only source other than timeout of walsender wakeup is latch,\n>> - we should avoid wasteful consuming of latch. (It is the same issue\n>> - with [1]).\n>>\n>> + Since walsender is wokeup by LSN advancement via latch, we should\n>> + avoid wasteful consuming of latch. (It is the same issue with [1]).\n>>\n>>\n>>> If wakeup signal is not remembered on walsender (like\n>>> InterruptPending), WalSndPhysical cannot enter a sleep with\n>>> confidence.\n> \n> No; per latch.h, \"What must be avoided is placing any checks for asynchronous\n> events after WaitLatch and before ResetLatch, as that creates a race\n> condition.\" In other words, the thing to avoid is calling ResetLatch()\n> without next examining all pending work that a latch would signal. Each\n> walsender.c WaitLatch call does follow the rules.\n> \n> On Sat, Apr 18, 2020 at 12:29:58AM +0900, Fujii Masao wrote:\n>> On 2020/04/17 14:41, Noah Misch wrote:\n>>> 1. Make XLogSendPhysical() more like XLogSendLogical(), calling\n>>> WalSndWaitForWal() when no WAL is available. 
A quick version of this\n>>> passes tests, but I'll need to audit WalSndWaitForWal() for things that are\n>>> wrong for physical replication.\n>>\n>> (1) makes even physical replication walsender sleep in two places and\n>> which seems to make the code for physical replication complicated\n>> more than necessary. I'd like to avoid (1) if possible.\n> \n> Good point.\n> \n>>> 2. Make XLogSendLogical() more like XLogSendPhysical(), returning when\n>>> insufficient WAL is available. This complicates the xlogreader.h API to\n>>> pass back \"wait for this XLogRecPtr\", and we'd then persist enough state to\n>>> resume decoding. This doesn't have any advantages to make up for those.\n>>>\n>>> 3. Don't avoid waiting in WalSndLoop(); instead, fix the stall by copying the\n>>> WalSndKeepalive() call from WalSndWaitForWal() to WalSndLoop(). This risks\n>>> further drift between the two wait sites; on the other hand, one could\n>>> refactor later to help avoid that.\n>>\n>> Since the additional call of WalSndKeepalive() is necessary only for\n>> logical replication, it should be copied to, e.g., XLogSendLogical(),\n>> instead of WalSndLoop()? For example, when XLogSendLogical() sets\n>> WalSndCaughtUp to true, it should call WalSndKeepalive()?\n> \n> We'd send a keepalive even when pq_flush_if_writable() can't empty the output\n> buffer. That could be acceptable, but it's not ideal.\n> \n>> The root problem seems that when WAL record that's no-opes for\n>> logical rep is processed, keep alive message has not sent immediately,\n>> in spite of that we want pg_stat_replication to be updated promptly.\n> \n> The degree of promptness should be predictable, at least. If we removed the\n> WalSndKeepalive() from WalSndWaitForWal(), pg_stat_replication updates would\n> not be prompt, but they would be predictable. I do, however, think prompt\n> updates are worthwhile.\n> \n>> (3) seems to try to address this problem straightly and looks better to me.\n>>\n>>> 4. 
Keep the WalSndLoop() wait, but condition it on !logical. This is the\n>>> minimal fix, but it crudely punches through the abstraction between\n>>> WalSndLoop() and its WalSndSendDataCallback.\n>>\n>> (4) also looks good because it's simple, if we can redesign those\n>> functions in good shape.\n> \n> Let's do that. I'm attaching the replacement implementation and the revert of\n> v1.\n\nThanks for the patch! Though referencing XLogSendLogical inside WalSndLoop()\nmight be a bit ugly,, I'm fine with this change because it's simple and easier\nto understand.\n\n+\t\t * Block if we have unsent data. XXX For logical replication, let\n+\t\t * WalSndWaitForWal(), handle any other blocking; idle receivers need\n+\t\t * its additional actions. For physical replication, also block if\n+\t\t * caught up; its send_data does not block.\n\nIt might be better to s/WalSndWaitForWal()/send_data()? Because not only\nWalSndWaitForWal() but also WalSndWriteData() seems to handle the blocking.\nWalSndWriteData() is called also under send_data, i.e., XLogSendLogical().\n\n frame #2: 0x0000000106bcfa84 postgres`WalSndWriteData(ctx=0x00007fb2a4812d20, lsn=22608080, xid=488, last_write=false) at walsender.c:1247:2\n frame #3: 0x0000000106b98295 postgres`OutputPluginWrite(ctx=0x00007fb2a4812d20, last_write=false) at logical.c:540:2\n frame #4: 0x00000001073fe9b8 pgoutput.so`send_relation_and_attrs(relation=0x00000001073ba2c0, ctx=0x00007fb2a4812d20) at pgoutput.c:353:2\n frame #5: 0x00000001073fe7a0 pgoutput.so`maybe_send_schema(ctx=0x00007fb2a4812d20, relation=0x00000001073ba2c0, relentry=0x00007fb2a483aa60) at pgoutput.c:315:2\n frame #6: 0x00000001073fd4c0 pgoutput.so`pgoutput_change(ctx=0x00007fb2a4812d20, txn=0x00007fb2a502e428, relation=0x00000001073ba2c0, change=0x00007fb2a5030428) at pgoutput.c:394:2\n frame #7: 0x0000000106b99094 postgres`change_cb_wrapper(cache=0x00007fb2a482ed20, txn=0x00007fb2a502e428, relation=0x00000001073ba2c0, change=0x00007fb2a5030428) at 
logical.c:753:2\n frame #8: 0x0000000106ba2200 postgres`ReorderBufferCommit(rb=0x00007fb2a482ed20, xid=488, commit_lsn=22621088, end_lsn=22621136, commit_time=640675460323211, origin_id=0, origin_lsn=0) at reorderbuffer.c:1653:7\n frame #9: 0x0000000106b93c10 postgres`DecodeCommit(ctx=0x00007fb2a4812d20, buf=0x00007ffee954c0f8, parsed=0x00007ffee954bf90, xid=488) at decode.c:637:2\n frame #10: 0x0000000106b92fa9 postgres`DecodeXactOp(ctx=0x00007fb2a4812d20, buf=0x00007ffee954c0f8) at decode.c:245:5\n frame #11: 0x0000000106b92aee postgres`LogicalDecodingProcessRecord(ctx=0x00007fb2a4812d20, record=0x00007fb2a4812fe0) at decode.c:114:4\n frame #12: 0x0000000106bd2a16 postgres`XLogSendLogical at walsender.c:2863:3\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Mon, 20 Apr 2020 14:30:08 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: 001_rep_changes.pl stalls" }, { "msg_contents": "On Mon, Apr 20, 2020 at 02:30:08PM +0900, Fujii Masao wrote:\n> +\t\t * Block if we have unsent data. XXX For logical replication, let\n> +\t\t * WalSndWaitForWal(), handle any other blocking; idle receivers need\n> +\t\t * its additional actions. For physical replication, also block if\n> +\t\t * caught up; its send_data does not block.\n> \n> It might be better to s/WalSndWaitForWal()/send_data()? Because not only\n> WalSndWaitForWal() but also WalSndWriteData() seems to handle the blocking.\n> WalSndWriteData() is called also under send_data, i.e., XLogSendLogical().\n\nThanks for reviewing. WalSndWriteData() blocks when we have unsent data,\nwhich is the same cause for blocking in WalSndLoop(). Since the comment you\nquote says we let WalSndWaitForWal() \"handle any other blocking\", I don't\nthink your proposed change makes it more correct. 
Also, if someone wants to\nrefactor this, the place to look is WalSndWaitForWal(), not any other part of\nsend_data().\n\n\n", "msg_date": "Mon, 20 Apr 2020 00:02:15 -0700", "msg_from": "Noah Misch <noah@leadboat.com>", "msg_from_op": true, "msg_subject": "Re: 001_rep_changes.pl stalls" }, { "msg_contents": "At Sat, 18 Apr 2020 00:01:42 -0700, Noah Misch <noah@leadboat.com> wrote in \n> On Fri, Apr 17, 2020 at 05:06:29PM +0900, Kyotaro Horiguchi wrote:\n> > At Fri, 17 Apr 2020 17:00:15 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in \n> > > By the way, if latch is consumed in WalSndLoop, succeeding call to\n> > > WalSndWaitForWal cannot be woke-up by the latch-set. Doesn't that\n> > > cause missing wakeups? (in other words, overlooking of wakeup latch).\n> > \n> > - Since the only source other than timeout of walsender wakeup is latch,\n> > - we should avoid wasteful consuming of latch. (It is the same issue\n> > - with [1]).\n> > \n> > + Since walsender is wokeup by LSN advancement via latch, we should\n> > + avoid wasteful consuming of latch. (It is the same issue with [1]).\n> > \n> > \n> > > If wakeup signal is not remembered on walsender (like\n> > > InterruptPending), WalSndPhysical cannot enter a sleep with\n> > > confidence.\n> \n> No; per latch.h, \"What must be avoided is placing any checks for asynchronous\n> events after WaitLatch and before ResetLatch, as that creates a race\n> condition.\" In other words, the thing to avoid is calling ResetLatch()\n> without next examining all pending work that a latch would signal. Each\n> walsender.c WaitLatch call does follow the rules.\n\nI didn't meant that, of course. I thought of more or less the same\nwith moving the trigger from latch to signal then the handler sets a\nflag and SetLatch(). 
If we use a bare latch, we should avoid falsely\nentering sleep, which also makes things complex.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Mon, 20 Apr 2020 16:15:40 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: 001_rep_changes.pl stalls" }, { "msg_contents": "At Mon, 20 Apr 2020 14:30:08 +0900, Fujii Masao <masao.fujii@oss.nttdata.com> wrote in \n> \n> \n> On 2020/04/18 16:01, Noah Misch wrote:\n> > On Sat, Apr 18, 2020 at 12:29:58AM +0900, Fujii Masao wrote:\n> >>> 4. Keep the WalSndLoop() wait, but condition it on !logical. This is\n> >>> the\n> >>> minimal fix, but it crudely punches through the abstraction between\n> >>> WalSndLoop() and its WalSndSendDataCallback.\n> >>\n> >> (4) also looks good because it's simple, if we can redesign those\n> >> functions in good shape.\n> > Let's do that. I'm attaching the replacement implementation and the\n> > revert of\n> > v1.\n> \n> Thanks for the patch!
Though referencing XLogSendLogical inside\n> WalSndLoop()\n> might be a bit ugly, I'm fine with this change because it's simple\n> and easier\n> to understand.\n\nI thought that if we do this, read_data returns a boolean that indicates\nwhether to wait for the latch or for an incoming packet, or returns a wake event\nmask.\n\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Mon, 20 Apr 2020 16:24:06 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: 001_rep_changes.pl stalls" }, { "msg_contents": "On Mon, Apr 20, 2020 at 04:15:40PM +0900, Kyotaro Horiguchi wrote:\n> At Sat, 18 Apr 2020 00:01:42 -0700, Noah Misch <noah@leadboat.com> wrote in \n> > On Fri, Apr 17, 2020 at 05:06:29PM +0900, Kyotaro Horiguchi wrote:\n> > > At Fri, 17 Apr 2020 17:00:15 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in \n> > > > By the way, if latch is consumed in WalSndLoop, succeeding call to\n> > > > WalSndWaitForWal cannot be woke-up by the latch-set. Doesn't that\n> > > > cause missing wakeups? (in other words, overlooking of wakeup latch).\n> > > \n> > > - Since the only source other than timeout of walsender wakeup is latch,\n> > > - we should avoid wasteful consuming of latch. (It is the same issue\n> > > - with [1]).\n> > > \n> > > + Since walsender is wokeup by LSN advancement via latch, we should\n> > > + avoid wasteful consuming of latch. (It is the same issue with [1]).\n> > > \n> > > \n> > > > If wakeup signal is not remembered on walsender (like\n> > > > InterruptPending), WalSndPhysical cannot enter a sleep with\n> > > > confidence.\n> > \n> > No; per latch.h, \"What must be avoided is placing any checks for asynchronous\n> > events after WaitLatch and before ResetLatch, as that creates a race\n> > condition.\" In other words, the thing to avoid is calling ResetLatch()\n> > without next examining all pending work that a latch would signal.
Each\n> > walsender.c WaitLatch call does follow the rules.\n> \n> I didn't meant that, of course. I thought of more or less the same\n> with moving the trigger from latch to signal then the handler sets a\n> flag and SetLatch(). If we use bare latch, we should avoid false\n> entering to sleep, which also makes thinks compolex.\n\nI don't understand. If there's a defect, can you write a test case or\ndescribe a sequence of events (e.g. at line X, variable Y has value Z)?\n\n\n", "msg_date": "Mon, 20 Apr 2020 00:59:54 -0700", "msg_from": "Noah Misch <noah@leadboat.com>", "msg_from_op": true, "msg_subject": "Re: 001_rep_changes.pl stalls" }, { "msg_contents": "At Mon, 20 Apr 2020 00:59:54 -0700, Noah Misch <noah@leadboat.com> wrote in \n> On Mon, Apr 20, 2020 at 04:15:40PM +0900, Kyotaro Horiguchi wrote:\n> > At Sat, 18 Apr 2020 00:01:42 -0700, Noah Misch <noah@leadboat.com> wrote in \n> > > On Fri, Apr 17, 2020 at 05:06:29PM +0900, Kyotaro Horiguchi wrote:\n> > > > At Fri, 17 Apr 2020 17:00:15 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in \n> > > > > By the way, if latch is consumed in WalSndLoop, succeeding call to\n> > > > > WalSndWaitForWal cannot be woke-up by the latch-set. Doesn't that\n> > > > > cause missing wakeups? (in other words, overlooking of wakeup latch).\n> > > > \n> > > > - Since the only source other than timeout of walsender wakeup is latch,\n> > > > - we should avoid wasteful consuming of latch. (It is the same issue\n> > > > - with [1]).\n> > > > \n> > > > + Since walsender is wokeup by LSN advancement via latch, we should\n> > > > + avoid wasteful consuming of latch. 
(It is the same issue with [1]).\n> > > > \n> > > > \n> > > > > If wakeup signal is not remembered on walsender (like\n> > > > > InterruptPending), WalSndPhysical cannot enter a sleep with\n> > > > > confidence.\n> > > \n> > > No; per latch.h, \"What must be avoided is placing any checks for asynchronous\n> > > events after WaitLatch and before ResetLatch, as that creates a race\n> > > condition.\" In other words, the thing to avoid is calling ResetLatch()\n> > > without next examining all pending work that a latch would signal. Each\n> > > walsender.c WaitLatch call does follow the rules.\n> > \n> > I didn't meant that, of course. I thought of more or less the same\n> > with moving the trigger from latch to signal then the handler sets a\n> > flag and SetLatch(). If we use bare latch, we should avoid false\n> > entering to sleep, which also makes thinks compolex.\n> \n> I don't understand. If there's a defect, can you write a test case or\n> describe a sequence of events (e.g. at line X, variable Y has value Z)?\n\nIndeed. Anyway the current version cannot have such a possible issue.\n\nThanks.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Mon, 20 Apr 2020 17:38:48 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: 001_rep_changes.pl stalls" }, { "msg_contents": "\n\nOn 2020/04/20 16:02, Noah Misch wrote:\n> On Mon, Apr 20, 2020 at 02:30:08PM +0900, Fujii Masao wrote:\n>> +\t\t * Block if we have unsent data. XXX For logical replication, let\n>> +\t\t * WalSndWaitForWal(), handle any other blocking; idle receivers need\n>> +\t\t * its additional actions. For physical replication, also block if\n>> +\t\t * caught up; its send_data does not block.\n>>\n>> It might be better to s/WalSndWaitForWal()/send_data()? 
Because not only\n>> WalSndWaitForWal() but also WalSndWriteData() seems to handle the blocking.\n>> WalSndWriteData() is called also under send_data, i.e., XLogSendLogical().\n> \n> Thanks for reviewing. WalSndWriteData() blocks when we have unsent data,\n> which is the same cause for blocking in WalSndLoop(). Since the comment you\n> quote says we let WalSndWaitForWal() \"handle any other blocking\", I don't\n> think your proposed change makes it more correct.\n\nI was misreading this as something like \"any other blocking than\nthe blocking in WalSndLoop()\". Ok, I have no more comments on\nthe patch.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Mon, 20 Apr 2020 19:24:28 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: 001_rep_changes.pl stalls" }, { "msg_contents": "On Mon, Apr 20, 2020 at 07:24:28PM +0900, Fujii Masao wrote:\n> I was misreading this as something like \"any other blocking than\n> the blocking in WalSndLoop()\". Ok, I have no more comments on\n> the patch.\n\nPatch looks rather sane to me at quick glance. I can see that WAL\nsenders are now not stuck at 100% CPU per process when sitting idle, \nfor both physical and logical replication. Thanks.\n--\nMichael", "msg_date": "Tue, 21 Apr 2020 11:25:05 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: 001_rep_changes.pl stalls" } ]
[ { "msg_contents": "Hi all,\n\nA quick make check with Postgres 11 and 12 for src/test/ssl/ shows a\nlot of difference in run time, using the same set of options with SSL\nand the same compilation flags (OpenSSL 1.1.1f, with debugging and\nassertions enabled among other things FWIW), with 12 taking up to five\nminutes to complete and 11 finishing as a matter of seconds for me.\n\nI have spent a couple of hours on that, to find out that libpq tries\nto initialize a GSS context where the client remains stuck:\n#9 0x00007fcd839bf72c in krb5_expand_hostname () from\n/usr/lib/x86_64-linux-gnu/libkrb5.so.3\n#10 0x00007fcd839bf9e0 in krb5_sname_to_principal () from\n/usr/lib/x86_64-linux-gnu/libkrb5.so.3\n#11 0x00007fcd83ad55b4 in ?? () from\n/usr/lib/x86_64-linux-gnu/libgssapi_krb5.so.2\n#12 0x00007fcd83ac0a98 in ?? () from\n/usr/lib/x86_64-linux-gnu/libgssapi_krb5.so.2\n#13 0x00007fcd83ac200f in gss_init_sec_context () from\n/usr/lib/x86_64-linux-gnu/libgssapi_krb5.so.2\n#14 0x00007fcd8423b24d in pqsecure_open_gss (conn=0x5582fa8cad90) at\nfe-secure-gssapi.c:626\n#15 0x00007fcd8421cd2b in PQconnectPoll (conn=0x5582fa8cad90) at\nfe-connect.c:3165\n#16 0x00007fcd8421b311 in connectDBComplete (conn=0x5582fa8cad90) at\nfe-connect.c:2182\n#17 0x00007fcd84218c1f in PQconnectdbParams (keywords=0x5582fa8cacf0,\nvalues=0x5582fa8cad40, expand_dbname=1) at fe-connect.c:647\n#18 0x00005582f8a81c87 in main (argc=8, argv=0x7ffe5ddb9df8) at\nstartup.c:266\n\nHowever this makes little sense, why would libpq do that in the\ncontext of an OpenSSL connection? 
Well, makeEmptyPGconn() does that,\nwhich means that libpq would by default try to use GSS merely because libpq\nis *built* with GSS:\n#ifdef ENABLE_GSS\n conn->try_gss = true;\n#endif\n\nIt is possible to force this flag to false by using\ngssencmode=disable, but that's not really user-friendly in my opinion\nbecause nobody is going to remember that for connection strings with\nSSL settings, so a lot of applications are taking a performance hit at\nconnection time because of that. I think it's also a bad\nidea from the start to assume that we have to try GSS by default, as\nany new code path opening a secured connection may fall into the trap\nof attempting to use GSS if this flag is not reset. Shouldn't we try\nto set this flag to false by default, and set it to true only if\nnecessary depending on gssencmode? A quick hack switching this flag\nto false in makeEmptyPGconn() gives back the past performance to\nsrc/test/ssl/, FWIW.\n\nLooking around, it seems to me that there is a second issue as of\nPQconnectPoll(), where we don't reset the state machine correctly for\ntry_gss within reset_connection_state_machine, and instead HEAD does\nit in connectDBStart().\n\nAlso, I have noted a hack as of pqsecure_open_gss() which does that:\n /*\n * We're done - hooray!
Kind of gross, but we need to disable SSL\n * here so that we don't accidentally tunnel one over the other.\n */\n#ifdef USE_SSL\n conn->allow_ssl_try = false;\n#endif\nAnd that looks like a rather bad idea to me..\n\nThanks,\n--\nMichael", "msg_date": "Mon, 6 Apr 2020 16:25:57 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": true, "msg_subject": "Problems with GSS encryption and SSL in libpq in 12~" }, { "msg_contents": "On Mon, Apr 06, 2020 at 04:25:57PM +0900, Michael Paquier wrote:\n> It is possible to enforce this flag to false by using\n> gssencmode=disable, but that's not really user-friendly in my opinion\n> because nobody is going to remember that for connection strings with\n> SSL settings so a lot of application are taking a performance hit at\n> connection because of that in my opinion. I think that's also a bad\n> idea from the start to assume that we have to try GSS by default, as\n> any new code path opening a secured connection may fail into the trap\n> of attempting to use GSS if this flag is not reset. Shouldn't we try\n> to set this flag to false by default, and set it to true only if\n> necessary depending on gssencmode? A quick hack switching this flag\n> to false in makeEmptyPGconn() gives back the past performance to\n> src/test/ssl/, FWIW.\n> \n> Looking around, it seems to me that there is a second issue as of\n> PQconnectPoll(), where we don't reset the state machine correctly for\n> try_gss within reset_connection_state_machine, and instead HEAD does\n> it in connectDBStart().\n\nSo, a lot of things come down to PQconnectPoll() here. Once the\nconnection state reached is CONNECTION_MADE, we first try a GSS\nconnection if try_gss is true, and a SSL connection attempt follows\njust after. This makes me wonder about the following things:\n- gssencmode is prefer by default, the same as sslmode. 
Shouldn't we\nissue an error if any of them is not disabled to avoid any conflicts\nin the client, making the choice of gssencmode=prefer by default a bad\nchoice? It seems to me that there could be an argument to make\ngssencmode disabled by default, and issue an error if somebody\nattempts a connection without at least gssencode or sslmode set as\ndisabled.\n- The current code tries a GSS connection first, and then it follows\nwith SSL, which is annoying because gssencmode=prefer by default means\nthat any user would pay the cost of attempting a GSS connection for\nnothing (with or even without SSL). Shouldn't we do the opposite\nhere, by trying SSL first, and then GSS?\n\nFor now, I am attaching a WIP patch which seems like a good angle of\nattack for libpq, meaning that if sslmode and gssencmode are both set,\nthen we would attempt a SSL connection before attempting GSS, so as\nany user of SSL does not pay a performance hit compared to past\nversions (I know that src/test/kerberos/ fails with that because\nsslmode=prefer is triggered first in PQconnectPoll(), but that's just\nto show the idea I had in mind).\n\nAny thoughts?\n--\nMichael", "msg_date": "Mon, 6 Apr 2020 17:17:11 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": true, "msg_subject": "Re: Problems with GSS encryption and SSL in libpq in 12~" }, { "msg_contents": "Greetings,\n\n* Michael Paquier (michael@paquier.xyz) wrote:\n> A quick make check with Postgres 11 and 12 for src/test/ssl/ shows a\n> lot of difference in run time, using the same set of options with SSL\n> and the same compilation flags (OpenSSL 1.1.1f, with debugging and\n> assertions enabled among other things FWIW), with 12 taking up to five\n> minutes to complete and 11 finishing as a matter of seconds for me.\n> \n> I have spent a couple of hours on that, to find out that libpq tries\n> to initialize a GSS context where the client remains stuck:\n> #9 0x00007fcd839bf72c in krb5_expand_hostname () from\n> 
/usr/lib/x86_64-linux-gnu/libkrb5.so.3\n> #10 0x00007fcd839bf9e0 in krb5_sname_to_principal () from\n> /usr/lib/x86_64-linux-gnu/libkrb5.so.3\n> #11 0x00007fcd83ad55b4 in ?? () from\n> /usr/lib/x86_64-linux-gnu/libgssapi_krb5.so.2\n> #12 0x00007fcd83ac0a98 in ?? () from\n> /usr/lib/x86_64-linux-gnu/libgssapi_krb5.so.2\n> #13 0x00007fcd83ac200f in gss_init_sec_context () from\n> /usr/lib/x86_64-linux-gnu/libgssapi_krb5.so.2\n> #14 0x00007fcd8423b24d in pqsecure_open_gss (conn=0x5582fa8cad90) at\n> fe-secure-gssapi.c:626\n> #15 0x00007fcd8421cd2b in PQconnectPoll (conn=0x5582fa8cad90) at\n> fe-connect.c:3165\n> #16 0x00007fcd8421b311 in connectDBComplete (conn=0x5582fa8cad90) at\n> fe-connect.c:2182\n> #17 0x00007fcd84218c1f in PQconnectdbParams (keywords=0x5582fa8cacf0,\n> values=0x5582fa8cad40, expand_dbname=1) at fe-connect.c:647\n> #18 0x00005582f8a81c87 in main (argc=8, argv=0x7ffe5ddb9df8) at\n> startup.c:266\n> \n> However this makes little sense, why would libpq do that in the\n> context of an OpenSSL connection? 
Well, makeEmptyPGconn() does that,\n> which means that libpq would try by default to use GSS just if libpq\n> is *built* with GSS:\n> #ifdef ENABLE_GSS\n> conn->try_gss = true;\n> #endif\n\nSure, but if you look at what is done with it:\n\n/*\n * If GSSAPI encryption is enabled, then call\n * pg_GSS_have_cred_cache() which will return true if we can\n * acquire credentials (and give us a handle to use in\n * conn->gcred), and then send a packet to the server asking\n * for GSSAPI Encryption (and skip past SSL negotiation and\n * regular startup below).\n */\nif (conn->try_gss && !conn->gctx)\n conn->try_gss = pg_GSS_have_cred_cache(&conn->gcred);\n\nIn other words, it's trying because a call to gss_acquire_cred() (called\nfrom pg_GSS_have_cred_cache()) returned without error, indicating that\nGSS should be possible to attempt.\n\nIf you have GSS compiled in, and you've got a credential cache such that\ngss_acquire_cred() returns true, it seems entirely reasonable that you'd\nlike to connect using GSS encryption.\n\n> It is possible to enforce this flag to false by using\n> gssencmode=disable, but that's not really user-friendly in my opinion\n> because nobody is going to remember that for connection strings with\n> SSL settings so a lot of application are taking a performance hit at\n> connection because of that in my opinion. I think that's also a bad\n> idea from the start to assume that we have to try GSS by default, as\n> any new code path opening a secured connection may fail into the trap\n> of attempting to use GSS if this flag is not reset. Shouldn't we try\n> to set this flag to false by default, and set it to true only if\n> necessary depending on gssencmode? 
A quick hack switching this flag\n> to false in makeEmptyPGconn() gives back the past performance to\n> src/test/ssl/, FWIW.\n\nWe don't just always try to do GSS, that certainly wouldn't make sense-\nwe only try when gss_acquire_cred() comes back without an error.\n\nAs this is part of the initial connection, it's also not possible to\ndecide to do it by \"depending on gssencmode\", as we haven't talked to\nthe server at all at this point and need to decide if we're going to do\nGSS encryption or not with the initial packet. Note that this is\nmore-or-less identical to what we do with SSL, and, as you saw, we\ndefault to 'prefer' with GSSENCMODE, but you can set it to 'disable' on\nthe client side if you don't want to try GSS, even when you have a\nclient compiled with GSS and you have a credential cache.\n\n> Looking around, it seems to me that there is a second issue as of\n> PQconnectPoll(), where we don't reset the state machine correctly for\n> try_gss within reset_connection_state_machine, and instead HEAD does\n> it in connectDBStart().\n\nNot following exactly what you're referring to here, but I see you've\nsent a follow-up email about this and will respond to that\nindependently.\n\n> Also, I have noted a hack as of pqsecure_open_gss() which does that:\n> /*\n> * We're done - hooray! Kind of gross, but we need to disable SSL\n> * here so that we don't accidentally tunnel one over the other.\n> */\n> #ifdef USE_SSL\n> conn->allow_ssl_try = false;\n> #endif\n> And that looks like a rather bad idea to me..\n\nTunneling SSL over GSS encryption is definitely a bad idea, which is why\nwe prevent that from happening. 
I'm not sure what the issue here is-\nare you suggesting that we should support tunneling SSL over GSS\nencryption..?\n\nThanks,\n\nStephen", "msg_date": "Sat, 2 May 2020 14:15:08 -0400", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: Problems with GSS encryption and SSL in libpq in 12~" }, { "msg_contents": "Greetings,\n\n* Michael Paquier (michael@paquier.xyz) wrote:\n> On Mon, Apr 06, 2020 at 04:25:57PM +0900, Michael Paquier wrote:\n> > It is possible to enforce this flag to false by using\n> > gssencmode=disable, but that's not really user-friendly in my opinion\n> > because nobody is going to remember that for connection strings with\n> > SSL settings so a lot of application are taking a performance hit at\n> > connection because of that in my opinion. I think that's also a bad\n> > idea from the start to assume that we have to try GSS by default, as\n> > any new code path opening a secured connection may fail into the trap\n> > of attempting to use GSS if this flag is not reset. Shouldn't we try\n> > to set this flag to false by default, and set it to true only if\n> > necessary depending on gssencmode? A quick hack switching this flag\n> > to false in makeEmptyPGconn() gives back the past performance to\n> > src/test/ssl/, FWIW.\n> > \n> > Looking around, it seems to me that there is a second issue as of\n> > PQconnectPoll(), where we don't reset the state machine correctly for\n> > try_gss within reset_connection_state_machine, and instead HEAD does\n> > it in connectDBStart().\n> \n> So, a lot of things come down to PQconnectPoll() here. Once the\n> connection state reached is CONNECTION_MADE, we first try a GSS\n> connection if try_gss is true, and a SSL connection attempt follows\n> just after. This makes me wonder about the following things:\n> - gssencmode is prefer by default, the same as sslmode. 
Shouldn't we\n> issue an error if any of them is not disabled to avoid any conflicts\n> in the client, making the choice of gssencmode=prefer by default a bad\n> choice? It seems to me that there could be an argument to make\n> gssencmode disabled by default, and issue an error if somebody\n> attempts a connection without at least gssencode or sslmode set as\n> disabled.\n\nI don't see why it would make sense to throw an error and require that\none of them be disabled. I certainly don't agree that we should disable\nGSS encryption by default, or that there's any reason to throw an error\nif both GSS and SSL are set to 'prefer' (as our current default is).\n\n> - The current code tries a GSS connection first, and then it follows\n> with SSL, which is annoying because gssencmode=prefer by default means\n> that any user would pay the cost of attempting a GSS connection for\n> nothing (with or even without SSL). Shouldn't we do the opposite\n> here, by trying SSL first, and then GSS?\n\nA GSS connection is only attempted, as mentioned, if your GSS library\nclaims that there's a possibility that credentials could be acquired for\nthe connection.\n\n> For now, I am attaching a WIP patch which seems like a good angle of\n> attack for libpq, meaning that if sslmode and gssencmode are both set,\n> then we would attempt a SSL connection before attempting GSS, so as\n> any user of SSL does not pay a performance hit compared to past\n> versions (I know that src/test/kerberos/ fails with that because\n> sslmode=prefer is triggered first in PQconnectPoll(), but that's just\n> to show the idea I had in mind).\n> \n> Any thoughts?\n\nI don't agree with the assumption that, in the face of having GSS up and\nrunning, which actually validates the client and the server, that we\nshould prefer SSL, where our default configuration for SSL does *not*\nvalidate properly *either* the client or the server.\n\nWhat it sounds like to me is that it'd be helpful for you to review why\nyour 
environment has a GSS credential cache (or, at least,\ngss_acquire_cred() returns without any error) but GSS isn't properly\nworking. If you're using OSX, it's possible the issue here is actually\nthe broken and ridiculously ancient kerberos code that OSX ships with\n(and which, in another thread, there's a discussion about detecting and\nfailing to build if it's detected as it's just outright broken). If\nit's not that then it'd be great to understand more about the specific\nenvironment and what things like 'klist' return.\n\nThanks,\n\nStephen", "msg_date": "Sat, 2 May 2020 14:26:50 -0400", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: Problems with GSS encryption and SSL in libpq in 12~" } ]
[ { "msg_contents": "When testing commit c6b9204 with CLOBBER_CACHE_ALWAYS, of the 20 hours for\ncheck-world, 001_rep_changes.pl took 1.8 hours. At commit 5406513, the test\nfailed at a poll_query_until() timeout[1]. The slow part is the logical\nreplication of \"DELETE FROM tab_ins WHERE a > 0\", which deletes 100 records\nfrom a table of ~1100 records, using RelationFindReplTupleSeq().\ntuples_equal() called lookup_type_cache() for every comparison. Performing\nthose lookups once per RelationFindReplTupleSeq(), as attached, cut the test's\nruntime by an order of magnitude. While performance for CLOBBER_CACHE_ALWAYS\nis not important, this is consistent with record_eq() and is easy. I'm\nslightly inclined not to back-patch it, though.\n\n[1] This seemed to result from the poll query being 2-3x faster at commit\n5406513, not from logical replication being slower. (poll_query_until() times\nout after 1800 polls separated by 0.1s sleeps, however long that takes.) I\nhad guessed that commit 1c7a0b3 greatly accelerated this test case, but it\ngave about a 4% improvement under CLOBBER_CACHE_ALWAYS.", "msg_date": "Mon, 6 Apr 2020 01:54:20 -0700", "msg_from": "Noah Misch <noah@leadboat.com>", "msg_from_op": true, "msg_subject": "Optimizing RelationFindReplTupleSeq() for CLOBBER_CACHE_ALWAYS" } ]
[ { "msg_contents": "Propose\n\n Optimization of the use of the VALGRIND_MEMPOOL_* family of macros.\n\nHow?\n\n - add a GetMemoryChunkCapacity method, which returns the size of the\nusable space in a chunk\n - use GetMemoryChunkCapacity in the VALGRIND_MEMPOOL_* calls\n - call VALGRIND_MEMPOOL_CHANGED only for chunks that really changed\n\n*) For the full patch code, see the attachment\n001-mem-chunk-capacity-and-repalloc-valgrind.patch\n\nWhy?\n\nUnder valgrind, the VALGRIND_MEMPOOL_CHANGE call works very slowly.\nWith a large number of allocated memory chunks (a few thousand or more)\nit is almost impossible to wait for the program/test to finish. This\ncreates problems during debugging and auto-tests.\n\nFor example, the code below executes in 90000ms on a Core i7:\n\n for (int64 i = 0; i < 16000; ++i)\n chunks[i] = palloc(64);\n\n for (int64 i = 0; i < 16000; ++i)\n chunks[i] = repalloc(chunks[i], 62);\n\nWith the patch above, this code executes in 150ms.\n\n*) For the full extension code demonstrating the problem, see the\nattachment valgrind_demo.tar.gz\n\nAn additional example is the rum extension\n(https://github.com/postgrespro/rum).\n\nTo be able to run its tests, the generate_series size needs to be reduced\nfrom 100K to 16K (https://github.com/postgrespro/rum/issues/76), but under\nvalgrind the test execution remains unacceptably slow. The patch above\ncompletely solves the problem.", "msg_date": "Mon, 6 Apr 2020 12:47:38 +0300", "msg_from": "\"Ivan N. Taranov\" <i.taranov@postgrespro.ru>", "msg_from_op": true, "msg_subject": "[PATCH] optimization of VALGRIND_MEMPOOL_* usage" } ]
[ { "msg_contents": "Hi All,\n\nI am working on “pg_locale compilation error with Visual Studio 2017”;\nrelated threads: [1],[2].\nWe are getting a compilation error in the static char *IsoLocaleName(const char\n*winlocname) function in the pg_locale.c file. This function tries to\nconvert the locale name into the Unix style. For example, it will change\nthe locale name \"en-US\" into \"en_US\".\nIt creates a locale using _locale_t _create_locale(int category, const\nchar *locale) and then tries to access the name of that locale by pointing\nto an internal element of the structure, loct->locinfo->locale_name[LC_CTYPE],\nbut that field has been missing from _locale_t since VS2015, which is causing\nthe compilation error. I found a few useful APIs that can be used here.\n\nResolveLocaleName and GetLocaleInfoEx both can take a locale in the following\nformat:\n<language>-<REGION>\n<language>-<Script>-<REGION>\n\nResolveLocaleName will try to find the closest matching locale name to the\ninput locale name even if the input locale name is invalid, given that the\n<language> is correct.\n\nen-something-YZ => en-US\nex-US => error\nAa-aaaa-aa => aa-ET represents (Afar,Ethiopia)\nAa-aaa-aa => aa-ET\n\nRefer to [4] for more details.\n\nBut in the case of GetLocaleInfoEx, it will only check the format of the\ninput locale, as mentioned before; if correct, it will return the name of\nthe locale, otherwise it will return an error.\nFor example:\n\nen-something-YZ => error\nex-US => ex-US\naa-aaaa-aa => aa-Aaaa-AA\naa-aaa-aa => error.\n\nRefer to [5] for more details.\n\nCurrently, it is using _create_locale, which behaves similarly to\nGetLocaleInfoEx, i.e. it also only checks the format; the only difference is\nthat it accepts a bigger set.\nI thought of using GetLocaleInfoEx for the fix because it behaves similarly\nto the already existing code, and a similar issue was resolved earlier using the\nsame. 
I have attached the patch, let me know your thoughts.\n\n[1]\nhttps://www.postgresql.org/message-id/e317eec9-d40d-49b9-b776-e89cf1d18c82@postgrespro.ru\n[2] https://www.postgresql.org/message-id/23073.1526049547%40sss.pgh.pa.us\n[3] https://docs.microsoft.com/en-us/windows/win32/intl/locale-names\n[4]\nhttps://docs.microsoft.com/en-us/windows/win32/api/winnls/nf-winnls-resolvelocalename\n[5]\nhttps://docs.microsoft.com/en-us/windows/win32/api/winnls/nf-winnls-getlocaleinfoex\n-- \nRegards,\nDavinder.\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Mon, 6 Apr 2020 16:37:57 +0530", "msg_from": "davinder singh <davindersingh2692@gmail.com>", "msg_from_op": true, "msg_subject": "PG compilation error with Visual Studio 2015/2017/2019" },
{ "msg_contents": "On Mon, Apr 6, 2020 at 1:08 PM davinder singh <davindersingh2692@gmail.com>\nwrote:\n\n>\n> I am working on “pg_locale compilation error with Visual Studio 2017”,\n> Related threads[1],[2].\n> We are getting compilation error in static char *IsoLocaleName(const char\n> *winlocname) function in pg_locale.c file. This function is trying to\n> convert the locale name into the Unix style. For example, it will change\n> the locale name \"en-US\" into \"en_US\".\n>\n\nHow do you reproduce this issue with Visual Studio? I see there is an ifdef\ndirective above IsoLocaleName():\n\n#if defined(WIN32) && defined(LC_MESSAGES)\n\nI would expect defined(LC_MESSAGES) to be false in MSVC.\n\nRegards,\n\nJuan José Santamaría Flecha", "msg_date": "Mon, 6 Apr 2020 16:47:17 +0200", "msg_from": "=?UTF-8?Q?Juan_Jos=C3=A9_Santamar=C3=ADa_Flecha?=\n <juanjo.santamaria@gmail.com>", "msg_from_op": false, "msg_subject": "Re: PG compilation error with Visual Studio 2015/2017/2019" },
{ "msg_contents": "On Mon, Apr 6, 2020 at 8:17 PM Juan José Santamaría Flecha <\njuanjo.santamaria@gmail.com> wrote:\n\n>\n> How do you reproduce this issue with Visual Studio? I see there is an\n> ifdef directive above IsoLocaleName():\n>\n> #if defined(WIN32) && defined(LC_MESSAGES)\n>\n> I would expect defined(LC_MESSAGES) to be false in MSVC.\n>\n\nYou need to enable NLS support in the config file. Let me know if that\nanswers your question.\n\n-- \nRegards,\nDavinder.", "msg_date": "Tue, 7 Apr 2020 11:13:59 +0530", "msg_from": "davinder singh <davindersingh2692@gmail.com>", "msg_from_op": true, "msg_subject": "Re: PG compilation error with Visual Studio 2015/2017/2019" },
{ "msg_contents": "On Tue, Apr 7, 2020 at 7:44 AM davinder singh <davindersingh2692@gmail.com>\nwrote:\n\n>\n> You need to enable NLS support in the config file. Let me know if that\n> answers your question.\n>\n\nYes, I can see where this is coming from now, thanks.\n\nCurrently, it is using _create_locale it behaves similarly to\n> GetLocaleInfoEx i.e. 
it also only checks the format only difference is for\n> a bigger set.\n> I thought of using GetLocaleInfoEx for the fix because it behaves\n> similarly to the already existing and a similar issue was resolved earlier\n> using the same. I have attached the patch, let me know your thoughts.\n>\n\nYou are right, that the way to get the locale name information is through\nGetLocaleInfoEx().\n\nA couple of comments on the patch:\n\n* I think you could save a couple of code lines, and make it clearer, by\nmerging both tests on _MSC_VER into a single #if... #else and leaving as\ncommon code:\n+ }\n+ else\n+ return NULL;\n+#endif /*_MSC_VER && _MSC_VER < 1900*/\n\n* The logic on \"defined(_MSC_VER) && (_MSC_VER >= 1900)\" is defined as\n\"_WIN32_WINNT >= 0x0600\" on other parts of the code. I would\nrecommend using the later.\n\n* This looks like a spurious change:\n - sizeof(iso_lc_messages), NULL);\n+ sizeof(iso_lc_messages), NULL);\n\nRegards,\n\nJuan José Santamaría Flecha", "msg_date": "Tue, 7 Apr 2020 17:00:18 +0200", "msg_from": "=?UTF-8?Q?Juan_Jos=C3=A9_Santamar=C3=ADa_Flecha?=\n <juanjo.santamaria@gmail.com>", "msg_from_op": false, "msg_subject": "Re: PG compilation error with Visual Studio 2015/2017/2019" },
{ "msg_contents": "On Tue, Apr 7, 2020 at 8:30 PM Juan José Santamaría Flecha <\njuanjo.santamaria@gmail.com> wrote:\n\n>\n> * The logic on \"defined(_MSC_VER) && (_MSC_VER >= 1900)\" is defined as\n> \"_WIN32_WINNT >= 0x0600\" on other parts of the code. I would\n> recommend using the later.\n>\nI think \"_WIN32_WINNT >= 0x0600\" represents windows versions only and\ndoesn't include any information about Visual Studio versions. So I am\nsticking to \" defined(_MSC_VER) && (_MSC_VER >= 1900)\".\nI have resolved other comments. I have attached a new version of the patch.\n-- \nRegards,\nDavinder.", "msg_date": "Wed, 8 Apr 2020 12:32:51 +0530", "msg_from": "davinder singh <davindersingh2692@gmail.com>", "msg_from_op": true, "msg_subject": "Re: PG compilation error with Visual Studio 2015/2017/2019" },
{ "msg_contents": "On Wed, Apr 8, 2020 at 9:03 AM davinder singh <davindersingh2692@gmail.com>\nwrote:\n\n> On Tue, Apr 7, 2020 at 8:30 PM Juan José Santamaría Flecha <\n> juanjo.santamaria@gmail.com> wrote:\n>\n>>\n>> * The logic on \"defined(_MSC_VER) && (_MSC_VER >= 1900)\" is defined as\n>> \"_WIN32_WINNT >= 0x0600\" on other parts of the code. 
I would\n>> recommend using the later.\n>>\n> I think \"_WIN32_WINNT >= 0x0600\" represents windows versions only and\n> doesn't include any information about Visual Studio versions. So I am\n> sticking to \" defined(_MSC_VER) && (_MSC_VER >= 1900)\".\n>\n\nLet me explain further, in pg_config_os.h you can check that the value of\n_WIN32_WINNT is solely based on checking _MSC_VER. This patch should also\nbe meaningful for WIN32 builds using MinGW, or we might see this issue\nreappear in those systems if update the MIN_WINNT value to more current\nOS versions. So, I still think _WIN32_WINNT is a better option.\n\nI have resolved other comments. I have attached a new version of the patch.\n>\n\nI still see the same last lines in both #ifdef blocks, and pgindent might\nchange a couple of lines to:\n+ MultiByteToWideChar(CP_ACP, 0, winlocname, -1, wc_locale_name,\n+ LOCALE_NAME_MAX_LENGTH);\n+\n+ if ((GetLocaleInfoEx(wc_locale_name, LOCALE_SNAME,\n+ (LPWSTR)&buffer, LOCALE_NAME_MAX_LENGTH)) > 0)\n+ {\n\nBut that is pretty trivial stuff.\n\nPlease open an item in the commitfest for this patch.\n\nRegards,\n\nJuan José Santamaría Flecha", "msg_date": "Wed, 8 Apr 2020 16:09:33 +0200", "msg_from": "=?UTF-8?Q?Juan_Jos=C3=A9_Santamar=C3=ADa_Flecha?=\n <juanjo.santamaria@gmail.com>", "msg_from_op": false, "msg_subject": "Re: PG compilation error with Visual Studio 2015/2017/2019" },
{ "msg_contents": "On Wed, Apr 8, 2020 at 7:39 PM Juan José Santamaría Flecha\n\n> Let me explain further, in pg_config_os.h you can check that the value of\n> _WIN32_WINNT is solely based on checking _MSC_VER. This patch should also\n> be meaningful for WIN32 builds using MinGW, or we might see this issue\n> reappear in those systems if update the MIN_WINNT value to more current\n> OS versions. 
So, I still think _WIN32_WINNT is a better option.\n>\nThanks for explanation, I was not aware of that, you are right it make\nsense to use \" _WIN32_WINNT\", Now I am using this only.\n\nI still see the same last lines in both #ifdef blocks, and pgindent might\n> change a couple of lines to:\n> + MultiByteToWideChar(CP_ACP, 0, winlocname, -1, wc_locale_name,\n> + LOCALE_NAME_MAX_LENGTH);\n> +\n> + if ((GetLocaleInfoEx(wc_locale_name, LOCALE_SNAME,\n> + (LPWSTR)&buffer, LOCALE_NAME_MAX_LENGTH)) > 0)\n> + {\n>\nNow I have resolved these comments also, Please check updated version of\nthe patch.\n\n\n> Please open an item in the commitfest for this patch.\n>\nI have created with same title.\n\n\n-- \nRegards,\nDavinder\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Thu, 9 Apr 2020 13:55:22 +0530", "msg_from": "davinder singh <davindersingh2692@gmail.com>", "msg_from_op": true, "msg_subject": "Re: PG compilation error with Visual Studio 2015/2017/2019" }, { "msg_contents": "On Mon, Apr 6, 2020 at 4:38 PM davinder singh\n<davindersingh2692@gmail.com> wrote:\n>\n> Hi All,\n>\n> I am working on “pg_locale compilation error with Visual Studio 2017”, Related threads[1],[2].\n> We are getting compilation error in static char *IsoLocaleName(const char *winlocname) function in pg_locale.c file. This function is trying to convert the locale name into the Unix style. For example, it will change the locale name \"en-US\" into \"en_US\".\n> It is creating a locale using _locale_t _create_locale( int category, const char *locale) and then trying to access the name of that locale by pointing to internal elements of the structure loct->locinfo->locale_name[LC_CTYPE] but it has been missing from the _locale_t since VS2015 which is causing the compilation error. 
I found a few useful APIs that can be used here.\n>\n> ResolveLocaleName and GetLocaleInfoEx both can take locale in the following format.\n> <language>-<REGION>\n> <language>-<Script>-<REGION>\n>\n> ResolveLocaleName will try to find the closest matching locale name to the input locale name even if the input locale name is invalid given that the <language> is correct.\n>\n> en-something-YZ => en-US\n> ex-US => error\n> Aa-aaaa-aa => aa-ET represents (Afar,Ethiopia)\n> Aa-aaa-aa => aa-ET\n>\n> Refer [4] for more details\n>\n> But in the case of GetLocaleInfoEx, it will only check the format of the input locale, as mentioned before, if correct it will return the name of the locale otherwise it will return an error.\n> For example.\n>\n> en-something-YZ => error\n> ex-US => ex-US\n> aa-aaaa-aa => aa-Aaaa-AA\n> aa-aaa-aa => error.\n>\n> Refer [5] for more details.\n>\n> Currently, it is using _create_locale it behaves similarly to GetLocaleInfoEx i.e. it also only checks the format only difference is for a bigger set.\n> I thought of using GetLocaleInfoEx for the fix because it behaves similarly to the already existing and a similar issue was resolved earlier using the same.\n>\n\nIt seems the right direction to use GetLocaleInfoEx as we have already\nused it to handle a similar problem (lc_codepage is missing in\n_locale_t in higher versions of MSVC (cf commit 0fb54de9aa)) in\nchklocale.c. However, I see that we have added a manual parsing there\nif GetLocaleInfoEx doesn't parse it. I think that addresses your\nconcern for _create_locale handling bigger sets. Don't we need\nsomething equivalent here for the cases which GetLocaleInfoEx doesn't\nsupport?\n\nHow have you ensured the testing of this code? I see that we have\nsrc\\test\\locale in our test directory. 
Can we test using that?\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 10 Apr 2020 17:33:06 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: PG compilation error with Visual Studio 2015/2017/2019" }, { "msg_contents": "On Tue, Apr 7, 2020 at 8:30 PM Juan José Santamaría Flecha\n<juanjo.santamaria@gmail.com> wrote:\n>\n> * I think you could save a couple of code lines, and make it clearer, by merging both tests on _MSC_VER into a single #if... #else and leaving as common code:\n> + }\n> + else\n> + return NULL;\n> +#endif /*_MSC_VER && _MSC_VER < 1900*/\n>\n> * The logic on \"defined(_MSC_VER) && (_MSC_VER >= 1900)\" is defined as \"_WIN32_WINNT >= 0x0600\" on other parts of the code. I would recommend using the later.\n>\n\nI see that we have used _MSC_VER form of checks in win32_langinfo\n(chklocale.c) for a similar kind of handling. So, isn't it better to\nbe consistent with that? Which exact part of the code you are\nreferring to?\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 10 Apr 2020 17:35:31 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: PG compilation error with Visual Studio 2015/2017/2019" }, { "msg_contents": "On Fri, Apr 10, 2020 at 5:35 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Tue, Apr 7, 2020 at 8:30 PM Juan José Santamaría Flecha\n> <juanjo.santamaria@gmail.com> wrote:\n> >\n> > * I think you could save a couple of code lines, and make it clearer, by merging both tests on _MSC_VER into a single #if... #else and leaving as common code:\n> > + }\n> > + else\n> > + return NULL;\n> > +#endif /*_MSC_VER && _MSC_VER < 1900*/\n> >\n> > * The logic on \"defined(_MSC_VER) && (_MSC_VER >= 1900)\" is defined as \"_WIN32_WINNT >= 0x0600\" on other parts of the code. 
I would recommend using the later.\n> >\n>\n> I see that we have used _MSC_VER form of checks in win32_langinfo\n> (chklocale.c) for a similar kind of handling. So, isn't it better to\n> be consistent with that? Which exact part of the code you are\n> referring to?\n>\n\nI see that the kind of check you are talking is recently added by\ncommit 352f6f2d. I think it is better to be consistent in all places.\nLet's pick one and use that if possible.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 10 Apr 2020 17:55:45 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: PG compilation error with Visual Studio 2015/2017/2019" },
{ "msg_contents": "On Fri, Apr 10, 2020 at 2:25 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n\n>\n> I see that the kind of check you are talking is recently added by\n> commit 352f6f2d. I think it is better to be consistent in all places.\n> Let's pick one and use that if possible.\n\n\nCurrently there are two constructs to test the same logic, which is not\ngreat. I think that using _MSC_VER makes it seem as MSVC exclusive code,\nwhen MinGW should also be considered.\n\nIn the longterm aligning Postgres with MS product obsolescence will make\nthese tests unneeded, but I can propose a patch for making the test\nconsistent in all cases, on a different thread since this has little to do\nwith $SUBJECT.\n\nRegards,\n\nJuan José Santamaría Flecha", "msg_date": "Fri, 10 Apr 2020 17:05:26 +0200", "msg_from": "=?UTF-8?Q?Juan_Jos=C3=A9_Santamar=C3=ADa_Flecha?=\n <juanjo.santamaria@gmail.com>", "msg_from_op": false, "msg_subject": "Re: PG compilation error with Visual Studio 2015/2017/2019" },
{ "msg_contents": "On Fri, Apr 10, 2020 at 5:33 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> It seems the right direction to use GetLocaleInfoEx as we have already\n> used it to handle a similar problem (lc_codepage is missing in\n> _locale_t in higher versions of MSVC (cf commit 0fb54de9aa)) in\n> chklocale.c. However, I see that we have added a manual parsing there\n> if GetLocaleInfoEx doesn't parse it. I think that addresses your\n> concern for _create_locale handling bigger sets. 
Don't we need\n> something equivalent here for the cases which GetLocaleInfoEx doesn't\n> support?\nI am investigating along similar lines; I think the following explanation\ncan help.\n From the Microsoft docs,\nthe locale argument to setlocale, _wsetlocale, _create_locale, and\n_wcreate_locale is\nlocale :: \"locale-name\"\n | *\"language[_country-region[.code-page]]\"*\n | \".code-page\"\n | \"C\"\n | \"\"\n | NULL\n\nFor GetLocaleInfoEx, the locale argument is\n*<language>-<REGION>*\n<language>-<Script>-<REGION>\n\nAs we can see, _create_locale will also accept a locale appended with a\ncode-page, but that is not the case for the latter.\nThe important point is that for our current issue we need the locale name\nonly, and both functions (GetLocaleInfoEx, _create_locale) support an equal\nnumber of locale names if we go by the syntax of the locale described in the\nMicrosoft documents.\nWith that thought, I am parsing the input string to remove the code-page\nand giving it as input to GetLocaleInfoEx.\nI have attached the new patch.\n\n> How have you ensured the testing of this code? I see that we have\n> src\\test\\locale in our test directory. 
If anyone has Idea please suggest.\n\n[1] https://docs.microsoft.com/en-us/windows/win32/intl/locale-names\n[2]\nhttps://docs.microsoft.com/en-us/windows/win32/api/winnls/nf-winnls-getlocaleinfoex\n[3]\nhttps://docs.microsoft.com/en-us/cpp/c-runtime-library/reference/create-locale-wcreate-locale?view=vs-2019\n-- \nRegards,\nDavinder\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Tue, 14 Apr 2020 13:07:40 +0530", "msg_from": "davinder singh <davindersingh2692@gmail.com>", "msg_from_op": true, "msg_subject": "Re: PG compilation error with Visual Studio 2015/2017/2019" }, { "msg_contents": "Thanks for the review comments.\n\nOn Tue, Apr 14, 2020 at 9:12 PM Ranier Vilela <ranier.vf@gmail.com> wrote:\n\n> >>I m still working on testing this patch. If anyone has Idea please\n> suggest.\n> I still see problems with this patch.\n>\n> 1. Variable loct have redundant initialization, it would be enough to\n> declare so: _locale_t loct;\n> 2. Style white space in variable rc declaration.\n> 3. Style variable cp_index can be reduced.\n> if (tmp != NULL) {\n> size_t cp_index;\n>\n> cp_index = (size_t)(tmp - winlocname);\n> strncpy(loc_name, winlocname, cp_index);\n> loc_name[cp_index] = '\\0';\n> 4. Memory leak if _WIN32_WINNT >= 0x0600 is true, _free_locale(loct); is\n> not called.\n>\nI resolved the above comments.\n\n\n> 5. Why call _create_locale if _WIN32_WINNT >= 0x0600 is true and loct is\n> not used?\n>\n_create_locale can take bigger input than GetLocaleInfoEx. But we are\ninterested in\n*language[_country-region[.code-page]]*. We are using _create_locale to\nvalidate\nthe given input. The reason is we can't verify the locale name if it is\nappended with\ncode-page by using GetLocaleInfoEx. So before parsing, we verify if the\nwhole input\nlocale name is valid by using _create_locale. 
I hope that answers your\nquestion.\n\nI have attached the patch.\n-- \nRegards,\nDavinder\nEnterpriseDB: http://www.enterprisedb.com\n\nOn Tue, Apr 14, 2020 at 1:07 PM davinder singh <davindersingh2692@gmail.com>\nwrote:\n\n>\n> On Fri, Apr 10, 2020 at 5:33 PM Amit Kapila <amit.kapila16@gmail.com>\n> wrote:\n> >\n> > It seems the right direction to use GetLocaleInfoEx as we have already\n> > used it to handle a similar problem (lc_codepage is missing in\n> > _locale_t in higher versions of MSVC (cf commit 0fb54de9aa)) in\n> > chklocale.c. However, I see that we have added a manual parsing there\n> > if GetLocaleInfoEx doesn't parse it. I think that addresses your\n> > concern for _create_locale handling bigger sets. Don't we need\n> > something equivalent here for the cases which GetLocaleInfoEx doesn't\n> > support?\n> I am in investigating in similar lines, I think the following explanation\n> can help.\n> From Microsoft doc.\n> The locale argument to the setlocale, _wsetlocale, _create_locale, and\n> _wcreate_locale is\n> locale :: \"locale-name\"\n> | *\"language[_country-region[.code-page]]\"*\n> | \".code-page\"\n> | \"C\"\n> | \"\"\n> | NULL\n>\n> For GetLocaleInfoEx locale argument is\n> *<language>-<REGION>*\n> <language>-<Script>-<REGION>\n>\n> As we can see _create_locale will also accept the locale appended with\n> code-page but that is not the case in lateral.\n> The important point is in our current issue we need locale name only and\n> both\n> functions(GetLocaleInfoEx, _create_locale) support an equal number of\n> locales\n> names if go by the syntax of the locale described on Microsoft documents.\n> With that thought, I am parsing the input\n> string to remove the code-page and give it as input to GelLocaleInfoEx.\n> I have attached the new patch.\n>\n> > How have you ensured the testing of this code? I see that we have\n> > src\\test\\locale in our test directory. 
Can we test using that?\n> I don't know how to use these tests on Windows, but I had a look at these\n> tests and didn't find any test that would hit the function we are\n> modifying.\n> I'm still working on testing this patch. If anyone has an idea, please suggest.\n>\n> [1] https://docs.microsoft.com/en-us/windows/win32/intl/locale-names\n> [2]\n> https://docs.microsoft.com/en-us/windows/win32/api/winnls/nf-winnls-getlocaleinfoex\n> [3]\n> https://docs.microsoft.com/en-us/cpp/c-runtime-library/reference/create-locale-wcreate-locale?view=vs-2019\n> --\n> Regards,\n> Davinder\n> EnterpriseDB: http://www.enterprisedb.com\n>\n\n\n-- \nRegards,\nDavinder\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Wed, 15 Apr 2020 11:38:16 +0530", "msg_from": "davinder singh <davindersingh2692@gmail.com>", "msg_from_op": true, "msg_subject": "Re: PG compilation error with Visual Studio 2015/2017/2019" }, { "msg_contents": "Em qua., 15 de abr. de 2020 às 03:08, davinder singh <\ndavindersingh2692@gmail.com> escreveu:\n\n>\n> Thanks for the review comments.\n>\n> On Tue, Apr 14, 2020 at 9:12 PM Ranier Vilela <ranier.vf@gmail.com> wrote:\n>\n>> >>I'm still working on testing this patch. If anyone has an idea, please\n>> suggest.\n>> I still see problems with this patch.\n>>\n>> 1. Variable loct has a redundant initialization; it would be enough to\n>> declare it as: _locale_t loct;\n>> 2. Style: stray white space in the rc variable declaration.\n>> 3. Style: the scope of variable cp_index can be reduced.\n>> if (tmp != NULL) {\n>> size_t cp_index;\n>>\n>> cp_index = (size_t)(tmp - winlocname);\n>> strncpy(loc_name, winlocname, cp_index);\n>> loc_name[cp_index] = '\\0';\n>> 4. Memory leak: if _WIN32_WINNT >= 0x0600 is true, _free_locale(loct) is\n>> never called.\n>>\n> I resolved the above comments.\n>\nOk, thanks.\n\n\n>\n>> 5. Why call _create_locale if _WIN32_WINNT >= 0x0600 is true and loct is\n>> not used?\n>>\n> _create_locale can accept a larger set of inputs than GetLocaleInfoEx. 
But we are\n> interested in\n> *language[_country-region[.code-page]]*. We are using _create_locale to\n> validate\n> the given input. The reason is that we can't verify the locale name with\n> GetLocaleInfoEx if it has a\n> code-page appended. So before parsing, we verify that the\n> whole input\n> locale name is valid by using _create_locale. I hope that answers your\n> question.\n>\nUnderstood. In this case, _create_locale is being used only to validate\nthe input.\nPerhaps, in addition, you could create an additional function, which only\nvalidates winlocname, without having to create structures or use malloc, to\nbe used when _WIN32_WINNT >= 0x0600 is true, but it is only a suggestion,\nif you think it is necessary.\n\nBut there is one last problem: in case GetLocaleInfoEx fails, there is still a\nmemory leak;\nbefore returning NULL we need to call _free_locale(loct);\n\nAnother suggestion: if GetLocaleInfoEx fails, wouldn't it be good to log the\nerror and the error number?\n\nregards,\nRanier Vilela", "msg_date": "Wed, 15 Apr 2020 11:45:40 -0300", "msg_from": "Ranier Vilela <ranier.vf@gmail.com>", "msg_from_op": false, "msg_subject": "Re: PG compilation error with Visual Studio 2015/2017/2019" }, { "msg_contents": "On Wed, Apr 15, 2020 at 4:46 PM Ranier Vilela <ranier.vf@gmail.com> wrote:\n\n> Em qua., 15 de abr. de 2020 às 03:08, davinder singh <\n> davindersingh2692@gmail.com> escreveu:\n>\n>>\n>> 5. Why call _create_locale if _WIN32_WINNT >= 0x0600 is true and loct is\n>>> not used?\n>>>\n>> _create_locale can accept a larger set of inputs than GetLocaleInfoEx. But we are\n>> interested in\n>> *language[_country-region[.code-page]]*. We are using _create_locale to\n>> validate\n>> the given input. The reason is that we can't verify the locale name with\n>> GetLocaleInfoEx if it has a\n>> code-page appended. So before parsing, we verify that the\n>> whole input\n>> locale name is valid by using _create_locale. 
In this case, _create_locale is being used only to validate\n> the input.\n> Perhaps, in addition, you could create an additional function, which only\n> validates winlocname, without having to create structures or use malloc, to\n> be used when _WIN32_WINNT >= 0x0600 is true, but it is only a suggestion,\n> if you think it is necessary.\n>\n\nLooking at the comments for IsoLocaleName() I see: \"MinGW headers declare\n_create_locale(), but msvcrt.dll lacks that symbol\". This is outdated\n[1][2], and _create_locale() could be used from Windows 8 onwards, but I think we\nshould use GetLocaleInfoEx() as a complete alternative to _create_locale().\n\nPlease find attached a patch doing so.\n\n[1]\nhttps://sourceforge.net/p/mingw-w64/mailman/mingw-w64-public/?limit=250&page=2\n[2] https://github.com/mirror/mingw-w64/commit/b508bb87ad\n\nRegards,\n\nJuan José Santamaría Flecha\n\n>\n>", "msg_date": "Wed, 15 Apr 2020 20:27:58 +0200", "msg_from": "=?UTF-8?Q?Juan_Jos=C3=A9_Santamar=C3=ADa_Flecha?=\n <juanjo.santamaria@gmail.com>", "msg_from_op": false, "msg_subject": "Re: PG compilation error with Visual Studio 2015/2017/2019" }, { "msg_contents": "Em qua., 15 de abr. de 2020 às 15:28, Juan José Santamaría Flecha <\njuanjo.santamaria@gmail.com> escreveu:\n\n>\n>\n> On Wed, Apr 15, 2020 at 4:46 PM Ranier Vilela <ranier.vf@gmail.com> wrote:\n>\n>> Em qua., 15 de abr. de 2020 às 03:08, davinder singh <\n>> davindersingh2692@gmail.com> escreveu:\n>>\n>>>\n>>> 5. Why call _create_locale if _WIN32_WINNT >= 0x0600 is true and loct is\n>>>> not used?\n>>>>\n>>> _create_locale can accept a larger set of inputs than GetLocaleInfoEx. But we are\n>>> interested in\n>>> *language[_country-region[.code-page]]*. We are using _create_locale to\n>>> validate\n>>> the given input. The reason is that we can't verify the locale name with\n>>> GetLocaleInfoEx if it has a\n>>> code-page appended. So before parsing, we verify that the\n>>> whole input\n>>> locale name is valid by using _create_locale. 
I hope that answers your\n>>> question.\n>>>\n>> Understood. In this case, _create_locale is being used only to validate\n>> the input.\n>> Perhaps, in addition, you could create an additional function, which only\n>> validates winlocname, without having to create structures or use malloc, to\n>> be used when _WIN32_WINNT >= 0x0600 is true, but it is only a suggestion,\n>> if you think it is necessary.\n>>\n>\n> Looking at the comments for IsoLocaleName() I see: \"MinGW headers declare\n> _create_locale(), but msvcrt.dll lacks that symbol\". This is outdated\n> [1][2], and _create_locale() could be used from Windows 8 onwards, but I think we\n> should use GetLocaleInfoEx() as a complete alternative to _create_locale().\n>\nSounds good to me; the exception, maybe, would be to log the error in case of failure?\n\nregards,\nRanier Vilela", "msg_date": "Wed, 15 Apr 2020 16:12:08 -0300", "msg_from": "Ranier Vilela <ranier.vf@gmail.com>", "msg_from_op": false, "msg_subject": "Re: PG compilation error with Visual Studio 2015/2017/2019" }, { "msg_contents": "On Wed, Apr 15, 2020 at 11:58 PM Juan José Santamaría Flecha\n<juanjo.santamaria@gmail.com> wrote:\n>\n> On Wed, Apr 15, 2020 at 4:46 PM Ranier Vilela <ranier.vf@gmail.com> wrote:\n>>\n>> Em qua., 15 de abr. de 2020 às 03:08, davinder singh <davindersingh2692@gmail.com> escreveu:\n>>>\n>>>\n>>>> 5. Why call _create_locale if _WIN32_WINNT >= 0x0600 is true and loct is not used?\n>>>\n>>> _create_locale can accept a larger set of inputs than GetLocaleInfoEx. But we are interested in\n>>> language[_country-region[.code-page]]. We are using _create_locale to validate\n>>> the given input. The reason is that we can't verify the locale name with GetLocaleInfoEx if it has a\n>>> code-page appended. So before parsing, we verify that the whole input\n>>> locale name is valid by using _create_locale. I hope that answers your question.\n>>\n>> Understood. 
In this case, _create_locale is being used only to validate the input.\n>> Perhaps, in addition, you could create an additional function, which only validates winlocname, without having to create structures or use malloc, to be used when _WIN32_WINNT >= 0x0600 is true, but it is only a suggestion, if you think it is necessary.\n>\n>\n> Looking at the comments for IsoLocaleName() I see: \"MinGW headers declare _create_locale(), but msvcrt.dll lacks that symbol\". This is outdated [1][2], and _create_locale() could be used from Windows 8 onwards, but I think we should use GetLocaleInfoEx() as a complete alternative to _create_locale().\n>\n\nI see some differences in the output when _create_locale() is used vs.\nwhen GetLocaleInfoEx() is used. For example:\n\nSet LC_MESSAGES=\"English_New Zealand\";\n\n-- returns en-NZ, then code changes it to en_NZ when _create_locale()\nis used, whereas GetLocaleInfoEx returns an error.\n\nSet LC_MESSAGES=\"English_Republic of the Philippines\";\n\n-- returns en-PH, then code changes it to en_PH when _create_locale()\nis used, whereas GetLocaleInfoEx returns an error.\n\nSet LC_MESSAGES=\"French_Canada\";\n\n-- returns fr-CA when _create_locale() is used, whereas GetLocaleInfoEx\nreturns an error.\n\nThe function IsoLocaleName() header comment says \"Convert a Windows\nsetlocale() argument to a Unix-style one\", so I was expecting that the above\ncases, which give valid values with _create_locale(), should also work\nwith GetLocaleInfoEx(). If it is fine for GetLocaleInfoEx() to return\nan error for the above cases, then we need an explanation of the same,\nand probably a few comments added as well. So, I am not sure if we can\nconclude that GetLocaleInfoEx() is an alternative to _create_locale()\nat this stage.\n\nI have used the attached hack to make _create_locale work on the\nlatest MSVC. 
Just to be clear, this is mainly for the purpose of\ntesting the behavior of _create_locale.\n\nOn the code side,\n+ GetLocaleInfoEx(wc_locale_name, LOCALE_SNAME, (LPWSTR) &buffer,\n+ LOCALE_NAME_MAX_LENGTH);\n\n /* Locale names use only ASCII, any conversion locale suffices. */\n- rc = wchar2char(iso_lc_messages, loct->locinfo->locale_name[LC_CTYPE],\n- sizeof(iso_lc_messages), NULL);\n+ rc = wchar2char(iso_lc_messages, buffer, sizeof(iso_lc_messages), NULL);\n\nCheck the return value of GetLocaleInfoEx and, only if it is successful,\nuse wchar2char; otherwise, this function will return an\nempty string (\"\") instead of NULL.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Fri, 17 Apr 2020 14:02:50 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: PG compilation error with Visual Studio 2015/2017/2019" }, { "msg_contents": "On Fri, Apr 17, 2020 at 10:33 AM Amit Kapila <amit.kapila16@gmail.com>\nwrote:\n\n>\n> I see some differences in the output when _create_locale() is used vs.\n> when GetLocaleInfoEx() is used. For example:\n>\n\nThanks for the thorough review.\n\n\n> The function IsoLocaleName() header comment says \"Convert a Windows\n> setlocale() argument to a Unix-style one\", so I was expecting that the above\n> cases, which give valid values with _create_locale(), should also work\n> with GetLocaleInfoEx(). If it is fine for GetLocaleInfoEx() to return\n> an error for the above cases, then we need an explanation of the same,\n> and probably a few comments added as well. 
So, I am not sure if we can\n> conclude that GetLocaleInfoEx() is an alternative to _create_locale()\n> at this stage.\n>\n\nWe can get a match for those locales in non-ISO format by enumerating\navailable locales with EnumSystemLocalesEx(), and trying to find a match.\n\nPlease find attached a new patch doing so.\n\nRegards,\n\nJuan José Santamaría Flecha", "msg_date": "Fri, 17 Apr 2020 20:43:35 +0200", "msg_from": "=?UTF-8?Q?Juan_Jos=C3=A9_Santamar=C3=ADa_Flecha?=\n <juanjo.santamaria@gmail.com>", "msg_from_op": false, "msg_subject": "Re: PG compilation error with Visual Studio 2015/2017/2019" }, { "msg_contents": "On Sat, Apr 18, 2020 at 12:14 AM Juan José Santamaría Flecha\n<juanjo.santamaria@gmail.com> wrote:\n>\n>\n> We can get a match for those locales in non-ISO format by enumerating available locales with EnumSystemLocalesEx(), and trying to find a match.\n>\n> Please find attached a new patch doing so.\n>\n\nI have not reviewed or tested the new patch but one thing I would like\nto see is the impact of setting LC_MESSAGES with different locale\ninformation. Basically, the error messages after setting the locale\nwith _create_locale and with the new method being discussed. This\nwill help us in ensuring that we didn't break anything which was\nworking with prior versions of MSVC. Can you or someone try to test\nand share the results of the same?\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Sat, 18 Apr 2020 09:37:15 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: PG compilation error with Visual Studio 2015/2017/2019" }, { "msg_contents": "Em sex., 17 de abr. de 2020 às 15:44, Juan José Santamaría Flecha <\njuanjo.santamaria@gmail.com> escreveu:\n\n>\n> On Fri, Apr 17, 2020 at 10:33 AM Amit Kapila <amit.kapila16@gmail.com>\n> wrote:\n>\n>>\n>> I see some differences in the output when _create_locale() is used vs.\n>> when GetLocaleInfoEx() is used. 
For example:\n>>\n>\n> Thanks for the thorough review.\n>\n>\n>> The function IsoLocaleName() header comment says \"Convert a Windows\n>> setlocale() argument to a Unix-style one\", so I was expecting that the above\n>> cases, which give valid values with _create_locale(), should also work\n>> with GetLocaleInfoEx(). If it is fine for GetLocaleInfoEx() to return\n>> an error for the above cases, then we need an explanation of the same,\n>> and probably a few comments added as well. So, I am not sure if we can\n>> conclude that GetLocaleInfoEx() is an alternative to _create_locale()\n>> at this stage.\n>>\n>\n> We can get a match for those locales in non-ISO format by enumerating\n> available locales with EnumSystemLocalesEx(), and trying to find a match.\n>\n> Please find attached a new patch doing so.\n>\nI have some observations about this patch, related to style, if you will\nallow me.\n1. The scope of the argv variable in function enum_locales_fn can be reduced.\n2. The declaration of variable len deviates from the Postgres style.\n3. Why call wcslen(test_locale) again, when the variable len already has the size?\n\n+static BOOL CALLBACK\n+enum_locales_fn(LPWSTR pStr, DWORD dwFlags, LPARAM lparam)\n+{\n+ WCHAR test_locale[LOCALE_NAME_MAX_LENGTH];\n+\n+ memset(test_locale, 0, sizeof(test_locale));\n+ if (GetLocaleInfoEx(pStr, LOCALE_SENGLISHLANGUAGENAME,\n+ test_locale, LOCALE_NAME_MAX_LENGTH) > 0)\n+ {\n+ size_t len;\n+\n+ wcscat(test_locale, L\"_\");\n+ len = wcslen(test_locale);\n+ if (GetLocaleInfoEx(pStr, LOCALE_SENGLISHCOUNTRYNAME,\n+ test_locale + len, LOCALE_NAME_MAX_LENGTH - len) > 0)\n+ {\n+ WCHAR **argv = (WCHAR **) lparam;\n+\n+ if (wcsncmp(argv[0], test_locale, len) == 0)\n+ {\n+ wcscpy(argv[1], pStr);\n+ return FALSE;\n+ }\n+ }\n+ }\n+ return TRUE;\n+}\n\nregards,\nRanier Vilela", "msg_date": "Sat, 18 Apr 2020 08:42:27 -0300", "msg_from": "Ranier Vilela <ranier.vf@gmail.com>", "msg_from_op": false, "msg_subject": "Re: PG compilation error with Visual Studio 2015/2017/2019" }, { "msg_contents": "On Sat, Apr 18, 2020 at 6:07 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n\n> On Sat, Apr 18, 2020 at 12:14 AM Juan José Santamaría Flecha\n> <juanjo.santamaria@gmail.com> wrote:\n> >\n> > We can get a match for those locales in non-ISO format by enumerating\n> available locales with EnumSystemLocalesEx(), and trying to find a match.\n>\n> I have not reviewed or tested the new patch but one thing I would like\n> to see is the impact of setting LC_MESSAGES with different locale\n> information. Basically, the error messages after setting the locale\n> with _create_locale and with the new method being discussed. This\n> will help us in ensuring that we didn't break anything which was\n> working with prior versions of MSVC. 
Can you or someone try to test\n> and share the results of the same?\n>\n\nI cannot find a single place where all supported locales are listed, but I\nhave created a small test program (WindowsNLSLocales.c) based on:\n<language>[_<location>] format locales [1], additional supported language\nstrings [2], and additional supported country and region strings [3]. Based\non the results from this test program, it is possible to do a good job\nwith the <language>[_<location>] types using the proposed logic, but the\ntwo latter cases are Windows specific, and there is no way around a\nlookup-table.\n\nThe attached results (WindowsNLSLocales.ods) come from Windows 10 (1903)\nand Visual C++ build 1924, 64-bit.\n\nOn Sat, Apr 18, 2020 at 1:43 PM Ranier Vilela <ranier.vf@gmail.com> wrote:\n\n> I have some observations about this patch, related to style, if you will\n> allow me.\n>\n\nPlease find attached a revised version.\n\n[1]\nhttps://docs.microsoft.com/en-us/openspecs/windows_protocols/ms-lcid/a9eac961-e77d-41a6-90a5-ce1a8b0cdb9c\n[2] https://docs.microsoft.com/en-us/cpp/c-runtime-library/language-strings\n<https://docs.microsoft.com/en-us/cpp/c-runtime-library/language-strings?view=vs-2019>\n[3]\nhttps://docs.microsoft.com/en-us/cpp/c-runtime-library/country-region-strings\n<https://docs.microsoft.com/en-us/cpp/c-runtime-library/country-region-strings?view=vs-2019>", "msg_date": "Sun, 19 Apr 2020 12:16:32 +0200", "msg_from": "=?UTF-8?Q?Juan_Jos=C3=A9_Santamar=C3=ADa_Flecha?=\n <juanjo.santamaria@gmail.com>", "msg_from_op": false, "msg_subject": "Re: PG compilation error with Visual Studio 2015/2017/2019" }, { "msg_contents": "Em dom., 19 de abr. 
de 2020 às 07:16, Juan José Santamaría Flecha <\njuanjo.santamaria@gmail.com> escreveu:\n\n>\n> On Sat, Apr 18, 2020 at 6:07 AM Amit Kapila <amit.kapila16@gmail.com>\n> wrote:\n>\n>> On Sat, Apr 18, 2020 at 12:14 AM Juan José Santamaría Flecha\n>> <juanjo.santamaria@gmail.com> wrote:\n>> >\n>> > We can get a match for those locales in non-ISO format by enumerating\n>> available locales with EnumSystemLocalesEx(), and trying to find a match.\n>>\n>> I have not reviewed or tested the new patch but one thing I would like\n>> to see is the impact of setting LC_MESSAGES with different locale\n>> information. Basically, the error messages after setting the locale\n>> with _create_locale and with the new method being discussed. This\n>> will help us in ensuring that we didn't break anything which was\n>> working with prior versions of MSVC. Can you or someone try to test\n>> and share the results of the same?\n>>\n>\n> I cannot find a single place where all supported locales are listed, but I\n> have created a small test program (WindowsNLSLocales.c) based on:\n> <language>[_<location>] format locales [1], additional supported language\n> strings [2], and additional supported country and region strings [3]. 
Based\n> on the results from this test program, it is possible to do a good job\n> with the <language>[_<location>] types using the proposed logic, but the\n> two latter cases are Windows specific, and there is no way around a\n> lookup-table.\n>\n> The attached results (WindowsNLSLocales.ods) come from Windows 10 (1903)\n> and Visual C++ build 1924, 64-bit.\n>\n> On Sat, Apr 18, 2020 at 1:43 PM Ranier Vilela <ranier.vf@gmail.com> wrote:\n>\n>> I have some observations about this patch, related to style, if you will\n>> allow me.\n>>\n>\n> Please find attached a revised version.\n>\nLooks good to me, but, sorry, I think I missed a glitch in the previous\nreview.\n\n+#else /* _WIN32_WINNT < 0x0600 */\n+ _locale_t loct;\n+\n+ loct = _create_locale(LC_CTYPE, winlocname);\n+ if (loct != NULL)\n+{\n+ rc = wchar2char(iso_lc_messages, loct->locinfo->locale_name[LC_CTYPE],\n+ sizeof(iso_lc_messages), NULL);\n _free_locale(loct);\n+}\n+#endif /* _WIN32_WINNT >= 0x0600 */\n\nIf _create_locale fails, there is no need to call _free_locale(loct).\n\nAnother point: what is the difference between pg_mbstrlen and wcslen?\nWould it not be better to use only wcslen?\n\nAttached is the patch with these comments addressed.\n\nregards,\nRanier Vilela", "msg_date": "Sun, 19 Apr 2020 08:56:39 -0300", "msg_from": "Ranier Vilela <ranier.vf@gmail.com>", "msg_from_op": false, "msg_subject": "Re: PG compilation error with Visual Studio 2015/2017/2019" }, { "msg_contents": "On Sun, Apr 19, 2020 at 1:58 PM Ranier Vilela <ranier.vf@gmail.com> wrote:\n\n> Em dom., 19 de abr. 
de 2020 às 07:16, Juan José Santamaría Flecha <\n> juanjo.santamaria@gmail.com> escreveu:\n>\n>> On Sat, Apr 18, 2020 at 1:43 PM Ranier Vilela <ranier.vf@gmail.com>\n>> wrote:\n>>\n>>> I have some observations about this patch, related to style, if you will\n>>> allow me.\n>>>\n>> Please find attached a revised version.\n>>\n> Looks good to me, but, sorry, I think I missed a glitch in the previous\n> review.\n> If _create_locale fails, there is no need to call _free_locale(loct).\n>\n> Another point: what is the difference between pg_mbstrlen and wcslen?\n> Would it not be better to use only wcslen?\n>\n\npg_mbstrlen() is for multibyte strings and wcslen() is for wide-character\nstrings; the \"pg\" equivalent would be pg_wchar_strlen().\n\n> Attached is the patch with these comments addressed.\n>\n\n+ } else\n\nThis line needs a break; other than that, LGTM.\n\nRegards,\n\nJuan José Santamaría Flecha", "msg_date": "Sun, 19 Apr 2020 17:33:38 +0200", "msg_from": "=?UTF-8?Q?Juan_Jos=C3=A9_Santamar=C3=ADa_Flecha?=\n <juanjo.santamaria@gmail.com>", "msg_from_op": false, "msg_subject": "Re: PG compilation error with Visual Studio 2015/2017/2019" }, { "msg_contents": "Em dom., 19 de abr. de 2020 às 12:34, Juan José Santamaría Flecha <\njuanjo.santamaria@gmail.com> escreveu:\n\n>\n> On Sun, Apr 19, 2020 at 1:58 PM Ranier Vilela <ranier.vf@gmail.com> wrote:\n>\n>> Em dom., 19 de abr. de 2020 às 07:16, Juan José Santamaría Flecha <\n>> juanjo.santamaria@gmail.com> escreveu:\n>>\n>>> On Sat, Apr 18, 2020 at 1:43 PM Ranier Vilela <ranier.vf@gmail.com>\n>>> wrote:\n>>>\n>>>> I have some observations about this patch, related to style, if you\n>>>> will allow me.\n>>>>\n>>> Please find attached a revised version.\n>>>\n>> Looks good to me, but, sorry, I think I missed a glitch in the previous\n>> review.\n>> If _create_locale fails, there is no need to call _free_locale(loct).\n>>\n>> Another point: what is the difference between pg_mbstrlen and wcslen?\n>> Would it not be better to use only wcslen?\n>>\n>\n> pg_mbstrlen() is for multibyte strings and wcslen() is for wide-character\n> strings; the \"pg\" equivalent would be pg_wchar_strlen().\n>\n> Attached is the patch with these comments addressed.\n>>\n>\n> + } else\n>\n> This line needs a break; other than that, LGTM.\n>\nSure. 
Attached new patch with this revision.\n\nregards,\nRanier Vilela", "msg_date": "Sun, 19 Apr 2020 15:59:50 -0300", "msg_from": "Ranier Vilela <ranier.vf@gmail.com>", "msg_from_op": false, "msg_subject": "Re: PG compilation error with Visual Studio 2015/2017/2019" }, { "msg_contents": "On Sun, Apr 19, 2020 at 03:59:50PM -0300, Ranier Vilela wrote:\n> Em dom., 19 de abr. de 2020 às 12:34, Juan José Santamaría Flecha <\n> juanjo.santamaria@gmail.com> escreveu:\n>> This line needs a break, other than that LGTM.\n>\n> Sure. Attached new patch with this revision.\n\nAmit, are you planning to look at this patch? I may be able to spend\na couple of hours on this thread this week and that's an area of the\ncode I have worked with in the past, though I am not sure.\n--\nMichael", "msg_date": "Mon, 20 Apr 2020 08:02:08 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: PG compilation error with Visual Studio 2015/2017/2019" }, { "msg_contents": "On Mon, Apr 20, 2020 at 4:32 AM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Sun, Apr 19, 2020 at 03:59:50PM -0300, Ranier Vilela wrote:\n> > Em dom., 19 de abr. de 2020 às 12:34, Juan José Santamaría Flecha <\n> > juanjo.santamaria@gmail.com> escreveu:\n> >> This line needs a break, other than that LGTM.\n> >\n> > Sure. Attached new patch with this revision.\n>\n> Amit, are you planning to look at this patch?\n>\n\nYes, I am planning to look into it. 
Actually, I think the main thing\nhere is to ensure that we don't break something which was working with\nthe _create_locale API.\n\n> I may be able to spend\n> a couple of hours on this thread this week and that's an area of the\n> code I have worked with in the past, though I am not sure.\n>\n\nOkay, that will be good.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Mon, 20 Apr 2020 10:10:40 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: PG compilation error with Visual Studio 2015/2017/2019" }, { "msg_contents": "On Mon, Apr 20, 2020 at 10:10 AM Amit Kapila <amit.kapila16@gmail.com>\nwrote:\n\n> Yes, I am planning to look into it. Actually, I think the main thing\nhere is to ensure that we don't break something which was working with\nthe _create_locale API.\n\nI am trying to understand how lc_messages affects the error messages on\nWindows,\nbut I haven't seen any change in the error messages, the way it happens on Linux\nwhen we change lc_messages.\nCan someone help me with this? 
Please let me know if there is any configuration setting that I need to adjust.-- Regards,DavinderEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Mon, 20 Apr 2020 14:23:03 +0530", "msg_from": "davinder singh <davindersingh2692@gmail.com>", "msg_from_op": true, "msg_subject": "Re: PG compilation error with Visual Studio 2015/2017/2019" }, { "msg_contents": "On Sun, Apr 19, 2020 at 3:46 PM Juan José Santamaría Flecha\n<juanjo.santamaria@gmail.com> wrote:\n>\n>\n> On Sat, Apr 18, 2020 at 6:07 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>>\n>> On Sat, Apr 18, 2020 at 12:14 AM Juan José Santamaría Flecha\n>> <juanjo.santamaria@gmail.com> wrote:\n>> >\n>> > We can get a match for those locales in non-ISO format by enumerating available locales with EnumSystemLocalesEx(), and trying to find a match.\n>>\n>> I have not reviewed or tested the new patch but one thing I would like\n>> to see is the impact of setting LC_MESAGGES with different locale\n>> information. Basically, the error messages after setting the locale\n>> with _create_locale and with the new method being discussed. This\n>> will help us in ensuring that we didn't break anything which was\n>> working with prior versions of MSVC. Can you or someone try to test\n>> and share the results of the same?\n>\n>\n> I cannot find a single place where all supported locales are listed, but I have created a small test program (WindowsNLSLocales.c) based on: <language>[_<location>] format locales [1], additional supported language strings [2], and additional supported country and region strings [3]. 
Based on the results from this test program, it is possible to to do a good job with the <language>[_<location>] types using the proposed logic, but the two later cases are Windows specific, and there is no way arround a lookup-table.\n>\n> The attached results (WindowsNLSLocales.ods) come from Windows 10 (1903) and Visual C++ build 1924, 64-bit.\n>\n\nI think these are quite intensive tests but I wonder do we need to\ntest some locales with code_page? Generally speaking, in this case it\nshould not matter as we are trying to get the locale name but no harm\nin testing. Also, I think it would be good if we can test how this\nimpacts the error messages as Davinder is trying to do.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 20 Apr 2020 18:32:21 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: PG compilation error with Visual Studio 2015/2017/2019" }, { "msg_contents": "On Mon, Apr 20, 2020 at 6:32 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Sun, Apr 19, 2020 at 3:46 PM Juan José Santamaría Flecha\n> <juanjo.santamaria@gmail.com> wrote:\n> >\n> > I cannot find a single place where all supported locales are listed, but I have created a small test program (WindowsNLSLocales.c) based on: <language>[_<location>] format locales [1], additional supported language strings [2], and additional supported country and region strings [3]. Based on the results from this test program, it is possible to to do a good job with the <language>[_<location>] types using the proposed logic, but the two later cases are Windows specific, and there is no way arround a lookup-table.\n> >\n> > The attached results (WindowsNLSLocales.ods) come from Windows 10 (1903) and Visual C++ build 1924, 64-bit.\n> >\n>\n> I think these are quite intensive tests but I wonder do we need to\n> test some locales with code_page? 
Generally speaking, in this case it\n> should not matter as we are trying to get the locale name but no harm\n> in testing. Also, I think it would be good if we can test how this\n> impacts the error messages as Davinder is trying to do.\n>\n\n\nI have tried a simple test with the latest patch and it failed for me.\n\nSet LC_MESSAGES=\"English_United Kingdom\";\n-- returns en-GB, then code changes it to en_NZ when _create_locale()\nis used whereas with the patch it returns \"\" (empty string).\n\nThere seem to be two problems here (a) The input to enum_locales_fn\ndoesn't seem to get the input name as \"English_United Kingdom\" due to\nwhich it can't find a match even if the same exists. (b) After\nexecuting EnumSystemLocalesEx, there is no way the patch can detect if\nit is successful in finding the passed name due to which it appends\nempty string in such cases.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 21 Apr 2020 12:51:06 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: PG compilation error with Visual Studio 2015/2017/2019" }, { "msg_contents": "On Tue, Apr 21, 2020 at 12:51 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Mon, Apr 20, 2020 at 6:32 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Sun, Apr 19, 2020 at 3:46 PM Juan José Santamaría Flecha\n> > <juanjo.santamaria@gmail.com> wrote:\n> > >\n> > > I cannot find a single place where all supported locales are listed, but I have created a small test program (WindowsNLSLocales.c) based on: <language>[_<location>] format locales [1], additional supported language strings [2], and additional supported country and region strings [3]. 
Based on the results from this test program, it is possible to to do a good job with the <language>[_<location>] types using the proposed logic, but the two later cases are Windows specific, and there is no way arround a lookup-table.\n> > >\n> > > The attached results (WindowsNLSLocales.ods) come from Windows 10 (1903) and Visual C++ build 1924, 64-bit.\n> > >\n> >\n> > I think these are quite intensive tests but I wonder do we need to\n> > test some locales with code_page? Generally speaking, in this case it\n> > should not matter as we are trying to get the locale name but no harm\n> > in testing. Also, I think it would be good if we can test how this\n> > impacts the error messages as Davinder is trying to do.\n> >\n>\n>\n> I have tried a simple test with the latest patch and it failed for me.\n>\n> Set LC_MESSAGES=\"English_United Kingdom\";\n> -- returns en-GB, then code changes it to en_NZ when _create_locale()\n> is used whereas with the patch it returns \"\" (empty string).\n>\n> There seem to be two problems here (a) The input to enum_locales_fn\n> doesn't seem to get the input name as \"English_United Kingdom\" due to\n> which it can't find a match even if the same exists. (b) After\n> executing EnumSystemLocalesEx, there is no way the patch can detect if\n> it is successful in finding the passed name due to which it appends\n> empty string in such cases.\n>\n\nFew more comments:\n1. I have tried the first one in the list provided by you and that\nalso didn't work. Basically, I got empty string when I tried Set\nLC_MESSAGES='Afar';\n\n2. Getting below warning\npg_locale.c(1072): warning C4133: 'function': incompatible types -\nfrom 'const char *' to 'const wchar_t *'\n\n3.\n+ if (GetLocaleInfoEx(pStr, LOCALE_SENGLISHCOUNTRYNAME,\n+ test_locale + len, LOCALE_NAME_MAX_LENGTH - len) > 0)\n\nAll > or <= 0 checks should be changed to \"!\" types which mean to\ncheck whether the call toGetLocaleInfoEx is success or not.\n\n4. 
Basically, I got empty string when I tried Set\nLC_MESSAGES='Afar';\n\n2. Getting below warning\npg_locale.c(1072): warning C4133: 'function': incompatible types -\nfrom 'const char *' to 'const wchar_t *'\n\n3.\n+ if (GetLocaleInfoEx(pStr, LOCALE_SENGLISHCOUNTRYNAME,\n+ test_locale + len, LOCALE_NAME_MAX_LENGTH - len) > 0)\n\nAll > or <= 0 checks should be changed to \"!\" types which mean to\ncheck whether the call toGetLocaleInfoEx is success or not.\n\n4. 
In the patch, first, we try to get with LCType as LOCALE_SNAME and\nthen with LOCALE_SENGLISHLANGUAGENAME and LOCALE_SENGLISHCOUNTRYNAME.\nI think we should add comments indicating why we try to get the locale\ninformation with three LCTypes and why the specific order of trying\nthose types is required.\n\n5. In one of the previous emails, you asked whether we have a list of\nsupported locales. I don't find any such list. I think it depends on\nWindows locales for which you can get the information from\nhttps://docs.microsoft.com/en-us/openspecs/windows_protocols/ms-lcid/a9eac961-e77d-41a6-90a5-ce1a8b0cdb9c\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 21 Apr 2020 16:10:50 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: PG compilation error with Visual Studio 2015/2017/2019" }, { "msg_contents": "On Tue, Apr 21, 2020 at 12:41 PM Amit Kapila <amit.kapila16@gmail.com>\nwrote:\n\n> On Tue, Apr 21, 2020 at 12:51 PM Amit Kapila <amit.kapila16@gmail.com>\n> wrote:\n> >\n> > I have tried a simple test with the latest patch and it failed for me.\n> >\n> > Set LC_MESSAGES=\"English_United Kingdom\";\n> > -- returns en-GB, then code changes it to en_NZ when _create_locale()\n> > is used whereas with the patch it returns \"\" (empty string).\n> >\n> > There seem to be two problems here (a) The input to enum_locales_fn\n> > doesn't seem to get the input name as \"English_United Kingdom\" due to\n> > which it can't find a match even if the same exists. (b) After\n> > executing EnumSystemLocalesEx, there is no way the patch can detect if\n> > it is successful in finding the passed name due to which it appends\n> > empty string in such cases.\n>\n> Few more comments:\n> 1. I have tried the first one in the list provided by you and that\n> also didn't work. 
Basically, I got empty string when I tried Set\n> LC_MESSAGES='Afar';\n>\n\nI cannot reproduce any of these errors on my end. When using\n_create_locale(), returning \"en_NZ\" is also a wrong result.\n\n\n> 2. Getting below warning\n> pg_locale.c(1072): warning C4133: 'function': incompatible types -\n> from 'const char *' to 'const wchar_t *'\n>\n\nYes, that is a regression.\n\n\n> 3.\n> + if (GetLocaleInfoEx(pStr, LOCALE_SENGLISHCOUNTRYNAME,\n> + test_locale + len, LOCALE_NAME_MAX_LENGTH - len) > 0)\n>\n> All > or <= 0 checks should be changed to \"!\" types which mean to\n> check whether the call toGetLocaleInfoEx is success or not.\n>\n\nMSVC does not recommend \"!\" in all cases, but GetLocaleInfoEx() looks fine,\nso agreed.\n\n4. In the patch, first, we try to get with LCType as LOCALE_SNAME and\n> then with LOCALE_SENGLISHLANGUAGENAME and LOCALE_SENGLISHCOUNTRYNAME.\n> I think we should add comments indicating why we try to get the locale\n> information with three LCTypes and why the specific order of trying\n> those types is required.\n>\n\nAgreed.\n\n\n> 5. In one of the previous emails, you asked whether we have a list of\n> supported locales. I don't find any such list. 
I think it depends on\n> Windows locales for which you can get the information from\n>\n> https://docs.microsoft.com/en-us/openspecs/windows_protocols/ms-lcid/a9eac961-e77d-41a6-90a5-ce1a8b0cdb9c\n\n\nYes, that is the information we get from EnumSystemLocalesEx(), without the\nadditional entries _create_locale() has.\n\nPlease find attached a new version addressing the above mentioned, and so\nadding a debug message for trying to get more information on the failed\ncases.\n\nRegards,\n\nJuan José Santamaría Flecha\n.\n\n>\n>", "msg_date": "Tue, 21 Apr 2020 14:01:39 +0200", "msg_from": "=?UTF-8?Q?Juan_Jos=C3=A9_Santamar=C3=ADa_Flecha?=\n <juanjo.santamaria@gmail.com>", "msg_from_op": false, "msg_subject": "Re: PG compilation error with Visual Studio 2015/2017/2019" }, { "msg_contents": "Em ter., 21 de abr. de 2020 às 09:02, Juan José Santamaría Flecha <\njuanjo.santamaria@gmail.com> escreveu:\n\n>\n> On Tue, Apr 21, 2020 at 12:41 PM Amit Kapila <amit.kapila16@gmail.com>\n> wrote:\n>\n>> On Tue, Apr 21, 2020 at 12:51 PM Amit Kapila <amit.kapila16@gmail.com>\n>> wrote:\n>> >\n>> > I have tried a simple test with the latest patch and it failed for me.\n>> >\n>> > Set LC_MESSAGES=\"English_United Kingdom\";\n>> > -- returns en-GB, then code changes it to en_NZ when _create_locale()\n>> > is used whereas with the patch it returns \"\" (empty string).\n>> >\n>> > There seem to be two problems here (a) The input to enum_locales_fn\n>> > doesn't seem to get the input name as \"English_United Kingdom\" due to\n>> > which it can't find a match even if the same exists. (b) After\n>> > executing EnumSystemLocalesEx, there is no way the patch can detect if\n>> > it is successful in finding the passed name due to which it appends\n>> > empty string in such cases.\n>>\n>> Few more comments:\n>> 1. I have tried the first one in the list provided by you and that\n>> also didn't work. 
Basically, I got empty string when I tried Set\n>> LC_MESSAGES='Afar';\n>>\n>\n> I cannot reproduce any of these errors on my end. When using\n> _create_locale(), returning \"en_NZ\" is also a wrong result.\n>\n>\n>> 2. Getting below warning\n>> pg_locale.c(1072): warning C4133: 'function': incompatible types -\n>> from 'const char *' to 'const wchar_t *'\n>>\n>\n> Yes, that is a regression.\n>\n>\n>> 3.\n>> + if (GetLocaleInfoEx(pStr, LOCALE_SENGLISHCOUNTRYNAME,\n>> + test_locale + len, LOCALE_NAME_MAX_LENGTH - len) > 0)\n>>\n>> All > or <= 0 checks should be changed to \"!\" types which mean to\n>> check whether the call toGetLocaleInfoEx is success or not.\n>>\n>\n> MSVC does not recommend \"!\" in all cases, but GetLocaleInfoEx() looks\n> fine, so agreed.\n>\n> 4. In the patch, first, we try to get with LCType as LOCALE_SNAME and\n>> then with LOCALE_SENGLISHLANGUAGENAME and LOCALE_SENGLISHCOUNTRYNAME.\n>> I think we should add comments indicating why we try to get the locale\n>> information with three LCTypes and why the specific order of trying\n>> those types is required.\n>>\n>\n> Agreed.\n>\n>\n>> 5. In one of the previous emails, you asked whether we have a list of\n>> supported locales. I don't find any such list. I think it depends on\n>> Windows locales for which you can get the information from\n>>\n>> https://docs.microsoft.com/en-us/openspecs/windows_protocols/ms-lcid/a9eac961-e77d-41a6-90a5-ce1a8b0cdb9c\n>\n>\n> Yes, that is the information we get from EnumSystemLocalesEx(), without\n> the additional entries _create_locale() has.\n>\n> Please find attached a new version addressing the above mentioned, and so\n> adding a debug message for trying to get more information on the failed\n> cases.\n>\nMore few comments.\n\n1. 
Comments about order:\n/*\n * Callback function for EnumSystemLocalesEx.\n * Stop enumerating if a match is found for a locale with the format\n * <Language>_<Country>.\n * The order for search locale is essential:\n * Find LCType first as LOCALE_SNAME, if not found try\nLOCALE_SENGLISHLANGUAGENAME and\n * finally LOCALE_SENGLISHCOUNTRYNAME, before return.\n */\n\nTypo \"enumarating\".\n\n2. Maybe the fail has here:\n\nif (hyphen == NULL || underscore == NULL)\n\nChange || to &&, the logical is wrong?\n\n3. Why iso_lc_messages[0] = '\\0'?\n\nIf we go call strchr, soon after, it's a waste.\n\nregards,\nRanier Vilela", "msg_date": "Tue, 21 Apr 2020 09:20:31 -0300", "msg_from": "Ranier Vilela <ranier.vf@gmail.com>", "msg_from_op": false, "msg_subject": "Re: PG compilation error with Visual Studio 2015/2017/2019" }, { "msg_contents": "On Tue, Apr 21, 2020 at 2:22 PM Ranier Vilela <ranier.vf@gmail.com> wrote:\n\n> More few comments.\n>\n> 1. Comments about order:\n> /*\n> * Callback function for EnumSystemLocalesEx.\n> * Stop enumerating if a match is found for a locale with the format\n> * <Language>_<Country>. 
* The order for search locale is essential:\n> * Find LCType first as LOCALE_SNAME, if not found try\n> LOCALE_SENGLISHLANGUAGENAME and\n> * finally LOCALE_SENGLISHCOUNTRYNAME, before return.\n> */\n>\n> Typo \"enumarating\".\n>\n\nI would not call the order essential, is just meant to try the easier ways\nfirst: is already \"ISO\" formatted !-> is just a \"language\" !-> is a full\n\"language_country\" tag.\n\nI take note about \"enumarating\".\n\n> 2. Maybe the fail has here:\n>\n> if (hyphen == NULL || underscore == NULL)\n>\n> Change || to &&, the logical is wrong?\n>\n\nIf the Windows locale does not have a hyphen (\"aa\") *or* the lc_message\ndoes not have an underscore (\"Afar\"), only a comparison on language is\nneeded.\n\n> 3. Why iso_lc_messages[0] = '\\0'?\n>\n> If we go call strchr, soon after, it's a waste.\n>\n\nLess code churn, and strchr() against an empty string did not look too\nawful.\n\nI would like to find where the errors come from before sending a new\nversion, can you reproduce them?\n\nRegards,\n\nJuan José Santamaría Flecha", "msg_date": "Tue, 21 Apr 2020 14:49:41 +0200", "msg_from": "=?UTF-8?Q?Juan_Jos=C3=A9_Santamar=C3=ADa_Flecha?=\n <juanjo.santamaria@gmail.com>", "msg_from_op": false, "msg_subject": "Re: PG compilation error with Visual Studio 2015/2017/2019" }, { "msg_contents": "On Tue, Apr 21, 2020 at 5:32 PM Juan José Santamaría Flecha\n<juanjo.santamaria@gmail.com> wrote:\n>\n> On Tue, Apr 21, 2020 at 12:41 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>>\n>> On Tue, Apr 21, 2020 at 12:51 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>> >\n>> > I have tried a simple test with the latest patch and it failed for me.\n>> >\n>> > Set LC_MESSAGES=\"English_United Kingdom\";\n>> > -- returns en-GB, then code changes it to en_NZ when _create_locale()\n>> > is used whereas with the patch it returns \"\" (empty string).\n>> >\n>> > There seem to be two problems here (a) The input to enum_locales_fn\n>> > doesn't seem to get the input name as \"English_United Kingdom\" due to\n>> > which it can't find 
a match even if the same exists. (b) After\n>> > executing EnumSystemLocalesEx, there is no way the patch can detect if\n>> > it is successful in finding the passed name due to which it appends\n>> > empty string in such cases.\n>>\n>> Few more comments:\n>> 1. I have tried the first one in the list provided by you and that\n>> also didn't work. Basically, I got empty string when I tried Set\n>> LC_MESSAGES='Afar';\n>\n>\n> I cannot reproduce any of these errors on my end.\n>\n\nThe first problem related to the English_United Kingdom was due to the\nusage of wcslen instead of pg_mbstrlen to compute the length of\nwinlocname. So, this is fixed with your latest patch. I have\ndebugged the case for 'Afar' and found that _create_locale also didn't\nreturn anything for that in my machine, so probably that locale\ninformation is not there in my environment.\n\n> When using _create_locale(), returning \"en_NZ\" is also a wrong result.\n>\n\nHmm, that was a typo, it should be en_GB instead.\n\n>\n>> 4. In the patch, first, we try to get with LCType as LOCALE_SNAME and\n>> then with LOCALE_SENGLISHLANGUAGENAME and LOCALE_SENGLISHCOUNTRYNAME.\n>> I think we should add comments indicating why we try to get the locale\n>> information with three LCTypes and why the specific order of trying\n>> those types is required.\n>\n>\n> Agreed.\n>\n\nBut, I don't see much in the comments?\n\nFew more comments:\n1.\n if (rc == -1 || rc == sizeof(iso_lc_messages))\n- return NULL;\n+\niso_lc_messages[0] = '\\0';\n\nI don't think this change is required. The caller expects NULL in\ncase the API is not successful so that it can point result directly to\nthe locale passed. I have changed this back to the original code in\nthe attached patch.\n\n2.\nI see some differences in the output of GetLocaleInfoEx and\n_create_locale for some locales as mentioned in one of the documents\nshared by you. 
Ex.\n\nBemba_Zambia bem-ZM bem\nBena_Tanzania bez-TZ bez\nBulgarian_Bulgaria bg-BG bg\n\nNow, these might be okay but I think unless we test such things by\nseeing the error message changes due to these locales we can't be\nsure.\n\n3. In the attached patch, I have handled one of the problem reported\nearlier aka \"After executing EnumSystemLocalesEx, there is no way the\npatch can detect if it is successful in finding the passed name due to\nwhich it appends empty string in such cases.\"\n\n4. I think for the matter of this API, it is better to use _MSC_VER\nrelated checks instead of _WIN32_WINNT so as to be consistent with\nsimilar usage in chklocale.c (see win32_langinfo). We can later\nchange the checks at all places to _WIN32_WINNT if required. I have\nchanged this as well in the attached patch.\n\n5. I am slightly nervous about the usage of wchar functions like\n_wcsicmp, wcslen, etc. as those are not used anywhere in the code.\nOTOH, I don't see any problem with that. There is pg_wchar datatype\nin the code and some corresponding functions to manipulate it. Have\nyou considered using it?\n\n6. I have additionally done some cosmetic changes in the attached patch.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Wed, 22 Apr 2020 17:13:12 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: PG compilation error with Visual Studio 2015/2017/2019" }, { "msg_contents": "Em qua., 22 de abr. de 2020 às 08:43, Amit Kapila <amit.kapila16@gmail.com>\nescreveu:\n\n> On Tue, Apr 21, 2020 at 5:32 PM Juan José Santamaría Flecha\n> <juanjo.santamaria@gmail.com> wrote:\n> >\n> > On Tue, Apr 21, 2020 at 12:41 PM Amit Kapila <amit.kapila16@gmail.com>\n> wrote:\n> >>\n>\n6. I have additionally done some cosmetic changes in the attached patch.\n>\nI made some style changes too.\n\n1. 
Change:\n strcpy(iso_lc_messages, \"C\");\nto\niso_lc_messages[0] = 'C';\niso_lc_messages[1] = '\\0';\n2. Remove vars hyphen and underscore;\n3. Avoid call second wcsrchr, if hyphen is not found.\n\nIf it's not too much perfectionism.\n\nregards,\nRanier Vilela", "msg_date": "Wed, 22 Apr 2020 11:06:29 -0300", "msg_from": "Ranier Vilela <ranier.vf@gmail.com>", "msg_from_op": false, "msg_subject": "Re: PG compilation error with Visual Studio 2015/2017/2019" }, { "msg_contents": "On Wed, Apr 22, 2020 at 1:43 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n\n> On Tue, Apr 21, 2020 at 5:32 PM Juan José Santamaría Flecha\n> <juanjo.santamaria@gmail.com> wrote:\n> >\n> > I cannot reproduce any of these errors on my end.\n> >\n> The first problem related to the English_United Kingdom was due to the\n> usage of wcslen instead of pg_mbstrlen to compute the length of\n> winlocname. So, this is fixed with your latest patch. I have\n> debugged the case for 'Afar' and found that _create_locale also didn't\n> return anything for that in my machine, so probably that locale\n> information is not there in my environment.\n>\n> > When using _create_locale(), returning \"en_NZ\" is also a wrong result.\n>\n> Hmm, that was a typo, it should be en_GB instead.\n>\n\nI am glad we could clear that out, sorry because it was on my hand to\nprevent.\n\n\n> >> 4. In the patch, first, we try to get with LCType as LOCALE_SNAME and\n> >> then with LOCALE_SENGLISHLANGUAGENAME and LOCALE_SENGLISHCOUNTRYNAME.\n> >> I think we should add comments indicating why we try to get the locale\n> >> information with three LCTypes and why the specific order of trying\n> >> those types is required.\n> >\n> >\n> > Agreed.\n> >\n>\n> But, I don't see much in the comments?\n>\n\nI take notice.\n\n\n> Few more comments:\n> 1.\n> if (rc == -1 || rc == sizeof(iso_lc_messages))\n> - return NULL;\n> +\n> iso_lc_messages[0] = '\\0';\n>\n> I don't think this change is required. 
The caller expects NULL in\n> case the API is not successful so that it can point result directly to\n> the locale passed. I have changed this back to the original code in\n> the attached patch.\n>\n\nI did not want to return anything without logging its value.\n\n\n> 2.\n> I see some differences in the output of GetLocaleInfoEx and\n> _create_locale for some locales as mentioned in one of the documents\n> shared by you. Ex.\n>\n> Bemba_Zambia bem-ZM bem\n> Bena_Tanzania bez-TZ bez\n> Bulgarian_Bulgaria bg-BG bg\n>\n> Now, these might be okay but I think unless we test such things by\n> seeing the error message changes due to these locales we can't be\n> sure.\n>\n\nThere are some cases where the language tag does not match, although I do\nnot think is wrong:\n\nAsu asa Asu\nEdo bin Edo\nEwe ee Ewe\nRwa rwk Rwa\n\nTo check the messages, do you have a regression test in mind?\n\n\n> 3. In the attached patch, I have handled one of the problem reported\n> earlier aka \"After executing EnumSystemLocalesEx, there is no way the\n> patch can detect if it is successful in finding the passed name due to\n> which it appends empty string in such cases.\"\n>\n\nLGTM.\n\n\n> 4. I think for the matter of this API, it is better to use _MSC_VER\n> related checks instead of _WIN32_WINNT so as to be consistent with\n> similar usage in chklocale.c (see win32_langinfo). We can later\n> change the checks at all places to _WIN32_WINNT if required. I have\n> changed this as well in the attached patch.\n>\n\nOk, there is substance for a cleanup patch.\n\n5. I am slightly nervous about the usage of wchar functions like\n> _wcsicmp, wcslen, etc. as those are not used anywhere in the code.\n> OTOH, I don't see any problem with that. There is pg_wchar datatype\n> in the code and some corresponding functions to manipulate it. Have\n> you considered using it?\n>\n\nIn Windows wchar_t is 2 bytes, so we would have to do make UTF16 to UFT32\nconversions back and forth. 
Not sure if it is worth the effort.\n\nRegards,\n\nJuan José Santamaría Flecha\n\n>\n>", "msg_date": "Wed, 22 Apr 2020 17:56:42 +0200", "msg_from": "=?UTF-8?Q?Juan_Jos=C3=A9_Santamar=C3=ADa_Flecha?=\n <juanjo.santamaria@gmail.com>", "msg_from_op": false, "msg_subject": "Re: PG compilation error with Visual Studio 2015/2017/2019" }, { "msg_contents": "On Wed, Apr 22, 2020 at 7:37 PM Ranier Vilela <ranier.vf@gmail.com> wrote:\n>\n> Em qua., 22 de abr. 
de 2020 às 08:43, Amit Kapila <amit.kapila16@gmail.com> escreveu:\n>>\n>> On Tue, Apr 21, 2020 at 5:32 PM Juan José Santamaría Flecha\n>> <juanjo.santamaria@gmail.com> wrote:\n>> >\n>> > On Tue, Apr 21, 2020 at 12:41 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>> >>\n>>\n>> 6. I have additionally done some cosmetic changes in the attached patch.\n>\n> I made some style changes too.\n>\n> 1. Change:\n> strcpy(iso_lc_messages, \"C\");\n> to\n> iso_lc_messages[0] = 'C';\n> iso_lc_messages[1] = '\\0';\n>\n\nThis is an existing code and this patch has no purpose to touch it.\nSo, I don't think we should make this change.\n\n> 2. Remove vars hyphen and underscore;\n> 3. Avoid call second wcsrchr, if hyphen is not found.\n>\n> If it's not too much perfectionism.\n>\n\n(2) and (3) are improvements, so we can take those.\n\nThanks for participating in the review and development of this patch.\nIt really helps if more people help in improving the patch.\n\nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 23 Apr 2020 08:43:50 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: PG compilation error with Visual Studio 2015/2017/2019" }, { "msg_contents": "On Wed, Apr 22, 2020 at 9:27 PM Juan José Santamaría Flecha\n<juanjo.santamaria@gmail.com> wrote:\n>\n>\n> On Wed, Apr 22, 2020 at 1:43 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>>\n>>\n>> >> 4. 
In the patch, first, we try to get with LCType as LOCALE_SNAME and\n>> >> then with LOCALE_SENGLISHLANGUAGENAME and LOCALE_SENGLISHCOUNTRYNAME.\n>> >> I think we should add comments indicating why we try to get the locale\n>> >> information with three LCTypes and why the specific order of trying\n>> >> those types is required.\n>> >\n>> >\n>> > Agreed.\n>> >\n>>\n>> But, I don't see much in the comments?\n>\n>\n> I take notice.\n>\n\nOkay, I hope we will see better comments in the next version.\n\n>>\n>> Few more comments:\n>> 1.\n>> if (rc == -1 || rc == sizeof(iso_lc_messages))\n>> - return NULL;\n>> +\n>> iso_lc_messages[0] = '\\0';\n>>\n>> I don't think this change is required. The caller expects NULL in\n>> case the API is not successful so that it can point result directly to\n>> the locale passed. I have changed this back to the original code in\n>> the attached patch.\n>\n>\n> I did not want to return anything without logging its value.\n>\n\nHmm, if you really want to log the value then do it in the caller. I\ndon't think making special arrangements just for logging this value is\na good idea.\n\n>>\n>> 2.\n>> I see some differences in the output of GetLocaleInfoEx and\n>> _create_locale for some locales as mentioned in one of the documents\n>> shared by you. Ex.\n>>\n>> Bemba_Zambia bem-ZM bem\n>> Bena_Tanzania bez-TZ bez\n>> Bulgarian_Bulgaria bg-BG bg\n>>\n>> Now, these might be okay but I think unless we test such things by\n>> seeing the error message changes due to these locales we can't be\n>> sure.\n>\n>\n> There are some cases where the language tag does not match, although I do not think is wrong:\n>\n> Asu asa Asu\n> Edo bin Edo\n> Ewe ee Ewe\n> Rwa rwk Rwa\n>\n> To check the messages, do you have a regression test in mind?\n>\n\nI think we can check with simple error messages. 
So, basically after\nsetting a particular value of LC_MESSAGES, execute a query which\nreturns syntax or any other error, if the error message is the same\nirrespective of the locale name returned by _create_locale and\nGetLocaleInfoEx, then we should be fine. I want to especially try\nwhere the return value is slightly different by _create_locale and\nGetLocaleInfoEx. I know Davinder is trying something like this but\nif you can also try then it would be good.\n\n>\n>> 5. I am slightly nervous about the usage of wchar functions like\n>> _wcsicmp, wcslen, etc. as those are not used anywhere in the code.\n>> OTOH, I don't see any problem with that. There is pg_wchar datatype\n>> in the code and some corresponding functions to manipulate it. Have\n>> you considered using it?\n>\n>\n> In Windows wchar_t is 2 bytes, so we would have to do make UTF16 to UFT32 conversions back and forth. Not sure if it is worth the effort.\n>\n\nYeah, I am also not sure about this. So, let us see if anyone else\nhas any thoughts on this point, otherwise, we can go with wchar\nfunctions as you have in the patch.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 23 Apr 2020 09:00:18 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: PG compilation error with Visual Studio 2015/2017/2019" }, { "msg_contents": "On Thu, Apr 23, 2020 at 5:30 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n\n>\n> Okay, I hope we will see better comments in the next version.\n>\n\nI have focused on improving comments in this version.\n\n\n> Hmm, if you really want to log the value then do it in the caller. I\n> don't think making special arrangements just for logging this value is\n> a good idea.\n>\n\nAgreed.\n\n\n> I think we can check with simple error messages. 
So, basically after\n> setting a particular value of LC_MESSAGES, execute a query which\n> returns syntax or any other error, if the error message is the same\n> irrespective of the locale name returned by _create_locale and\n> GetLocaleInfoEx, then we should be fine. I want to especially try\n> where the return value is slightly different by _create_locale and\n> GetLocaleInfoEx. I know Davinder is trying something like this but\n> if you can also try then it would be good.\n>\n\nI have composed a small set of queries to test the output with\ndifferent lc_message settings (lc_messages_test.sql). Please find attached\nthe output from debug3 logging using both EnumSystemLocalesEx\n(lc_messages_EnumSystemLocalesEx.log) and _create_locale\n(lc_messages_create_locale.log).\n\n> In Windows wchar_t is 2 bytes, so we would have to do make UTF16 to\n> UFT32 conversions back and forth. Not sure if it is worth the effort.\n>\n> Yeah, I am also not sure about this. So, let us see if anyone else\n> has any thoughts on this point, otherwise, we can go with wchar\n> functions as you have in the patch.\n>\n\nOk, the attached version still uses that approach.\n\nRegards,\n\nJuan José Santamaría Flecha", "msg_date": "Thu, 23 Apr 2020 14:07:09 +0200", "msg_from": "=?UTF-8?Q?Juan_Jos=C3=A9_Santamar=C3=ADa_Flecha?=\n <juanjo.santamaria@gmail.com>", "msg_from_op": false, "msg_subject": "Re: PG compilation error with Visual Studio 2015/2017/2019" }, { "msg_contents": "On Thu, Apr 23, 2020 at 5:37 PM Juan José Santamaría Flecha\n<juanjo.santamaria@gmail.com> wrote:\n>\n>\n> On Thu, Apr 23, 2020 at 5:30 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>>\n>>\n>> I think we can check with simple error messages. So, basically after\n>> setting a particular value of LC_MESSAGES, execute a query which\n>> returns syntax or any other error, if the error message is the same\n>> irrespective of the locale name returned by _create_locale and\n>> GetLocaleInfoEx, then we should be fine. 
I want to especially try\n>> where the return value is slightly different by _create_locale and\n>> GetLocaleInfoEx. I know Davinder is trying something like this but\n>> if you can also try then it would be good.\n>\n>\n> I have composed a small set of queries to test the output with different lc_message settings (lc_messages_test.sql). Please find attached the output from debug3 logging using both EnumSystemLocalesEx (lc_messages_EnumSystemLocalesEx.log) and _create_locale (lc_messages_create_locale.log).\n>\n\nThanks, I will verify these. BTW, have you done something special to\nget the error messages which are not in English because on my Windows\nbox I am not getting that in spite of setting it to the appropriate\nlocale. Did you use ICU or something else?\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 23 Apr 2020 18:30:21 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: PG compilation error with Visual Studio 2015/2017/2019" }, { "msg_contents": "On Thu, Apr 23, 2020 at 3:00 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n\n>\n> Thanks, I will verify these. BTW, have you done something special to\n> get the error messages which are not in English because on my Windows\n> box I am not getting that in spite of setting it to the appropriate\n> locale. Did you use ICU or something else?\n>\n\nIf you are trying to view the messages using a CMD, I do not think is\npossible unless you have the OS language installed. I read the results from\nthe log file.\n\nRegards,\n\nJuan José Santamaría Flecha\n\nOn Thu, Apr 23, 2020 at 3:00 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\nThanks, I will verify these.  BTW, have you done something special to\nget the error messages which are not in English because on my Windows\nbox I am not getting that in spite of setting it to the appropriate\nlocale.  
Did you use ICU or something else?If you are trying to view the messages using a CMD, I do not think is possible unless you have the OS language installed. I read the results from the log file.Regards,Juan José Santamaría Flecha", "msg_date": "Thu, 23 Apr 2020 15:18:54 +0200", "msg_from": "=?UTF-8?Q?Juan_Jos=C3=A9_Santamar=C3=ADa_Flecha?=\n <juanjo.santamaria@gmail.com>", "msg_from_op": false, "msg_subject": "Re: PG compilation error with Visual Studio 2015/2017/2019" }, { "msg_contents": "On Thu, Apr 23, 2020 at 6:49 PM Juan José Santamaría Flecha <\njuanjo.santamaria@gmail.com> wrote:\n\n>\n> On Thu, Apr 23, 2020 at 3:00 PM Amit Kapila <amit.kapila16@gmail.com>\n> wrote:\n>\n>>\n>> Thanks, I will verify these. BTW, have you done something special to\n>> get the error messages which are not in English because on my Windows\n>> box I am not getting that in spite of setting it to the appropriate\n>> locale. Did you use ICU or something else?\n>>\n>\n> If you are trying to view the messages using a CMD, I do not think is\n> possible unless you have the OS language installed. I read the results from\n> the log file.\n>\nI have checked the log file also but still, I am not seeing any changes in\nerror message language. 
I am checking two log files one is by enabling\nLogging_collector in the conf file and the second is generated using\n\"pg_ctl -l\" option.\nI am using windows 10.\nIs there another way you are generating the log file?\nDid you install any of the locales manually you mentioned in the test file?\n\nAlso after initdb I am seeing only following standard locales in the\npg_collation catalog.\npostgres=# select * from pg_collation;\n oid | collname | collnamespace | collowner | collprovider |\ncollisdeterministic | collencoding | collcollate | collctype | collversion\n-------+-----------+---------------+-----------+--------------+---------------------+--------------+-------------+-----------+-------------\n 100 | default | 11 | 10 | d | t | -1 | | |\n 950 | C | 11 | 10 | c | t | -1 | C | C |\n 951 | POSIX | 11 | 10 | c | t | -1 | POSIX |\nPOSIX |\n 12327 | ucs_basic | 11 | 10 | c | t | 6 | C |\nC |\n(4 rows)\n\nMaybe Postgres is not able to get all the installed locales from the system\nin my case. Can you confirm if you are getting different results in\npg_collation?\n\n-- \nRegards,\nDavinder\nEnterpriseDB: http://www.enterprisedb.com\n\nOn Thu, Apr 23, 2020 at 6:49 PM Juan José Santamaría Flecha <juanjo.santamaria@gmail.com> wrote:On Thu, Apr 23, 2020 at 3:00 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\nThanks, I will verify these.  BTW, have you done something special to\nget the error messages which are not in English because on my Windows\nbox I am not getting that in spite of setting it to the appropriate\nlocale.  Did you use ICU or something else?If you are trying to view the messages using a CMD, I do not think is possible unless you have the OS language installed. I read the results from the log file.I have checked the log file also but still, I am not seeing any changes in error message language. I am checking two log files one is by enabling Logging_collector in the conf file and the second is generated using \"pg_ctl -l\" option.I am using windows 10. 
Is there another way you are generating the log file?Did you install any of the locales manually you mentioned in the test file? Also after initdb I am seeing only following standard locales in the pg_collation catalog.postgres=# select * from pg_collation;  oid  | collname  | collnamespace | collowner | collprovider | collisdeterministic | collencoding | collcollate | collctype | collversion-------+-----------+---------------+-----------+--------------+---------------------+--------------+-------------+-----------+-------------   100 | default   | 11 |        10 | d | t               | -1 | | |   950 | C         | 11 |        10 | c | t               | -1 | C | C |   951 | POSIX     | 11 |        10 | c | t               | -1 | POSIX | POSIX | 12327 | ucs_basic |            11 | 10 | c | t                   | 6 | C | C |(4 rows)Maybe Postgres is not able to get all the installed locales from the system in my case. Can you confirm if you are getting different results in pg_collation?-- Regards,DavinderEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Fri, 24 Apr 2020 11:16:52 +0530", "msg_from": "davinder singh <davindersingh2692@gmail.com>", "msg_from_op": true, "msg_subject": "Re: PG compilation error with Visual Studio 2015/2017/2019" }, { "msg_contents": "On Fri, Apr 24, 2020 at 7:47 AM davinder singh <davindersingh2692@gmail.com>\nwrote:\n\n> On Thu, Apr 23, 2020 at 6:49 PM Juan José Santamaría Flecha <\n> juanjo.santamaria@gmail.com> wrote:\n>\n>> On Thu, Apr 23, 2020 at 3:00 PM Amit Kapila <amit.kapila16@gmail.com>\n>> wrote:\n>>\n>>>\n>>> Thanks, I will verify these. BTW, have you done something special to\n>>> get the error messages which are not in English because on my Windows\n>>> box I am not getting that in spite of setting it to the appropriate\n>>> locale. Did you use ICU or something else?\n>>>\n>>\n>> If you are trying to view the messages using a CMD, I do not think is\n>> possible unless you have the OS language installed. 
I read the results from\n>> the log file.\n>>\n> I have checked the log file also but still, I am not seeing any changes in\n> error message language. I am checking two log files one is by enabling\n> Logging_collector in the conf file and the second is generated using\n> \"pg_ctl -l\" option.\n> I am using windows 10.\n> Is there another way you are generating the log file?\n> Did you install any of the locales manually you mentioned in the test file?\n> Maybe Postgres is not able to get all the installed locales from the\n> system in my case. Can you confirm if you are getting different results in\n> pg_collation?\n> <http://www.enterprisedb.com/>\n>\n\nHmm, my building environment only has en_US and es_ES installed, and the\ndb has the same collations.\n\nI am not sure it is a locale problem, the only thing that needed some\nconfiguration on my end to make the build was related to gettext. I got the\nlibintl library from the PHP repositories [1] (libintl-0.18.3-5,\nprecompiled at [2]) and the utilities from MinGW (mingw32-libintl\n0.18.3.2-2).\n\n[1] https://github.com/winlibs/gettext\n[2] https://windows.php.net/downloads/php-sdk/\n\nRegards,\n\nJuan José Santamaría Flecha\n\nOn Fri, Apr 24, 2020 at 7:47 AM davinder singh <davindersingh2692@gmail.com> wrote:On Thu, Apr 23, 2020 at 6:49 PM Juan José Santamaría Flecha <juanjo.santamaria@gmail.com> wrote:On Thu, Apr 23, 2020 at 3:00 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\nThanks, I will verify these.  BTW, have you done something special to\nget the error messages which are not in English because on my Windows\nbox I am not getting that in spite of setting it to the appropriate\nlocale.  Did you use ICU or something else?If you are trying to view the messages using a CMD, I do not think is possible unless you have the OS language installed. I read the results from the log file.I have checked the log file also but still, I am not seeing any changes in error message language. 
I am checking two log files one is by enabling Logging_collector in the conf file and the second is generated using \"pg_ctl -l\" option.I am using windows 10. Is there another way you are generating the log file?Did you install any of the locales manually you mentioned in the test file?Maybe Postgres is not able to get all the installed locales from the system in my case. Can you confirm if you are getting different results in pg_collation?Hmm, \n\nmy building environment  only has en_US and es_ES installed, and the db has the same collations.I am not sure it is a locale problem, the only thing that needed some configuration on my end to make the build was related to gettext. I got the libintl library from the PHP repositories [1] (libintl-0.18.3-5, precompiled at [2]) and the utilities from MinGW (mingw32-libintl 0.18.3.2-2).[1] https://github.com/winlibs/gettext[2] https://windows.php.net/downloads/php-sdk/Regards,Juan José Santamaría Flecha", "msg_date": "Fri, 24 Apr 2020 10:54:31 +0200", "msg_from": "=?UTF-8?Q?Juan_Jos=C3=A9_Santamar=C3=ADa_Flecha?=\n <juanjo.santamaria@gmail.com>", "msg_from_op": false, "msg_subject": "Re: PG compilation error with Visual Studio 2015/2017/2019" }, { "msg_contents": "On Thu, Apr 23, 2020 at 6:30 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Thu, Apr 23, 2020 at 5:37 PM Juan José Santamaría Flecha\n> <juanjo.santamaria@gmail.com> wrote:\n> >\n> > I have composed a small set of queries to test the output with different lc_message settings (lc_messages_test.sql). Please find attached the output from debug3 logging using both EnumSystemLocalesEx (lc_messages_EnumSystemLocalesEx.log) and _create_locale (lc_messages_create_locale.log).\n> >\n>\n> Thanks, I will verify these.\n>\n\nThe result looks good to me. However, I think we should test a few\nmore locales, especially where we know there is some difference in\nwhat _create_locale returns and what we get via enumerating locales\nand using GetLocaleInfoEx. 
Also, we should test some more locales\nwith the code page. For ex.\n\nBemba_Zambia\nBena_Tanzania\nBulgarian_Bulgaria\nSwedish_Sweden.1252\nSwedish_Sweden\n\nThen, I think we can also test a few where you mentioned that the\nlanguage tag is different.\nAsu asa Asu\nEdo bin Edo\nEwe ee Ewe\nRwa rwk Rwa\n\nBTW, we have a list of code page which can be used for this testing in\nbelow link. I think we can primarily test Windows code page\nidentifiers (like 1250, 1251, .. 1258) from the link [1].\n\nI think we should backpatch this till 9.5 as I could see the changes\nmade by commit 0fb54de9 to support MSVC2015 are present in that branch\nand the same is mentioned in the commit message. Would you like to\nprepare patches (and test those) for back-branches?\n\nI have made few cosmetic changes in the attached patch which includes\nadding/editing a few comments, ran pgindent, etc. I have replaced the\nreference of \"IETF-standardized\" with \"Unix-style\" as we are already\nusing it at other places in the comments as well.\n\n[1] - https://docs.microsoft.com/en-us/windows/win32/intl/code-page-identifiers\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Mon, 27 Apr 2020 16:50:38 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: PG compilation error with Visual Studio 2015/2017/2019" }, { "msg_contents": "On Mon, Apr 27, 2020 at 4:50 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n\n> Bemba_Zambia\n> Bena_Tanzania\n> Bulgarian_Bulgaria\n> Swedish_Sweden.1252\n> Swedish_Sweden\n>\n\nI have tested with different locales with codepages including above. There\nare few which return different locale code but the error messages in both\nthe cases are the same. 
I have attached the test and log files.\nBut there was one case, where locale code and error messages both are\ndifferent.\nPortuguese_Brazil.1252\n\nlog from [1]\n2020-04-28 14:27:39.785 GMT [2284] DEBUG: IsoLocaleName() executed;\nlocale: \"pt\"\n2020-04-28 14:27:39.787 GMT [2284] ERROR: division by zero\n2020-04-28 14:27:39.787 GMT [2284] STATEMENT: Select 1/0;\n\nlog from [2]\n2020-04-28 14:36:20.666 GMT [14608] DEBUG: IsoLocaleName() executed;\nlocale: \"pt_BR\"\n2020-04-28 14:36:20.673 GMT [14608] ERRO: divisão por zero\n2020-04-28 14:36:20.673 GMT [14608] COMANDO: Select 1/0;\n\n[1] full_locale_lc_message_test_create_locale_1.txt: log generated by using\nthe old patch (it uses _create_locale API to get locale info)\n[2] full_locale_lc_message_test_getlocale_1.txt: log generated using the\npatch v13\n\n-- \nRegards,\nDavinder\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Tue, 28 Apr 2020 20:45:47 +0530", "msg_from": "davinder singh <davindersingh2692@gmail.com>", "msg_from_op": true, "msg_subject": "Re: PG compilation error with Visual Studio 2015/2017/2019" }, { "msg_contents": "On Mon, Apr 27, 2020 at 1:20 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n\n> I think we should backpatch this till 9.5 as I could see the changes\n> made by commit 0fb54de9 to support MSVC2015 are present in that branch\n> and the same is mentioned in the commit message. Would you like to\n> prepare patches (and test those) for back-branches?\n>\n\nI do not have means to test these patches using Visual Studio previous to\n2012, but please find attached patches for 9.5-9.6 and 10-11-12 as of\nversion 14. The extension is 'txt' not to break the cfbot.\n\n\n> I have made few cosmetic changes in the attached patch which includes\n> adding/editing a few comments, ran pgindent, etc. 
I have replaced the\n> reference of \"IETF-standardized\" with \"Unix-style\" as we are already\n> using it at other places in the comments as well.\n\n\nLGTM.\n\nRegards,\nJuan José Santamaría Flecha\n\n>\n>", "msg_date": "Tue, 28 Apr 2020 20:08:57 +0200", "msg_from": "=?UTF-8?Q?Juan_Jos=C3=A9_Santamar=C3=ADa_Flecha?=\n <juanjo.santamaria@gmail.com>", "msg_from_op": false, "msg_subject": "Re: PG compilation error with Visual Studio 2015/2017/2019" }, { "msg_contents": "On Tue, Apr 28, 2020 at 5:16 PM davinder singh <davindersingh2692@gmail.com>\nwrote:\n\n> On Mon, Apr 27, 2020 at 4:50 PM Amit Kapila <amit.kapila16@gmail.com>\n> wrote:\n>\n>> Bemba_Zambia\n>> Bena_Tanzania\n>> Bulgarian_Bulgaria\n>> Swedish_Sweden.1252\n>> Swedish_Sweden\n>>\n>\n> I have tested with different locales with codepages including above. There\n> are few which return different locale code but the error messages in both\n> the cases are the same. I have attached the test and log files.\n>\nBut there was one case, where locale code and error messages both are\n> different.\n> Portuguese_Brazil.1252\n>\n> log from [1]\n> 2020-04-28 14:27:39.785 GMT [2284] DEBUG: IsoLocaleName() executed;\n> locale: \"pt\"\n> 2020-04-28 14:27:39.787 GMT [2284] ERROR: division by zero\n> 2020-04-28 14:27:39.787 GMT [2284] STATEMENT: Select 1/0;\n>\n> log from [2]\n> 2020-04-28 14:36:20.666 GMT [14608] DEBUG: IsoLocaleName() executed;\n> locale: \"pt_BR\"\n> 2020-04-28 14:36:20.673 GMT [14608] ERRO: divisão por zero\n> 2020-04-28 14:36:20.673 GMT [14608] COMANDO: Select 1/0;\n>\n\nAFAICT, the good result is coming from the new logic.\n\nOn Tue, Apr 28, 2020 at 5:16 PM davinder singh <davindersingh2692@gmail.com> wrote:On Mon, Apr 27, 2020 at 4:50 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\nBemba_Zambia\nBena_Tanzania\nBulgarian_Bulgaria\nSwedish_Sweden.1252\nSwedish_SwedenI have tested with different locales with codepages including above. 
There are few which return different locale code but the error messages in both the cases are the same. I have attached the test and log files.But there was one case, where locale code and error messages both are different.Portuguese_Brazil.1252log from [1]2020-04-28 14:27:39.785 GMT [2284] DEBUG:  IsoLocaleName() executed; locale: \"pt\"2020-04-28 14:27:39.787 GMT [2284] ERROR:  division by zero2020-04-28 14:27:39.787 GMT [2284] STATEMENT:  Select 1/0;log from [2]2020-04-28 14:36:20.666 GMT [14608] DEBUG:  IsoLocaleName() executed; locale: \"pt_BR\"2020-04-28 14:36:20.673 GMT [14608] ERRO:  divisão por zero2020-04-28 14:36:20.673 GMT [14608] COMANDO:  Select 1/0;AFAICT, the good result is coming from the new logic.", "msg_date": "Tue, 28 Apr 2020 20:14:55 +0200", "msg_from": "=?UTF-8?Q?Juan_Jos=C3=A9_Santamar=C3=ADa_Flecha?=\n <juanjo.santamaria@gmail.com>", "msg_from_op": false, "msg_subject": "Re: PG compilation error with Visual Studio 2015/2017/2019" }, { "msg_contents": "On Tue, Apr 28, 2020 at 9:38 PM Ranier Vilela <ranier.vf@gmail.com> wrote:\n\n> \"pt\" means portuguese language.\n> \"pt_BR\" means portuguese language from Brazil, \"divisão por zero\", is\n> correct.\n> \"pt_PT\" means portuguese language from Portugal, \"division by zero\"?\n> poderia ser \"divisão por zero\", too.\n>\n> Why \"pt_PT\" do not is translated?\n>\n\nThe translation files are generated as 'pt_BR.po', so this is the expected\nbehaviour.\n\nWith my limited knowledge of Portuguese, it makes little sense to have a\nlocalized version.\n\nRegards,\n\nJuan José Santamaría Flecha\n\nOn Tue, Apr 28, 2020 at 9:38 PM Ranier Vilela <ranier.vf@gmail.com> wrote:\"pt\" means portuguese language.\"pt_BR\" means portuguese language from Brazil, \"divisão por zero\", is correct.\"pt_PT\" means portuguese language from Portugal, \"division by zero\"? 
poderia ser \"divisão por zero\", too.Why \"pt_PT\" do not is translated?The translation files are generated as 'pt_BR.po', so this is the expected behaviour.With my limited knowledge of Portuguese, it makes little sense to have a localized version.Regards,Juan José Santamaría Flecha", "msg_date": "Tue, 28 Apr 2020 21:53:19 +0200", "msg_from": "=?UTF-8?Q?Juan_Jos=C3=A9_Santamar=C3=ADa_Flecha?=\n <juanjo.santamaria@gmail.com>", "msg_from_op": false, "msg_subject": "Re: PG compilation error with Visual Studio 2015/2017/2019" }, { "msg_contents": "On Wed, Apr 29, 2020 at 1:32 AM Ranier Vilela <ranier.vf@gmail.com> wrote:\n>\n> Em ter., 28 de abr. de 2020 às 16:53, Juan José Santamaría Flecha <juanjo.santamaria@gmail.com> escreveu:\n>>\n>>\n>>\n>> On Tue, Apr 28, 2020 at 9:38 PM Ranier Vilela <ranier.vf@gmail.com> wrote:\n>>>\n>>> \"pt\" means portuguese language.\n>>> \"pt_BR\" means portuguese language from Brazil, \"divisão por zero\", is correct.\n>>> \"pt_PT\" means portuguese language from Portugal, \"division by zero\"? poderia ser \"divisão por zero\", too.\n>>>\n>>> Why \"pt_PT\" do not is translated?\n>>\n>>\n>> The translation files are generated as 'pt_BR.po', so this is the expected behaviour.\n>>\n>> With my limited knowledge of Portuguese, it makes little sense to have a localized version.\n>\n> Well, both are PORTUGUE language, but, do not the same words.\n> pt_PT.po, obviously is missing, I can provide a version, but still, it wouldn't be 100%, but it's something.\n> Would it be useful?\n>\n\nI am not sure but that doesn't seem to be related to this patch. 
If\nit is not related to this patch then it is better to start a separate\nthread (probably on pgsql-translators list).\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 29 Apr 2020 08:21:32 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: PG compilation error with Visual Studio 2015/2017/2019" }, { "msg_contents": "On Wed, Apr 29, 2020 at 8:21 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Wed, Apr 29, 2020 at 1:32 AM Ranier Vilela <ranier.vf@gmail.com> wrote:\n> >\n> > Em ter., 28 de abr. de 2020 às 16:53, Juan José Santamaría Flecha <juanjo.santamaria@gmail.com> escreveu:\n> >>\n> >>\n> >>\n> >> On Tue, Apr 28, 2020 at 9:38 PM Ranier Vilela <ranier.vf@gmail.com> wrote:\n> >>>\n> >>> \"pt\" means portuguese language.\n> >>> \"pt_BR\" means portuguese language from Brazil, \"divisão por zero\", is correct.\n> >>> \"pt_PT\" means portuguese language from Portugal, \"division by zero\"? poderia ser \"divisão por zero\", too.\n> >>>\n> >>> Why \"pt_PT\" do not is translated?\n> >>\n> >>\n> >> The translation files are generated as 'pt_BR.po', so this is the expected behaviour.\n> >>\n> >> With my limited knowledge of Portuguese, it makes little sense to have a localized version.\n> >\n> > Well, both are PORTUGUE language, but, do not the same words.\n> > pt_PT.po, obviously is missing, I can provide a version, but still, it wouldn't be 100%, but it's something.\n> > Would it be useful?\n> >\n>\n> I am not sure but that doesn't seem to be related to this patch. 
If\n> it is not related to this patch then it is better to start a separate\n> thread (probably on pgsql-translators list).\n>\n\nBTW, do you see any different results for pt_PT with create_locale\nversion or the new patch version being discussed here?\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 29 Apr 2020 08:24:06 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: PG compilation error with Visual Studio 2015/2017/2019" }, { "msg_contents": "On Wed, Apr 29, 2020 at 8:24 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n\n> BTW, do you see any different results for pt_PT with create_locale\n> version or the new patch version being discussed here?\n>\nNo, there is no difference for pt_PT. The difference you are noticing is\nbecause of the previous locale setting.\n\n-- \nRegards,\nDavinder\nEnterpriseDB: http://www.enterprisedb.com\n\nOn Wed, Apr 29, 2020 at 8:24 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\nBTW, do you see any different results for pt_PT with create_locale\nversion or the new patch version being discussed here?No, there is no difference for pt_PT. The difference you are noticing is because of the previous locale setting. -- Regards,DavinderEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Wed, 29 Apr 2020 10:49:28 +0530", "msg_from": "davinder singh <davindersingh2692@gmail.com>", "msg_from_op": true, "msg_subject": "Re: PG compilation error with Visual Studio 2015/2017/2019" }, { "msg_contents": "On Tue, Apr 28, 2020 at 11:45 PM Juan José Santamaría Flecha <\njuanjo.santamaria@gmail.com> wrote:\n\n>\n> On Tue, Apr 28, 2020 at 5:16 PM davinder singh <\n> davindersingh2692@gmail.com> wrote:\n>\n>> I have tested with different locales with codepages including above.\n>> There are few which return different locale code but the error messages in\n>> both the cases are the same. 
I have attached the test and log files.\n>>\n> But there was one case, where locale code and error messages both are\n>> different.\n>> Portuguese_Brazil.1252\n>>\n>> log from [1]\n>> 2020-04-28 14:27:39.785 GMT [2284] DEBUG: IsoLocaleName() executed;\n>> locale: \"pt\"\n>> 2020-04-28 14:27:39.787 GMT [2284] ERROR: division by zero\n>> 2020-04-28 14:27:39.787 GMT [2284] STATEMENT: Select 1/0;\n>>\n>> log from [2]\n>> 2020-04-28 14:36:20.666 GMT [14608] DEBUG: IsoLocaleName() executed;\n>> locale: \"pt_BR\"\n>> 2020-04-28 14:36:20.673 GMT [14608] ERRO: divisão por zero\n>> 2020-04-28 14:36:20.673 GMT [14608] COMANDO: Select 1/0;\n>>\n>\n> AFAICT, the good result is coming from the new logic.\n>\nYes, I also feel the same.\n\n-- \nRegards,\nDavinder\nEnterpriseDB: http://www.enterprisedb.com\n\nOn Tue, Apr 28, 2020 at 11:45 PM Juan José Santamaría Flecha <juanjo.santamaria@gmail.com> wrote:On Tue, Apr 28, 2020 at 5:16 PM davinder singh <davindersingh2692@gmail.com> wrote:I have tested with different locales with codepages including above. There are few which return different locale code but the error messages in both the cases are the same. I have attached the test and log files.But there was one case, where locale code and error messages both are different.Portuguese_Brazil.1252log from [1]2020-04-28 14:27:39.785 GMT [2284] DEBUG:  IsoLocaleName() executed; locale: \"pt\"2020-04-28 14:27:39.787 GMT [2284] ERROR:  division by zero2020-04-28 14:27:39.787 GMT [2284] STATEMENT:  Select 1/0;log from [2]2020-04-28 14:36:20.666 GMT [14608] DEBUG:  IsoLocaleName() executed; locale: \"pt_BR\"2020-04-28 14:36:20.673 GMT [14608] ERRO:  divisão por zero2020-04-28 14:36:20.673 GMT [14608] COMANDO:  Select 1/0;AFAICT, the good result is coming from the new logic. 
\nYes, I also feel the same.-- Regards,DavinderEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Wed, 29 Apr 2020 10:50:38 +0530", "msg_from": "davinder singh <davindersingh2692@gmail.com>", "msg_from_op": true, "msg_subject": "Re: PG compilation error with Visual Studio 2015/2017/2019" }, { "msg_contents": "On Tue, Apr 28, 2020 at 11:39 PM Juan José Santamaría Flecha\n<juanjo.santamaria@gmail.com> wrote:\n>\n>\n> On Mon, Apr 27, 2020 at 1:20 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>>\n>> I think we should backpatch this till 9.5 as I could see the changes\n>> made by commit 0fb54de9 to support MSVC2015 are present in that branch\n>> and the same is mentioned in the commit message. Would you like to\n>> prepare patches (and test those) for back-branches?\n>\n>\n> I do not have means to test these patches using Visual Studio previous to 2012, but please find attached patches for 9.5-9.6 and 10-11-12 as of version 14. The extension is 'txt' not to break the cfbot.\n>\n\nI see some problems with these patches.\n1.\n+ loct = _create_locale(LC_CTYPE, winlocname);\n+ if (loct != NULL)\n+ {\n+ lcid = loct->locinfo->lc_handle[LC_CTYPE];\n+ if (lcid == 0)\n+ lcid = MAKELCID(MAKELANGID(LANG_ENGLISH, SUBLANG_ENGLISH_US), SORT_DEFAULT);\n+ _free_locale(loct);\n+ }\n\n if (!GetLocaleInfoA(lcid, LOCALE_SISO639LANGNAME, isolang, sizeof(isolang)))\n return NULL;\n if (!GetLocaleInfoA(lcid, LOCALE_SISO3166CTRYNAME, isocrty, sizeof(isocrty)))\n return NULL;\n\nIn the above change even if loct is NULL, we call GetLocaleInfoA()\nwhich is wrong and the same is not a problem without the patch.\n\n2. I think the code in IsoLocaleName is quite confusing and difficult\nto understand in back branches and the changes due to this bug-fix\nmade it more complicated. I am thinking to refactor it such that the\ncode for (_MSC_VER >= 1700 && _MSC_VER < 1900), (_MSC_VER >= 1900)\nand last #else code (the code for version < 17) resides in their own\nfunctions. 
That might make this function easier to understand, what\ndo you think?\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 29 Apr 2020 17:57:22 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: PG compilation error with Visual Studio 2015/2017/2019" }, { "msg_contents": "On Mon, Apr 27, 2020 at 4:50 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> I think we should backpatch this till 9.5 as I could see the changes\n> made by commit 0fb54de9 to support MSVC2015 are present in that branch\n> and the same is mentioned in the commit message.\n>\n\nToday, I was thinking about the pros and cons of backpatching this.\nThe pros are that this is bug-fix and is reported multiple times so it\nis good to backpatch it. The cons are the code in the back branches\nis not very straight forward and this change will make it a bit more\ncomplicated, so we might want to do it only in HEAD. I am not\ncompletely sure about this. What do others think?\n\nMichael, others who have worked in this area, do you have any opinion\non this matter?\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 29 Apr 2020 18:02:57 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: PG compilation error with Visual Studio 2015/2017/2019" }, { "msg_contents": "On Wed, Apr 29, 2020 at 5:57 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n>\n> 2. I think the code in IsoLocaleName is quite confusing and difficult\n> to understand in back branches and the changes due to this bug-fix\n> made it more complicated. 
I am thinking to refactor it such that the\n> code for (_MSC_VER >= 1700 && _MSC_VER < 1900), (_MSC_VER >= 1900)\n> and last #else code (the code for version < 17) resides in their own\n> functions.\n>\n\nAnother possibility could be to add just a branch for (_MSC_VER >=\n1900) and add that code in a separate function without touching other\nparts of this function. That would avoid testing it various versions\nof MSVC.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 29 Apr 2020 19:20:24 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: PG compilation error with Visual Studio 2015/2017/2019" }, { "msg_contents": "On Wed, Apr 29, 2020 at 3:50 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n\n> On Wed, Apr 29, 2020 at 5:57 PM Amit Kapila <amit.kapila16@gmail.com>\n> wrote:\n> >\n> > 2. I think the code in IsoLocaleName is quite confusing and difficult\n> > to understand in back branches and the changes due to this bug-fix\n> > made it more complicated. I am thinking to refactor it such that the\n> > code for (_MSC_VER >= 1700 && _MSC_VER < 1900), (_MSC_VER >= 1900)\n> > and last #else code (the code for version < 17) resides in their own\n> > functions.\n> >\n>\n> Another possibility could be to add just a branch for (_MSC_VER >=\n> 1900) and add that code in a separate function without touching other\n> parts of this function. That would avoid testing it various versions\n> of MSVC.\n>\n\nI was not aware of how many switches IsoLocaleName() already had before\ntrying to backpatch. I think offering an alternative might be a cleaner\napproach, I will work on that.\n\nRegards,\n\nJuan José Santamaría Flecha\n\nOn Wed, Apr 29, 2020 at 3:50 PM Amit Kapila <amit.kapila16@gmail.com> wrote:On Wed, Apr 29, 2020 at 5:57 PM Amit Kapila <amit.kapila16@gmail.com> wrote:>\n> 2. 
I think the code in IsoLocaleName is quite confusing and difficult\n> to understand in back branches and the changes due to this bug-fix\n> made it more complicated.  I am thinking to refactor it such that the\n> code for (_MSC_VER >= 1700 && _MSC_VER  < 1900), (_MSC_VER >= 1900)\n> and last #else code (the code for version < 17) resides in their own\n> functions.\n>\n\nAnother possibility could be to add just a branch for (_MSC_VER >=\n1900) and add that code in a separate function without touching other\nparts of this function.  That would avoid testing it various versions\nof MSVC.I was not aware of how many switches IsoLocaleName() already had before trying to backpatch. I think offering an alternative might be a cleaner approach, I will work on that.Regards,Juan José Santamaría Flecha", "msg_date": "Wed, 29 Apr 2020 18:05:31 +0200", "msg_from": "=?UTF-8?Q?Juan_Jos=C3=A9_Santamar=C3=ADa_Flecha?=\n <juanjo.santamaria@gmail.com>", "msg_from_op": false, "msg_subject": "Re: PG compilation error with Visual Studio 2015/2017/2019" }, { "msg_contents": "On Wed, Apr 29, 2020 at 9:36 PM Juan José Santamaría Flecha\n<juanjo.santamaria@gmail.com> wrote:\n>\n> On Wed, Apr 29, 2020 at 3:50 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>>\n>> On Wed, Apr 29, 2020 at 5:57 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>> >\n>> > 2. I think the code in IsoLocaleName is quite confusing and difficult\n>> > to understand in back branches and the changes due to this bug-fix\n>> > made it more complicated. I am thinking to refactor it such that the\n>> > code for (_MSC_VER >= 1700 && _MSC_VER < 1900), (_MSC_VER >= 1900)\n>> > and last #else code (the code for version < 17) resides in their own\n>> > functions.\n>> >\n>>\n>> Another possibility could be to add just a branch for (_MSC_VER >=\n>> 1900) and add that code in a separate function without touching other\n>> parts of this function. 
That would avoid testing it various versions\n>> of MSVC.\n>\n>\n> I was not aware of how many switches IsoLocaleName() already had before trying to backpatch. I think offering an alternative might be a cleaner approach, I will work on that.\n>\n\nOkay, thanks. The key point to keep in mind is to avoid touching the\ncode related to prior MSVC versions as we might not have set up to\ntest those.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 30 Apr 2020 08:36:55 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: PG compilation error with Visual Studio 2015/2017/2019" }, { "msg_contents": "On Thu, Apr 30, 2020 at 5:07 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n\n> >\n> > I was not aware of how many switches IsoLocaleName() already had before\n> trying to backpatch. I think offering an alternative might be a cleaner\n> approach, I will work on that.\n> >\n>\n> Okay, thanks. The key point to keep in mind is to avoid touching the\n> code related to prior MSVC versions as we might not have set up to\n> test those.\n>\n\nPlease find attached a new version following this approach.\n\nRegards,\n\nJuan José Santamaría Flecha", "msg_date": "Mon, 4 May 2020 15:29:15 +0200", "msg_from": "=?UTF-8?Q?Juan_Jos=C3=A9_Santamar=C3=ADa_Flecha?=\n <juanjo.santamaria@gmail.com>", "msg_from_op": false, "msg_subject": "Re: PG compilation error with Visual Studio 2015/2017/2019" }, { "msg_contents": "On Mon, May 4, 2020 at 6:59 PM Juan José Santamaría Flecha\n<juanjo.santamaria@gmail.com> wrote:\n>\n> On Thu, Apr 30, 2020 at 5:07 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>>\n>>\n>> Okay, thanks. The key point to keep in mind is to avoid touching the\n>> code related to prior MSVC versions as we might not have set up to\n>> test those.\n>\n>\n> Please find attached a new version following this approach.\n>\n\nThanks for the new version. 
I have found few problems and made\nchanges accordingly. In back-branch patches, I found one major\nproblem.\n\n+#if (_MSC_VER >= 1900) /* Visual Studio 2015 or later */\n+ rc = get_iso_localename(winlocname, iso_lc_messages);\n+#else\n\nHere, we need to free loct, otherwise, it will leak each time this\nfunction is called on a newer MSVC version. Also, call to\n_create_locale is redundant in _MSC_VER >= 1900. So, I have tried to\nwrite it differently, see what do you think about it?\n\n*\n+ * BEWARE: this function is WIN32 specific, so wchar_t are UTF-16.\nI am not sure how much relevant is this comment so removed for now.\n\nApart from that, I have made a few other changes in comments, fixed\ntypos, and ran pgindent. Let me know what do you think of attached\npatches?\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Tue, 5 May 2020 17:04:24 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: PG compilation error with Visual Studio 2015/2017/2019" }, { "msg_contents": "On Tue, May 5, 2020 at 1:34 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n\n>\n> Apart from that, I have made a few other changes in comments, fixed\n> typos, and ran pgindent. Let me know what do you think of attached\n> patches?\n>\n\nThe patches are definitely in better shape.\n\nI think that the definition of get_iso_localename() should be consistent\nacross all versions, that is HEAD like back-patched.\n\nRegards,\n\nJuan José Santamaría Flecha\n\nOn Tue, May 5, 2020 at 1:34 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\nApart from that, I have made a few other changes in comments, fixed\ntypos, and ran pgindent.  
Let me know what do you think of attached\npatches?The patches are definitely in better shape.I think that the definition of get_iso_localename() should be consistent across all versions, that is HEAD like back-patched.Regards,Juan José Santamaría Flecha", "msg_date": "Wed, 6 May 2020 00:48:28 +0200", "msg_from": "=?UTF-8?Q?Juan_Jos=C3=A9_Santamar=C3=ADa_Flecha?=\n <juanjo.santamaria@gmail.com>", "msg_from_op": false, "msg_subject": "Re: PG compilation error with Visual Studio 2015/2017/2019" }, { "msg_contents": "On Wed, May 6, 2020 at 4:19 AM Juan José Santamaría Flecha\n<juanjo.santamaria@gmail.com> wrote:\n>\n> On Tue, May 5, 2020 at 1:34 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>>\n>>\n>> Apart from that, I have made a few other changes in comments, fixed\n>> typos, and ran pgindent. Let me know what do you think of attached\n>> patches?\n>\n>\n> The patches are definitely in better shape.\n>\n> I think that the definition of get_iso_localename() should be consistent across all versions, that is HEAD like back-patched.\n>\n\nFair enough. I have changed such that get_iso_localename is the same\nin HEAD as it is backbranch patches. I have attached backbranch\npatches for the ease of verification.\n\n\n--\nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Wed, 6 May 2020 10:10:54 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: PG compilation error with Visual Studio 2015/2017/2019" }, { "msg_contents": "On Wed, May 6, 2020 at 6:41 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n\n> On Wed, May 6, 2020 at 4:19 AM Juan José Santamaría Flecha\n> >\n> > I think that the definition of get_iso_localename() should be consistent\n> across all versions, that is HEAD like back-patched.\n> >\n>\n> Fair enough. I have changed such that get_iso_localename is the same\n> in HEAD as it is backbranch patches. 
I have attached backbranch\n> patches for the ease of verification.\n>\n\nLGTM, and I see no regression in the manual SQL tests, so no further\ncomments from my part.\n\nRegards,\n\nJuan José Santamaría Flecha\n\nOn Wed, May 6, 2020 at 6:41 AM Amit Kapila <amit.kapila16@gmail.com> wrote:On Wed, May 6, 2020 at 4:19 AM Juan José Santamaría Flecha>\n> I think that the definition of get_iso_localename() should be consistent across all versions, that is HEAD like back-patched.\n>\n\nFair enough.  I have changed such that get_iso_localename is the same\nin HEAD as it is backbranch patches.  I have attached backbranch\npatches for the ease of verification. LGTM, and I see no regression in the manual SQL tests, so no further comments from my part.Regards,Juan José Santamaría Flecha", "msg_date": "Wed, 6 May 2020 19:30:50 +0200", "msg_from": "=?UTF-8?Q?Juan_Jos=C3=A9_Santamar=C3=ADa_Flecha?=\n <juanjo.santamaria@gmail.com>", "msg_from_op": false, "msg_subject": "Re: PG compilation error with Visual Studio 2015/2017/2019" }, { "msg_contents": "On Wed, May 6, 2020 at 10:11 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n\n>\n> > I think that the definition of get_iso_localename() should be consistent\n> across all versions, that is HEAD like back-patched.\n> >\n>\n> Fair enough. I have changed such that get_iso_localename is the same\n> in HEAD as it is backbranch patches. I have attached backbranch\n> patches for the ease of verification.\n>\n\nI have verified/tested the latest patches for all versions and didn't find\nany problem.\n-- \nRegards,\nDavinder\nEnterpriseDB: http://www.enterprisedb.com\n\nOn Wed, May 6, 2020 at 10:11 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> I think that the definition of get_iso_localename() should be consistent across all versions, that is HEAD like back-patched.\n>\n\nFair enough.  I have changed such that get_iso_localename is the same\nin HEAD as it is backbranch patches.  
I have attached backbranch\npatches for the ease of verification.I have verified/tested the latest patches for all versions and didn't find any problem.-- Regards,DavinderEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Thu, 7 May 2020 09:54:51 +0530", "msg_from": "davinder singh <davindersingh2692@gmail.com>", "msg_from_op": true, "msg_subject": "Re: PG compilation error with Visual Studio 2015/2017/2019" }, { "msg_contents": "On Wed, May 6, 2020 at 11:01 PM Juan José Santamaría Flecha\n<juanjo.santamaria@gmail.com> wrote:\n>\n> On Wed, May 6, 2020 at 6:41 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>>\n>> On Wed, May 6, 2020 at 4:19 AM Juan José Santamaría Flecha\n>> >\n>> > I think that the definition of get_iso_localename() should be consistent across all versions, that is HEAD like back-patched.\n>> >\n>>\n>> Fair enough. I have changed such that get_iso_localename is the same\n>> in HEAD as it is backbranch patches. I have attached backbranch\n>> patches for the ease of verification.\n>\n>\n> LGTM, and I see no regression in the manual SQL tests, so no further comments from my part.\n>\n\nThanks, Juan and Davinder for verifying the latest patches. I think\nthis patch is ready to commit unless someone else has any comments. I\nwill commit and backpatch this early next week (probably on Monday)\nunless I see more comments.\n\nTo summarize, this is a longstanding issue of Windows build (NLS\nenabled builds) for Visual Studio 2015 and later releases. Visual\nStudio 2015 and later versions should still be able to do the same as\nVisual Studio 2012, but the declaration of locale_name is missing in\n_locale_t, causing the code compilation to fail, hence this patch\nfalls back\ninstead on to enumerating all system locales by using\nEnumSystemLocalesEx to find the required locale name. 
If the input\nargument is in Unix-style then we can get ISO Locale name directly by\nusing GetLocaleInfoEx() with LCType as LOCALE_SNAME.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 7 May 2020 11:21:04 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: PG compilation error with Visual Studio 2015/2017/2019" }, { "msg_contents": "Amit Kapila <amit.kapila16@gmail.com> writes:\n> Thanks, Juan and Davinder for verifying the latest patches. I think\n> this patch is ready to commit unless someone else has any comments. I\n> will commit and backpatch this early next week (probably on Monday)\n> unless I see more comments.\n\nMonday is a back-branch release wrap day. If you push a back-patched\nchange on that day (or immediately before it), it had better be a security\nfix.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 07 May 2020 02:27:07 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: PG compilation error with Visual Studio 2015/2017/2019" }, { "msg_contents": "On Thu, May 7, 2020 at 11:57 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Amit Kapila <amit.kapila16@gmail.com> writes:\n> > Thanks, Juan and Davinder for verifying the latest patches. I think\n> > this patch is ready to commit unless someone else has any comments. I\n> > will commit and backpatch this early next week (probably on Monday)\n> > unless I see more comments.\n>\n> Monday is a back-branch release wrap day. If you push a back-patched\n> change on that day (or immediately before it), it had better be a security\n> fix.\n>\n\nOkay. 
I'll wait in that case and will push it after that.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 7 May 2020 14:16:32 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: PG compilation error with Visual Studio 2015/2017/2019" }, { "msg_contents": "On Thu, May 7, 2020 at 11:57 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Amit Kapila <amit.kapila16@gmail.com> writes:\n> > Thanks, Juan and Davinder for verifying the latest patches. I think\n> > this patch is ready to commit unless someone else has any comments. I\n> > will commit and backpatch this early next week (probably on Monday)\n> > unless I see more comments.\n>\n> Monday is a back-branch release wrap day.\n>\n\nHow can I get the information about release wrap day? The minor\nrelease dates are mentioned on the website [1], but this information\nis not available. Do we keep it some-fixed number of days before\nminor release?\n\n\n[1] - https://www.postgresql.org/developer/roadmap/\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 7 May 2020 16:10:17 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: PG compilation error with Visual Studio 2015/2017/2019" }, { "msg_contents": "Amit Kapila <amit.kapila16@gmail.com> writes:\n> On Thu, May 7, 2020 at 11:57 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> Monday is a back-branch release wrap day.\n\n> How can I get the information about release wrap day? The minor\n> release dates are mentioned on the website [1], but this information\n> is not available. Do we keep it some-fixed number of days before\n> minor release?\n\nYes, we've been using the same release schedule for years. 
The\nactual tarball wrap is always on a Monday --- if I'm doing it,\nas is usually the case, I try to get it done circa 2100-2300 UTC.\nThere's a \"quiet period\" where we discourage nonessential commits\nboth before (starting perhaps on the Saturday) and after (until\nthe releases are tagged in git, about 24 hours after wrap).\nThe delay till public announcement on the Thursday is so the\npackagers can produce their builds. Likewise, the reason for\na wrap-to-tag delay is in case the packagers find anything that\nforces a re-wrap, which has happened a few times.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 07 May 2020 09:21:31 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: PG compilation error with Visual Studio 2015/2017/2019" }, { "msg_contents": "On Thu, May 7, 2020 at 6:51 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Amit Kapila <amit.kapila16@gmail.com> writes:\n> > On Thu, May 7, 2020 at 11:57 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >> Monday is a back-branch release wrap day.\n>\n> > How can I get the information about release wrap day? The minor\n> > release dates are mentioned on the website [1], but this information\n> > is not available. Do we keep it some-fixed number of days before\n> > minor release?\n>\n> Yes, we've been using the same release schedule for years. The\n> actual tarball wrap is always on a Monday --- if I'm doing it,\n> as is usually the case, I try to get it done circa 2100-2300 UTC.\n> There's a \"quiet period\" where we discourage nonessential commits\n> both before (starting perhaps on the Saturday) and after (until\n> the releases are tagged in git, about 24 hours after wrap).\n> The delay till public announcement on the Thursday is so the\n> packagers can produce their builds. 
Likewise, the reason for\n> a wrap-to-tag delay is in case the packagers find anything that\n> forces a re-wrap, which has happened a few times.\n>\n\nNow that branches are tagged, I would like to commit and backpatch\nthis patch tomorrow unless there are any more comments/objections.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 13 May 2020 19:14:55 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: PG compilation error with Visual Studio 2015/2017/2019" }, { "msg_contents": "Amit Kapila <amit.kapila16@gmail.com> writes:\n> Now that branches are tagged, I would like to commit and backpatch\n> this patch tomorrow unless there are any more comments/objections.\n\nThe \"quiet period\" is over as soon as the tags appear in git.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 13 May 2020 10:04:51 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: PG compilation error with Visual Studio 2015/2017/2019" }, { "msg_contents": "On Wed, May 13, 2020 at 7:34 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Amit Kapila <amit.kapila16@gmail.com> writes:\n> > Now that branches are tagged, I would like to commit and backpatch\n> > this patch tomorrow unless there are any more comments/objections.\n>\n> The \"quiet period\" is over as soon as the tags appear in git.\n>\n\nPushed.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 14 May 2020 14:18:58 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: PG compilation error with Visual Studio 2015/2017/2019" }, { "msg_contents": "Em qui., 14 de mai. 
de 2020 às 05:49, Amit Kapila <amit.kapila16@gmail.com>\nescreveu:\n\n> On Wed, May 13, 2020 at 7:34 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >\n> > Amit Kapila <amit.kapila16@gmail.com> writes:\n> > > Now that branches are tagged, I would like to commit and backpatch\n> > > this patch tomorrow unless there are any more comments/objections.\n> >\n> > The \"quiet period\" is over as soon as the tags appear in git.\n> >\n>\n> Pushed.\n>\nThank you for the commit.\n\nregards,\nRanier Vilela\n", "msg_date": "Thu, 14 May 2020 06:06:53 -0300", "msg_from": "Ranier Vilela <ranier.vf@gmail.com>", "msg_from_op": false, "msg_subject": "Re: PG compilation error with Visual Studio 2015/2017/2019" }, { "msg_contents": "On Thu, May 14, 2020 at 11:07 AM Ranier Vilela <ranier.vf@gmail.com> wrote:\n\n> Em qui., 14 de mai. de 2020 às 05:49, Amit Kapila <amit.kapila16@gmail.com>\n> escreveu:\n>\n>> On Wed, May 13, 2020 at 7:34 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> >\n>> > Amit Kapila <amit.kapila16@gmail.com> writes:\n>> > > Now that branches are tagged, I would like to commit and backpatch\n>> > > this patch tomorrow unless there are any more comments/objections.\n>> >\n>> > The \"quiet period\" is over as soon as the tags appear in git.\n>> >\n>>\n>> Pushed.\n>>\n> Thank you for the commit.\n>\n\nGreat. Thanks to everyone involved.\n", "msg_date": "Thu, 14 May 2020 11:12:03 +0200", "msg_from": "=?UTF-8?Q?Juan_Jos=C3=A9_Santamar=C3=ADa_Flecha?=\n <juanjo.santamaria@gmail.com>", "msg_from_op": false, "msg_subject": "Re: PG compilation error with Visual Studio 2015/2017/2019" } ]
[ { "msg_contents": "Hi,\n\nRight now, pg_checksums cannot check a base backup directory taken by\npg_basebackup:\n\ninitdb -k data > /dev/null\npg_ctl -D data -l logfile start > /dev/null\npg_basebackup -D data_backup\npg_checksums -D data_backup \npg_checksums: error: cluster must be shut down\n\nSo users need to start and then stop postgres on the base backup\ndirectory in order to run pg_checksums on it. This is due to this check\nin pg_checksums.c:\n\n if (ControlFile->state != DB_SHUTDOWNED &&\n ControlFile->state != DB_SHUTDOWNED_IN_RECOVERY)\n {\n pg_log_error(\"cluster must be shut down\");\n\nI think we can allow checking of base backups if we make sure\nbackup_label exists in the data directory or am I missing something?\nI think we need to have similar checks about pages changed during base\nbackup, so this patch ignores checksum failures between the checkpoint\nLSN and (as a reasonable upper bound) the last LSN of the last existing\ntransaction log file. If no xlog files exist (the --wal-method=none\ncase), the last LSN of the checkpoint WAL segment is taken.\n\n\nMichael\n\n-- \nMichael Banck\nProjektleiter / Senior Berater\nTel.: +49 2166 9901-171\nFax: +49 2166 9901-100\nEmail: michael.banck@credativ.de\n\ncredativ GmbH, HRB Mönchengladbach 12080\nUSt-ID-Nummer: DE204566209\nTrompeterallee 108, 41189 Mönchengladbach\nGeschäftsführung: Dr. 
Michael Meskes, Jörg Folz, Sascha Heuer\n\nUnser Umgang mit personenbezogenen Daten unterliegt\nfolgenden Bestimmungen: https://www.credativ.de/datenschutz", "msg_date": "Mon, 06 Apr 2020 13:26:17 +0200", "msg_from": "Michael Banck <michael.banck@credativ.de>", "msg_from_op": true, "msg_subject": "[patch] Fix pg_checksums to allow checking of offline base backup\n directories" }, { "msg_contents": "On Mon, Apr 06, 2020 at 01:26:17PM +0200, Michael Banck wrote:\n> I think we can allow checking of base backups if we make sure\n> backup_label exists in the data directory or am I missing something?\n> I think we need to have similar checks about pages changed during base\n> backup, so this patch ignores checksum failures between the checkpoint\n> LSN and (as a reasonable upper bound) the last LSN of the last existing\n> transaction log file. If no xlog files exist (the --wal-method=none\n> case), the last LSN of the checkpoint WAL segment is taken.\n\nHave you considered that backup_label files can exist in the data\ndirectory of a live cluster? That's not the case with pg_basebackup\nor non-exclusive backups with the SQL interface, but that's possible\nwith the SQL interface and an exclusive backup running.\n\nFWIW, my take on this matter is that you should consider checksum\nverification as one step to check the sanity of a base backup, meaning\nthat you have to restore the base backup first, then let it reach its\nconsistent LSN position, and finally stop the cluster cleanly to make\nsure that everything is safely flushed on disk and consistent.\nAttempting to verify checksums from a raw base backup would most\nlikely lead to false positives, and my guess is that your patch has\nissues in this area. 
Hint at quick glance: the code path setting\ninsertLimitLSN where you actually don't use any APIs from\nxlogreader.h.\n--\nMichael", "msg_date": "Tue, 7 Apr 2020 17:07:44 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: [patch] Fix pg_checksums to allow checking of offline base\n backup directories" }, { "msg_contents": "Hi,\n\nAm Dienstag, den 07.04.2020, 17:07 +0900 schrieb Michael Paquier:\n> On Mon, Apr 06, 2020 at 01:26:17PM +0200, Michael Banck wrote:\n> > I think we can allow checking of base backups if we make sure\n> > backup_label exists in the data directory or am I missing something?\n> > I think we need to have similar checks about pages changed during base\n> > backup, so this patch ignores checksum failures between the checkpoint\n> > LSN and (as a reasonable upper bound) the last LSN of the last existing\n> > transaction log file. If no xlog files exist (the --wal-method=none\n> > case), the last LSN of the checkpoint WAL segment is taken.\n> \n> Have you considered that backup_label files can exist in the data\n> directory of a live cluster? That's not the case with pg_basebackup\n> or non-exclusive backups with the SQL interface, but that's possible\n> with the SQL interface and an exclusive backup running.\n\nI see, that's what I was missing. I think it is unfortunate that\npg_control does not record an ongoing (base)backup in the state or\nelsewhere. Maybe one could look at the `BACKUP METHOD' field in\nbackup_label, which is (always?) 
`pg_start_backup' for the non-exclusive \nbackup and `streamed' for pg_basebackup.\n\n> FWIW, my take on this matter is that you should consider checksum\n> verification as one step to check the sanity of a base backup, meaning\n> that you have to restore the base backup first, then let it reach its\n> consistent LSN position, and finally stop the cluster cleanly to make\n> sure that everything is safely flushed on disk and consistent.\n\nThat's a full restore and it should certainly be encouraged that\norganizations do full restore tests regularly, but (not only) if you\nhave lots of big instances, that is often not the case.\n\nSo I think making it easier to check plain base backups would be\nhelpful, even if some part of recently changed data might not get\nchecked.\n\n> Attempting to verify checksums from a raw base backup would most\n> likely lead to false positives, and my guess is that your patch has\n> issues in this area. Hint at quick glance: the code path setting\n> insertLimitLSN where you actually don't use any APIs from\n> xlogreader.h.\n\nI evaluated using xlogreader to fetch the BACKUP STOP position from the\nWAL but then discarded that for now as possibly being overkill and went\nthe route of a slightly larger upper bound by taking the following WAL\nsegment and not the BACKUP STOP position. But I can take a look at\nimplementing the more fine-grained method if needed.\n\n\nMichael\n\n-- \nMichael Banck\nProjektleiter / Senior Berater\nTel.: +49 2166 9901-171\nFax: +49 2166 9901-100\nEmail: michael.banck@credativ.de\n\ncredativ GmbH, HRB Mönchengladbach 12080\nUSt-ID-Nummer: DE204566209\nTrompeterallee 108, 41189 Mönchengladbach\nGeschäftsführung: Dr. 
Michael Meskes, Jörg Folz, Sascha Heuer\n\nUnser Umgang mit personenbezogenen Daten unterliegt\nfolgenden Bestimmungen: https://www.credativ.de/datenschutz\n\n\n\n", "msg_date": "Tue, 07 Apr 2020 11:02:23 +0200", "msg_from": "Michael Banck <michael.banck@credativ.de>", "msg_from_op": true, "msg_subject": "Re: [patch] Fix pg_checksums to allow checking of offline base\n backup directories" } ]
[ { "msg_contents": "The following bug has been logged on the website:\n\nBug reference: 16346\nLogged by: Alexander Lakhin\nEmail address: exclusion@gmail.com\nPostgreSQL version: 12.2\nOperating system: Ubuntu 18.04\nDescription: \n\nWhen using pg_upgrade on a database with the following contents:\r\nCREATE FUNCTION public.test_event_trigger() RETURNS event_trigger\r\n LANGUAGE plpgsql\r\n AS $$\r\nBEGIN\r\n RAISE NOTICE 'test_event_trigger: % %', tg_event, tg_tag;\r\nEND\r\n$$;\r\n\r\nCREATE EVENT TRIGGER regress_event_trigger3 ON ddl_command_start\r\n EXECUTE PROCEDURE public.test_event_trigger();\r\n\r\nCOMMENT ON EVENT TRIGGER regress_event_trigger3 IS 'test comment';\r\n\r\nI get:\r\nRestoring global objects in the new cluster ok\r\nRestoring database schemas in the new cluster\r\n postgres \r\n*failure*\r\n\r\nConsult the last few lines of \"pg_upgrade_dump_14174.log\" for\r\nthe probable cause of the failure.\r\nFailure, exiting\r\n\r\npg_upgrade_dump_14174.log contains:\r\ncommand: \"/src/postgres/tmp_install/usr/local/pgsql/bin/pg_restore\" --host\n/src/postgres --port 50432 --username postgres --clean --create\n--exit-on-error --verbose --dbname template1 \"pg_upgrade_dump_14174.custom\"\n>> \"pg_upgrade_dump_14174.log\" 2>&1\r\npg_restore: connecting to database for restore\r\npg_restore: dropping DATABASE PROPERTIES postgres\r\npg_restore: dropping DATABASE postgres\r\npg_restore: creating DATABASE \"postgres\"\r\npg_restore: connecting to new database \"postgres\"\r\npg_restore: connecting to database \"postgres\" as user \"postgres\"\r\npg_restore: creating COMMENT \"DATABASE \"postgres\"\"\r\npg_restore: creating DATABASE PROPERTIES \"postgres\"\r\npg_restore: connecting to new database \"postgres\"\r\npg_restore: connecting to database \"postgres\" as user \"postgres\"\r\npg_restore: creating pg_largeobject \"pg_largeobject\"\r\npg_restore: creating FUNCTION \"public.test_event_trigger()\"\r\npg_restore: creating COMMENT \"EVENT TRIGGER 
\"regress_event_trigger3\"\"\r\npg_restore: while PROCESSING TOC:\r\npg_restore: from TOC entry 3705; 0 0 COMMENT EVENT TRIGGER\n\"regress_event_trigger3\" postgres\r\npg_restore: error: could not execute query: ERROR: event trigger\n\"regress_event_trigger3\" does not exist\r\nCommand was: COMMENT ON EVENT TRIGGER \"regress_event_trigger3\" IS 'test\ncomment';\r\n\r\nIt looks like the commit 4c40b27b broke this.", "msg_date": "Mon, 06 Apr 2020 15:00:00 +0000", "msg_from": "PG Bug reporting form <noreply@postgresql.org>", "msg_from_op": true, "msg_subject": "BUG #16346: pg_upgrade fails on a trigger with a comment" }, { "msg_contents": "As you have mentioned, I have verified that indeed commit 4c40b27b broke\nthis.\n\nIn this particular commit moves restoration of materialized views and event\ntriggers to the last phase. Perhaps, comments should also be moved to this\nphase as there may comments on either of these types of objects.\n\nAttached is a patch that resolves this issue. I've verified that it resolve\nthe upgrade (and restore issue) introduced by 4c40b27b. 
I'll test this\npatch in a little more detail tomorrow.\n\nOn Mon, Apr 6, 2020 at 8:26 PM PG Bug reporting form <noreply@postgresql.org>\nwrote:\n\n> The following bug has been logged on the website:\n>\n> Bug reference: 16346\n> Logged by: Alexander Lakhin\n> Email address: exclusion@gmail.com\n> PostgreSQL version: 12.2\n> Operating system: Ubuntu 18.04\n> Description:\n>\n> When using pg_upgrade on a database with the following contents:\n> CREATE FUNCTION public.test_event_trigger() RETURNS event_trigger\n> LANGUAGE plpgsql\n> AS $$\n> BEGIN\n> RAISE NOTICE 'test_event_trigger: % %', tg_event, tg_tag;\n> END\n> $$;\n>\n> CREATE EVENT TRIGGER regress_event_trigger3 ON ddl_command_start\n> EXECUTE PROCEDURE public.test_event_trigger();\n>\n> COMMENT ON EVENT TRIGGER regress_event_trigger3 IS 'test comment';\n>\n> I get:\n> Restoring global objects in the new cluster ok\n> Restoring database schemas in the new cluster\n> postgres\n> *failure*\n>\n> Consult the last few lines of \"pg_upgrade_dump_14174.log\" for\n> the probable cause of the failure.\n> Failure, exiting\n>\n> pg_upgrade_dump_14174.log contains:\n> command: \"/src/postgres/tmp_install/usr/local/pgsql/bin/pg_restore\" --host\n> /src/postgres --port 50432 --username postgres --clean --create\n> --exit-on-error --verbose --dbname template1 \"pg_upgrade_dump_14174.custom\"\n> >> \"pg_upgrade_dump_14174.log\" 2>&1\n> pg_restore: connecting to database for restore\n> pg_restore: dropping DATABASE PROPERTIES postgres\n> pg_restore: dropping DATABASE postgres\n> pg_restore: creating DATABASE \"postgres\"\n> pg_restore: connecting to new database \"postgres\"\n> pg_restore: connecting to database \"postgres\" as user \"postgres\"\n> pg_restore: creating COMMENT \"DATABASE \"postgres\"\"\n> pg_restore: creating DATABASE PROPERTIES \"postgres\"\n> pg_restore: connecting to new database \"postgres\"\n> pg_restore: connecting to database \"postgres\" as user \"postgres\"\n> pg_restore: creating pg_largeobject 
\"pg_largeobject\"\n> pg_restore: creating FUNCTION \"public.test_event_trigger()\"\n> pg_restore: creating COMMENT \"EVENT TRIGGER \"regress_event_trigger3\"\"\n> pg_restore: while PROCESSING TOC:\n> pg_restore: from TOC entry 3705; 0 0 COMMENT EVENT TRIGGER\n> \"regress_event_trigger3\" postgres\n> pg_restore: error: could not execute query: ERROR: event trigger\n> \"regress_event_trigger3\" does not exist\n> Command was: COMMENT ON EVENT TRIGGER \"regress_event_trigger3\" IS 'test\n> comment';\n>\n> It looks like the commit 4c40b27b broke this.\n>\n>\n\n-- \nHighgo Software (Canada/China/Pakistan)\nURL : www.highgo.ca\nADDR: 10318 WHALLEY BLVD, Surrey, BC\nCELL:+923335449950 EMAIL: mailto:hamid.akhtar@highgo.ca\nSKYPE: engineeredvirus", "msg_date": "Wed, 8 Apr 2020 01:10:54 +0500", "msg_from": "Hamid Akhtar <hamid.akhtar@gmail.com>", "msg_from_op": false, "msg_subject": "Re: BUG #16346: pg_upgrade fails on a trigger with a comment" }, { "msg_contents": "I have tested the patch in a little more detail.\n(1) Verified that it fixes the bug\n(2) Ran regression tests; all are passing.\n\nTo recap, the attached patch moves restoration of comments to the\nRESTORE_PASS_POST_ACL. This ensures that comments are\nrestored in a PASS when essentially all required objects are created\nincluding event triggers and materialized views (and any other db\nobjects).\n\nThis patch is good from my side.\n\nOn Wed, Apr 8, 2020 at 1:10 AM Hamid Akhtar <hamid.akhtar@gmail.com> wrote:\n\n> As you have mentioned, I have verified that indeed commit 4c40b27b broke\n> this.\n>\n> In this particular commit moves restoration of materialized views and\n> event triggers to the last phase. Perhaps, comments should also be moved to\n> this phase as there may comments on either of these types of objects.\n>\n> Attached is a patch that resolves this issue. I've verified that it\n> resolve the upgrade (and restore issue) introduced by 4c40b27b. 
I'll test\n> this patch in a little more detail tomorrow.\n>\n> On Mon, Apr 6, 2020 at 8:26 PM PG Bug reporting form <\n> noreply@postgresql.org> wrote:\n>\n>> The following bug has been logged on the website:\n>>\n>> Bug reference: 16346\n>> Logged by: Alexander Lakhin\n>> Email address: exclusion@gmail.com\n>> PostgreSQL version: 12.2\n>> Operating system: Ubuntu 18.04\n>> Description:\n>>\n>> When using pg_upgrade on a database with the following contents:\n>> CREATE FUNCTION public.test_event_trigger() RETURNS event_trigger\n>> LANGUAGE plpgsql\n>> AS $$\n>> BEGIN\n>> RAISE NOTICE 'test_event_trigger: % %', tg_event, tg_tag;\n>> END\n>> $$;\n>>\n>> CREATE EVENT TRIGGER regress_event_trigger3 ON ddl_command_start\n>> EXECUTE PROCEDURE public.test_event_trigger();\n>>\n>> COMMENT ON EVENT TRIGGER regress_event_trigger3 IS 'test comment';\n>>\n>> I get:\n>> Restoring global objects in the new cluster ok\n>> Restoring database schemas in the new cluster\n>> postgres\n>> *failure*\n>>\n>> Consult the last few lines of \"pg_upgrade_dump_14174.log\" for\n>> the probable cause of the failure.\n>> Failure, exiting\n>>\n>> pg_upgrade_dump_14174.log contains:\n>> command: \"/src/postgres/tmp_install/usr/local/pgsql/bin/pg_restore\" --host\n>> /src/postgres --port 50432 --username postgres --clean --create\n>> --exit-on-error --verbose --dbname template1\n>> \"pg_upgrade_dump_14174.custom\"\n>> >> \"pg_upgrade_dump_14174.log\" 2>&1\n>> pg_restore: connecting to database for restore\n>> pg_restore: dropping DATABASE PROPERTIES postgres\n>> pg_restore: dropping DATABASE postgres\n>> pg_restore: creating DATABASE \"postgres\"\n>> pg_restore: connecting to new database \"postgres\"\n>> pg_restore: connecting to database \"postgres\" as user \"postgres\"\n>> pg_restore: creating COMMENT \"DATABASE \"postgres\"\"\n>> pg_restore: creating DATABASE PROPERTIES \"postgres\"\n>> pg_restore: connecting to new database \"postgres\"\n>> pg_restore: connecting to database \"postgres\" as 
user \"postgres\"\n>> pg_restore: creating pg_largeobject \"pg_largeobject\"\n>> pg_restore: creating FUNCTION \"public.test_event_trigger()\"\n>> pg_restore: creating COMMENT \"EVENT TRIGGER \"regress_event_trigger3\"\"\n>> pg_restore: while PROCESSING TOC:\n>> pg_restore: from TOC entry 3705; 0 0 COMMENT EVENT TRIGGER\n>> \"regress_event_trigger3\" postgres\n>> pg_restore: error: could not execute query: ERROR: event trigger\n>> \"regress_event_trigger3\" does not exist\n>> Command was: COMMENT ON EVENT TRIGGER \"regress_event_trigger3\" IS 'test\n>> comment';\n>>\n>> It looks like the commit 4c40b27b broke this.\n>>\n>>\n>\n\n> --\n> Highgo Software (Canada/China/Pakistan)\n> URL : www.highgo.ca\n> ADDR: 10318 WHALLEY BLVD, Surrey, BC\n> CELL:+923335449950 EMAIL: mailto:hamid.akhtar@highgo.ca\n> SKYPE: engineeredvirus\n>\n\n\n-- \nHighgo Software (Canada/China/Pakistan)\nURL : www.highgo.ca\nADDR: 10318 WHALLEY BLVD, Surrey, BC\nCELL:+923335449950 EMAIL: mailto:hamid.akhtar@highgo.ca\nSKYPE: engineeredvirus
", "msg_date": "Wed, 8 Apr 2020 12:21:32 +0500", "msg_from": "Hamid Akhtar <hamid.akhtar@gmail.com>", "msg_from_op": false, "msg_subject": "Re: BUG #16346: pg_upgrade fails on a trigger with a comment" }, { "msg_contents": "Hamid Akhtar <hamid.akhtar@gmail.com> writes:\n> To recap, the attached patch moves restoration of comments to the\n> RESTORE_PASS_POST_ACL. This ensures that comments are\n> restored in a PASS when essentially all required objects are created\n> including event triggers and materialized views (and any other db\n> objects).\n\nThis is surely not a good idea as it stands, because it delays restore\nof *all* object comments to the very end. That's not nice for parallel\nrestores, and it also has large impact on pg_dump's behavior in cases\nthat have nothing to do with event triggers; which could cause unforeseen\nproblems.\n\nThe right way is to postpone only event trigger comments. 
I fixed it\nthat way and pushed it.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 08 Apr 2020 11:34:17 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: BUG #16346: pg_upgrade fails on a trigger with a comment" } ]
[ { "msg_contents": "rr is a tool that makes gdb much more useful by supporting recording\nand replaying of the program being debugged. I highly recommend trying\nrr if you're somebody that regularly uses gdb to debug Postgres. rr\nimplements a gdbserver under the hood, so it's very easy to start\nusing once you're already familiar with gdb.\n\nI have written a Wiki page on how to use rr to record and replay\nPostgres executions:\n\nhttps://wiki.postgresql.org/wiki/Getting_a_stack_trace_of_a_running_PostgreSQL_backend_on_Linux/BSD#Recording_Postgres_using_rr_Record_and_Replay_Framework\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Mon, 6 Apr 2020 10:38:31 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": true, "msg_subject": "Using the rr debugging tool to debug Postgres" }, { "msg_contents": "On Mon, 6 Apr 2020 10:38:31 -0700\nPeter Geoghegan <pg@bowt.ie> wrote:\n\n> rr is a tool that makes gdb much more useful by supporting recording\n> and replaying of the program being debugged. I highly recommend trying\n> rr if you're somebody that regularly uses gdb to debug Postgres. rr\n> implements a gdbserver under the hood, so it's very easy to start\n> using once you're already familiar with gdb.\n> \n> I have written a Wiki page on how to use rr to record and replay\n> Postgres executions:\n> \n> https://wiki.postgresql.org/wiki/Getting_a_stack_trace_of_a_running_PostgreSQL_backend_on_Linux/BSD#Recording_Postgres_using_rr_Record_and_Replay_Framework\n\nThank you Peter!\n\n\n", "msg_date": "Tue, 7 Apr 2020 12:36:22 +0200", "msg_from": "Jehan-Guillaume de Rorthais <jgdr@dalibo.com>", "msg_from_op": false, "msg_subject": "Re: Using the rr debugging tool to debug Postgres" }, { "msg_contents": "On Tue, Apr 7, 2020 at 3:36 AM Jehan-Guillaume de Rorthais\n<jgdr@dalibo.com> wrote:\n> Thank you Peter!\n\nNo problem! 
I'm just glad that we have a straightforward workflow for this now.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Tue, 7 Apr 2020 16:02:22 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": true, "msg_subject": "Re: Using the rr debugging tool to debug Postgres" } ]
[ { "msg_contents": "Hi,\n\nI got an error when I was trying to insert a circle using the syntax \n(the 3rd one) specified in the latest document.\n\nhttps://www.postgresql.org/docs/current/datatype-geometric.html#DATATYPE-CIRCLE\n< ( x , y ) , r >\n( ( x , y ) , r )\n   ( x , y ) , r\n     x , y   , r\n\nHere is how to reproduce it.\n\nCREATE TABLE tbl_circle(id serial PRIMARY KEY, a circle);\nINSERT INTO tbl_circle(a) VALUES('( 1 , 1 ) , 5'::circle );\n\nERROR:  invalid input syntax for type circle: \"( 1 , 1 ) , 5\"\nLINE 1: INSERT INTO tbl_circle(a) VALUES('( 1 , 1 ) , 5'::circle );\n\nI made a little change in the \"circle_in\" function, and then I can enter \na circle using the 3rd way.\n\nINSERT INTO tbl_circle(a) VALUES('( 1 , 1 ) , 5'::circle );\nINSERT 0 1\n\nThe fix does generate the same output as the other three ways.\n\nINSERT INTO tbl_circle(a) VALUES( '< ( 1 , 1 ) , 5 >'::circle );\nINSERT INTO tbl_circle(a) VALUES( '( ( 1 , 1 ) , 5 )'::circle );\nINSERT INTO tbl_circle(a) VALUES( '1 , 1 , 5'::circle );\n\nselect * from tbl_circle;\n  id |     a\n----+-----------\n   1 | <(1,1),5>\n   2 | <(1,1),5>\n   3 | <(1,1),5>\n   4 | <(1,1),5>\n(4 rows)\n\nSor far, no error found during the \"make check\".\n\nThe patch based on tag \"REL_12_2\" is attached.\n\n-- \nDavid\n\nSoftware Engineer\nHighgo Software Inc. (Canada)\nwww.highgo.ca", "msg_date": "Mon, 6 Apr 2020 14:12:38 -0700", "msg_from": "David Zhang <david.zhang@highgo.ca>", "msg_from_op": true, "msg_subject": "ERROR: invalid input syntax for type circle" }, { "msg_contents": "David Zhang <david.zhang@highgo.ca> writes:\n> I got an error when I was trying to insert a circle using the syntax \n> (the 3rd one) specified in the latest document.\n\nHm. Presumably, that has never worked, and we've had no complaints\nto date. 
I'm halfway inclined to treat it as a documentation bug\nand remove the claim that it works.\n\n> The patch based on tag \"REL_12_2\" is attached.\n\nThis patch looks extremely dangerous to me, because it'll allow \"s\"\nto get incremented past the ending nul character ... and then the\ncode will proceed to keep scanning, which at best is useless and\nat worst will end in a core dump.\n\nWhat actually looks wrong to me in this code is the initial bit\n\n if ((*s == LDELIM_C) || (*s == LDELIM))\n {\n depth++;\n cp = (s + 1);\n while (isspace((unsigned char) *cp))\n cp++;\n if (*cp == LDELIM)\n s = cp;\n }\n\nIf the first test triggers but it doesn't then find a following\nparen, then it's incremented depth without moving s, which seems\ncertain to end badly. Perhaps the correct fix is like\n\n if (*s == LDELIM_C)\n depth++, s++;\n else if (*s == LDELIM)\n {\n /* If there are two left parens, consume the first one */\n cp = (s + 1);\n while (isspace((unsigned char) *cp))\n cp++;\n if (*cp == LDELIM)\n depth++, s = cp;\n }\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 06 Apr 2020 18:16:37 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: ERROR: invalid input syntax for type circle" }, { "msg_contents": "Hi Tom,\n\nThanks for the review.\n\nGenerated a new patch v2 (attached) following your suggestion and \nperformed the same test again. The test results looks good including the \n\"make check\".\n\nOn 2020-04-06 3:16 p.m., Tom Lane wrote:\n> David Zhang <david.zhang@highgo.ca> writes:\n>> I got an error when I was trying to insert a circle using the syntax\n>> (the 3rd one) specified in the latest document.\n> Hm. Presumably, that has never worked, and we've had no complaints\n> to date. 
I'm halfway inclined to treat it as a documentation bug\n> and remove the claim that it works.\n>\n>> The patch based on tag \"REL_12_2\" is attached.\n> This patch looks extremely dangerous to me, because it'll allow \"s\"\n> to get incremented past the ending nul character ... and then the\n> code will proceed to keep scanning, which at best is useless and\n> at worst will end in a core dump.\n>\n> What actually looks wrong to me in this code is the initial bit\n>\n> if ((*s == LDELIM_C) || (*s == LDELIM))\n> {\n> depth++;\n> cp = (s + 1);\n> while (isspace((unsigned char) *cp))\n> cp++;\n> if (*cp == LDELIM)\n> s = cp;\n> }\n>\n> If the first test triggers but it doesn't then find a following\n> paren, then it's incremented depth without moving s, which seems\n> certain to end badly. Perhaps the correct fix is like\n>\n> if (*s == LDELIM_C)\n> depth++, s++;\n> else if (*s == LDELIM)\n> {\n> /* If there are two left parens, consume the first one */\n> cp = (s + 1);\n> while (isspace((unsigned char) *cp))\n> cp++;\n> if (*cp == LDELIM)\n> depth++, s = cp;\n> }\n>\n> \t\t\tregards, tom lane\n-- \nDavid\n\nSoftware Engineer\nHighgo Software Inc. (Canada)\nwww.highgo.ca", "msg_date": "Mon, 6 Apr 2020 17:44:05 -0700", "msg_from": "David Zhang <david.zhang@highgo.ca>", "msg_from_op": true, "msg_subject": "Re: ERROR: invalid input syntax for type circle" }, { "msg_contents": "David Zhang <david.zhang@highgo.ca> writes:\n> Generated a new patch v2 (attached) following your suggestion and \n> performed the same test again. The test results looks good including the \n> \"make check\".\n\nPushed, with some work on the regression tests so this doesn't get\nbusted again.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 07 Apr 2020 20:51:26 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: ERROR: invalid input syntax for type circle" } ]
[ { "msg_contents": "Hi al\r\n\r\nIn getDefaultACLs function, some PQExpBuffer are not destroy\r\n\r\nFile: src/bin/pg_dump/pg_dump.c\r\nDefaultACLInfo *\r\ngetDefaultACLs(Archive *fout, int *numDefaultACLs)\r\n{\r\n......\r\n\tif (fout->remoteVersion >= 90600)\r\n\t{\r\n\t\tPQExpBuffer acl_subquery = createPQExpBuffer(); // *** acl_subquery not destroyed ***\r\n\t\tPQExpBuffer racl_subquery = createPQExpBuffer(); // *** racl_subquery not destroyed ***\r\n\t\tPQExpBuffer initacl_subquery = createPQExpBuffer(); // *** initacl_subquery not destroyed ***\r\n\t\tPQExpBuffer initracl_subquery = createPQExpBuffer(); // *** initracl_subquery not destroyed ***\r\n\r\n\t\tbuildACLQueries(acl_subquery, racl_subquery, initacl_subquery,\r\n\t\t\t\t\t\tinitracl_subquery, \"defaclacl\", \"defaclrole\",\r\n\t\t\t\t\t\t\"CASE WHEN defaclobjtype = 'S' THEN 's' ELSE defaclobjtype END::\\\"char\\\"\",\r\n\t\t\t\t\t\tdopt->binary_upgrade);\r\n\r\n\t\tappendPQExpBuffer(query, \"SELECT d.oid, d.tableoid, \"\r\n\t\t\t\t\t\t \"(%s d.defaclrole) AS defaclrole, \"\r\n\t\t\t\t\t\t \"d.defaclnamespace, \"\r\n\t\t\t\t\t\t \"d.defaclobjtype, \"\r\n\t\t\t\t\t\t \"%s AS defaclacl, \"\r\n\t\t\t\t\t\t \"%s AS rdefaclacl, \"\r\n\t\t\t\t\t\t \"%s AS initdefaclacl, \"\r\n\t\t\t\t\t\t \"%s AS initrdefaclacl \"\r\n\t\t\t\t\t\t \"FROM pg_default_acl d \"\r\n\t\t\t\t\t\t \"LEFT JOIN pg_init_privs pip ON \"\r\n\t\t\t\t\t\t \"(d.oid = pip.objoid \"\r\n\t\t\t\t\t\t \"AND pip.classoid = 'pg_default_acl'::regclass \"\r\n\t\t\t\t\t\t \"AND pip.objsubid = 0) \",\r\n\t\t\t\t\t\t username_subquery,\r\n\t\t\t\t\t\t acl_subquery->data,\r\n\t\t\t\t\t\t racl_subquery->data,\r\n\t\t\t\t\t\t initacl_subquery->data,\r\n\t\t\t\t\t\t initracl_subquery->data);\r\n\t}\r\n......\r\n\r\nHere is a patch.\r\n\r\nBest Regards!", "msg_date": "Tue, 7 Apr 2020 02:42:40 +0000", "msg_from": "\"Zhang, Jie\" <zhangjie2@cn.fujitsu.com>", "msg_from_op": true, "msg_subject": "[patch] some PQExpBuffer are not destroyed in pg_dump" }, { 
"msg_contents": "On Tue, 7 Apr 2020 at 11:42, Zhang, Jie <zhangjie2@cn.fujitsu.com> wrote:\n>\n> Hi al\n>\n> In getDefaultACLs function, some PQExpBuffer are not destroy\n>\n\nYes, it looks like an oversight. It's related to the commit\ne2090d9d20d809 which is back-patched to 9.6.\n\nThe patch looks good to me.\n\nRegards,\n\n-- \nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Mon, 13 Apr 2020 16:51:06 +0900", "msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: [patch] some PQExpBuffer are not destroyed in pg_dump" }, { "msg_contents": "On Mon, Apr 13, 2020 at 04:51:06PM +0900, Masahiko Sawada wrote:\n> On Tue, 7 Apr 2020 at 11:42, Zhang, Jie <zhangjie2@cn.fujitsu.com> wrote:\n>> In getDefaultACLs function, some PQExpBuffer are not destroy\n> \n> Yes, it looks like an oversight. It's related to the commit\n> e2090d9d20d809 which is back-patched to 9.6.\n> \n> The patch looks good to me.\n\nIndeed. Any code path of pg_dump calling buildACLQueries() clears up\nthings, and I think that it is a better practice to clean up properly\nPQExpBuffer stuff even if there is always the argument that pg_dump\nis a tool running in a \"short\"-term context. So I will backpatch that\nunless there are any objections from others.\n\nThe part I am actually rather amazed of here is that I don't recall\nseeing Coverity complaining about leaks after this commit. Perhaps it\njust got lost.\n--\nMichael", "msg_date": "Tue, 14 Apr 2020 10:11:56 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: [patch] some PQExpBuffer are not destroyed in pg_dump" }, { "msg_contents": "On Tue, Apr 14, 2020 at 10:11:56AM +0900, Michael Paquier wrote:\n> Indeed. 
Any code path of pg_dump calling buildACLQueries() clears up\n> things, and I think that it is a better practice to clean up properly\n> PQExpBuffer stuff even if there is always the argument that pg_dump\n> is a tool running in a \"short\"-term context. So I will backpatch that\n> unless there are any objections from others.\n\nAnd done as of 8f4ee44.\n--\nMichael", "msg_date": "Wed, 15 Apr 2020 16:05:10 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: [patch] some PQExpBuffer are not destroyed in pg_dump" } ]
[ { "msg_contents": "Do we allow such a bool parameter value? This seems puzzling to me.\n\n\npostgres=# create table t1(c1 int) with(autovacuum_enabled ='tr');\nCREATE TABLE\npostgres=# create table t2(c1 int) with(autovacuum_enabled ='fa');\nCREATE TABLE\npostgres=# \\d+ t1\n Table \"public.t1\"\n Column | Type | Collation | Nullable | Default | Storage | Stats target | Description \n--------+---------+-----------+----------+---------+---------+--------------+-------------\n c1 | integer | | | | plain | | \nAccess method: heap\nOptions: autovacuum_enabled=tr\n\npostgres=# \\d+ t2\n Table \"public.t2\"\n Column | Type | Collation | Nullable | Default | Storage | Stats target | Description \n--------+---------+-----------+----------+---------+---------+--------------+-------------\n c1 | integer | | | | plain | | \nAccess method: heap\nOptions: autovacuum_enabled=fa\n\n\nI am try to fix in bug_boolrelopt.patch\n\n\nWenjing", "msg_date": "Tue, 7 Apr 2020 17:30:03 +0800", "msg_from": "=?utf-8?B?5pu+5paH5peM?= <wjzeng2012@gmail.com>", "msg_from_op": true, "msg_subject": "[bug] Wrong bool value parameter" }, { "msg_contents": "On Tue, 7 Apr 2020 at 06:30, 曾文旌 <wjzeng2012@gmail.com> wrote:\n\n> Do we allow such a bool parameter value? This seems puzzling to me.\n>\n>\n> postgres=# create table t1(c1 int) with(autovacuum_enabled ='tr');\n> CREATE TABLE\n> postgres=# create table t2(c1 int) with(autovacuum_enabled ='fa');\n> CREATE TABLE\n> postgres=# \\d+ t1\n> Table \"public.t1\"\n> Column | Type | Collation | Nullable | Default | Storage | Stats\n> target | Description\n>\n> --------+---------+-----------+----------+---------+---------+--------------+-------------\n> c1 | integer | | | | plain |\n> |\n> Access method: heap\n> Options: autovacuum_enabled=tr\n>\n> [don't post to multiple mailing lists]\n\nI'm not sure it is a bug. It certainly can be an improvement. Code as is\ndoes not cause issues although I concur with you that it is at least a\nstrange syntax. 
It is like this at least since 2009 (commit ba748f7a11e).\nI'm not sure parse_bool* is the right place to fix it because it could\nbreak code. IMHO the problem is that parse_one_reloption() is using the\nvalue provided by user; it should test those (abbreviation) conditions and\nstore \"true\" (for example) as bool value.\n\nRegards,\n\n>\n-- \nEuler Taveira http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
", "msg_date": "Tue, 7 Apr 2020 08:58:23 -0300", "msg_from": "Euler Taveira <euler.taveira@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: [bug] Wrong bool value parameter" }, { "msg_contents": "On Tue, 7 Apr 2020 at 20:58, Euler Taveira\n<euler.taveira@2ndquadrant.com> wrote:\n>\n> On Tue, 7 Apr 2020 at 06:30, 曾文旌 <wjzeng2012@gmail.com> wrote:\n>>\n>> Do we allow such a bool parameter value? This seems puzzling to me.\n>>\n>>\n>> postgres=# create table t1(c1 int) with(autovacuum_enabled ='tr');\n>> CREATE TABLE\n>> postgres=# create table t2(c1 int) with(autovacuum_enabled ='fa');\n>> CREATE TABLE\n>> postgres=# \\d+ t1\n>> Table \"public.t1\"\n>> Column | Type | Collation | Nullable | Default | Storage | Stats target | Description\n>> --------+---------+-----------+----------+---------+---------+--------------+-------------\n>> c1 | integer | | | | plain | |\n>> Access method: heap\n>> Options: autovacuum_enabled=tr\n>>\n> [don't post to multiple mailing lists]\n>\n> I'm not sure it is a bug. It certainly can be an improvement. Code as is does not cause issues although I concur with you that it is at least a strange syntax. It is like this at least since 2009 (commit ba748f7a11e). I'm not sure parse_bool* is the right place to fix it because it could break code. 
IMHO the problem is that parse_one_reloption() is using the value provided by user; it should test those (abbreviation) conditions and store \"true\" (for example) as bool value.\n>\n\nThe document[1] states:\n\nBoolean: Values can be written as on, off, true, false, yes, no, 1, 0\n(all case-insensitive) or any unambiguous prefix of one of these.\n\nGiven that PostgreSQL treats such values as boolean values it seems to\nme that it's a normal behavior.\n\n[1] https://www.postgresql.org/docs/devel/config-setting.html\n\nRegards,\n\n-- \nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Tue, 7 Apr 2020 23:35:16 +0900", "msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: [bug] Wrong bool value parameter" }, { "msg_contents": "> 2020年4月7日 下午10:35,Masahiko Sawada <masahiko.sawada@2ndquadrant.com> 写道:\n> \n> On Tue, 7 Apr 2020 at 20:58, Euler Taveira\n> <euler.taveira@2ndquadrant.com> wrote:\n>> \n>> On Tue, 7 Apr 2020 at 06:30, 曾文旌 <wjzeng2012@gmail.com> wrote:\n>>> \n>>> Do we allow such a bool parameter value? This seems puzzling to me.\n>>> \n>>> \n>>> postgres=# create table t1(c1 int) with(autovacuum_enabled ='tr');\n>>> CREATE TABLE\n>>> postgres=# create table t2(c1 int) with(autovacuum_enabled ='fa');\n>>> CREATE TABLE\n>>> postgres=# \\d+ t1\n>>> Table \"public.t1\"\n>>> Column | Type | Collation | Nullable | Default | Storage | Stats target | Description\n>>> --------+---------+-----------+----------+---------+---------+--------------+-------------\n>>> c1 | integer | | | | plain | |\n>>> Access method: heap\n>>> Options: autovacuum_enabled=tr\n>>> \n>> [don't post to multiple mailing lists]\n>> \n>> I'm not sure it is a bug. It certainly can be an improvement. Code as is does not cause issues although I concur with you that it is at least a strange syntax. It is like this at least since 2009 (commit ba748f7a11e). 
I'm not sure parse_bool* is the right place to fix it because it could break code. IMHO the problem is that parse_one_reloption() is using the value provided by user; it should test those (abbreviation) conditions and store \"true\" (for example) as bool value.\n>> \n> \n> The document[1] states:\n> \n> Boolean: Values can be written as on, off, true, false, yes, no, 1, 0\n> (all case-insensitive) or any unambiguous prefix of one of these.\n> \n> Given that PostgreSQL treats such values as boolean values it seems to\n> me that it's a normal behavior.\n> \n> [1] https://www.postgresql.org/docs/devel/config-setting.html\n\nWhy do table parameters of a bool type have different rules than data types of a Boolean type?\n\n\npostgres=# create table test_bool_type(a bool);\nCREATE TABLE\npostgres=# insert into test_bool_type values(true);\nINSERT 0 1\npostgres=# insert into test_bool_type values(false);\nINSERT 0 1\npostgres=# insert into test_bool_type values('false');\nINSERT 0 1\npostgres=# insert into test_bool_type values('t');\nINSERT 0 1\npostgres=# insert into test_bool_type values('f');\nINSERT 0 1\n\npostgres=# insert into test_bool_type values('tr');\nERROR: invalid input syntax for type boolean: \"tr\"\nLINE 1: insert into test_bool_type values('tr');\n ^\npostgres=# insert into test_bool_type values('fa');\nERROR: invalid input syntax for type boolean: \"fa\"\nLINE 1: insert into test_bool_type values('fa');\n ^\npostgres=# insert into test_bool_type values('fals');\nERROR: invalid input syntax for type boolean: \"fals\"\nLINE 1: insert into test_bool_type values('fals');\n \n\n> \n> Regards,\n> \n> -- \n> Masahiko Sawada http://www.2ndQuadrant.com/\n> PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Wed, 8 Apr 2020 15:00:20 +0800", "msg_from": "wenjing <wjzeng2012@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [bug] Wrong bool value parameter" }, { "msg_contents": "On Wed, 8 Apr 2020 at 16:00, wenjing <wjzeng2012@gmail.com> wrote:\n>\n>\n>\n> 2020年4月7日 下午10:35,Masahiko Sawada <masahiko.sawada@2ndquadrant.com> 写道:\n>\n> On Tue, 7 Apr 2020 at 20:58, Euler Taveira\n> <euler.taveira@2ndquadrant.com> wrote:\n>\n>\n> On Tue, 7 Apr 2020 at 06:30, 曾文旌 <wjzeng2012@gmail.com> wrote:\n>\n>\n> Do we allow such a bool parameter value? This seems puzzling to me.\n>\n>\n> postgres=# create table t1(c1 int) with(autovacuum_enabled ='tr');\n> CREATE TABLE\n> postgres=# create table t2(c1 int) with(autovacuum_enabled ='fa');\n> CREATE TABLE\n> postgres=# \\d+ t1\n> Table \"public.t1\"\n> Column | Type | Collation | Nullable | Default | Storage | Stats target | Description\n> --------+---------+-----------+----------+---------+---------+--------------+-------------\n> c1 | integer | | | | plain | |\n> Access method: heap\n> Options: autovacuum_enabled=tr\n>\n> [don't post to multiple mailing lists]\n>\n> I'm not sure it is a bug. It certainly can be an improvement. Code as is does not cause issues although I concur with you that it is at least a strange syntax. It is like this at least since 2009 (commit ba748f7a11e). 
I'm not sure parse_bool* is the right place to fix it because it could break code. IMHO the problem is that parse_one_reloption() is using the value provided by user; it should test those (abbreviation) conditions and store \"true\" (for example) as bool value.\n>\n>\n> The document[1] states:\n>\n> Boolean: Values can be written as on, off, true, false, yes, no, 1, 0\n> (all case-insensitive) or any unambiguous prefix of one of these.\n>\n> Given that PostgreSQL treats such values as boolean values it seems to\n> me that it's a normal behavior.\n>\n> [1] https://www.postgresql.org/docs/devel/config-setting.html\n>\n>\n> Why do table parameters of a bool type have different rules than data types of a Boolean type?\n>\n>\n> postgres=# create table test_bool_type(a bool);\n> CREATE TABLE\n> postgres=# insert into test_bool_type values(true);\n> INSERT 0 1\n> postgres=# insert into test_bool_type values(false);\n> INSERT 0 1\n> postgres=# insert into test_bool_type values('false');\n> INSERT 0 1\n> postgres=# insert into test_bool_type values('t');\n> INSERT 0 1\n> postgres=# insert into test_bool_type values('f');\n> INSERT 0 1\n>\n> postgres=# insert into test_bool_type values('tr');\n> ERROR: invalid input syntax for type boolean: \"tr\"\n> LINE 1: insert into test_bool_type values('tr');\n> ^\n> postgres=# insert into test_bool_type values('fa');\n> ERROR: invalid input syntax for type boolean: \"fa\"\n> LINE 1: insert into test_bool_type values('fa');\n> ^\n> postgres=# insert into test_bool_type values('fals');\n> ERROR: invalid input syntax for type boolean: \"fals\"\n> LINE 1: insert into test_bool_type values('fals');\n>\n\nHmm that seems strange. 
In my environment, both 'tr' and 'fa' are\naccepted at least with the current HEAD\n\npostgres(1:52514)=# insert into test_bool_type values('tr');\nINSERT 0 1\npostgres(1:52514)=# insert into test_bool_type values('fa');\nINSERT 0 1\npostgres(1:52514)=# insert into test_bool_type values('fals');\nINSERT 0 1\n\nIIUC both bool of SQL data type and bool of GUC parameter type are\nusing the same function parse_bool_with_len() to parse the input\nvalue. The behavior can vary depending on the environment?\n\nRegards,\n\n--\nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Wed, 8 Apr 2020 21:25:59 +0900", "msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: [bug] Wrong bool value parameter" }, { "msg_contents": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com> writes:\n> On Wed, 8 Apr 2020 at 16:00, wenjing <wjzeng2012@gmail.com> wrote:\n>> Why do table parameters of a bool type have different rules than data types of a Boolean type?\n>> postgres=# insert into test_bool_type values('fals');\n>> ERROR: invalid input syntax for type boolean: \"fals\"\n>> LINE 1: insert into test_bool_type values('fals');\n\n> Hmm that seems strange. In my environment, both 'tr' and 'fa' are\n> accepted at least with the current HEAD\n\nYeah, it works for me too:\n\nregression=# select 'fa'::bool;\n bool \n------\n f\n(1 row)\n\nregression=# select 'fals'::bool;\n bool \n------\n f\n(1 row)\n\n> IIUC both bool of SQL data type and bool of GUC parameter type are\n> using the same function parse_bool_with_len() to parse the input\n> value. 
The behavior can vary depending on the environment?\n\nparse_bool_with_len is not locale-sensitive for ASCII input.\nConceivably its case folding could vary for non-ASCII, but that's\nnot relevant here.\n\nI am suspicious that the OP is not using community Postgres.\nThis seems like the kind of thing that EDB might've hacked\nfor better Oracle compatibility, for example.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 08 Apr 2020 09:45:18 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [bug] Wrong bool value parameter" }, { "msg_contents": "> 2020年4月8日 21:45,Tom Lane <tgl@sss.pgh.pa.us> 写道:\n> \n> Masahiko Sawada <masahiko.sawada@2ndquadrant.com> writes:\n>> On Wed, 8 Apr 2020 at 16:00, wenjing <wjzeng2012@gmail.com> wrote:\n>>> Why do table parameters of a bool type have different rules than data types of a Boolean type?\n>>> postgres=# insert into test_bool_type values('fals');\n>>> ERROR: invalid input syntax for type boolean: \"fals\"\n>>> LINE 1: insert into test_bool_type values('fals');\n> \n>> Hmm that seems strange. In my environment, both 'tr' and 'fa' are\n>> accepted at least with the current HEAD\n> \n> Yeah, it works for me too:\n> \n> regression=# select 'fa'::bool;\n> bool \n> ------\n> f\n> (1 row)\n> \n> regression=# select 'fals'::bool;\n> bool \n> ------\n> f\n> (1 row)\n> \n>> IIUC both bool of SQL data type and bool of GUC parameter type are\n>> using the same function parse_bool_with_len() to parse the input\n>> value. The behavior can vary depending on the environment?\n\n> \n> parse_bool_with_len is not locale-sensitive for ASCII input.\n> Conceivably its case folding could vary for non-ASCII, but that's\n> not relevant here.\n> \n> I am suspicious that the OP is not using community Postgres.\n> This seems like the kind of thing that EDB might've hacked\n> for better Oracle compatibility, for example.\nSorry, you're right. 
I used the modified code and got the wrong result.\n\n> \n> \t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 11 Apr 2020 22:54:38 +0800", "msg_from": "wenjing <wjzeng2012@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [bug] Wrong bool value parameter" }, { "msg_contents": "\n\n> 2020年4月7日 22:35,Masahiko Sawada <masahiko.sawada@2ndquadrant.com> 写道:\n> \n> On Tue, 7 Apr 2020 at 20:58, Euler Taveira\n> <euler.taveira@2ndquadrant.com> wrote:\n>> \n>> On Tue, 7 Apr 2020 at 06:30, 曾文旌 <wjzeng2012@gmail.com> wrote:\n>>> \n>>> Do we allow such a bool parameter value? 
This seems puzzling to me.\n>>> \n>>> \n>>> postgres=# create table t1(c1 int) with(autovacuum_enabled ='tr');\n>>> CREATE TABLE\n>>> postgres=# create table t2(c1 int) with(autovacuum_enabled ='fa');\n>>> CREATE TABLE\n>>> postgres=# \\d+ t1\n>>> Table \"public.t1\"\n>>> Column | Type | Collation | Nullable | Default | Storage | Stats target | Description\n>>> --------+---------+-----------+----------+---------+---------+--------------+-------------\n>>> c1 | integer | | | | plain | |\n>>> Access method: heap\n>>> Options: autovacuum_enabled=tr\n>>> \n>> [don't post to multiple mailing lists]\n>> \n>> I'm not sure it is a bug. It certainly can be an improvement. Code as is does not cause issues although I concur with you that it is at least a strange syntax. It is like this at least since 2009 (commit ba748f7a11e). I'm not sure parse_bool* is the right place to fix it because it could break code. IMHO the problem is that parse_one_reloption() is using the value provided by user; it should test those (abbreviation) conditions and store \"true\" (for example) as bool value.\nIt seems difficult to store a new bool value in parse_one_reloption. 
This is a string stored with ”autovacuum_enabled =“.\nany other ideas?\n\n>> \n> \n> The document[1] states:\n> \n> Boolean: Values can be written as on, off, true, false, yes, no, 1, 0\n> (all case-insensitive) or any unambiguous prefix of one of these.\n> \n> Given that PostgreSQL treats such values as boolean values it seems to\n> me that it's a normal behavior.\n> \n> [1] https://www.postgresql.org/docs/devel/config-setting.html\n> \n> Regards,\n> \n> -- \n> Masahiko Sawada http://www.2ndQuadrant.com/\n> PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n\n", "msg_date": "Sat, 11 Apr 2020 23:05:17 +0800", "msg_from": "wenjing <wjzeng2012@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [bug] Wrong bool value parameter" }, { "msg_contents": "wenjing <wjzeng2012@gmail.com> writes:\n> 2020年4月7日 22:35,Masahiko Sawada <masahiko.sawada@2ndquadrant.com> 写道:\n>>> I'm not sure it is a bug. It certainly can be an improvement. Code as is does not cause issues although I concur with you that it is at least a strange syntax. It is like this at least since 2009 (commit ba748f7a11e). I'm not sure parse_bool* is the right place to fix it because it could break code. IMHO the problem is that parse_one_reloption() is using the value provided by user; it should test those (abbreviation) conditions and store \"true\" (for example) as bool value.\n\n> It seems difficult to store a new bool value in parse_one_reloption. This is a string stored with ”autovacuum_enabled =“.\n> any other ideas?\n\nI don't think we should touch this. If the user chose to write the value\nin a specific way, they might've had a reason for that. There's little\nreason for us to override it, certainly not enough to justify introducing\na lot of new mechanism just to do that.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 11 Apr 2020 11:25:13 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [bug] Wrong bool value parameter" } ]
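The "unambiguous prefix" rule Sawada-san quotes from the docs in this thread can be sketched in a few lines. This is a hypothetical re-implementation of the matching behavior described above, written for illustration only — PostgreSQL's actual logic is the C function `parse_bool_with_len()`, and the function name `parse_bool_prefix` here is made up:

```python
# Illustrative sketch of the "unambiguous prefix" boolean rule quoted
# from the docs above; NOT the actual parse_bool_with_len() C code.
CANDIDATES = [("true", True), ("false", False),
              ("yes", True), ("no", False),
              ("on", True), ("off", False),
              ("1", True), ("0", False)]

def parse_bool_prefix(value):
    """Return True/False for an unambiguous (prefix) match, else None."""
    v = value.strip().lower()
    if not v:
        return None
    matches = [b for word, b in CANDIDATES
               if (v == word if word in ("1", "0")   # digits match exactly
                   else word.startswith(v))]         # words match any prefix
    # a prefix is ambiguous if it matches words with different values
    return matches[0] if len(set(matches)) == 1 else None

# 'tr' and 'fa' are accepted, matching the thread's examples
print(parse_bool_prefix("tr"), parse_bool_prefix("fa"))   # True False
print(parse_bool_prefix("o"))   # None: ambiguous between on/off
```

Under this rule, 'tr', 'fa', and 'fals' all map to a definite value, while a bare 'o' is rejected as ambiguous between "on" and "off" — which is exactly the behavior Tom and Sawada-san observe on community Postgres.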
[ { "msg_contents": "This is just a placeholder thread for an open item that I'm adding to\nthe Open Items list. We can make a decision later.\n\nNow that we have Disk-based Hash Aggregation, there are a lot more\nsituations where the planner can choose HashAgg. The\nenable_hashagg_disk GUC, if set to true, chooses HashAgg based on\ncosting. If false, it only generates a HashAgg path if it thinks it\nwill fit in work_mem, similar to the old behavior (though it will now\nspill to disk if the planner was wrong about it fitting in work_mem).\nThe current default is true.\n\nI expect this to be a win in a lot of cases, obviously. But as with any\nplanner change, it will be wrong sometimes. We may want to be\nconservative and set the default to false depending on the experience\nduring beta. I'm inclined to leave it as true for now though, because\nthat will give us better information upon which to base any decision.\n\nRegards,\n\tJeff Davis\n\n\n\n\n", "msg_date": "Tue, 07 Apr 2020 11:20:46 -0700", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": true, "msg_subject": "Default setting for enable_hashagg_disk" }, { "msg_contents": "On Tue, Apr 07, 2020 at 11:20:46AM -0700, Jeff Davis wrote:\n> The enable_hashagg_disk GUC, if set to true, chooses HashAgg based on\n> costing. If false, it only generates a HashAgg path if it thinks it will fit\n> in work_mem, similar to the old behavior (though it will now spill to disk if\n> the planner was wrong about it fitting in work_mem). The current default is\n> true.\n\nAre there any other GUCs that behave like that ? It's confusing to me when I\nsee \"Disk Usage: ... kB\", despite setting it to \"disable\", and without the\nusual disable_cost. 
I realize that postgres chose the plan on the hypothesis\nthat it would *not* exceed work_mem, and that spilling to disk is considered\npreferable to ignoring the setting, and that \"going back\" to planning phase\nisn't a possibility.\n\ntemplate1=# explain (analyze, costs off, summary off) SELECT a, COUNT(1) FROM generate_series(1,999999) a GROUP BY 1 ;\n HashAggregate (actual time=1370.945..2877.250 rows=999999 loops=1)\n Group Key: a\n Peak Memory Usage: 5017 kB\n Disk Usage: 22992 kB\n HashAgg Batches: 84\n -> Function Scan on generate_series a (actual time=314.507..741.517 rows=999999 loops=1)\n\nA previous version of the docs said this, which I thought was confusing, and you removed it.\nBut I guess this is the behavior it was trying to .. explain.\n\n+ <term><varname>enable_hashagg_disk</varname> (<type>boolean</type>)\n+ ... This only affects the planner choice;\n+ execution time may still require using disk-based hash\n+ aggregation. The default is <literal>on</literal>.\n\nI suggest that should be reworded and then re-introduced, unless there's some\nfurther behavior change allowing the previous behavior of\nmight-exceed-work-mem.\n\n\"This setting determines whether the planner will elect to use a hash plan\nwhich it expects will exceed work_mem and spill to disk. During execution,\nhash nodes which exceed work_mem will spill to disk even if this setting is\ndisabled. To avoid spilling to disk, either increase work_mem (or set\nenable_hashagg=off).\"\n\nFor sure the release notes should recommend re-calibrating work_mem.\n\n-- \nJustin\n\n\n", "msg_date": "Tue, 7 Apr 2020 17:39:01 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: Default setting for enable_hashagg_disk" }, { "msg_contents": "On Tue, Apr 07, 2020 at 05:39:01PM -0500, Justin Pryzby wrote:\n>On Tue, Apr 07, 2020 at 11:20:46AM -0700, Jeff Davis wrote:\n>> The enable_hashagg_disk GUC, if set to true, chooses HashAgg based on\n>> costing. 
If false, it only generates a HashAgg path if it thinks it will fit\n>> in work_mem, similar to the old behavior (though it will now spill to disk if\n>> the planner was wrong about it fitting in work_mem). The current default is\n>> true.\n>\n>Are there any other GUCs that behave like that ? It's confusing to me when I\n>see \"Disk Usage: ... kB\", despite setting it to \"disable\", and without the\n>usual disable_cost. I realize that postgres chose the plan on the hypothesis\n>that it would *not* exceed work_mem, and that spilling to disk is considered\n>preferable to ignoring the setting, and that \"going back\" to planning phase\n>isn't a possibility.\n>\n\nIs it really any different from our enable_* GUCs? Even if you do e.g.\nenable_sort=off, we may still do a sort. Same for enable_groupagg etc.\n\n>template1=# explain (analyze, costs off, summary off) SELECT a, COUNT(1) FROM generate_series(1,999999) a GROUP BY 1 ;\n> HashAggregate (actual time=1370.945..2877.250 rows=999999 loops=1)\n> Group Key: a\n> Peak Memory Usage: 5017 kB\n> Disk Usage: 22992 kB\n> HashAgg Batches: 84\n> -> Function Scan on generate_series a (actual time=314.507..741.517 rows=999999 loops=1)\n>\n>A previous version of the docs said this, which I thought was confusing, and you removed it.\n>But I guess this is the behavior it was trying to .. explain.\n>\n>+ <term><varname>enable_hashagg_disk</varname> (<type>boolean</type>)\n>+ ... This only affects the planner choice;\n>+ execution time may still require using disk-based hash\n>+ aggregation. The default is <literal>on</literal>.\n>\n>I suggest that should be reworded and then re-introduced, unless there's some\n>further behavior change allowing the previous behavior of\n>might-exceed-work-mem.\n>\n\nYeah, it would be good to mention this is a best-effort setting.\n\n>\"This setting determines whether the planner will elect to use a hash plan\n>which it expects will exceed work_mem and spill to disk. 
During execution,\n>hash nodes which exceed work_mem will spill to disk even if this setting is\n>disabled. To avoid spilling to disk, either increase work_mem (or set\n>enable_hashagg=off).\"\n>\n>For sure the release notes should recommend re-calibrating work_mem.\n>\n\nI don't follow. Why would the recalibrating be needed?\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Thu, 9 Apr 2020 13:48:55 +0200", "msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Default setting for enable_hashagg_disk" }, { "msg_contents": "On Thu, Apr 09, 2020 at 01:48:55PM +0200, Tomas Vondra wrote:\n> On Tue, Apr 07, 2020 at 05:39:01PM -0500, Justin Pryzby wrote:\n> > On Tue, Apr 07, 2020 at 11:20:46AM -0700, Jeff Davis wrote:\n> > > The enable_hashagg_disk GUC, if set to true, chooses HashAgg based on\n> > > costing. If false, it only generates a HashAgg path if it thinks it will fit\n> > > in work_mem, similar to the old behavior (though it will now spill to disk if\n> > > the planner was wrong about it fitting in work_mem). The current default is\n> > > true.\n> > \n> > Are there any other GUCs that behave like that ? It's confusing to me when I\n> > see \"Disk Usage: ... kB\", despite setting it to \"disable\", and without the\n> > usual disable_cost. I realize that postgres chose the plan on the hypothesis\n> > that it would *not* exceed work_mem, and that spilling to disk is considered\n> > preferable to ignoring the setting, and that \"going back\" to planning phase\n> > isn't a possibility.\n> \n> Is it really any different from our enable_* GUCs? Even if you do e.g.\n> enable_sort=off, we may still do a sort. Same for enable_groupagg etc.\n\nThose show that the GUC was disabled by showing disable_cost. That's what's\ndifferent about this one.\n\nAlso.. there's no such thing as enable_groupagg? 
Unless I've been missing out\non something.\n\n> > \"This setting determines whether the planner will elect to use a hash plan\n> > which it expects will exceed work_mem and spill to disk. During execution,\n> > hash nodes which exceed work_mem will spill to disk even if this setting is\n> > disabled. To avoid spilling to disk, either increase work_mem (or set\n> > enable_hashagg=off).\"\n> > \n> > For sure the release notes should recommend re-calibrating work_mem.\n> \n> I don't follow. Why would the recalibrating be needed?\n\nBecause HashAgg plans which used to run fine (because they weren't prevented\nfrom overflowing work_mem) might now run poorly after spilling to disk (because\nof overflowing work_mem).\n\n-- \nJustin\n\n\n", "msg_date": "Thu, 9 Apr 2020 12:24:04 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: Default setting for enable_hashagg_disk" }, { "msg_contents": "On Thu, 2020-04-09 at 12:24 -0500, Justin Pryzby wrote:\n> Also.. there's no such thing as enable_groupagg? Unless I've been\n> missing out\n> on something.\n\nI thought about adding that, and went so far as to make a patch. But it\ndidn't seem right to me -- the grouping isn't what takes the time, it's\nthe sorting. So what would the point of such a GUC be? To disable\nGroupAgg when the input data is already sorted? Or a strange way to\ndisable Sort?\n\n> Because HashAgg plans which used to run fine (because they weren't\n> prevented\n> from overflowing work_mem) might now run poorly after spilling to\n> disk (because\n> of overflowing work_mem).\n\nIt's probably worth a mention in the release notes, but I wouldn't word\nit too strongly. 
Typically the performance difference is not a lot if\nthe workload still fits in system memory.\n\nRegards,\n\tJeff Davis\n\n\n\n\n", "msg_date": "Thu, 09 Apr 2020 11:25:56 -0700", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": true, "msg_subject": "Re: Default setting for enable_hashagg_disk" }, { "msg_contents": "On Thu, Apr 9, 2020 at 7:49 AM Tomas Vondra\n<tomas.vondra@2ndquadrant.com> wrote:\n> It it really any different from our enable_* GUCs? Even if you do e.g.\n> enable_sort=off, we may still do a sort. Same for enable_groupagg etc.\n\nI think it's actually pretty different. All of the other enable_* GUCs\ndisable an entire type of plan node, except for cases where that would\notherwise result in planning failure. This just disables a portion of\nthe planning logic for a certain kind of node, without actually\ndisabling the whole node type. I'm not sure that's a bad idea, but it\ndefinitely seems to be inconsistent with what we've done in the past.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Thu, 9 Apr 2020 15:26:36 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Default setting for enable_hashagg_disk" }, { "msg_contents": "On Thu, 2020-04-09 at 15:26 -0400, Robert Haas wrote:\n> I think it's actually pretty different. All of the other enable_*\n> GUCs\n> disable an entire type of plan node, except for cases where that\n> would\n> otherwise result in planning failure. This just disables a portion of\n> the planning logic for a certain kind of node, without actually\n> disabling the whole node type. I'm not sure that's a bad idea, but it\n> definitely seems to be inconsistent with what we've done in the past.\n\nThe patch adds two GUCs. Both are slightly weird, to be honest, but let\nme explain the reasoning. I am open to other suggestions.\n\n1. 
enable_hashagg_disk (default true):\n\nThis is essentially there just to get some of the old behavior back, to\ngive people an escape hatch if they see bad plans while we are tweaking\nthe costing. The old behavior was weird, so this GUC is also weird.\n\nPerhaps we can make this a compatibility GUC that we eventually drop? I\ndon't necessarily think this GUC would make sense, say, 5 versions from\nnow. I'm just trying to be conservative because I know that, even if\nthe plans are faster for 90% of people, the other 10% will be unhappy\nand want a way to work around it.\n\n2. enable_groupingsets_hash_disk (default false):\n\nThis is about how we choose which grouping sets to hash and which to\nsort when generating mixed mode paths.\n\nEven before this patch, there are quite a few paths that could be\ngenerated. It tries to estimate the size of each grouping set's hash\ntable, and then see how many it can fit in work_mem (knapsack), while\nalso taking advantage of any path keys, etc.\n\nWith Disk-based Hash Aggregation, in principle we can generate paths\nrepresenting any combination of hashing and sorting for the grouping\nsets. But that would be overkill (and grow to a huge number of paths if\nwe have more than a handful of grouping sets). So I think the existing\nplanner logic for grouping sets is fine for now. We might come up with\na better approach later.\n\nBut that created a testing problem, because if the planner estimates\ncorrectly, no hashed grouping sets will spill, and the spilling code\nwon't be exercised. This GUC makes the planner disregard which grouping\nsets' hash tables will fit, making it much easier to exercise the\nspilling code. 
Is there a better way I should be testing this code\npath?\n\nRegards,\n\tJeff Davis\n\n\n\n\n", "msg_date": "Thu, 09 Apr 2020 13:02:07 -0700", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": true, "msg_subject": "Re: Default setting for enable_hashagg_disk" }, { "msg_contents": "On Thu, Apr 9, 2020 at 1:02 PM Jeff Davis <pgsql@j-davis.com> wrote:\n\n> 2. enable_groupingsets_hash_disk (default false):\n>\n> This is about how we choose which grouping sets to hash and which to\n> sort when generating mixed mode paths.\n>\n> Even before this patch, there are quite a few paths that could be\n> generated. It tries to estimate the size of each grouping set's hash\n> table, and then see how many it can fit in work_mem (knapsack), while\n> also taking advantage of any path keys, etc.\n>\n> With Disk-based Hash Aggregation, in principle we can generate paths\n> representing any combination of hashing and sorting for the grouping\n> sets. But that would be overkill (and grow to a huge number of paths if\n> we have more than a handful of grouping sets). So I think the existing\n> planner logic for grouping sets is fine for now. We might come up with\n> a better approach later.\n>\n> But that created a testing problem, because if the planner estimates\n> correctly, no hashed grouping sets will spill, and the spilling code\n> won't be exercised. This GUC makes the planner disregard which grouping\n> sets' hash tables will fit, making it much easier to exercise the\n> spilling code. Is there a better way I should be testing this code\n> path?\n>\n>\nSo, I was catching up on email and noticed the last email in this\nthread.\n\nI think I am not fully understanding what enable_groupingsets_hash_disk\ndoes. 
Is it only for testing?\n\nUsing the tests you added to src/test/regress/sql/groupingsets.sql, I\ndid get a plan that looks like hashagg is spilling to disk (goes through\nhashagg_spill_tuple() code path and has number of batches reported in\nExplain) in a MixedAgg plan for a grouping sets query even with\nenable_groupingsets_hash_disk set to false. You don't have the exact\nquery I tried (below) in the test suite, but it is basically what is\nalready there, so I must be missing something.\n\nset enable_hashagg_disk = true;\nSET enable_groupingsets_hash_disk = false;\nSET work_mem='64kB';\nset enable_hashagg = true;\nset jit_above_cost = 0;\ndrop table if exists gs_hash_1;\ncreate table gs_hash_1 as\nselect g1000, g100, g10, sum(g::numeric), count(*), max(g::text) from\n (select g%1000 as g1000, g%100 as g100, g%10 as g10, g\n from generate_series(0,199999) g) s\ngroup by cube (g1000,g100,g10);\n\nexplain (analyze, costs off, timing off)\nselect g1000, g100, g10\nfrom gs_hash_1 group by cube (g1000,g100,g10);\n\n QUERY PLAN\n--------------------------------------------------------------\n MixedAggregate (actual rows=9648 loops=1)\n Hash Key: g10\n Hash Key: g10, g1000\n Hash Key: g100\n Hash Key: g100, g10\n Group Key: g1000, g100, g10\n Group Key: g1000, g100\n Group Key: g1000\n Group Key: ()\n Peak Memory Usage: 233 kB\n Disk Usage: 1600 kB\n HashAgg Batches: 2333\n -> Sort (actual rows=4211 loops=1)\n Sort Key: g1000, g100, g10\n Sort Method: external merge Disk: 384kB\n -> Seq Scan on gs_hash_1 (actual rows=4211 loops=1)\n\nAnyway, when I throw in the stats trick that is used in join_hash.sql:\n\nalter table gs_hash_1 set (autovacuum_enabled = 'false');\nupdate pg_class set reltuples = 10 where relname = 'gs_hash_1';\n\nI get a MixedAgg plan that doesn't have any Sort below and uses much\nmore disk.\n\n QUERY PLAN\n----------------------------------------------------------\n MixedAggregate (actual rows=4211 loops=1)\n Hash Key: g1000, g100, g10\n Hash Key: 
g1000, g100\n   Hash Key: g1000\n   Hash Key: g100, g10\n   Hash Key: g100\n   Hash Key: g10, g1000\n   Hash Key: g10\n   Group Key: ()\n   Peak Memory Usage: 405 kB\n   Disk Usage: 59712 kB\n   HashAgg Batches: 4209\n   ->  Seq Scan on gs_hash_1 (actual rows=200000 loops=1)\n\nI'm not sure if this is more what you were looking for--or maybe I am\nmisunderstanding the guc.\n\n-- \nMelanie Plageman\n\n", "msg_date": "Tue, 9 Jun 2020 18:20:13 -0700", "msg_from": "Melanie Plageman <melanieplageman@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Default setting for enable_hashagg_disk" }, { "msg_contents": "On Tue, Jun 09, 2020 at 06:20:13PM -0700, Melanie Plageman wrote:\n> On Thu, Apr 9, 2020 at 1:02 PM Jeff Davis <pgsql@j-davis.com> wrote:\n> \n> > 2. enable_groupingsets_hash_disk (default false):\n> >\n> > This is about how we choose which grouping sets to hash and which to\n> > sort when generating mixed mode paths.\n> >\n> > Even before this patch, there are quite a few paths that could be\n> > generated. It tries to estimate the size of each grouping set's hash\n> > table, and then see how many it can fit in work_mem (knapsack), while\n> > also taking advantage of any path keys, etc.\n> >\n> > With Disk-based Hash Aggregation, in principle we can generate paths\n> > representing any combination of hashing and sorting for the grouping\n> > sets. But that would be overkill (and grow to a huge number of paths if\n> > we have more than a handful of grouping sets). So I think the existing\n> > planner logic for grouping sets is fine for now. We might come up with\n> > a better approach later.\n> >\n> > But that created a testing problem, because if the planner estimates\n> > correctly, no hashed grouping sets will spill, and the spilling code\n> > won't be exercised. 
This GUC makes the planner disregard which grouping\n> > sets' hash tables will fit, making it much easier to exercise the\n> > spilling code. Is there a better way I should be testing this code\n> > path?\n>\n> So, I was catching up on email and noticed the last email in this\n> thread.\n> \n> I think I am not fully understanding what enable_groupingsets_hash_disk\n> does. Is it only for testing?\n\nIf so, it should be in category: \"Developer Options\".\n\n> Using the tests you added to src/test/regress/sql/groupingsets.sql, I\n> did get a plan that looks like hashagg is spilling to disk (goes through\n> hashagg_spill_tuple() code path and has number of batches reported in\n> Explain) in a MixedAgg plan for a grouping sets query even with\n> enable_groupingsets_hash_disk set to false.\n\n> I'm not sure if this is more what you were looking for--or maybe I am\n> misunderstanding the guc.\n\nThe behavior of the GUC is inconsistent with the other GUCs, which is\nconfusing. See also Robert's comments in this thread.\nhttps://www.postgresql.org/message-id/20200407223900.GT2228%40telsasoft.com\n\nThe old (pre-13) behavior was:\n - work_mem is the amount of RAM to which each query node tries to constrain\n   itself, and the planner will reject a plan if it's expected to exceed that.\n   ...But a chosen plan might exceed work_mem anyway.\n\nThe new behavior in v13 seems to be:\n - HashAgg now respects work_mem, but instead enable*hash_disk are\n   opportunistic. 
A node which is *expected* to spill to disk will be\n   rejected.\n   ...But at execution time, a node which exceeds work_mem will be spilled.\n\nIf someone sees a plan which spills to disk and wants to improve performance by\navoiding spilling, they might SET enable_hashagg_disk=off, which might do what\nthey want (if the plan is rejected at plan time), or it might not, which I\nthink will be a surprise every time.\n\nIf someone agrees, I suggest adding this as an Open Item.\n\nMaybe some combination of these would be an improvement:\n\n - change documentation to emphasize behavior;\n - change EXPLAIN output to make it obvious this isn't misbehaving;\n - rename the GUC to not start with enable_* (work_mem_exceed?)\n - rename the GUC *values* to something other than on/off. On/Planner?\n - change the GUC to behave like it sounds like it should, which means \"off\"\n   would allow the pre-13 behavior of exceeding work_mem.\n - Maybe make it ternary, like:\n   exceed_work_mem: {spill_disk, planner_reject, allow}\n\n-- \nJustin\n\n\n", "msg_date": "Tue, 9 Jun 2020 21:15:44 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: Default setting for enable_hashagg_disk" }, { "msg_contents": "On Tue, Jun 9, 2020 at 7:15 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n\n> On Tue, Jun 09, 2020 at 06:20:13PM -0700, Melanie Plageman wrote:\n> > On Thu, Apr 9, 2020 at 1:02 PM Jeff Davis <pgsql@j-davis.com> wrote:\n> >\n> > > 2. enable_groupingsets_hash_disk (default false):\n> > >\n> > > This is about how we choose which grouping sets to hash and which to\n> > > sort when generating mixed mode paths.\n> > >\n> > > Even before this patch, there are quite a few paths that could be\n> > > generated. 
It tries to estimate the size of each grouping set's hash\n> > > table, and then see how many it can fit in work_mem (knapsack), while\n> > > also taking advantage of any path keys, etc.\n> > >\n> > > With Disk-based Hash Aggregation, in principle we can generate paths\n> > > representing any combination of hashing and sorting for the grouping\n> > > sets. But that would be overkill (and grow to a huge number of paths if\n> > > we have more than a handful of grouping sets). So I think the existing\n> > > planner logic for grouping sets is fine for now. We might come up with\n> > > a better approach later.\n> > >\n> > > But that created a testing problem, because if the planner estimates\n> > > correctly, no hashed grouping sets will spill, and the spilling code\n> > > won't be exercised. This GUC makes the planner disregard which grouping\n> > > sets' hash tables will fit, making it much easier to exercise the\n> > > spilling code. Is there a better way I should be testing this code\n> > > path?\n> >\n> > So, I was catching up on email and noticed the last email in this\n> > thread.\n> >\n> > I think I am not fully understanding what enable_groupingsets_hash_disk\n> > does. Is it only for testing?\n>\n> If so, it should be in category: \"Developer Options\".\n>\n> > Using the tests you added to src/test/regress/sql/groupingsets.sql, I\n> > did get a plan that looks like hashagg is spilling to disk (goes through\n> > hashagg_spill_tuple() code path and has number of batches reported in\n> > Explain) in a MixedAgg plan for a grouping sets query even with\n> > enable_groupingsets_hash_disk set to false.\n>\n> > I'm not sure if this is more what you were looking for--or maybe I am\n> > misunderstanding the guc.\n>\n> The behavior of the GUC is inconsistent with the other GUCs, which is\n> confusing. 
See also Robert's comments in this thread.\n> https://www.postgresql.org/message-id/20200407223900.GT2228%40telsasoft.com\n>\n> The old (pre-13) behavior was:\n> - work_mem is the amount of RAM to which each query node tries to\n> constrain\n> itself, and the planner will reject a plan if it's expected to exceed\n> that.\n> ...But a chosen plan might exceed work_mem anyway.\n>\n> The new behavior in v13 seems to be:\n> - HashAgg now respects work_mem, but instead enable*hash_disk are\n> opportunistic. A node which is *expected* to spill to disk will be\n> rejected.\n> ...But at execution time, a node which exceeds work_mem will be spilled.\n>\n> If someone sees a plan which spills to disk and wants to improve\n> performance by\n> avoiding spilling, they might SET enable_hashagg_disk=off, which might do what\n> they want (if the plan is rejected at plan time), or it might not, which I\n> think will be a surprise every time.\n>\n>\nBut I thought that the enable_groupingsets_hash_disk GUC allows us to\ntest the following scenario:\n\nThe following is true:\n- planner thinks grouping sets' hashtables would fit in memory\n (spilling is *not* expected)\n- user is okay with spilling\n- some grouping keys happen to be sortable and some hashable\n\nThe following happens:\n- Planner generates some HashAgg grouping sets paths\n- A MixedAgg plan is created\n- During execution of the MixedAgg plan, one or more grouping sets'\n hashtables would exceed work_mem, so the executor spills those tuples\n to disk instead of exceeding work_mem\n\nEspecially given the code and comment:\n /*\n * If we have sortable columns to work with (gd->rollups is non-empty)\n * and enable_groupingsets_hash_disk is disabled, don't generate\n * hash-based paths that will exceed work_mem.\n */\n if (!enable_groupingsets_hash_disk &&\n hashsize > work_mem * 1024L && gd->rollups)\n return; /* nope, won't fit */\n\nIf this is the scenario that the GUC is designed to test, it seems like\nyou could 
exercise it without the enable_groupingsets_hash_disk GUC by\nlying about the stats, no?\n\n-- \nMelanie Plageman\n\n", "msg_date": "Wed, 10 Jun 2020 09:40:59 -0700", "msg_from": "Melanie Plageman <melanieplageman@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Default setting for enable_hashagg_disk" }, { "msg_contents": "On Tue, 2020-06-09 at 18:20 -0700, Melanie Plageman wrote:\n> So, I was catching up on email and noticed the last email in this\n> thread.\n> \n> I think I am not fully understanding what\n> enable_groupingsets_hash_disk\n> does. Is it only for testing?\n\nIt's mostly for testing. I could imagine cases where it would be useful\nto force groupingsets to use the disk, but I mainly wanted the setting\nthere for testing the grouping sets hash disk code path.\n\n> Using the tests you added to src/test/regress/sql/groupingsets.sql, I\n> did get a plan that looks like hashagg is spilling to disk (goes\n> through\n\nI had something that worked as a test for a while, but then when I\ntweaked the costing, it started using the Sort path (therefore not\ntesting my grouping sets hash disk code at all) and a bug crept in. So\nI thought it would be best to have a more forceful knob.\n\nPerhaps I should just get rid of that GUC and use the stats trick?\n\nRegards,\n\tJeff Davis\n\n\n\n\n", "msg_date": "Wed, 10 Jun 2020 10:39:08 -0700", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": true, "msg_subject": "Re: Default setting for enable_hashagg_disk" }, { "msg_contents": "On Tue, 2020-06-09 at 21:15 -0500, Justin Pryzby wrote:\n> The behavior of the GUC is inconsistent with the other GUCs, which is\n> confusing. 
See also Robert's comments in this thread.\n> \nhttps://www.postgresql.org/message-id/20200407223900.GT2228%40telsasoft.com\n\nenable_* GUCs are planner GUCs, so it would be confusing to me if they\naffected execution-time behavior.\n\nI think the point of confusion is that it's not enabling/disabling an\nentire execution node; it only \"disables\" HashAgg if it thinks it will\nspill. I agree that is a difference with the other GUCs, and could\ncause confusion.\n\nStepping back, I was trying to solve two problems with these GUCs:\n\n1. Testing the spilling of hashed grouping sets: I'm inclined to just\nget rid of enable_groupingsets_hash_disk and use Melanie's stats-\nhacking approach instead.\n\n2. Trying to provide an escape hatch for someone who experiences a\nperformance regression and wants something like the old behavior back.\nThere are two aspects of the old behavior that a user could potentially\nwant back:\n a. Don't choose HashAgg if it's expected to have more groups than fit\ninto a work_mem-sized hashtable.\n b. If executing HashAgg, and the hash table exceeds work_mem, just\nkeep going.\n\nThe behavior in v13 master is, by default, analogous to Sort or\nanything else that adapts at runtime to spill. If we had spillable\nHashAgg the whole time, we wouldn't be worried about #2 at all. But,\nout of conservatism, I am trying to accommodate users who want an\nescape hatch, at least for a release or two until users feel more\ncomfortable with disk-based HashAgg.\n\nSetting enable_hash_disk=false implements 2(a). This name apparently\ncauses confusion, but it's hard to come up with a better one because\nthe v12 behavior has nuance that's hard to express succinctly. I don't\nthink the names you suggested quite fit, but the idea to use a more\ninteresting GUC value might help express the behavior. Perhaps making\nenable_hashagg a ternary \"enable_hashagg=on|off|avoid_disk\"? 
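A sketch of how such a ternary setting might map to planner behavior (hypothetical semantics based on the discussion above, not a committed design):

```python
# Hypothetical decision table for a ternary enable_hashagg setting.
# "avoid_disk" generates a HashAgg path only when it is expected to
# fit in work_mem (it could still spill at runtime if the estimate
# turns out wrong); a sketch, not PostgreSQL's actual planner logic.

def consider_hashagg(setting: str, expected_fits_work_mem: bool) -> bool:
    """Should the planner generate a HashAgg path at all?"""
    if setting == "off":
        return False
    if setting == "avoid_disk":
        return expected_fits_work_mem
    return True  # "on": always consider HashAgg; costing decides

print(consider_hashagg("avoid_disk", expected_fits_work_mem=False))  # False
print(consider_hashagg("on", expected_fits_work_mem=False))          # True
```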
The word\n\"reject\" is too definite for the planner, which is working with\nimperfect information.\n\nIn master, there is no explicit way to get 2(b), but you can just set\nwork_mem higher in a lot of cases. If enough people want 2(b), I can\nadd it easily. Perhaps hashagg_overflow=on|off, which would control\nexecution time behavior?\n\nRegards,\n\tJeff Davis\n\n\n\n\n", "msg_date": "Wed, 10 Jun 2020 11:39:02 -0700", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": true, "msg_subject": "Re: Default setting for enable_hashagg_disk" }, { "msg_contents": "On Tue, 2020-04-07 at 11:20 -0700, Jeff Davis wrote:\n> Now that we have Disk-based Hash Aggregation, there are a lot more\n> situations where the planner can choose HashAgg. The\n> enable_hashagg_disk GUC, if set to true, chooses HashAgg based on\n> costing. If false, it only generates a HashAgg path if it thinks it\n> will fit in work_mem, similar to the old behavior (though it will now\n> spill to disk if the planner was wrong about it fitting in work_mem).\n> The current default is true.\n> \n> I expect this to be a win in a lot of cases, obviously. But as with\n> any\n> planner change, it will be wrong sometimes. We may want to be\n> conservative and set the default to false depending on the experience\n> during beta. I'm inclined to leave it as true for now though, because\n> that will give us better information upon which to base any decision.\n\nA compromise may be to multiply the disk costs for HashAgg by, e.g. a\n1.5 - 2X penalty. 
That would make the plan changes less abrupt, and may\nmitigate some of the concerns about I/O patterns that Tomas raised\nhere:\n\n\nhttps://www.postgresql.org/message-id/20200519151202.u2p2gpiawoaznsv2@development\n\nThe issues were improved a lot, but it will take us a while to really\ntune the IO behavior as well as Sort.\n\nRegards,\n\tJeff Davis\n\n\n\n\n", "msg_date": "Wed, 10 Jun 2020 11:52:22 -0700", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": true, "msg_subject": "Re: Default setting for enable_hashagg_disk" }, { "msg_contents": "On Wed, Jun 10, 2020 at 10:39 AM Jeff Davis <pgsql@j-davis.com> wrote:\n\n> On Tue, 2020-06-09 at 18:20 -0700, Melanie Plageman wrote:\n> > So, I was catching up on email and noticed the last email in this\n> > thread.\n> >\n> > I think I am not fully understanding what\n> > enable_groupingsets_hash_disk\n> > does. Is it only for testing?\n>\n> It's mostly for testing. I could imagine cases where it would be useful\n> to force groupingsets to use the disk, but I mainly wanted the setting\n> there for testing the grouping sets hash disk code path.\n>\n> > Using the tests you added to src/test/regress/sql/groupingsets.sql, I\n> > did get a plan that looks like hashagg is spilling to disk (goes\n> > through\n>\n> I had something that worked as a test for a while, but then when I\n> tweaked the costing, it started using the Sort path (therefore not\n> testing my grouping sets hash disk code at all) and a bug crept in. So\n> I thought it would be best to have a more forceful knob.\n>\n> Perhaps I should just get rid of that GUC and use the stats trick?\n>\n>\nI like the idea of doing the stats trick. 
For extra security, you could\nthrow in that other trick that is used in groupingsets.sql and make some\nof the grouping columns unhashable and some unsortable so you know that\nyou will not pick only the Sort Path and do just a GroupAgg.\n\nThis slight modification of my previous example will probably yield\nconsistent results:\n\nset enable_hashagg_disk = true;\nSET enable_groupingsets_hash_disk = false;\nSET work_mem='64kB';\nSET enable_hashagg = true;\ndrop table if exists gs_hash_1;\ncreate table gs_hash_1 as\n select g%1000 as g1000, g%100 as g100, g%10 as g10, g,\n g::text::xid as g_unsortable, g::bit(4) as g_unhashable\n from generate_series(0,199999) g;\nanalyze gs_hash_1;\n\nalter table gs_hash_1 set (autovacuum_enabled = 'false');\nupdate pg_class set reltuples = 10 where relname = 'gs_hash_1';\n\nexplain (analyze, costs off, timing off)\nselect g1000, g100, g10\n from gs_hash_1\n group by grouping sets ((g1000,g100), (g10, g_unhashable), (g100,\ng_unsortable));\n\n QUERY PLAN\n----------------------------------------------------------------\n MixedAggregate (actual rows=201080 loops=1)\n Hash Key: g100, g_unsortable\n Group Key: g1000, g100\n Sort Key: g10, g_unhashable\n Group Key: g10, g_unhashable\n Peak Memory Usage: 109 kB\n Disk Usage: 13504 kB\n HashAgg Batches: 10111\n -> Sort (actual rows=200000 loops=1)\n Sort Key: g1000, g100\n Sort Method: external merge Disk: 9856kB\n -> Seq Scan on gs_hash_1 (actual rows=200000 loops=1)\n\nWhile we are on the topic of the tests, I was wondering if you had\nconsidered making a user defined type that had a lot of padding so that\nthe tests could use fewer rows. 
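The intuition can be sketched with a toy model (invented entry sizes, not the executor's actual memory accounting): with a fixed memory cap, fatter per-group entries hit the spill threshold with far fewer distinct keys.

```python
# Toy model: how many distinct grouping keys fit in an aggregate hash
# table before a fixed memory cap forces spilling. Entry sizes are
# invented; not PostgreSQL's actual memory accounting.

def rows_until_spill(entry_bytes: int, mem_limit_bytes: int) -> int:
    """Distinct keys that fit before the cap would be exceeded."""
    return mem_limit_bytes // entry_bytes

# With a 64kB work_mem-like cap, narrow ~40-byte entries need well over
# a thousand distinct keys before spilling starts, while ~4kB padded
# entries spill after only a handful of rows.
narrow = rows_until_spill(entry_bytes=40, mem_limit_bytes=64 * 1024)
padded = rows_until_spill(entry_bytes=4096, mem_limit_bytes=64 * 1024)
print(narrow, padded)  # 1638 16
```

So a padded type could let a regression test provoke spilling with a few dozen rows instead of hundreds of thousands.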
I did this for adaptive hashjoin and it\nhelped me with iteration time.\nI don't know if that would still be the kind of test you are looking for\nsince a user probably wouldn't have a couple hundred really fat\nuntoasted tuples, but, I just thought I would check if that would be\nuseful.\n\n-- \nMelanie Plageman\n\n", "msg_date": "Wed, 10 Jun 2020 17:48:17 -0700", "msg_from": "Melanie Plageman <melanieplageman@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Default setting for enable_hashagg_disk" }, { "msg_contents": "On Wed, 2020-06-10 at 11:39 -0700, Jeff Davis wrote:\n> 1. Testing the spilling of hashed grouping sets: I'm inclined to just\n> get rid of enable_groupingsets_hash_disk and use Melanie's stats-\n> hacking approach instead.\n\nFixed in 92c58fd9.\n\n> think the names you suggested quite fit, but the idea to use a more\n> interesting GUC value might help express the behavior. Perhaps making\n> enable_hashagg a ternary \"enable_hashagg=on|off|avoid_disk\"? The word\n> \"reject\" is too definite for the planner, which is working with\n> imperfect information.\n\nI renamed enable_hashagg_disk to hashagg_avoid_disk_plan, which I think\nsatisfies the concerns raised here. Also in 92c58fd9.\n\nThere is still the original topic of this thread, which is whether we\nneed to change the default value of this GUC, or penalize disk-based\nHashAgg in some way, to be more conservative about plan changes in v13.\nI think we can wait a little longer to make a decision there.\n\nRegards,\n\tJeff Davis\n\n\n\n\n", "msg_date": "Thu, 11 Jun 2020 13:22:57 -0700", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": true, "msg_subject": "Re: Default setting for enable_hashagg_disk" }, { "msg_contents": "On Thu, Jun 11, 2020 at 01:22:57PM -0700, Jeff Davis wrote:\n> On Wed, 2020-06-10 at 11:39 -0700, Jeff Davis wrote:\n> > 1. 
Testing the spilling of hashed grouping sets: I'm inclined to just\n> > get rid of enable_groupingsets_hash_disk and use Melanie's stats-\n> > hacking approach instead.\n> \n> Fixed in 92c58fd9.\n> \n> > think the names you suggested quite fit, but the idea to use a more\n> > interesting GUC value might help express the behavior. Perhaps making\n> > enable_hashagg a ternary \"enable_hashagg=on|off|avoid_disk\"? The word\n> > \"reject\" is too definite for the planner, which is working with\n> > imperfect information.\n> \n> I renamed enable_hashagg_disk to hashagg_avoid_disk_plan, which I think\n> satisfies the concerns raised here. Also in 92c58fd9.\n\nThanks for considering :)\n\nI saw you updated the Open Items page, but put the items into \"Older Bugs /\nFixed\".\n\nI moved them underneath \"Resolved\" since they're all new in v13.\nhttps://wiki.postgresql.org/index.php?title=PostgreSQL_13_Open_Items&diff=34995&oldid=34994\n\n-- \nJustin\n\n\n", "msg_date": "Thu, 11 Jun 2020 21:45:43 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: Default setting for enable_hashagg_disk" }, { "msg_contents": "On Thu, 9 Apr 2020 at 13:24, Justin Pryzby <pryzby@telsasoft.com> wrote:\n\n> On Thu, Apr 09, 2020 at 01:48:55PM +0200, Tomas Vondra wrote:\n>\n> > It it really any different from our enable_* GUCs? Even if you do e.g.\n> > enable_sort=off, we may still do a sort. Same for enable_groupagg etc.\n>\n> Those show that the GUC was disabled by showing disable_cost. That's\n> what's\n> different about this one.\n>\n\nFwiw in the past this was seen not so much as a positive thing but a bug to\nbe fixed. 
We've talked about carrying a boolean \"disabled plan\" flag which\nwould be treated as a large cost penalty but not actually be added to the\ncost in the plan.\n\nThe problems with the disable_cost in the cost are (at least):\n\n1) It causes the resulting costs to be useless for comparing the plan costs\nwith other plans.\n\n2) It can cause other planning decisions to be distorted in strange\nnon-linear ways.\n\n\n-- \ngreg", "msg_date": "Thu, 11 Jun 2020 23:35:19 -0400", "msg_from": "Greg Stark <stark@mit.edu>", "msg_from_op": false, "msg_subject": "Re: Default setting for enable_hashagg_disk" }, { "msg_contents": "On Thu, Jun 11, 2020 at 01:22:57PM -0700, Jeff Davis wrote:\n> > think the names you suggested quite fit, but the idea to use a more\n> > interesting GUC value might help express the behavior. Perhaps making\n> > enable_hashagg a ternary \"enable_hashagg=on|off|avoid_disk\"? 
The word\n> > \"reject\" is too definite for the planner, which is working with\n> > imperfect information.\n> \n> I renamed enable_hashagg_disk to hashagg_avoid_disk_plan, which I think\n> satisfies the concerns raised here. Also in 92c58fd9.\n\nI think this should be re-arranged to be in alphabetical order\nhttps://www.postgresql.org/docs/devel/runtime-config-query.html\n\n-- \nJustin\n\n\n", "msg_date": "Sat, 20 Jun 2020 17:04:02 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: Default setting for enable_hashagg_disk" }, { "msg_contents": "On Wed, Jun 10, 2020 at 2:39 PM Jeff Davis <pgsql@j-davis.com> wrote:\n> The behavior in v13 master is, by default, analagous to Sort or\n> anything else that adapts at runtime to spill. If we had spillable\n> HashAgg the whole time, we wouldn't be worried about #2 at all. But,\n> out of conservatism, I am trying to accommodate users who want an\n> escape hatch, at least for a release or two until users feel more\n> comfortable with disk-based HashAgg.\n>\n> Setting enable_hash_disk=false implements 2(a). This name apparently\n> causes confusion, but it's hard to come up with a better one because\n> the v12 behavior has nuance that's hard to express succinctly. I don't\n> think the names you suggested quite fit, but the idea to use a more\n> interesting GUC value might help express the behavior. Perhaps making\n> enable_hashagg a ternary \"enable_hashagg=on|off|avoid_disk\"? The word\n> \"reject\" is too definite for the planner, which is working with\n> imperfect information.\n>\n> In master, there is no explicit way to get 2(b), but you can just set\n> work_mem higher in a lot of cases. If enough people want 2(b), I can\n> add it easily. Perhaps hashagg_overflow=on|off, which would control\n> execution time behavior?\n\nPlanner GUCs are a pretty blunt instrument for solving problems that\nusers may have with planner features. 
There's no guarantee that the\nexperience a user has with one query will be the same as the\nexperience they have with another query, or even that you couldn't\nhave a single query which contains two different nodes where the\noptimal behavior is different for one than it is for the other. In the\nfirst case, changing the value of the GUC on a per-query basis is\npretty painful; in the second case, even that is not good enough. So,\nas Tom has said before, the only really good choice in a case like\nthis is for the planner to figure out the right things automatically;\nanything that boils down to a user-provided hint pretty well sucks.\n\nSo I feel like the really important thing here is to fix the cases\nthat don't come out well with default settings. If we can't do that,\nthen the feature is half-baked and maybe should not have been\ncommitted in the first place. If we can, then we don't really need the\nGUC, let alone multiple GUCs. I understand that some of the reason you\nadded these was out of paranoia, and I get that: it's hard to be sure\nthat any feature of this complexity isn't going to have some rough\npatches, especially given how defective work_mem is as a model in\ngeneral. Still, we don't want to end up with 50 planner GUCs enabling\nand disabling individual bits of various planner nodes, or at least I\ndon't think we do, so I'm very skeptical of the idea that we need 2\njust for this feature. That doesn't feel scalable. I think the right\nnumber is 0 or 1, and if it's 1, very few people should be changing\nthe default. 
If anything else is the case, then IMHO the feature isn't\nready to ship.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Mon, 22 Jun 2020 10:52:37 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Default setting for enable_hashagg_disk" }, { "msg_contents": "On Mon, Jun 22, 2020 at 10:52:37AM -0400, Robert Haas wrote:\n> On Wed, Jun 10, 2020 at 2:39 PM Jeff Davis <pgsql@j-davis.com> wrote:\n> > The behavior in v13 master is, by default, analagous to Sort or\n> > anything else that adapts at runtime to spill. If we had spillable\n> > HashAgg the whole time, we wouldn't be worried about #2 at all. But,\n> > out of conservatism, I am trying to accommodate users who want an\n> > escape hatch, at least for a release or two until users feel more\n> > comfortable with disk-based HashAgg.\n> >\n> > Setting enable_hash_disk=false implements 2(a). This name apparently\n> > causes confusion, but it's hard to come up with a better one because\n> > the v12 behavior has nuance that's hard to express succinctly. I don't\n> > think the names you suggested quite fit, but the idea to use a more\n> > interesting GUC value might help express the behavior. Perhaps making\n> > enable_hashagg a ternary \"enable_hashagg=on|off|avoid_disk\"? The word\n> > \"reject\" is too definite for the planner, which is working with\n> > imperfect information.\n> >\n> > In master, there is no explicit way to get 2(b), but you can just set\n> > work_mem higher in a lot of cases. If enough people want 2(b), I can\n> > add it easily. Perhaps hashagg_overflow=on|off, which would control\n> > execution time behavior?\n> \n> don't think we do, so I'm very skeptical of the idea that we need 2\n> just for this feature. That doesn't feel scalable. I think the right\n> number is 0 or 1, and if it's 1, very few people should be changing\n> the default. 
If anything else is the case, then IMHO the feature isn't\n> ready to ship.\n\nThis was addressed in 92c58fd94801dd5c81ee20e26c5bb71ad64552a8\nhttps://wiki.postgresql.org/index.php?title=PostgreSQL_13_Open_Items&diff=34994&oldid=34993\n\n-- \nJustin\n\n\n", "msg_date": "Mon, 22 Jun 2020 10:06:30 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: Default setting for enable_hashagg_disk" }, { "msg_contents": "On Mon, Jun 22, 2020 at 11:06 AM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> This was addressed in 92c58fd94801dd5c81ee20e26c5bb71ad64552a8\n> https://wiki.postgresql.org/index.php?title=PostgreSQL_13_Open_Items&diff=34994&oldid=34993\n\nI mean, that's fine, but I am trying to make a more general point\nabout priorities. Getting the GUCs right is a lot less important than\ngetting the feature right.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Mon, 22 Jun 2020 11:17:22 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Default setting for enable_hashagg_disk" }, { "msg_contents": "On Mon, 2020-06-22 at 10:52 -0400, Robert Haas wrote:\n> So I feel like the really important thing here is to fix the cases\n> that don't come out well with default settings.\n\n...with the caveat that perfection is not something to expect from our\nplanner.\n\n> If we can't do that,\n> then the feature is half-baked and maybe should not have been\n> committed in the first place.\n\nHashAgg started out half-baked at the dawn of time, and stayed that way\nthrough version 12. Disk-based HashAgg was designed to fix it.\n\nOther major planner features generally offer a way to turn them off\n(e.g. parallelism, JIT), and we don't call those half-baked.\n\nI agree that the single GUC added in v13 (hashagg_avoid_disk_plan) is\nweird because it's half of a disable switch. 
But it's not weird because\nof my changes in v13; it's weird because the planner behavior in v12\nwas weird. I hope not many people need to set it, and I hope we can\nremove it soon.\n\nIf you think we will never be able to remove the GUC, then we should\nthink a little harder about whether we really need it. I am open to\nthat discussion, but I don't think the presence of this GUC implies\nthat disk-based hashagg is half-baked.\n\nRegards,\n\tJeff Davis\n\n\n\n\n", "msg_date": "Mon, 22 Jun 2020 10:30:43 -0700", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": true, "msg_subject": "Re: Default setting for enable_hashagg_disk" }, { "msg_contents": "On Mon, 2020-06-22 at 11:17 -0400, Robert Haas wrote:\n> I mean, that's fine, but I am trying to make a more general point\n> about priorities. Getting the GUCs right is a lot less important than\n> getting the feature right.\n\nWhat about the feature you are worried that we're getting wrong?\n\nRegards,\n\tJeff Davis\n\n\n\n\n", "msg_date": "Mon, 22 Jun 2020 10:44:47 -0700", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": true, "msg_subject": "Re: Default setting for enable_hashagg_disk" }, { "msg_contents": "On Mon, Jun 22, 2020 at 1:30 PM Jeff Davis <pgsql@j-davis.com> wrote:\n> On Mon, 2020-06-22 at 10:52 -0400, Robert Haas wrote:\n> > So I feel like the really important thing here is to fix the cases\n> > that don't come out well with default settings.\n>\n> ...with the caveat that perfection is not something to expect from our\n> planner.\n\n+1.\n\n> > If we can't do that,\n> > then the feature is half-baked and maybe should not have been\n> > committed in the first place.\n>\n> HashAgg started out half-baked at the dawn of time, and stayed that way\n> through version 12. Disk-based HashAgg was designed to fix it.\n>\n> Other major planner features generally offer a way to turn them off\n> (e.g. 
parallelism, JIT), and we don't call those half-baked.\n\nSure, and I'm not calling this half-baked either, but there is a\ndifference. JIT and parallelism are discrete features to a far greater\nextent than this is. I think we can explain to people the pros and\ncons of those things and ask them to make an intelligent choice about\nwhether they want them. You can say things like \"well, JIT is liable\nto make your queries run faster once they get going, but it adds to\nthe startup time and creates a dependency on LLVM\" and the user can\ndecide whether they want that or not. At least to me, something like\nthis isn't so easy to consider as a separate feature. As you say:\n\n> I agree that the single GUC added in v13 (hashagg_avoid_disk_plan) is\n> weird because it's half of a disable switch. But it's not weird because\n> of my changes in v13; it's weird because the planner behavior in v12\n> was weird. I hope not many people need to set it, and I hope we can\n> remove it soon.\n\nThe weirdness is the problem here, at least for me. Generally, I don't\nlike GUCs of the form give_me_the_old_strange_behavior=true, because\neither they tend to be either unnecessary (because nobody wants the\nold strange behavior) or hard to eliminate (because the new behavior\nis also strange and is not categorically better).\n\n> If you think we will never be able to remove the GUC, then we should\n> think a little harder about whether we really need it. I am open to\n> that discussion, but I don't think the presence of this GUC implies\n> that disk-based hashagg is half-baked.\n\nI don't think it necessarily implies that either. I do however have\nsome concerns about people using the GUC as a crutch. I am slightly\nworried that this is going to have hard-to-fix problems and that we'll\nbe stuck with the GUC for that reason. Now if that is the case, is\nremoving the GUC any better? Maybe not. 
These decisions are hard, and\nI am not trying to pretend like I have all the answers.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Mon, 22 Jun 2020 15:28:14 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Default setting for enable_hashagg_disk" }, { "msg_contents": "On Mon, 2020-06-22 at 15:28 -0400, Robert Haas wrote:\n> The weirdness is the problem here, at least for me. Generally, I\n> don't\n> like GUCs of the form give_me_the_old_strange_behavior=true\n\nI agree with all of that in general.\n\n> I don't think it necessarily implies that either. I do however have\n> some concerns about people using the GUC as a crutch.\n\nAnother way of looking at it is that the weird behavior is already\nthere in v12, so there are already users relying on this weird behavior\nas a crutch for some other planner mistake. The question is whether we\nwant to:\n\n(a) take the weird behavior away now as a consequence of implementing\ndisk-based HashAgg; or\n(b) support the weird behavior forever; or\n(c) introduce a GUC now to help transition away from the weird behavior\n\nThe danger with (c) is that it gives users more time to become more\nreliant on the weird behavior; and worse, a GUC could be seen as an\nendorsement of the weird behavior rather than a path to eliminating it.\nSo we could intend to do (c) and end up with (b). We can mitigate this\nwith documentation warnings, perhaps.\n\n> I am slightly\n> worried that this is going to have hard-to-fix problems and that\n> we'll\n> be stuck with the GUC for that reason. \n\nWithout the GUC, it's basically a normal cost-based decision, with all\nof the good and bad that comes with that.\n\n> Now if that is the case, is\n> removing the GUC any better? Maybe not. 
These decisions are hard, and\n> I am not trying to pretend like I have all the answers.\n\nI agree that there is no easy answer.\n\nMy philosophy here is: if a user does experience a plan regression due\nto my change, would it be reasonable to tell them that we don't have\nany escape hatch or transition period at all? That would be a tough\nsell for such a common plan type.\n\nRegards,\n\tJeff Davis\n\n\n\n\n", "msg_date": "Mon, 22 Jun 2020 13:23:58 -0700", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": true, "msg_subject": "Re: Default setting for enable_hashagg_disk" }, { "msg_contents": "On Tue, 23 Jun 2020 at 08:24, Jeff Davis <pgsql@j-davis.com> wrote:\n> Another way of looking at it is that the weird behavior is already\n> there in v12, so there are already users relying on this weird behavior\n> as a crutch for some other planner mistake. The question is whether we\n> want to:\n>\n> (a) take the weird behavior away now as a consequence of implementing\n> disk-based HashAgg; or\n> (b) support the weird behavior forever; or\n> (c) introduce a GUC now to help transition away from the weird behavior\n>\n> The danger with (c) is that it gives users more time to become more\n> reliant on the weird behavior; and worse, a GUC could be seen as an\n> endorsement of the weird behavior rather than a path to eliminating it.\n> So we could intend to do (c) and end up with (b). We can mitigate this\n> with documentation warnings, perhaps.\n\nSo, I have a few thoughts on this subject. I understand both problem\ncases have been mentioned before on this thread, but just to reiterate\nthe two problem cases that we really would rather people didn't hit.\n\nThey are:\n\n1. Statistics underestimation can cause hashagg to be selected. The\nexecutor will spill to disk in PG13. Users may find performance\nsuffers as previously the query may have just overshot work_mem\nwithout causing any OOM issues. Their I/O performance might be\nterrible.\n2. 
We might now choose to hash aggregate where pre PG13, we didn't\nchoose that because the hash table was estimated to be bigger than\nwork_mem. Hash agg might not be the best plan for the job.\n\nFor #1. We know users are forced to run smaller work_mems than they\nmight like as they need to allow for that random moment where all\nbackends happen to be doing that 5-way hash join all at the same time.\nIt seems reasonable that someone might want the old behaviour. They\nmay well be sitting on a timebomb that's about to OOM, but it would be\nsad if someone's upgrade to PG13 was blocked on this, especially if\nit's just due to some query that runs once per month but needs to\nperform quickly.\n\nFor #2. This seems like a very legitimate requirement to me. If a\nuser is unhappy that PG13 now hashaggs where before it sorted and\ngroup aggregated, but they're unhappy, not because there's some issue\nwith hashagg spilling, but because that causes the node above the agg\nto becomes a Hash Join rather than a Merge Join and that's bad for\nsome existing reason. Our planner doing the wrong thing based on\neither; lack of, inaccurate or out-of-date statistics is not Jeff's\nfault. Having the ability to switch off a certain planner feature is\njust following along with what we do today for many other node types.\n\nAs for GUCs to try to help the group of users who, *I'm certain*, will\nhave problems with PG13's plan choice. I think the overloaded\nenable_hashagg option is a really nice compromise. 
We don't really\nhave any other executor node type that has multiple GUCs controlling\nits behaviour, so I believe it would be nice to keep it that way.\n\nHow about:\n\nenable_hashagg = \"on\" -- enables hashagg allowing it to freely spill\nto disk as it pleases.\nenable_hashagg = \"trynospill\" -- Planner will only choose hash_agg if\nit thinks it won't spill (pre PG13 planner behaviour)\nenable_hashagg = \"neverspill\" -- executor will *never* spill to disk\nand can still OOM (NOT RECOMMENDED, but does give pre PG13 planner and\nexecutor behaviour)\nenable_hashagg = \"off\" -- planner does not consider hash agg, ever.\nSame as what PG12 did for this setting.\n\nNow, it's a bit weird to have \"neverspill\" as this is controlling\nwhat's done in the executor from a planner GUC. Likely we can just\nwork around that by having a new \"allowhashspill\" bool field in the\n\"Agg\" struct that's set by the planner, say during createplan that\ncontrols if nodeAgg.c is allowed to spill or not. That'll also allow\nPREPAREd plans to continue to do what they had planned to do already.\n\nThe thing I like about doing it this way is that:\n\na) it does not add any new GUCs\nb) it semi hides the weird values that we really wish nobody would\never have to set in a GUC that people have become used it just\nallowing the values \"on\" and \"off\".\n\nThe thing I don't quite like about this idea is:\na) I wish the planner was perfect and we didn't need to do this.\nb) It's a bit weird to overload a GUC that has a very booleanish name\nto not be bool.\n\nHowever, I also think it's pretty lightweight to support this. 
I\nimagine a dozen lines of docs and likely about half a dozen lines per\nGUC option in the planner.\n\nAnd in the future, when our planner is perfect*, we can easily just\nremove the enum values from the GUC that we no longer want to support.\n\nDavid\n\n* Yes I know that will never happen.\n\n\n", "msg_date": "Wed, 24 Jun 2020 14:11:57 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Default setting for enable_hashagg_disk" }, { "msg_contents": "On Wed, Jun 24, 2020 at 02:11:57PM +1200, David Rowley wrote:\n> On Tue, 23 Jun 2020 at 08:24, Jeff Davis <pgsql@j-davis.com> wrote:\n> > Another way of looking at it is that the weird behavior is already\n> > there in v12, so there are already users relying on this weird behavior\n> > as a crutch for some other planner mistake. The question is whether we\n> > want to:\n\nYea - \"behavior change\" is a scenario for which it's hard to anticipate well\nall the range of consequences.\n\n> How about:\n> \n> enable_hashagg = \"on\" -- enables hashagg allowing it to freely spill\n> to disk as it pleases.\n> enable_hashagg = \"trynospill\" -- Planner will only choose hash_agg if\n> it thinks it won't spill (pre PG13 planner behaviour)\n> enable_hashagg = \"neverspill\" -- executor will *never* spill to disk\n> and can still OOM (NOT RECOMMENDED, but does give pre PG13 planner and\n> executor behaviour)\n> enable_hashagg = \"off\" -- planner does not consider hash agg, ever.\n> Same as what PG12 did for this setting.\n\n+1\n\nI like that this allows the new behavior as an *option* one *can* use rather\nthan a \"behavior change\" which is imposed on users and which users then *have*\nto accomodate in postgres.\n\n-- \nJustin\n\n\n", "msg_date": "Tue, 23 Jun 2020 22:14:43 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: Default setting for enable_hashagg_disk" }, { "msg_contents": "On Wed, Jun 24, 2020 at 02:11:57PM +1200, David 
Rowley wrote:\n> On Tue, 23 Jun 2020 at 08:24, Jeff Davis <pgsql@j-davis.com> wrote:\n> > Another way of looking at it is that the weird behavior is already\n> > there in v12, so there are already users relying on this weird behavior\n> > as a crutch for some other planner mistake. The question is whether we\n> > want to:\n> >\n> > (a) take the weird behavior away now as a consequence of implementing\n> > disk-based HashAgg; or\n> > (b) support the weird behavior forever; or\n> > (c) introduce a GUC now to help transition away from the weird behavior\n> >\n> > The danger with (c) is that it gives users more time to become more\n> > reliant on the weird behavior; and worse, a GUC could be seen as an\n> > endorsement of the weird behavior rather than a path to eliminating it.\n> > So we could intend to do (c) and end up with (b). We can mitigate this\n> > with documentation warnings, perhaps.\n> \n> So, I have a few thoughts on this subject. I understand both problem\n> cases have been mentioned before on this thread, but just to reiterate\n> the two problem cases that we really would rather people didn't hit.\n\nI appreciated this summary since I wasn't fully following the issues.\n\n> As for GUCs to try to help the group of users who, *I'm certain*, will\n> have problems with PG13's plan choice. I think the overloaded\n> enable_hashagg option is a really nice compromise. We don't really\n> have any other executor node type that has multiple GUCs controlling\n> its behaviour, so I believe it would be nice to keep it that way.\n\nSo, in trying to anticipate how users will be affected by an API change,\nI try to look at similar cases where we already have this behavior, and\nhow users react to this. Looking at the available join methods, I think\nwe have one. 
We currently support:\n\n\t* nested loop with sequential scan\n\t* nested loop with index scan\n\t* hash join\n\t* merge join\n\nIt would seem merge join has almost the same complexities as the new\nhash join code, since it can spill to disk doing sorts for merge joins,\nand adjusting work_mem is the only way to control that spill to disk. I\ndon't remember anyone complaining about spills to disk during merge\njoin, so I am unclear why we would need a such control for hash join.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n The usefulness of a cup is in its emptiness, Bruce Lee\n\n\n\n", "msg_date": "Wed, 24 Jun 2020 05:06:28 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: Default setting for enable_hashagg_disk" }, { "msg_contents": "On Wed, 24 Jun 2020 at 21:06, Bruce Momjian <bruce@momjian.us> wrote:\n> I\n> don't remember anyone complaining about spills to disk during merge\n> join, so I am unclear why we would need a such control for hash join.\n\nHash aggregate, you mean? The reason is that upgrading to PG13 can\ncause a performance regression for an underestimated ndistinct on the\nGROUP BY clause and cause hash aggregate to spill to disk where it\npreviously did everything in RAM. Sure, that behaviour was never\nwhat we wanted to happen, Jeff has fixed that now, but the fact\nremains that this does happen in the real world quite often and people\noften get away with it, likey because work_mem is generally set to\nsome very conservative value. Of course, there's also a bunch of\npeople that have been bitten by OOM due to this too. The \"neverspill\"\nwouldn't be for those people. 
Certainly, it's possible that we just\ntell these people to increase work_mem for this query, that way they\ncan set it to something reasonable and still get spilling if it's\nreally needed to save them from OOM, but the problem there is that\nit's not very easy to go and set work_mem for a certain query.\n\nFWIW, I wish that I wasn't suggesting we do this, but I am because it\nseems simple enough to implement and it removes a pretty big roadblock\nthat might exist for a small subset of people wishing to upgrade to\nPG13.  It seems lightweight enough to maintain, at least until we\ninvent some better management of how many executor nodes we can have\nallocating work_mem at once.\n\nThe suggestion I made was just based on asking myself the following\nset of questions:\n\nSince Hash Aggregate has been able to overflow work_mem since day 1,\nand now that we've completely changed that fact in PG13, is that\nlikely to upset anyone? If so, should we go to the trouble of giving\nthose people a way of getting the old behaviour back? If we do want to\nhelp those people, what's the best way to make those options available\nto them in a way that we can remove the special options with the least\npain in some future version of PostgreSQL?\n\nI'd certainly be interested in hearing how other people would answer\nthose questions.\n\nDavid\n\n\n", "msg_date": "Thu, 25 Jun 2020 00:24:29 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Default setting for enable_hashagg_disk" }, { "msg_contents": "On Wed, Jun 24, 2020 at 05:06:28AM -0400, Bruce Momjian wrote:\n> On Wed, Jun 24, 2020 at 02:11:57PM +1200, David 
The question is whether we\n> > > want to:\n> > >\n> > > (a) take the weird behavior away now as a consequence of implementing\n> > > disk-based HashAgg; or\n> > > (b) support the weird behavior forever; or\n> > > (c) introduce a GUC now to help transition away from the weird behavior\n> > >\n> > > The danger with (c) is that it gives users more time to become more\n> > > reliant on the weird behavior; and worse, a GUC could be seen as an\n> > > endorsement of the weird behavior rather than a path to eliminating it.\n> > > So we could intend to do (c) and end up with (b). We can mitigate this\n> > > with documentation warnings, perhaps.\n> > \n> > So, I have a few thoughts on this subject. I understand both problem\n> > cases have been mentioned before on this thread, but just to reiterate\n> > the two problem cases that we really would rather people didn't hit.\n> \n> I appreciated this summary since I wasn't fully following the issues.\n> \n> > As for GUCs to try to help the group of users who, *I'm certain*, will\n> > have problems with PG13's plan choice. I think the overloaded\n> > enable_hashagg option is a really nice compromise. We don't really\n> > have any other executor node type that has multiple GUCs controlling\n> > its behaviour, so I believe it would be nice to keep it that way.\n...\n> It would seem merge join has almost the same complexities as the new\n> hash join code, since it can spill to disk doing sorts for merge joins,\n> and adjusting work_mem is the only way to control that spill to disk. I\n> don't remember anyone complaining about spills to disk during merge\n> join, so I am unclear why we would need a such control for hash join.\n\nIt loooks like merge join was new in 8.3. 
I don't think that's a good analogy,\nsince the old behavior was still available with enable_mergejoin=off.\n\nI think a better analogy would be if we now changed sort nodes beneath merge\njoin to use at most 0.5*work_mem, with no way of going back to using\n1.0*work_mem.\n\n-- \nJustin", "msg_date": "Wed, 24 Jun 2020 07:38:43 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: Default setting for enable_hashagg_disk" }, { "msg_contents": "On Wed, Jun 24, 2020 at 07:38:43AM -0500, Justin Pryzby wrote:\n> On Wed, Jun 24, 2020 at 05:06:28AM -0400, Bruce Momjian wrote:\n> > It would seem merge join has almost the same complexities as the new\n> > hash join code, since it can spill to disk doing sorts for merge joins,\n> > and adjusting work_mem is the only way to control that spill to disk. I\n> > don't remember anyone complaining about spills to disk during merge\n> > join, so I am unclear why we would need a such control for hash join.\n> \n> It loooks like merge join was new in 8.3. I don't think that's a good analogy,\n> since the old behavior was still available with enable_mergejoin=off.\n\nUh, we don't guarantee backward compatibility in the optimizer. You can\nturn off hashagg if you want. 
That doesn't get you to PG 13 behavior,\nbut we don't guarantee that.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n The usefulness of a cup is in its emptiness, Bruce Lee\n\n\n\n", "msg_date": "Wed, 24 Jun 2020 13:08:34 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: Default setting for enable_hashagg_disk" }, { "msg_contents": "On Thu, Jun 25, 2020 at 12:24:29AM +1200, David Rowley wrote:\n> On Wed, 24 Jun 2020 at 21:06, Bruce Momjian <bruce@momjian.us> wrote:\n> > I\n> don't remember anyone complaining about spills to disk during merge\n> join, so I am unclear why we would need a such control for hash join.\n> \n> Hash aggregate, you mean? The reason is that upgrading to PG13 can\n\nYes, sorry.\n\n> cause a performance regression for an underestimated ndistinct on the\n> GROUP BY clause and cause hash aggregate to spill to disk where it\n> previously did everything in RAM. Sure, that behaviour was never\n> what we wanted to happen, Jeff has fixed that now, but the fact\n> remains that this does happen in the real world quite often and people\n> often get away with it, likey because work_mem is generally set to\n> some very conservative value. Of course, there's also a bunch of\n> people that have been bitten by OOM due to this too. The \"neverspill\"\n> wouldn't be for those people. Certainly, it's possible that we just\n> tell these people to increase work_mem for this query, that way they\n> can set it to something reasonable and still get spilling if it's\n> really needed to save them from OOM, but the problem there is that\n> it's not very easy to go and set work_mem for a certain query.\n\nWell, my point is that merge join works that way, and no one has needed\na knob to avoid mergejoin if it is going to spill to disk. If they are\nadjusting work_mem to prevent spill of merge join, they can do the same\nfor hash agg. 
We just need to document this in the release notes.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n The usefulness of a cup is in its emptiness, Bruce Lee\n\n\n\n", "msg_date": "Wed, 24 Jun 2020 13:12:03 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: Default setting for enable_hashagg_disk" }, { "msg_contents": "Justin Pryzby <pryzby@telsasoft.com> writes:\n> On Wed, Jun 24, 2020 at 05:06:28AM -0400, Bruce Momjian wrote:\n>> It would seem merge join has almost the same complexities as the new\n>> hash join code, since it can spill to disk doing sorts for merge joins,\n>> and adjusting work_mem is the only way to control that spill to disk. I\n>> don't remember anyone complaining about spills to disk during merge\n>> join, so I am unclear why we would need a such control for hash join.\n\n> It loooks like merge join was new in 8.3. I don't think that's a good analogy,\n> since the old behavior was still available with enable_mergejoin=off.\n\nUh, what? A look into our git history shows immediately that\nnodeMergejoin.c has been there since the initial code import in 1996.\n\nI tend to agree with Bruce that it's not very obvious that we need\nanother GUC knob here ... 
especially not one as ugly as this.\nI'm especially against the \"neverspill\" option, because that makes a\nsingle GUC that affects both the planner and executor independently.\n\nIf we feel we need something to let people have the v12 behavior\nback, let's have\n(1) enable_hashagg on/off --- controls planner, same as it ever was\n(2) enable_hashagg_spill on/off --- controls executor by disabling spill\n\nBut I'm not really convinced that we need (2).\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 24 Jun 2020 13:29:56 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Default setting for enable_hashagg_disk" }, { "msg_contents": "On Wed, Jun 24, 2020 at 01:29:56PM -0400, Tom Lane wrote:\n>Justin Pryzby <pryzby@telsasoft.com> writes:\n>> On Wed, Jun 24, 2020 at 05:06:28AM -0400, Bruce Momjian wrote:\n>>> It would seem merge join has almost the same complexities as the new\n>>> hash join code, since it can spill to disk doing sorts for merge joins,\n>>> and adjusting work_mem is the only way to control that spill to disk. I\n>>> don't remember anyone complaining about spills to disk during merge\n>>> join, so I am unclear why we would need a such control for hash join.\n>\n>> It loooks like merge join was new in 8.3. I don't think that's a good analogy,\n>> since the old behavior was still available with enable_mergejoin=off.\n>\n>Uh, what? A look into our git history shows immediately that\n>nodeMergejoin.c has been there since the initial code import in 1996.\n>\n>I tend to agree with Bruce that it's not very obvious that we need\n>another GUC knob here ... 
especially not one as ugly as this.\n>I'm especially against the \"neverspill\" option, because that makes a\n>single GUC that affects both the planner and executor independently.\n>\n>If we feel we need something to let people have the v12 behavior\n>back, let's have\n>(1) enable_hashagg on/off --- controls planner, same as it ever was\n>(2) enable_hashagg_spill on/off --- controls executor by disabling spill\n>\n\nWhat if a user specifies\n\n enable_hashagg = on\n enable_hashagg_spill = off\n\nand the estimates say the hashagg would need to spill to disk. Should\nthat disable the query (in which case the second GUC affects both\nexecutor and planner) or run it (in which case we knowingly ignore\nwork_mem, which seems wrong).\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Wed, 24 Jun 2020 20:32:03 +0200", "msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Default setting for enable_hashagg_disk" }, { "msg_contents": "Tomas Vondra <tomas.vondra@2ndquadrant.com> writes:\n> On Wed, Jun 24, 2020 at 01:29:56PM -0400, Tom Lane wrote:\n>> If we feel we need something to let people have the v12 behavior\n>> back, let's have\n>> (1) enable_hashagg on/off --- controls planner, same as it ever was\n>> (2) enable_hashagg_spill on/off --- controls executor by disabling spill\n\n> What if a user specifies\n> enable_hashagg = on\n> enable_hashagg_spill = off\n\nIt would probably be reasonable for the planner to behave as it did\npre-v13, that is not choose hashagg if it estimates that work_mem\nwould be exceeded. (So, okay, that means enable_hashagg_spill\naffects both planner and executor ... 
but ISTM it's just one\nbehavior not two.)\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 24 Jun 2020 14:40:50 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Default setting for enable_hashagg_disk" }, { "msg_contents": "Hi,\n\nOn 2020-06-24 14:11:57 +1200, David Rowley wrote:\n> 1. Statistics underestimation can cause hashagg to be selected. The\n> executor will spill to disk in PG13. Users may find performance\n> suffers as previously the query may have just overshot work_mem\n> without causing any OOM issues. Their I/O performance might be\n> terrible.\n\n> 2. We might now choose to hash aggregate where pre PG13, we didn't\n> choose that because the hash table was estimated to be bigger than\n> work_mem. Hash agg might not be the best plan for the job.\n\n> For #1. We know users are forced to run smaller work_mems than they\n> might like as they need to allow for that random moment where all\n> backends happen to be doing that 5-way hash join all at the same time.\n> It seems reasonable that someone might want the old behaviour. They\n> may well be sitting on a timebomb that's about to OOM, but it would be\n> sad if someone's upgrade to PG13 was blocked on this, especially if\n> it's just due to some query that runs once per month but needs to\n> perform quickly.\n\nI'm quite concerned about this one. I think this isn't just going to hit\nwhen the planner mis-estimates ndistinct, but also when transition\nvalues use a bit more space. We'll now start spilling in cases the\n< v13 planner did everything right.\n\nThat's great for cases where we'd otherwise OOM, but for a lot of other\ncases where there actually is more than sufficient RAM to overrun\nwork_mem by a single-digit factor, it can cause a pretty massive\nincrease of IO over < v13.\n\n\nFWIW, my gut feeling is that we'll end up have to separate the\n\"execution time\" spilling from using plain work mem, because it'll\ntrigger spilling too often. E.g. 
if the plan isn't expected to spill,\nonly spill at 10 x work_mem or something like that. Or we'll need\nbetter management of temp file data when there's plenty memory\navailable.\n\n\n> For #2. This seems like a very legitimate requirement to me. If a\n> user is unhappy that PG13 now hashaggs where before it sorted and\n> group aggregated, but they're unhappy, not because there's some issue\n> with hashagg spilling, but because that causes the node above the agg\n> to becomes a Hash Join rather than a Merge Join and that's bad for\n> some existing reason. Our planner doing the wrong thing based on\n> either; lack of, inaccurate or out-of-date statistics is not Jeff's\n> fault. Having the ability to switch off a certain planner feature is\n> just following along with what we do today for many other node types.\n\nThis one concerns me a bit less, fwiw. There's a lot more \"pressure\" in\nthe planner to choose hash agg or sorted agg, compared to e.g. a bunch\nof aggregate states taking up a bit more space (can't estimate that at\nall for ma.\n\n\n> As for GUCs to try to help the group of users who, *I'm certain*, will\n> have problems with PG13's plan choice. I think the overloaded\n> enable_hashagg option is a really nice compromise. 
We don't really\n> have any other executor node type that has multiple GUCs controlling\n> its behaviour, so I believe it would be nice to keep it that way.\n> \n> How about:\n> \n> enable_hashagg = \"on\" -- enables hashagg allowing it to freely spill\n> to disk as it pleases.\n> enable_hashagg = \"trynospill\" -- Planner will only choose hash_agg if\n> it thinks it won't spill (pre PG13 planner behaviour)\n> enable_hashagg = \"neverspill\" -- executor will *never* spill to disk\n> and can still OOM (NOT RECOMMENDED, but does give pre PG13 planner and\n> executor behaviour)\n> enable_hashagg = \"off\" -- planner does not consider hash agg, ever.\n> Same as what PG12 did for this setting.\n> \n> Now, it's a bit weird to have \"neverspill\" as this is controlling\n> what's done in the executor from a planner GUC. Likely we can just\n> work around that by having a new \"allowhashspill\" bool field in the\n> \"Agg\" struct that's set by the planner, say during createplan that\n> controls if nodeAgg.c is allowed to spill or not. That'll also allow\n> PREPAREd plans to continue to do what they had planned to do already.\n> \n> The thing I like about doing it this way is that:\n> \n> a) it does not add any new GUCs\n> b) it semi hides the weird values that we really wish nobody would\n> ever have to set in a GUC that people have become used it just\n> allowing the values \"on\" and \"off\".\n> \n> The thing I don't quite like about this idea is:\n> a) I wish the planner was perfect and we didn't need to do this.\n> b) It's a bit weird to overload a GUC that has a very booleanish name\n> to not be bool.\n> \n> However, I also think it's pretty lightweight to support this. 
I\n> imagine a dozen lines of docs and likely about half a dozen lines per\n> GUC option in the planner.\n\nThat'd work for me, but I honestly don't particularly care about the\nspecific naming, as long as we provide users an escape hatch from the\nincreased amount of IO.\n\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 24 Jun 2020 12:14:33 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Default setting for enable_hashagg_disk" }, { "msg_contents": "Hi,\n\nOn 2020-06-24 13:12:03 -0400, Bruce Momjian wrote:\n> Well, my point is that merge join works that way, and no one has needed\n> a knob to avoid mergejoin if it is going to spill to disk. If they are\n> adjusting work_mem to prevent spill of merge join, they can do the same\n> for hash agg. We just need to document this in the release notes.\n\nI don't think this is comparable. For starters, the IO indirectly\ntriggered by mergejoin actually leads to plenty people just straight out\ndisabling it. For lots of workloads there's never a valid reason to use\na mergejoin (and often the planner will never choose one). Secondly, the\nplanner has better information about estimating the memory usage for the\nto-be-sorted data than it has about the size of the transition\nvalues. 
And lastly, there's a difference between a long existing cause\nfor bad IO behaviour and one that's suddenly kicks in after a major\nversion upgrade, to which there's no escape hatch (it's rarely realistic\nto disable hash aggs, in contrast to merge joins).\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 24 Jun 2020 12:19:00 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Default setting for enable_hashagg_disk" }, { "msg_contents": "On Wed, Jun 24, 2020 at 3:14 PM Andres Freund <andres@anarazel.de> wrote:\n> FWIW, my gut feeling is that we'll end up have to separate the\n> \"execution time\" spilling from using plain work mem, because it'll\n> trigger spilling too often. E.g. if the plan isn't expected to spill,\n> only spill at 10 x work_mem or something like that. Or we'll need\n> better management of temp file data when there's plenty memory\n> available.\n\nSo, I don't think we can wire in a constant like 10x. That's really\nunprincipled and I think it's a bad idea. What we could do, though, is\nreplace the existing Boolean-valued GUC with a new GUC that controls\nthe size at which the aggregate spills. The default could be -1,\nmeaning work_mem, but a user could configure a larger value if desired\n(presumably, we would just treat a value smaller than work_mem as\nwork_mem, and document the same).\n\nI think that's actually pretty appealing. 
Separating the memory we\nplan to use from the memory we're willing to use before spilling seems\nlike a good idea in general, and I think we should probably also do it\nin other places - like sorts.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Wed, 24 Jun 2020 15:28:47 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Default setting for enable_hashagg_disk" }, { "msg_contents": "Hi,\n\nOn 2020-06-24 14:40:50 -0400, Tom Lane wrote:\n> Tomas Vondra <tomas.vondra@2ndquadrant.com> writes:\n> > On Wed, Jun 24, 2020 at 01:29:56PM -0400, Tom Lane wrote:\n> >> If we feel we need something to let people have the v12 behavior\n> >> back, let's have\n> >> (1) enable_hashagg on/off --- controls planner, same as it ever was\n> >> (2) enable_hashagg_spill on/off --- controls executor by disabling spill\n>\n> > What if a user specifies\n> > enable_hashagg = on\n> > enable_hashagg_spill = off\n>\n> It would probably be reasonable for the planner to behave as it did\n> pre-v13, that is not choose hashagg if it estimates that work_mem\n> would be exceeded. (So, okay, that means enable_hashagg_spill\n> affects both planner and executor ... but ISTM it's just one\n> behavior not two.)\n\nThere's two different reasons for spilling in the executor right now:\n\n1) The planner estimated that we'd need to spill, and that turns out to\n be true. 
There seems no reason to not spill in that case (as long as\n it's enabled/chosen in the planner).\n\n2) The planner didn't think we'd need to spill, but we end up using more\n than work_mem memory.\n\nnodeAgg.c already treats those separately:\n\nvoid\nhash_agg_set_limits(double hashentrysize, uint64 input_groups, int used_bits,\n\t\t\t\t\tSize *mem_limit, uint64 *ngroups_limit,\n\t\t\t\t\tint *num_partitions)\n{\n\tint\t\t\tnpartitions;\n\tSize\t\tpartition_mem;\n\n\t/* if not expected to spill, use all of work_mem */\n\tif (input_groups * hashentrysize < work_mem * 1024L)\n\t{\n\t\tif (num_partitions != NULL)\n\t\t\t*num_partitions = 0;\n\t\t*mem_limit = work_mem * 1024L;\n\t\t*ngroups_limit = *mem_limit / hashentrysize;\n\t\treturn;\n\t}\n\nWe can't sensibly disable spilling when chosen at plan time, because\nthat'd lead to *more* OOMS than in v12.\n\nISTM that we should have one option that controls whether 1) is done,\nand one that controls whether 2) is done. Even if the option for 2 is\noff, we still should spill when the option for 1) chooses a spilling\nplan. I don't think it makes sense for one of those options to\ninfluence the other implicitly.\n\nSo maybe enable_hashagg_spilling_plan for 1) and\nhashagg_spill_on_exhaust for 2).\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 24 Jun 2020 12:31:22 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Default setting for enable_hashagg_disk" }, { "msg_contents": "Hi,\n\nOn 2020-06-24 15:28:47 -0400, Robert Haas wrote:\n> On Wed, Jun 24, 2020 at 3:14 PM Andres Freund <andres@anarazel.de> wrote:\n> > FWIW, my gut feeling is that we'll end up have to separate the\n> > \"execution time\" spilling from using plain work mem, because it'll\n> > trigger spilling too often. E.g. if the plan isn't expected to spill,\n> > only spill at 10 x work_mem or something like that. 
Or we'll need\n> > better management of temp file data when there's plenty memory\n> > available.\n> \n> So, I don't think we can wire in a constant like 10x. That's really\n> unprincipled and I think it's a bad idea. What we could do, though, is\n> replace the existing Boolean-valued GUC with a new GUC that controls\n> the size at which the aggregate spills. The default could be -1,\n> meaning work_mem, but a user could configure a larger value if desired\n> (presumably, we would just treat a value smaller than work_mem as\n> work_mem, and document the same).\n\nTo be clear, I wasn't actually thinking of hard-coding 10x, but having a\nconfig option that specifies a factor of work_mem. A factor seems better\nbecause it'll work reasonably for different values of work_mem, whereas\na concrete size wouldn't.\n\n\n> I think that's actually pretty appealing. Separating the memory we\n> plan to use from the memory we're willing to use before spilling seems\n> like a good idea in general, and I think we should probably also do it\n> in other places - like sorts.\n\nIndeed. And then perhaps we could eventually add some reporting /\nmonitoring infrastructure for the cases where plan time and execution\ntime memory estimate/usage widely differs.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 24 Jun 2020 12:36:24 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Default setting for enable_hashagg_disk" }, { "msg_contents": "On Wed, Jun 24, 2020 at 12:36:24PM -0700, Andres Freund wrote:\n>Hi,\n>\n>On 2020-06-24 15:28:47 -0400, Robert Haas wrote:\n>> On Wed, Jun 24, 2020 at 3:14 PM Andres Freund <andres@anarazel.de> wrote:\n>> > FWIW, my gut feeling is that we'll end up have to separate the\n>> > \"execution time\" spilling from using plain work mem, because it'll\n>> > trigger spilling too often. E.g. if the plan isn't expected to spill,\n>> > only spill at 10 x work_mem or something like that. 
Or we'll need\n>> > better management of temp file data when there's plenty memory\n>> > available.\n>>\n>> So, I don't think we can wire in a constant like 10x. That's really\n>> unprincipled and I think it's a bad idea. What we could do, though, is\n>> replace the existing Boolean-valued GUC with a new GUC that controls\n>> the size at which the aggregate spills. The default could be -1,\n>> meaning work_mem, but a user could configure a larger value if desired\n>> (presumably, we would just treat a value smaller than work_mem as\n>> work_mem, and document the same).\n>\n>To be clear, I wasn't actually thinking of hard-coding 10x, but having a\n>config option that specifies a factor of work_mem. A factor seems better\n>because it'll work reasonably for different values of work_mem, whereas\n>a concrete size wouldn't.\n>\n\nI'm not quite convinced we need/should introduce a new memory limit.\nIt's true keping it equal to work_mem by default makes this less of an\nissue, but it's still another moving part the users will need to learn\nhow to use.\n\nBut if we do introduce a new limit, I very much think it should be a\nplain limit, not a factor. That just makes it even more complicated, and\nwe don't have any such limit yet.\n\n>\n>> I think that's actually pretty appealing. Separating the memory we\n>> plan to use from the memory we're willing to use before spilling seems\n>> like a good idea in general, and I think we should probably also do it\n>> in other places - like sorts.\n>\n>Indeed. 
And then perhaps we could eventually add some reporting /\n>monitoring infrastructure for the cases where plan time and execution\n>time memory estimate/usage widely differs.\n>\n\nI wouldn't mind something like that in general - not just for hashagg,\nbut for various other nodes.\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Wed, 24 Jun 2020 23:02:10 +0200", "msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Default setting for enable_hashagg_disk" }, { "msg_contents": "On Wed, Jun 24, 2020 at 11:02:10PM +0200, Tomas Vondra wrote:\n> > Indeed. And then perhaps we could eventually add some reporting /\n> > monitoring infrastructure for the cases where plan time and execution\n> > time memory estimate/usage widely differs.\n> > \n> \n> I wouldn't mind something like that in general - not just for hashagg,\n> but for various other nodes.\n\nWell, other than worrying about problems with pre-13 queries, how is\nthis different from any other spill to disk when we exceed work_mem,\nlike sorts for merge join.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n The usefulness of a cup is in its emptiness, Bruce Lee\n\n\n\n", "msg_date": "Wed, 24 Jun 2020 19:15:17 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: Default setting for enable_hashagg_disk" }, { "msg_contents": "On Wed, Jun 24, 2020 at 12:19:00PM -0700, Andres Freund wrote:\n> Hi,\n> \n> On 2020-06-24 13:12:03 -0400, Bruce Momjian wrote:\n> > Well, my point is that merge join works that way, and no one has needed\n> > a knob to avoid mergejoin if it is going to spill to disk. If they are\n> > adjusting work_mem to prevent spill of merge join, they can do the same\n> > for hash agg. 
We just need to document this in the release notes.\n> \n> I don't think this is comparable. For starters, the IO indirectly\n> triggered by mergejoin actually leads to plenty people just straight out\n> disabling it. For lots of workloads there's never a valid reason to use\n> a mergejoin (and often the planner will never choose one). Secondly, the\n> planner has better information about estimating the memory usage for the\n> to-be-sorted data than it has about the size of the transition\n> values. And lastly, there's a difference between a long existing cause\n> for bad IO behaviour and one that's suddenly kicks in after a major\n> version upgrade, to which there's no escape hatch (it's rarely realistic\n> to disable hash aggs, in contrast to merge joins).\n\nWell, this sounds like an issue of degree, rather than kind. It sure\nsounds like \"ignore work_mem for this join type, but not the other\".\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n The usefulness of a cup is in its emptiness, Bruce Lee\n\n\n\n", "msg_date": "Wed, 24 Jun 2020 19:18:10 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: Default setting for enable_hashagg_disk" }, { "msg_contents": "On Wed, Jun 24, 2020 at 07:18:10PM -0400, Bruce Momjian wrote:\n> On Wed, Jun 24, 2020 at 12:19:00PM -0700, Andres Freund wrote:\n> > Hi,\n> > \n> > On 2020-06-24 13:12:03 -0400, Bruce Momjian wrote:\n> > > Well, my point is that merge join works that way, and no one has needed\n> > > a knob to avoid mergejoin if it is going to spill to disk. If they are\n> > > adjusting work_mem to prevent spill of merge join, they can do the same\n> > > for hash agg. We just need to document this in the release notes.\n> > \n> > I don't think this is comparable. For starters, the IO indirectly\n> > triggered by mergejoin actually leads to plenty people just straight out\n> > disabling it. 
For lots of workloads there's never a valid reason to use\n> > a mergejoin (and often the planner will never choose one). Secondly, the\n> > planner has better information about estimating the memory usage for the\n> > to-be-sorted data than it has about the size of the transition\n> > values. And lastly, there's a difference between a long existing cause\n> > for bad IO behaviour and one that's suddenly kicks in after a major\n> > version upgrade, to which there's no escape hatch (it's rarely realistic\n> > to disable hash aggs, in contrast to merge joins).\n> \n> Well, this sounds like an issue of degree, rather than kind. It sure\n> sounds like \"ignore work_mem for this join type, but not the other\".\n\nI think my main point is that work_mem was not being honored for\nhash-agg before, but now that PG 13 can do it, we are again allowing\nwork_mem not to apply in certain cases. I am wondering if our hard\nlimit for work_mem is the issue, and we should make that more flexible\nfor all uses.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n The usefulness of a cup is in its emptiness, Bruce Lee\n\n\n\n", "msg_date": "Wed, 24 Jun 2020 19:38:37 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: Default setting for enable_hashagg_disk" }, { "msg_contents": "On Wed, Jun 24, 2020 at 7:38 PM Bruce Momjian <bruce@momjian.us> wrote:\n> I think my main point is that work_mem was not being honored for\n> hash-agg before, but now that PG 13 can do it, we are again allowing\n> work_mem not to apply in certain cases. I am wondering if our hard\n> limit for work_mem is the issue, and we should make that more flexible\n> for all uses.\n\nI mean, that's pretty much what we're talking about here, isn't it? 
It\nseems like in your previous two replies you were opposed to separating\nthe plan-type limit from the execution-time limit, but that idea is\nprecisely a way of being more flexible (and extending it to other plan\nnodes is a way of making it more flexible for more use cases).\n\nAs I think you know, if you have a system where the workload varies a\nlot, you may sometimes be using 0 copies of work_mem and at other\ntimes 1000 or more copies, so the value has to be chosen\nconservatively as a percentage of system memory, else you start\nswapping or the OOM killer gets involved. On the other hand, some plan\nnodes get a lot less efficient when the amount of memory available\nfalls below some threshold, so you can't just set this to a tiny value\nand forget about it. Because the first problem is so bad, most people\nset the value relatively conservatively and just live with the\nperformance consequences. But this also means that they have memory\nleft over most of the time, so the idea of letting a node burst above\nits work_mem allocation when something unexpected happens isn't crazy:\nas long as only a few nodes do that here and there, rather than, say,\nall the nodes doing it all at the same time, it's actually fine. 
If we\nhad a smarter system that could dole out more work_mem to nodes that\nwould really benefit from it and less to nodes where it isn't likely\nto make much difference, that would be similar in spirit but even\nbetter.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Thu, 25 Jun 2020 11:46:54 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Default setting for enable_hashagg_disk" }, { "msg_contents": "On Wed, 2020-06-24 at 15:28 -0400, Robert Haas wrote:\n> On Wed, Jun 24, 2020 at 3:14 PM Andres Freund <andres@anarazel.de>\n> wrote:\n> > FWIW, my gut feeling is that we'll end up have to separate the\n> > \"execution time\" spilling from using plain work mem, because it'll\n> > trigger spilling too often. E.g. if the plan isn't expected to\n> > spill,\n> > only spill at 10 x work_mem or something like that. Or we'll need\n> > better management of temp file data when there's plenty memory\n> > available.\n\n...\n\n> I think that's actually pretty appealing. Separating the memory we\n> plan to use from the memory we're willing to use before spilling\n> seems\n> like a good idea in general, and I think we should probably also do\n> it\n> in other places - like sorts.\n\nI'm trying to make sense of this. Let's say there are two GUCs:\nplanner_work_mem=16MB and executor_work_mem=32MB.\n\nAnd let's say a query comes along and generates a HashAgg path, and the\nplanner (correctly) thinks if you put all the groups in memory at once,\nit would be 24MB. Then the planner, using planner_work_mem, would think\nspilling was necessary, and generate a cost that involves spilling.\n\nThen it's going to generate a Sort+Group path, as well. And perhaps it\nestimates that sorting all of the tuples in memory would also take\n24MB, so it generates a cost that involves spilling to disk.\n\nBut it has to choose one of them. 
We've penalized plans at risk of\nspilling to disk, but what's the point? The planner needs to choose one\nof them, and both are at risk of spilling to disk.\n\nRegards,\n\tJeff Davis\n\n\n\n\n", "msg_date": "Thu, 25 Jun 2020 09:14:56 -0700", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": true, "msg_subject": "Re: Default setting for enable_hashagg_disk" }, { "msg_contents": "On Wed, 2020-06-24 at 12:14 -0700, Andres Freund wrote:\n> E.g. if the plan isn't expected to spill,\n> only spill at 10 x work_mem or something like that.\n\nLet's say you have work_mem=32MB and a query that's expected to use\n16MB of memory. In reality, it uses 64MB of memory. So you are saying\nthis query would get to use all 64MB of memory, right?\n\nBut then you run ANALYZE. Now the query is (correctly) expected to use\n64MB of memory. Are you saying this query, executed again with better\nstats, would only get to use 32MB of memory, and therefore run slower?\n\nRegards,\n\tJeff Davis\n\n\n\n\n", "msg_date": "Thu, 25 Jun 2020 09:24:52 -0700", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": true, "msg_subject": "Re: Default setting for enable_hashagg_disk" }, { "msg_contents": "On 2020-06-25 09:24:52 -0700, Jeff Davis wrote:\n> On Wed, 2020-06-24 at 12:14 -0700, Andres Freund wrote:\n> > E.g. if the plan isn't expected to spill,\n> > only spill at 10 x work_mem or something like that.\n> \n> Let's say you have work_mem=32MB and a query that's expected to use\n> 16MB of memory. In reality, it uses 64MB of memory. So you are saying\n> this query would get to use all 64MB of memory, right?\n> \n> But then you run ANALYZE. Now the query is (correctly) expected to use\n> 64MB of memory. Are you saying this query, executed again with better\n> stats, would only get to use 32MB of memory, and therefore run slower?\n\nYes. 
I think that's ok, because it was taken into account from a costing\nperspective in the second case.\n\n\n", "msg_date": "Thu, 25 Jun 2020 09:37:46 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Default setting for enable_hashagg_disk" }, { "msg_contents": "On Wed, 2020-06-24 at 12:31 -0700, Andres Freund wrote:\n> nodeAgg.c already treats those separately:\n> \n> void\n> hash_agg_set_limits(double hashentrysize, uint64 input_groups, int\n> used_bits,\n> \t\t\t\t\tSize *mem_limit, uint64\n> *ngroups_limit,\n> \t\t\t\t\tint *num_partitions)\n> {\n> \tint\t\t\tnpartitions;\n> \tSize\t\tpartition_mem;\n> \n> \t/* if not expected to spill, use all of work_mem */\n> \tif (input_groups * hashentrysize < work_mem * 1024L)\n> \t{\n> \t\tif (num_partitions != NULL)\n> \t\t\t*num_partitions = 0;\n> \t\t*mem_limit = work_mem * 1024L;\n> \t\t*ngroups_limit = *mem_limit / hashentrysize;\n> \t\treturn;\n> \t}\n\nThe reason this code exists is to decide how much of work_mem to set\naside for spilling (each spill partition needs an IO buffer).\n\nThe alternative would be to fix the number of partitions before\nprocessing a batch, which didn't seem ideal. Or, we could just ignore\nthe memory required for IO buffers, like HashJoin.\n\nGranted, this is an example where an underestimate can give an\nadvantage, but I don't think we want to extend the concept into other\nareas.\n\nRegards,\n\tJeff Davis\n\n\n\n\n", "msg_date": "Thu, 25 Jun 2020 09:42:33 -0700", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": true, "msg_subject": "Re: Default setting for enable_hashagg_disk" }, { "msg_contents": "On Thu, 2020-06-25 at 11:46 -0400, Robert Haas wrote:\n> Because the first problem is so bad, most people\n> set the value relatively conservatively and just live with the\n> performance consequences. 
But this also means that they have memory\n> left over most of the time, so the idea of letting a node burst above\n> its work_mem allocation when something unexpected happens isn't\n> crazy:\n> as long as only a few nodes do that here and there, rather than, say,\n> all the nodes doing it all at the same time, it's actually fine.\n\nUnexpected things (meaning underestimates) are not independent. All the\nqueries are based on the same stats, so if you have a lot of similar\nqueries, they will all get the same underestimate at once, and all be\nsurprised when they need to spill at once, and then all decide they are\nentitled to ignore work_mem at once.\n\n> If we\n> had a smarter system that could dole out more work_mem to nodes that\n> would really benefit from it and less to nodes where it isn't likely\n> to make much difference, that would be similar in spirit but even\n> better.\n\nThat sounds more useful and probably not too hard to implement in a\ncrude form. Just have a shared counter in memory representing GB. If a\nnode is about to spill, it could try to decrement the counter by N, and\nif it succeeds, it gets to exceed work_mem by N more GB.\n\nRegards,\n\tJeff Davis\n\n\n\n\n", "msg_date": "Thu, 25 Jun 2020 10:15:52 -0700", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": true, "msg_subject": "Re: Default setting for enable_hashagg_disk" }, { "msg_contents": "On Thu, Jun 25, 2020 at 11:46:54AM -0400, Robert Haas wrote:\n> On Wed, Jun 24, 2020 at 7:38 PM Bruce Momjian <bruce@momjian.us> wrote:\n> > I think my main point is that work_mem was not being honored for\n> > hash-agg before, but now that PG 13 can do it, we are again allowing\n> > work_mem not to apply in certain cases. I am wondering if our hard\n> > limit for work_mem is the issue, and we should make that more flexible\n> > for all uses.\n> \n> I mean, that's pretty much what we're talking about here, isn't it? 
It\n> seems like in your previous two replies you were opposed to separating\n> the plan-type limit from the execution-time limit, but that idea is\n> precisely a way of being more flexible (and extending it to other plan\n> nodes is a way of making it more flexible for more use cases).\n\nI think it was Tom who was complaining about plan vs. execution time\ncontrol.\n\n> As I think you know, if you have a system where the workload varies a\n> lot, you may sometimes be using 0 copies of work_mem and at other\n> times 1000 or more copies, so the value has to be chosen\n> conservatively as a percentage of system memory, else you start\n> swapping or the OOM killer gets involved. On the other hand, some plan\n> nodes get a lot less efficient when the amount of memory available\n> falls below some threshold, so you can't just set this to a tiny value\n> and forget about it. Because the first problem is so bad, most people\n> set the value relatively conservatively and just live with the\n> performance consequences. But this also means that they have memory\n> left over most of the time, so the idea of letting a node burst above\n> its work_mem allocation when something unexpected happens isn't crazy:\n> as long as only a few nodes do that here and there, rather than, say,\n> all the nodes doing it all at the same time, it's actually fine. If we\n> had a smarter system that could dole out more work_mem to nodes that\n> would really benefit from it and less to nodes where it isn't likely\n> to make much difference, that would be similar in spirit but even\n> better.\n\nI think the issue is that in PG 13 work_mem controls sorts and hashes\nwith a new hard limit for hash aggregation:\n\n\thttps://www.postgresql.org/docs/12/runtime-config-resource.html#RUNTIME-CONFIG-RESOURCE-MEMORY\n\t\n\tSort operations are used for ORDER BY, DISTINCT, and merge joins. 
Hash\n\ttables are used in hash joins, hash-based aggregation, and hash-based\n\tprocessing of IN subqueries.\n\nIn pre-PG 13, we \"controlled\" it by avoiding hash-based aggregation if\nwe expected it to exceed work_mem, but if we assumed it would be less\nthan work_mem and it was more, we exceeded work_mem allocation for that\nnode. In PG 13, we \"limit\" memory to work_mem and spill to disk if we\nexceed it.\n\nWe should really have always documented that hash agg could exceed\nwork_mem for misestimation, and if we add a hash_agg work_mem\nmisestimation bypass setting we should document this setting in work_mem\nas well.\n\nBut then the question is why do we allow this bypass only for hash agg? \nShould work_mem have settings for ORDER BY, merge join, hash join, and\nhash agg, e.g.:\n\n\twork_mem = 'order_by=10MB, hash_join=20MB, hash_agg=100MB'\n\nYeah, crazy syntax, but you get the idea. I understand some nodes are\nmore sensitive to disk spill than others, so shouldn't we be controlling\nthis at the work_mem level, rather than for a specific node type like\nhash agg? We could allow for misestimation over allocation of hash agg\nwork_mem by splitting up the hash agg values:\n\n\twork_mem = 'order_by=10MB, hash_join=20MB, hash_agg=100MB hash_agg_max=200MB'\n\nbut _avoiding_ hash agg if it is estimated to exceed work mem and spill\nto disk is not something to logically control at the work mem level,\nwhich leads to something like David Rowley suggested, but with different\nnames:\n\n\tenable_hashagg = on | soft | avoid | off\n\nwhere 'on' and 'off' are the current PG 13 behavior, 'soft' means to\ntreat work_mem as a soft limit and allow it to exceed work mem for\nmisestimation, and 'avoid' means to avoid hash agg if it is estimated to\nexceed work mem. 
Both 'soft' and 'avoid' don't spill to disk.\n\nDavid's original terms of \"trynospill\" and \"neverspill\" were focused on\nspilling, not on its interaction with work_mem, and I found that\nconfusing.\n\nFrankly, if it took me this long to get my head around this, I am\nunclear how many people will understand this tuning feature enough to\nactually use it.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n The usefulness of a cup is in its emptiness, Bruce Lee\n\n\n\n", "msg_date": "Thu, 25 Jun 2020 13:17:56 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: Default setting for enable_hashagg_disk" }, { "msg_contents": "On Thu, 2020-06-25 at 09:37 -0700, Andres Freund wrote:\n> > Let's say you have work_mem=32MB and a query that's expected to use\n> > 16MB of memory. In reality, it uses 64MB of memory. So you are\n> > saying\n> > this query would get to use all 64MB of memory, right?\n> > \n> > But then you run ANALYZE. Now the query is (correctly) expected to\n> > use\n> > 64MB of memory. Are you saying this query, executed again with\n> > better\n> > stats, would only get to use 32MB of memory, and therefore run\n> > slower?\n> \n> Yes. I think that's ok, because it was taken into account from a\n> costing\n> perspective in the second case.\n\nWhat do you mean by \"taken into account\"?\n\nThere are only two possible paths: HashAgg and Sort+Group, and we need\nto pick one. If the planner expects one to spill, it is likely to\nexpect the other to spill. If one spills in the executor, then the\nother is likely to spill, too. (I'm ignoring the case with a lot of\ntuples and few groups because that doesn't seem relevant.)\n\nImagine that there was only one path available to choose. 
Would you\nsuggest the same thing, that unexpected spills can exceed work_mem but\nexpected spills can't?\nRegards,\n\tJeff Davis\n\n\n\n\n", "msg_date": "Thu, 25 Jun 2020 10:44:42 -0700", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": true, "msg_subject": "Re: Default setting for enable_hashagg_disk" }, { "msg_contents": "On Thu, Jun 25, 2020 at 1:15 PM Jeff Davis <pgsql@j-davis.com> wrote:\n> Unexpected things (meaning underestimates) are not independent. All the\n> queries are based on the same stats, so if you have a lot of similar\n> queries, they will all get the same underestimate at once, and all be\n> surprised when they need to spill at once, and then all decide they are\n> entitled to ignore work_mem at once.\n\nYeah, that's a risk. But what is proposed is a configuration setting,\nso people can adjust it depending on what they think is likely to\nhappen in their environment.\n\n> That sounds more useful and probably not too hard to implement in a\n> crude form. Just have a shared counter in memory representing GB. 
If a\n> node is about to spill, it could try to decrement the counter by N, and\n> if it succeeds, it gets to exceed work_mem by N more GB.\n\nThat's a neat idea, although GB seems too coarse.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Thu, 25 Jun 2020 13:47:51 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Default setting for enable_hashagg_disk" }, { "msg_contents": "On Thu, 2020-06-25 at 13:17 -0400, Bruce Momjian wrote:\n> Frankly, if it took me this long to get my head around this, I am\n> unclear how many people will understand this tuning feature enough to\n> actually use it.\n\nThe way I think about it is that v13 HashAgg is much more consistent\nwith the way we do everything else: the planner costs it (including any\nspilling that is expected), and the executor executes it (including any\nspilling that is required to obey work_mem).\n\nIn earlier versions, HashAgg was weird. If we add GUCs to get that\nweird behavior back, then the GUCs will necessarily be weird; and\ntherefore hard to document.\n\nI would feel more comfortable with some kind of GUC escape hatch (or\ntwo). GROUP BY is just too common, and I don't think we can ignore the\npotential for users experiencing a regression of some type (even if, in\nprinciple, the v13 version is better).\n\nIf we have the GUCs there, then at least if someone comes to the\nmailing list with a problem, we can offer them a temporary solution,\nand have time to try to avoid the problem in a future release (tweaking\nestimates, cost model, defaults, etc.).\n\nOne idea is to have undocumented GUCs. 
That way we don't have to\nsupport them forever, and we are more likely to hear problem reports.\n\nRegards,\n\tJeff Davis\n\n\n\n\n", "msg_date": "Thu, 25 Jun 2020 11:02:30 -0700", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": true, "msg_subject": "Re: Default setting for enable_hashagg_disk" }, { "msg_contents": "On Wed, 2020-06-24 at 13:29 -0400, Tom Lane wrote:\n> If we feel we need something to let people have the v12 behavior\n> back, let's have\n> (1) enable_hashagg on/off --- controls planner, same as it ever was\n> (2) enable_hashagg_spill on/off --- controls executor by disabling\n> spill\n> \n> But I'm not really convinced that we need (2).\n\nIf we're not going to have a planner GUC, one alternative is to just\npenalize the disk costs of HashAgg for a release or two. It would only\naffect the cost of HashAgg paths that are expected to spill, which\nweren't even generated in previous releases.\n\nIn other words, multiply the disk costs by enough that the planner will\nusually not choose HashAgg if expected to spill unless the average\ngroup size is quite large (i.e. there are a lot more tuples than\ngroups, but still enough groups to spill).\n\nAs we learn more and optimize more, we can reduce or eliminate the\npenalty in a future release. I'm not sure exactly what the penalty\nwould be, though.\n\nRegards,\n\tJeff Davis\n\n\n\n\n", "msg_date": "Thu, 25 Jun 2020 11:10:58 -0700", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": true, "msg_subject": "Re: Default setting for enable_hashagg_disk" }, { "msg_contents": "Hi,\n\nOn 2020-06-25 10:44:42 -0700, Jeff Davis wrote:\n> There are only two possible paths: HashAgg and Sort+Group, and we need\n> to pick one. If the planner expects one to spill, it is likely to\n> expect the other to spill. If one spills in the executor, then the\n> other is likely to spill, too. 
(I'm ignoring the case with a lot of\n> tuples and few groups because that doesn't seem relevant.)\n\nThere's also ordered index scan + Group. Which will often be vastly\nbetter than Sort+Group, but still slower than HashAgg.\n\n\n> Imagine that there was only one path available to choose. Would you\n> suggest the same thing, that unexpected spills can exceed work_mem but\n> expected spills can't?\n\nI'm not saying what I propose is perfect, but I've yet to hear a better\nproposal. Given that there *are* different ways to implement\naggregation, and that we use expected costs to choose, I think the\nassumed costs are relevant.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Thu, 25 Jun 2020 11:16:23 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Default setting for enable_hashagg_disk" }, { "msg_contents": "On Thu, Jun 25, 2020 at 11:02:30AM -0700, Jeff Davis wrote:\n> If we have the GUCs there, then at least if someone comes to the\n> mailing list with a problem, we can offer them a temporary solution,\n> and have time to try to avoid the problem in a future release (tweaking\n> estimates, cost model, defaults, etc.).\n> \n> One idea is to have undocumented GUCs. That way we don't have to\n> support them forever, and we are more likely to hear problem reports.\n\nUh, our track record of adding GUCs just in case is not good, and\nremoving them is even harder. Undocumented sounds interesting but then\nhow do we even document when we remove it? I don't think we want to go\nthere. Oracle has done that, and I don't think the user experience is\ngood.\n\nMaybe we should just continue through beta, add an incompatibility item\nto the PG 13 release notes, and see what feedback we get. 
We know\nincreasing work_mem gets us the exceed work_mem behavior, but that\naffects other nodes too, and I can't think of a way to avoid it if spill is\npredicted except to disable hash agg for that query.\n\nI am still trying to get my head around why the spill is going to be so\nmuch work to adjust for hash agg than our other spillable nodes. What\nare people doing for those cases already? Do we have any real-world\nqueries that are a problem in PG 13 for this?\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n The usefulness of a cup is in its emptiness, Bruce Lee\n\n\n\n", "msg_date": "Thu, 25 Jun 2020 14:25:12 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: Default setting for enable_hashagg_disk" }, { "msg_contents": "On Thu, Jun 25, 2020 at 11:10:58AM -0700, Jeff Davis wrote:\n> On Wed, 2020-06-24 at 13:29 -0400, Tom Lane wrote:\n> > If we feel we need something to let people have the v12 behavior\n> > back, let's have\n> > (1) enable_hashagg on/off --- controls planner, same as it ever was\n> > (2) enable_hashagg_spill on/off --- controls executor by disabling\n> > spill\n> > \n> > But I'm not really convinced that we need (2).\n> \n> If we're not going to have a planner GUC, one alternative is to just\n> penalize the disk costs of HashAgg for a release or two. It would only\n> affect the cost of HashAgg paths that are expected to spill, which\n> weren't even generated in previous releases.\n> \n> In other words, multiply the disk costs by enough that the planner will\n> usually not choose HashAgg if expected to spill unless the average\n> group size is quite large (i.e. there are a lot more tuples than\n> groups, but still enough groups to spill).\n\nWell, the big question is whether this costing is actually more accurate\nthan what we have now. 
What I am hearing is that spilling hash agg is\nexpensive, so whatever we can do to reflect the actual costs seems like\na win. If it can be done, it certainly seems better than a cost setting\nfew people will use.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n The usefulness of a cup is in its emptiness, Bruce Lee\n\n\n\n", "msg_date": "Thu, 25 Jun 2020 15:24:42 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: Default setting for enable_hashagg_disk" }, { "msg_contents": "On Thu, Jun 25, 2020 at 03:24:42PM -0400, Bruce Momjian wrote:\n> On Thu, Jun 25, 2020 at 11:10:58AM -0700, Jeff Davis wrote:\n> > On Wed, 2020-06-24 at 13:29 -0400, Tom Lane wrote:\n> > > If we feel we need something to let people have the v12 behavior\n> > > back, let's have\n> > > (1) enable_hashagg on/off --- controls planner, same as it ever was\n> > > (2) enable_hashagg_spill on/off --- controls executor by disabling\n> > > spill\n> > > \n> > > But I'm not really convinced that we need (2).\n> > \n> > If we're not going to have a planner GUC, one alternative is to just\n> > penalize the disk costs of HashAgg for a release or two. It would only\n> > affect the cost of HashAgg paths that are expected to spill, which\n> > weren't even generated in previous releases.\n> > \n> > In other words, multiply the disk costs by enough that the planner will\n> > usually not choose HashAgg if expected to spill unless the average\n> > group size is quite large (i.e. there are a lot more tuples than\n> > groups, but still enough groups to spill).\n> \n> Well, the big question is whether this costing is actually more accurate\n> than what we have now. What I am hearing is that spilling hash agg is\n> expensive, so whatever we can do to reflect the actual costs seems like\n> a win. 
If it can be done, it certainly seems better than a cost setting\n> few people will use.\n\nIt is my understanding that spill of sorts is mostly read sequentially,\nwhile hash reads are random. Is that right? Is that not being costed\nproperly?\n\nThat doesn't fix the misestimation case, but increasing work mem does\nallow pre-PG 13 behavior there.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n The usefulness of a cup is in its emptiness, Bruce Lee\n\n\n\n", "msg_date": "Thu, 25 Jun 2020 15:56:07 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: Default setting for enable_hashagg_disk" }, { "msg_contents": "Hi,\n\nOn 2020-06-25 14:25:12 -0400, Bruce Momjian wrote:\n> I am still trying to get my head around why the spill is going to be so\n> much work to adjust for hash agg than our other spillable nodes.\n\nAggregates are the classical case used to process large amounts of\ndata. For larger amounts of data sorted input (be it via explicit sort\nor ordered index scan) isn't an attractive option. IOW hash-agg is the\ncommon case. There's also fewer stats for halfway accurately estimating\nthe number of groups and the size of the transition state - a sort /\nhash join doesn't have an equivalent to the variably sized transition\nvalue.\n\n\n> What are people doing for those cases already? Do we have an\n> real-world queries that are a problem in PG 13 for this?\n\nI don't know about real world, but it's pretty easy to come up with\nexamples.\n\nquery:\nSELECT a, array_agg(b) FROM (SELECT generate_series(1, 10000)) a(a), (SELECT generate_series(1, 10000)) b(b) GROUP BY a HAVING array_length(array_agg(b), 1) = 0;\n\nwork_mem = 4MB\n\n12 18470.012 ms\nHEAD 44635.210 ms\n\nHEAD causes ~2.8GB of file IO, 12 doesn't cause any. If you're IO\nbandwidth constrained, this could be quite bad.\n\nObviously this is contrived, and a pretty extreme case. 
But if you\nimagine this happening on a system where disk IO isn't super fast\n(e.g. just about any cloud provider).\n\nAn even more extreme version of the above is this:\n\n\nquery: SELECT a, array_agg(b) FROM (SELECT generate_series(1, 50000)) a(a), (SELECT generate_series(1, 10000)) b(b) GROUP BY a HAVING array_length(array_agg(b), 1) = 0;\n\nwork_mem = 16MB\n12 81598.965 ms\nHEAD 210772.360 ms\n\ntemporary tablespace on magnetic disk (raid 0 of two 7.2k server\nspinning disks)\n\n12 81136.530 ms\nHEAD 225182.560 ms\n\nThe disks are busy in some periods, but still keep up. If I however make\nthe transition state a bit bigger:\n\nquery: SELECT a, array_agg(b), count(c), max(d),max(e) FROM (SELECT generate_series(1, 10000)) a(a), (SELECT generate_series(1, 5000)::text, repeat(random()::text, 10), repeat(random()::text, 10), repeat(random()::text, 10)) b(b,c,d,e) GROUP BY a HAVING array_length(array_agg(b), 1) = 0;\n\n12\t28164.865 ms\n\nfast ssd:\nHEAD 92520.680 ms\n\nmagnetic:\nHEAD 183968.538 ms\n\n(no reads, there's plenty enough memory. Just writes because the age /\namount thresholds for dirty data are reached)\n\nIn the magnetic case we're IO bottlenecked nearly the whole time.\n\n\nJust to be clear: I think this is a completely over-the-top example. But\nI do think it shows the problem to some degree at least.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Thu, 25 Jun 2020 13:36:29 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Default setting for enable_hashagg_disk" }, { "msg_contents": "On Thu, 2020-06-25 at 15:56 -0400, Bruce Momjian wrote:\n> It is my understanding that spill of sorts is mostly read\n> sequentially,\n> while hash reads are random. Is that right? Is that not being\n> costed\n> properly?\n\nI don't think there's a major problem with the cost model, but it could\nprobably use some tweaking.\n\nHash writes are random. 
The hash reads should be mostly sequential (for\nlarge partitions it will be 128-block extents, or 1MB). The cost model\nassumes 50% sequential and 50% random.\n\nSorts are written sequentially and read randomly, but there's\nprefetching to keep the reads from being too random. The cost model\nassumes 75% sequential and 25% random.\n\nOverall, the IO pattern is better for Sort, but not dramatically so.\nTomas Vondra did some nice analysis here:\n\n\nhttps://www.postgresql.org/message-id/20200525021045.dilgcsmgiu4l5jpa@development\n\nThat resulted in getting the prealloc and projection patches in.\n\nRegards,\n\tJeff Davis\n\n\n\n\n", "msg_date": "Thu, 25 Jun 2020 14:28:02 -0700", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": true, "msg_subject": "Re: Default setting for enable_hashagg_disk" }, { "msg_contents": "On 2020-Jun-25, Andres Freund wrote:\n\n> > What are people doing for those cases already? Do we have an\n> > real-world queries that are a problem in PG 13 for this?\n> \n> I don't know about real world, but it's pretty easy to come up with\n> examples.\n> \n> query:\n> SELECT a, array_agg(b) FROM (SELECT generate_series(1, 10000)) a(a), (SELECT generate_series(1, 10000)) b(b) GROUP BY a HAVING array_length(array_agg(b), 1) = 0;\n> \n> work_mem = 4MB\n> \n> 12 18470.012 ms\n> HEAD 44635.210 ms\n> \n> HEAD causes ~2.8GB of file IO, 12 doesn't cause any. If you're IO\n> bandwidth constrained, this could be quite bad.\n\n... however, you can pretty much get the previous performance back by\nincreasing work_mem. I just tried your example here, and I get 32\nseconds of runtime for work_mem 4MB, and 13.5 seconds for work_mem 1GB\n(this one spills about 800 MB); if I increase that again to 1.7GB I get\nno spilling and 9 seconds of runtime. 
(For comparison, 12 takes 15.7\nseconds regardless of work_mem).\n\nMy point here is that maybe we don't need to offer a GUC to explicitly\nturn spilling off; it seems sufficient to let users change work_mem so\nthat spilling will naturally not occur. Why do we need more?\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Thu, 25 Jun 2020 18:44:22 -0400", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Default setting for enable_hashagg_disk" }, { "msg_contents": "Hi, \n\nOn June 25, 2020 3:44:22 PM PDT, Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\n>On 2020-Jun-25, Andres Freund wrote:\n>\n>> > What are people doing for those cases already? Do we have an\n>> > real-world queries that are a problem in PG 13 for this?\n>> \n>> I don't know about real world, but it's pretty easy to come up with\n>> examples.\n>> \n>> query:\n>> SELECT a, array_agg(b) FROM (SELECT generate_series(1, 10000)) a(a),\n>(SELECT generate_series(1, 10000)) b(b) GROUP BY a HAVING\n>array_length(array_agg(b), 1) = 0;\n>> \n>> work_mem = 4MB\n>> \n>> 12 18470.012 ms\n>> HEAD 44635.210 ms\n>> \n>> HEAD causes ~2.8GB of file IO, 12 doesn't cause any. If you're IO\n>> bandwidth constrained, this could be quite bad.\n>\n>... however, you can pretty much get the previous performance back by\n>increasing work_mem. I just tried your example here, and I get 32\n>seconds of runtime for work_mem 4MB, and 13.5 seconds for work_mem 1GB\n>(this one spills about 800 MB); if I increase that again to 1.7GB I get\n>no spilling and 9 seconds of runtime. (For comparison, 12 takes 15.7\n>seconds regardless of work_mem).\n>\n>My point here is that maybe we don't need to offer a GUC to explicitly\n>turn spilling off; it seems sufficient to let users change work_mem so\n>that spilling will naturally not occur. 
Why do we need more?\n\nThat's not really a useful escape hatch, because it'll often lead to other nodes using more memory.\n\nAndres\n-- \nSent from my Android device with K-9 Mail. Please excuse my brevity.\n\n\n", "msg_date": "Thu, 25 Jun 2020 15:48:24 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Default setting for enable_hashagg_disk" }, { "msg_contents": "On 2020-Jun-25, Andres Freund wrote:\n\n> >My point here is that maybe we don't need to offer a GUC to explicitly\n> >turn spilling off; it seems sufficient to let users change work_mem so\n> >that spilling will naturally not occur. Why do we need more?\n> \n> That's not really a useful escape hatch, because it'll often lead to\n> other nodes using more memory.\n\nAh -- other nodes in the same query -- you're right, that's not good.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Thu, 25 Jun 2020 18:58:53 -0400", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Default setting for enable_hashagg_disk" }, { "msg_contents": "On Thu, Jun 25, 2020 at 02:28:02PM -0700, Jeff Davis wrote:\n>On Thu, 2020-06-25 at 15:56 -0400, Bruce Momjian wrote:\n>> It is my understanding that spill of sorts is mostly read\n>> sequentially,\n>> while hash reads are random. Is that right? Is that not being\n>> costed\n>> properly?\n>\n>I don't think there's a major problem with the cost model, but it could\n>probably use some tweaking.\n>\n>Hash writes are random. The hash reads should be mostly sequential (for\n>large partitions it will be 128-block extents, or 1MB). The cost model\n>assumes 50% sequential and 50% random.\n>\n\nThe important bit here is that while the logical writes are random,\nthose are effectively combined in page cache and the physical writes are\npretty sequential. 
So I think the cost model is fairly reasonable.\n\nNote: Judging by iosnoop stats shared in the thread linked by Jeff.\n\n>Sorts are written sequentially and read randomly, but there's\n>prefetching to keep the reads from being too random. The cost model\n>assumes 75% sequential and 25% random.\n>\n>Overall, the IO pattern is better for Sort, but not dramatically so.\n>Tomas Vondra did some nice analysis here:\n>\n>\n>https://www.postgresql.org/message-id/20200525021045.dilgcsmgiu4l5jpa@development\n>\n>That resulted in getting the prealloc and projection patches in.\n>\n>Regards,\n>\tJeff Davis\n>\n>\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Fri, 26 Jun 2020 01:11:54 +0200", "msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Default setting for enable_hashagg_disk" }, { "msg_contents": "On Thu, Jun 25, 2020 at 11:16:23AM -0700, Andres Freund wrote:\n>Hi,\n>\n>On 2020-06-25 10:44:42 -0700, Jeff Davis wrote:\n>> There are only two possible paths: HashAgg and Sort+Group, and we need\n>> to pick one. If the planner expects one to spill, it is likely to\n>> expect the other to spill. If one spills in the executor, then the\n>> other is likely to spill, too. (I'm ignoring the case with a lot of\n>> tuples and few groups because that doesn't seem relevant.)\n>\n>There's also ordered index scan + Group. Which will often be vastly\n>better than Sort+Group, but still slower than HashAgg.\n>\n>\n>> Imagine that there was only one path available to choose. Would you\n>> suggest the same thing, that unexpected spills can exceed work_mem but\n>> expected spills can't?\n>\n>I'm not saying what I propose is perfect, but I've yet to hear a better\n>proposal. 
Given that there *are* different ways to implement\n>aggregation, and that we use expected costs to choose, I think the\n>assumed costs are relevant.\n>\n\nI share Jeff's opinion that this is quite counter-intuitive and we'll\nhave a hard time explaining it to users.\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Fri, 26 Jun 2020 01:18:31 +0200", "msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Default setting for enable_hashagg_disk" }, { "msg_contents": "On Thu, Jun 25, 2020 at 09:42:33AM -0700, Jeff Davis wrote:\n>On Wed, 2020-06-24 at 12:31 -0700, Andres Freund wrote:\n>> nodeAgg.c already treats those separately:\n>>\n>> void\n>> hash_agg_set_limits(double hashentrysize, uint64 input_groups, int\n>> used_bits,\n>> \t\t\t\t\tSize *mem_limit, uint64\n>> *ngroups_limit,\n>> \t\t\t\t\tint *num_partitions)\n>> {\n>> \tint\t\t\tnpartitions;\n>> \tSize\t\tpartition_mem;\n>>\n>> \t/* if not expected to spill, use all of work_mem */\n>> \tif (input_groups * hashentrysize < work_mem * 1024L)\n>> \t{\n>> \t\tif (num_partitions != NULL)\n>> \t\t\t*num_partitions = 0;\n>> \t\t*mem_limit = work_mem * 1024L;\n>> \t\t*ngroups_limit = *mem_limit / hashentrysize;\n>> \t\treturn;\n>> \t}\n>\n>The reason this code exists is to decide how much of work_mem to set\n>aside for spilling (each spill partition needs an IO buffer).\n>\n>The alternative would be to fix the number of partitions before\n>processing a batch, which didn't seem ideal. Or, we could just ignore\n>the memory required for IO buffers, like HashJoin.\n>\n\nI think the conclusion from the recent HashJoin discussions is that not\naccounting for BufFiles is an issue, and we want to fix it. 
So repeating\nthat for HashAgg would be a mistake, IMHO.\n\n>Granted, this is an example where an underestimate can give an\n>advantage, but I don't think we want to extend the concept into other\n>areas.\n>\n\nI agree.\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Fri, 26 Jun 2020 01:23:16 +0200", "msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Default setting for enable_hashagg_disk" }, { "msg_contents": "On Thu, Jun 25, 2020 at 01:17:56PM -0400, Bruce Momjian wrote:\n>On Thu, Jun 25, 2020 at 11:46:54AM -0400, Robert Haas wrote:\n>> On Wed, Jun 24, 2020 at 7:38 PM Bruce Momjian <bruce@momjian.us> wrote:\n>> > I think my main point is that work_mem was not being honored for\n>> > hash-agg before, but now that PG 13 can do it, we are again allowing\n>> > work_mem not to apply in certain cases. I am wondering if our hard\n>> > limit for work_mem is the issue, and we should make that more flexible\n>> > for all uses.\n>>\n>> I mean, that's pretty much what we're talking about here, isn't it? It\n>> seems like in your previous two replies you were opposed to separating\n>> the plan-type limit from the execution-time limit, but that idea is\n>> precisely a way of being more flexible (and extending it to other plan\n>> nodes is a way of making it more flexible for more use cases).\n>\n>I think it is was Tom who was complaining about plan vs. execution time\n>control.\n>\n>> As I think you know, if you have a system where the workload varies a\n>> lot, you may sometimes be using 0 copies of work_mem and at other\n>> times 1000 or more copies, so the value has to be chosen\n>> conservatively as a percentage of system memory, else you start\n>> swapping or the OOM killer gets involved. 
On the other hand, some plan\n>> nodes get a lot less efficient when the amount of memory available\n>> falls below some threshold, so you can't just set this to a tiny value\n>> and forget about it. Because the first problem is so bad, most people\n>> set the value relatively conservatively and just live with the\n>> performance consequences. But this also means that they have memory\n>> left over most of the time, so the idea of letting a node burst above\n>> its work_mem allocation when something unexpected happens isn't crazy:\n>> as long as only a few nodes do that here and there, rather than, say,\n>> all the nodes doing it all at the same time, it's actually fine. If we\n>> had a smarter system that could dole out more work_mem to nodes that\n>> would really benefit from it and less to nodes where it isn't likely\n>> to make much difference, that would be similar in spirit but even\n>> better.\n>\n>I think the issue is that in PG 13 work_mem controls sorts and hashes\n>with a new hard limit for hash aggregation:\n>\n>\thttps://www.postgresql.org/docs/12/runtime-config-resource.html#RUNTIME-CONFIG-RESOURCE-MEMORY\n>\t\n>\tSort operations are used for ORDER BY, DISTINCT, and merge joins. Hash\n>\ttables are used in hash joins, hash-based aggregation, and hash-based\n>\tprocessing of IN subqueries.\n>\n>In pre-PG 13, we \"controlled\" it by avoiding hash-based aggregation if it\n>was expected to exceed work_mem, but if we assumed it would be less\n>than work_mem and it was more, we exceeded work_mem allocation for that\n>node.  In PG 13, we \"limit\" memory to work_mem and spill to disk if we\n>exceed it.\n>\n>We should really have always documented that hash agg could exceed\n>work_mem for misestimation, and if we add a hash_agg work_mem\n>misestimation bypass setting we should document this setting in work_mem\n>as well.\n>\n\nI don't think that would change anything, really. 
For the users the\nconsequences would still be exactly the same, and they wouldn't even be\nin a position to check if they are affected.\n\nSo just documenting that hashagg does not respect work_mem at runtime\nwould be nice, but it would not make any difference for v13, just like\ndocumenting a bug is not really the same thing as fixing it.\n\n>But then the question is why do we allow this bypass only for hash agg?\n>Should work_mem have settings for ORDER BY, merge join, hash join, and\n>hash agg, e.g.:\n>\n>\twork_mem = 'order_by=10MB, hash_join=20MB, hash_agg=100MB'\n>\n>Yeah, crazy syntax, but you get the idea.  I understand some nodes are\n>more sensitive to disk spill than others, so shouldn't we be controlling\n>this at the work_mem level, rather than for a specific node type like\n>hash agg?  We could allow for misestimation over allocation of hash agg\n>work_mem by splitting up the hash agg values:\n>\n>\twork_mem = 'order_by=10MB, hash_join=20MB, hash_agg=100MB hash_agg_max=200MB'\n>\n>but _avoiding_ hash agg if it is estimated to exceed work mem and spill\n>to disk is not something to logically control at the work mem level,\n>which leads to something like David Rowley suggested, but with different\n>names:\n>\n>\tenable_hashagg = on | soft | avoid | off\n>\n>where 'on' and 'off' are the current PG 13 behavior, 'soft' means to\n>treat work_mem as a soft limit and allow it to exceed work mem for\n>misestimation, and 'avoid' means to avoid hash agg if it is estimated to\n>exceed work mem.  Both 'soft' and 'avoid' don't spill to disk.\n>\n>David's original terms of \"trynospill\" and \"neverspill\" were focused on\n>spilling, not on its interaction with work_mem, and I found that\n>confusing.\n>\n>Frankly, if it took me this long to get my head around this, I am\n>unclear how many people will understand this tuning feature enough to\n>actually use it.\n>\n\nYeah. 
I agree with Andres that this may be a real issue, and that adding\nsome sort of \"escape hatch\" for v13 would be good. But I'm not convinced\nadding a whole lot of new memory limits for every node that might spill\nis the way to go. What exactly would be our tuning advice to users? Of\ncourse, we could keep it set to work_mem by default, but we all know\nengineers - we can't resist tuning a knob when we get one.\n\nI'm not saying it's not beneficial to use different limits for different\nnodes. Some nodes are less sensitive to the size (e.g. sorting often\ngets faster with smaller work_mem). But I think we should instead have a\nper-session limit, and the planner should \"distribute\" the memory to\ndifferent nodes. 
It's a hard problem, of course.\n\nYeah, I am actually confused why we haven't developed a global memory\nallocation strategy and continue to use per-session work_mem.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n The usefulness of a cup is in its emptiness, Bruce Lee\n\n\n\n", "msg_date": "Fri, 26 Jun 2020 00:02:10 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: Default setting for enable_hashagg_disk" }, { "msg_contents": "On Fri, Jun 26, 2020 at 12:02:10AM -0400, Bruce Momjian wrote:\n>On Fri, Jun 26, 2020 at 01:53:57AM +0200, Tomas Vondra wrote:\n>> I'm not saying it's not beneficial to use different limits for different\n>> nodes. Some nodes are less sensitive to the size (e.g. sorting often\n>> gets faster with smaller work_mem). But I think we should instead have a\n>> per-session limit, and the planner should \"distribute\" the memory to\n>> different nodes. It's a hard problem, of course.\n>\n>Yeah, I am actually confused why we haven't developed a global memory\n>allocation strategy and continue to use per-session work_mem.\n>\n\nI think it's pretty hard problem, actually. One of the reasons is that\nthe costing of a node depends on the amount of memory available to the\nnode, but as we're building the plan bottom-up, we have no information\nabout the nodes above us. So we don't know if there are operations that\nwill need memory, how sensitive they are, etc.\n\nAnd so far the per-node limit served us pretty well, I think. 
So I'm not\nvery confused we don't have the per-session limit yet, TBH.\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Fri, 26 Jun 2020 16:44:14 +0200", "msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Default setting for enable_hashagg_disk" }, { "msg_contents": "On Fri, Jun 26, 2020 at 04:44:14PM +0200, Tomas Vondra wrote:\n> On Fri, Jun 26, 2020 at 12:02:10AM -0400, Bruce Momjian wrote:\n> > On Fri, Jun 26, 2020 at 01:53:57AM +0200, Tomas Vondra wrote:\n> > > I'm not saying it's not beneficial to use different limits for different\n> > > nodes. Some nodes are less sensitive to the size (e.g. sorting often\n> > > gets faster with smaller work_mem). But I think we should instead have a\n> > > per-session limit, and the planner should \"distribute\" the memory to\n> > > different nodes. It's a hard problem, of course.\n> > \n> > Yeah, I am actually confused why we haven't developed a global memory\n> > allocation strategy and continue to use per-session work_mem.\n> > \n> \n> I think it's pretty hard problem, actually. One of the reasons is that\n\nYes, it is a hard problem, because it is balancing memory for shared\nbuffers, work_mem, and kernel buffers:\n\n\thttps://momjian.us/main/blogs/pgblog/2018.html#December_7_2018\n\nI think the big problem is that the work_mem value is not one value but\na floating value that is different per query and session, and concurrent\nsession activity.\n\n> the costing of a node depends on the amount of memory available to the\n> node, but as we're building the plan bottom-up, we have no information\n> about the nodes above us. So we don't know if there are operations that\n> will need memory, how sensitive they are, etc.\n> \n> And so far the per-node limit served us pretty well, I think. 
So I'm not\n> very confused we don't have the per-session limit yet, TBH.\n\nI was thinking more of being able to allocate a single value to be\nshared by all active sessions.\n\nAlso, doesn't this blog entry also show that spilling to disk for ORDER\nBY is similarly slow compared to hash aggs?\n\n\thttps://momjian.us/main/blogs/pgblog/2012.html#February_2_2012\n\n-- \n  Bruce Momjian  <bruce@momjian.us>        https://momjian.us\n  EnterpriseDB                             https://enterprisedb.com\n\n  The usefulness of a cup is in its emptiness, Bruce Lee\n\n\n\n", "msg_date": "Fri, 26 Jun 2020 12:37:26 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: Default setting for enable_hashagg_disk" }, { "msg_contents": "On Fri, Jun 26, 2020 at 12:37:26PM -0400, Bruce Momjian wrote:\n>On Fri, Jun 26, 2020 at 04:44:14PM +0200, Tomas Vondra wrote:\n>> On Fri, Jun 26, 2020 at 12:02:10AM -0400, Bruce Momjian wrote:\n>> > On Fri, Jun 26, 2020 at 01:53:57AM +0200, Tomas Vondra wrote:\n>> > > I'm not saying it's not beneficial to use different limits for different\n>> > > nodes. Some nodes are less sensitive to the size (e.g. sorting often\n>> > > gets faster with smaller work_mem). But I think we should instead have a\n>> > > per-session limit, and the planner should \"distribute\" the memory to\n>> > > different nodes. It's a hard problem, of course.\n>> >\n>> > Yeah, I am actually confused why we haven't developed a global memory\n>> > allocation strategy and continue to use per-session work_mem.\n>> >\n>>\n>> I think it's pretty hard problem, actually. 
One of the reasons is that\n>\n>Yes, it is a hard problem, because it is balancing memory for shared\n>buffers, work_mem, and kernel buffers:\n>\n>\thttps://momjian.us/main/blogs/pgblog/2018.html#December_7_2018\n>\n>I think the big problem is that the work_mem value is not one value but\n>a floating value that is different per query and session, and concurrent\n>session activity.\n>\n>> the costing of a node depends on the amount of memory available to the\n>> node, but as we're building the plan bottom-up, we have no information\n>> about the nodes above us. So we don't know if there are operations that\n>> will need memory, how sensitive they are, etc.\n>>\n>> And so far the per-node limit served us pretty well, I think. So I'm not\n>> very confused we don't have the per-session limit yet, TBH.\n>\n>I was thinking more of being able to allocate a single value to be\n>shared by all active sessions.\n>\n\nNot sure I understand. What \"single value\" do you mean?\n\nWasn't the idea to replace work_mem with something like query_mem?\nThat'd be nice, but I think it's inherently circular - we don't know how\nto distribute this to different nodes until we know which nodes will\nneed a buffer, but the buffer size is important for costing (so we need\nit when constructing the paths).\n\nPlus then there's the question whether all nodes should get the same\nfraction, or less sensitive nodes should get smaller chunks, etc.\nUltimately this would be based on costing too, I think, but it makes it\nso much more complex ...\n\n>Also, doesn't this blog entry also show that spilling to disk for ORDER\n>BY is similarly slow compared to hash aggs?\n>\n>\thttps://momjian.us/main/blogs/pgblog/2012.html#February_2_2012\n>\n\nThe post does not mention hashagg at all, so I'm not sure how it could\nshow that? 
But I think you're right the spilling itself is not that far\naway, in most cases (thanks to the recent fixes made by Jeff).\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Fri, 26 Jun 2020 19:45:13 +0200", "msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Default setting for enable_hashagg_disk" }, { "msg_contents": "On Thu, Jun 25, 2020 at 1:36 PM Andres Freund <andres@anarazel.de> wrote:\n> 12 28164.865 ms\n>\n> fast ssd:\n> HEAD 92520.680 ms\n>\n> magnetic:\n> HEAD 183968.538 ms\n>\n> (no reads, there's plenty enough memory. Just writes because the age /\n> amount thresholds for dirty data are reached)\n>\n> In the magnetic case we're IO bottlenecked nearly the whole time.\n\nI agree with almost everything you've said on this thread, but at the\nsame time I question the emphasis on I/O here. You've shown that\nspinning rust is about twice as slow as a fast SSD here. Fair enough,\nbut to me the real story is that spilling is clearly a lot slower in\ngeneral, regardless of how fast the storage subsystem happens to be (I\nwonder how fast it is with a ramdisk). To me, it makes more sense to\nthink of the problem here as the fact that v13 will *not* do\naggregation using the fast strategy (i.e. in-memory) -- as opposed to\nthe problem being that v13 does the aggregation using the slow\nstrategy (which is assumed to be slow because it involves I/O instead\nof memory buffers).\n\nI get approximately the same query runtimes with your \"make the\ntransition state a bit bigger\" test case. With \"set enable_hashagg =\noff\", I get a group aggregate + sort. It spills to disk, even with\n'work_mem = '15GB'\" -- leaving 4 runs to merge at the end. That takes\n63702.992 ms on v13. 
But if I reduce the amount of work_mem radically,\nto only 16MB (a x960 decrease!), then the run time only increases by\n~30% -- it's only 83123.926 ms. So we're talking about a ~200%\nincrease (for hash aggregate) versus a ~30% increase (for groupagg +\nsort) on fast SSDs.\n\nChanging the cost of I/O in the context of hashaggregate seems like it\nmisses the point. Jeff recently said \"Overall, the IO pattern is\nbetter for Sort, but not dramatically so\". Whatever the IO pattern may\nbe, I think that it's pretty clear that the performance\ncharacteristics of hash aggregation with limited memory are very\ndifferent to groupaggregate + sort, at least when only a fraction of\nthe optimal amount of memory we'd like is available. It's true that\nhash aggregate was weird among plan nodes in v12, and is now in some\nsense less weird among plan nodes. And yet we have a new problem now\n-- so where does that leave that whole \"weirdness\" framing? ISTM that\nthe work_mem framework was and is the main problem. We seem to have\nlost a crutch that ameliorated the problem before now, even though\nthat amelioration was kind of an accident. Or a thing that user apps\nevolved to rely on.\n\n--\nPeter Geoghegan\n\n\n", "msg_date": "Fri, 26 Jun 2020 13:53:05 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: Default setting for enable_hashagg_disk" }, { "msg_contents": "On Fri, Jun 26, 2020 at 07:45:13PM +0200, Tomas Vondra wrote:\n> On Fri, Jun 26, 2020 at 12:37:26PM -0400, Bruce Momjian wrote:\n> > I was thinking more of being able to allocate a single value to be\n> > shared by all active sesions.\n> \n> Not sure I understand. What \"single value\" do you mean?\n\nI was thinking of a full-cluster work_mem maximum allocation that could\nbe given to various backends that request it.\n\nImagine we set the cluster-wide total of work_mem to 1GB. 
If a session\nasks for 100MB, if there are no other active sessions, it can grant the\nentire 100MB. If there are other sessions running, and 500MB has\nalready been allocated, maybe it is only given an active per-node\nwork_mem of 50MB. As the amount of unallocated cluster-wide work_mem\ngets smaller, requests are granted smaller actual allocations.\n\nWhat we do now makes little sense, because we might have lots of free\nmemory, but we force nodes to spill to disk when they exceed a fixed\nwork_mem. I realize this is very imprecise, because you don't know what\nfuture work_mem requests are coming, or how long until existing\nallocations are freed, but it seems it would have to be better than what\nwe do now.\n\n> Wasn't the idea was to replace work_mem with something like query_mem?\n> That'd be nice, but I think it's inherently circular - we don't know how\n> to distribute this to different nodes until we know which nodes will\n> need a buffer, but the buffer size is important for costing (so we need\n> it when constructing the paths).\n> \n> Plus then there's the question whether all nodes should get the same\n> fraction, or less sensitive nodes should get smaller chunks, etc.\n> Ultimately this would be based on costing too, I think, but it makes it\n> soe much complex ...\n\nSince work_mem affect the optimizer choices, I can imagine it getting\ncomplex since nodes would have to ask the global work_mem allocator how\nmuch memory it _might_ get, but then ask for final work_mem during\nexecution, and they might differ. Still, our spill costs are so high\nfor so many node types, that reducing spills seems like it would be a\nwin, even if it sometimes causes poorer plans.\n\n> > Also, doesn't this blog entry also show that spiling to disk for ORDER\n> > BY is similarly slow compared to hash aggs?\n> > \n> > \thttps://momjian.us/main/blogs/pgblog/2012.html#February_2_2012\n> \n> The post does not mention hashagg at all, so I'm not sure how could it\n> show that? 
But I think you're right the spilling itself is not that far\n> away, in most cases (thanks to the recent fixes made by Jeff).\n\nYeah, I was just measuring ORDER BY spill, but it seems to be a similar\noverhead to hashagg spill, which is being singled out in this discussion\nas particularly expensive, and I am questioning that.\n\n-- \n  Bruce Momjian  <bruce@momjian.us>        https://momjian.us\n  EnterpriseDB                             https://enterprisedb.com\n\n  The usefulness of a cup is in its emptiness, Bruce Lee\n\n\n\n", "msg_date": "Fri, 26 Jun 2020 19:00:20 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: Default setting for enable_hashagg_disk" }, { "msg_contents": "On Fri, Jun 26, 2020 at 4:00 PM Bruce Momjian <bruce@momjian.us> wrote:\n> Imagine we set the cluster-wide total of work_mem to 1GB.  If a session\n> asks for 100MB, if there are no other active sessions, it can grant the\n> entire 100MB.  If there are other sessions running, and 500MB has\n> already been allocated, maybe it is only given an active per-node\n> work_mem of 50MB.  As the amount of unallocated cluster-wide work_mem\n> gets smaller, requests are granted smaller actual allocations.\n\nI think that that's the right approach long term. But right now the\nDBA has no way to give hash-based nodes more memory, even though it's\nclear that that's where it's truly needed in most cases, across almost all\nworkloads. I think that that's the really glaring problem.\n\nThis is just the intrinsic nature of hash-based aggregation and hash\njoin vs sort-based aggregation and merge join (roughly speaking). It's\nmuch more valuable to be able to do hash-based aggregation in one\npass, especially in cases where hashing already did particularly well\nin Postgres v12.\n\n> What we do now makes little sense, because we might have lots of free\n> memory, but we force nodes to spill to disk when they exceed a fixed\n> work_mem. 
I realize this is very imprecise, because you don't know what\n> future work_mem requests are coming, or how long until existing\n> allocations are freed, but it seems it would have to be better than what\n> we do now.\n\nPostgres 13 made hash aggregate respect work_mem. Perhaps it would\nhave made more sense to teach work_mem to respect hash aggregate,\nthough.\n\nHash aggregate cannot consume an unbounded amount of memory in v13,\nsince the old behavior was clearly unreasonable. Which is great. But\nit may be even more unreasonable to force users to conservatively set\nthe limit on the size of the hash table in an artificial, generic way.\n\n> Since work_mem affects the optimizer choices, I can imagine it getting\n> complex since nodes would have to ask the global work_mem allocator how\n> much memory it _might_ get, but then ask for final work_mem during\n> execution, and they might differ.  Still, our spill costs are so high\n> for so many node types, that reducing spills seems like it would be a\n> win, even if it sometimes causes poorer plans.\n\nI don't think it's really about the spill costs, at least in one\nimportant sense. If performing a hash aggregate in memory uses twice\nas much memory as spilling (with either sorting or hashing), but the\noperation completes in one third the time, you have actually saved\nmemory in the aggregate (no pun intended). Also, the query is 3x\nfaster, which is a nice bonus! 
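To spell out that arithmetic with a toy model (entirely made-up numbers, and a deliberately crude measure that only multiplies peak memory by runtime):

```c
#include <assert.h>

/* Toy model only: the aggregate memory footprint of a plan node, taken
 * as peak memory (MB) multiplied by runtime (seconds).  The figures used
 * below are invented to match the scenario described above: twice the
 * memory, but one third of the runtime. */
static double
mem_seconds(double peak_mb, double seconds)
{
    return peak_mb * seconds;
}
```

By this crude measure an in-memory hash aggregate at 200MB for 10 seconds "costs" 2000 MB-seconds, against 3000 MB-seconds for a spilling one at 100MB for 30 seconds, despite the larger peak.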
I don't think that this kind of\nscenario is rare.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Fri, 26 Jun 2020 16:41:01 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: Default setting for enable_hashagg_disk" }, { "msg_contents": "On Fri, Jun 26, 2020 at 01:53:05PM -0700, Peter Geoghegan wrote:\n> On Thu, Jun 25, 2020 at 1:36 PM Andres Freund <andres@anarazel.de> wrote:\n> > 12 28164.865 ms\n> >\n> > fast ssd:\n> > HEAD 92520.680 ms\n> >\n> > magnetic:\n> > HEAD 183968.538 ms\n> >\n> > (no reads, there's plenty enough memory. Just writes because the age /\n> > amount thresholds for dirty data are reached)\n> >\n> > In the magnetic case we're IO bottlenecked nearly the whole time.\n> \n> I agree with almost everything you've said on this thread, but at the\n> same time I question the emphasis on I/O here. You've shown that\n> spinning rust is about twice as slow as a fast SSD here. Fair enough,\n> but to me the real story is that spilling is clearly a lot slower in\n> general, regardless of how fast the storage subsystem happens to be (I\n> wonder how fast it is with a ramdisk). 
To me, it makes more sense to\n\nThis blog entry shows ORDER BY using ram disk, SSD, and magnetic:\n\n\thttps://momjian.us/main/blogs/pgblog/2012.html#February_2_2012\n\nIt is from 2012, but I can re-run the test if you want.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n The usefulness of a cup is in its emptiness, Bruce Lee\n\n\n\n", "msg_date": "Fri, 26 Jun 2020 19:56:22 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: Default setting for enable_hashagg_disk" }, { "msg_contents": "On Fri, Jun 26, 2020 at 07:00:20PM -0400, Bruce Momjian wrote:\n>On Fri, Jun 26, 2020 at 07:45:13PM +0200, Tomas Vondra wrote:\n>> On Fri, Jun 26, 2020 at 12:37:26PM -0400, Bruce Momjian wrote:\n>> > I was thinking more of being able to allocate a single value to be\n>> > shared by all active sesions.\n>>\n>> Not sure I understand. What \"single value\" do you mean?\n>\n>I was thinking of a full-cluster work_mem maximum allocation that could\n>be given to various backends that request it.\n>\n>Imagine we set the cluster-wide total of work_mem to 1GB. If a session\n>asks for 100MB, if there are no other active sessions, it can grant the\n>entire 100MB. If there are other sessions running, and 500MB has\n>already been allocated, maybe it is only given an active per-node\n>work_mem of 50MB. As the amount of unallocated cluster-wide work_mem\n>gets smaller, requests are granted smaller actual allocations.\n>\n>What we do now makes little sense, because we might have lots of free\n>memory, but we force nodes to spill to disk when they exceed a fixed\n>work_mem. 
I realize this is very imprecise, because you don't know what\n>future work_mem requests are coming, or how long until existing\n>allocations are freed, but it seems it would have to be better than what\n>we do now.\n>\n>> Wasn't the idea to replace work_mem with something like query_mem?\n>> That'd be nice, but I think it's inherently circular - we don't know how\n>> to distribute this to different nodes until we know which nodes will\n>> need a buffer, but the buffer size is important for costing (so we need\n>> it when constructing the paths).\n>>\n>> Plus then there's the question whether all nodes should get the same\n>> fraction, or less sensitive nodes should get smaller chunks, etc.\n>> Ultimately this would be based on costing too, I think, but it makes it\n>> so much more complex ...\n>\n>Since work_mem affects the optimizer choices, I can imagine it getting\n>complex since nodes would have to ask the global work_mem allocator how\n>much memory it _might_ get, but then ask for final work_mem during\n>execution, and they might differ.  Still, our spill costs are so high\n>for so many node types, that reducing spills seems like it would be a\n>win, even if it sometimes causes poorer plans.\n>\n\nI may not understand what you mean by \"poorer plans\" here, but I find it\nhard to accept that reducing spills is generally worth poorer plans.\n\nI agree larger work_mem for hashagg (and thus less spilling) may mean\nlower work_mem for some other nodes that are less sensitive to this.\nBut I think this needs to be formulated as a cost-based decision,\nalthough I don't know how to do that for the reasons I explained before\n(bottom-up plan construction vs. 
distributing the memory budget).\n\nFWIW some databases already do something like this - SQL Server has\nsomething called \"memory grant\" which I think mostly does what you\ndescribed here.\n\n\n>> > Also, doesn't this blog entry also show that spilling to disk for ORDER\n>> > BY is similarly slow compared to hash aggs?\n>> >\n>> > \thttps://momjian.us/main/blogs/pgblog/2012.html#February_2_2012\n>>\n>> The post does not mention hashagg at all, so I'm not sure how it could\n>> show that? But I think you're right the spilling itself is not that far\n>> away, in most cases (thanks to the recent fixes made by Jeff).\n>\n>Yeah, I was just measuring ORDER BY spill, but it seems to be a similar\n>overhead to hashagg spill, which is being singled out in this discussion\n>as particularly expensive, and I am questioning that.\n>\n\nThe difference between sort and hashagg spills is that for sorts there\nis no behavior change. Plans that did (not) spill in v12 will behave the\nsame way on v13, modulo some random perturbation. For hashagg that's not\nthe case - some queries that did not spill before will spill now.\n\nSo even if the hashagg spills are roughly equal to sort spills, both are\nsignificantly more expensive than not spilling.\n\n\nregards\n\n-- \nTomas Vondra                  http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Sat, 27 Jun 2020 01:58:50 +0200", "msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Default setting for enable_hashagg_disk" }, { "msg_contents": "On Sat, Jun 27, 2020 at 01:58:50AM +0200, Tomas Vondra wrote:\n> > Since work_mem affects the optimizer choices, I can imagine it getting\n> > complex since nodes would have to ask the global work_mem allocator how\n> > much memory it _might_ get, but then ask for final work_mem during\n> > execution, and they might differ. 
Still, our spill costs are so high\n> > for so many node types, that reducing spills seems like it would be a\n> > win, even if it sometimes causes poorer plans.\n> > \n> \n> I may not understand what you mean by \"poorer plans\" here, but I find it\n> hard to accept that reducing spills is generally worth poorer plans.\n\nWe might cost a plan based on the work_mem that the global allocator\nthinks it will give us, but that might change by the time we are in the\nexecutor.  We could code it so that an optimizer\nrequest is always honored in the executor, but prepared plans would be a\nproblem, or perhaps already are if you prepare a plan and change\nwork_mem before EXECUTE.\n\n> I agree larger work_mem for hashagg (and thus less spilling) may mean\n> lower work_mem for some other nodes that are less sensitive to this.\n> But I think this needs to be formulated as a cost-based decision,\n> although I don't know how to do that for the reasons I explained before\n> (bottom-up plan construction vs. distributing the memory budget).\n> \n> FWIW some databases already do something like this - SQL Server has\n> something called \"memory grant\" which I think mostly does what you\n> described here.\n\nYep, something like that.\n\n> > > > Also, doesn't this blog entry also show that spilling to disk for ORDER\n> > > > BY is similarly slow compared to hash aggs?\n> > > >\n> > > > \thttps://momjian.us/main/blogs/pgblog/2012.html#February_2_2012\n> > > \n> > > The post does not mention hashagg at all, so I'm not sure how it could\n> > > show that? 
But I think you're right the spilling itself is not that far\n> > > away, in most cases (thanks to the recent fixes made by Jeff).\n> > \n> > Yeah, I was just measuring ORDER BY spill, but it seems to be a similar\n> > overhead to hashagg spill, which is being singled out in this discussion\n> > as particularly expensive, and I am questioning that.\n> > \n> \n> The difference between sort and hashagg spills is that for sorts there\n> is no behavior change. Plans that did (not) spill in v12 will behave the\n> same way on v13, modulo some random perturbation. For hashagg that's not\n> the case - some queries that did not spill before will spill now.\n\nWell, my point is that we already had ORDER BY problems, and if hash agg\nnow has them too in PG 13, I am fine with that. We don't guarantee no\nproblems in major versions. If we want to add a general knob that says,\n\"Hey allow this node to exceed work_mem by X%,\" I don't see the point\n--- just increase work_mem, or have different work_mem settings for\ndifferent node types, as I outlined previously.\n\n> So even if the hashagg spills are roughly equal to sort spills, both are\n> significantly more expensive than not spilling.\n\nYes, but that means we need a more general fix and worrying about hash\nagg is not addressing the core issue.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n The usefulness of a cup is in its emptiness, Bruce Lee\n\n\n\n", "msg_date": "Fri, 26 Jun 2020 20:05:16 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: Default setting for enable_hashagg_disk" }, { "msg_contents": "On Fri, Jun 26, 2020 at 4:59 PM Tomas Vondra\n<tomas.vondra@2ndquadrant.com> wrote:\n> I agree larger work_mem for hashagg (and thus less spilling) may mean\n> lower work_mem for so some other nodes that are less sensitive to this.\n> But I think this needs to be formulated as a cost-based decision,\n> although I don't 
know how to do that for the reasons I explained before\n> (bottom-up plan construction vs. distributing the memory budget).\n\nWhy do you think that it needs to be formulated as a cost-based\ndecision? That's probably true of a scheme that allocates memory very\nintelligently, but what about an approach that's slightly better than\nwork_mem?\n\nWhat problems do you foresee (if any) with adding a hash_mem GUC that\ngets used for both planning and execution for hash aggregate and hash\njoin nodes, in about the same way as work_mem is now?\n\n> FWIW some databases already do something like this - SQL Server has\n> something called \"memory grant\" which I think mostly does what you\n> described here.\n\nSame is true of Oracle. But Oracle also has simple work_mem-like\nsettings for sorting and hashing. People don't really use them\nanymore, but apparently it was once common for the DBA to explicitly\ngive over more memory to hashing -- much like the hash_mem setting I\nasked about. IIRC the same is true of DB2.\n\n> The difference between sort and hashagg spills is that for sorts there\n> is no behavior change. Plans that did (not) spill in v12 will behave the\n> same way on v13, modulo some random perturbation. For hashagg that's not\n> the case - some queries that did not spill before will spill now.\n>\n> So even if the hashagg spills are roughly equal to sort spills, both are\n> significantly more expensive than not spilling.\n\nJust to make sure we're on the same page: both are significantly more\nexpensive than a hash aggregate not spilling *specifically*. OTOH, a\ngroup aggregate may not be much slower when it spills compared to an\nin-memory sort group aggregate. It may even be noticeably faster, due\nto caching effects, as you mentioned at one point upthread.\n\nThis is the property that makes hash aggregate special, and justifies\ngiving it more memory than other nodes on a system-wide basis (the\nsame thing applies to hash join). 
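To be concrete about the scale involved, the executor-facing part of such a change could be tiny. Here is a sketch (hypothetical code, not from any actual patch; hash_mem_bytes and its arguments are invented names, with values in kB like work_mem):

```c
#include <assert.h>

/* Sketch only, not actual PostgreSQL code: an invented hash_mem setting
 * that hash join and hash aggregate would consult instead of work_mem,
 * at both plan time and execution time.  A value of -1 (the default)
 * falls back to work_mem, so behavior only changes when the DBA
 * explicitly gives hash-based nodes more memory. */
static long
hash_mem_bytes(int hash_mem_kb, int work_mem_kb)
{
    int limit_kb = (hash_mem_kb >= 0) ? hash_mem_kb : work_mem_kb;

    return (long) limit_kb * 1024L;
}
```

Because planning and execution would read the same value, the planner's choice of hash aggregate and the executor's spill threshold would stay consistent.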
This could even work as a multiplier\nof work_mem.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Fri, 26 Jun 2020 17:24:36 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: Default setting for enable_hashagg_disk" }, { "msg_contents": "On Thu, Jun 25, 2020 at 12:59 AM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> So, I don't think we can wire in a constant like 10x. That's really\n> unprincipled and I think it's a bad idea. What we could do, though, is\n> replace the existing Boolean-valued GUC with a new GUC that controls\n> the size at which the aggregate spills. The default could be -1,\n> meaning work_mem, but a user could configure a larger value if desired\n> (presumably, we would just treat a value smaller than work_mem as\n> work_mem, and document the same).\n>\n> I think that's actually pretty appealing. Separating the memory we\n> plan to use from the memory we're willing to use before spilling seems\n> like a good idea in general, and I think we should probably also do it\n> in other places - like sorts.\n>\n\n+1. I also think GUC on these lines could help not only the problem\nbeing discussed here but in other cases as well. However, I think the\nreal question is do we want to design/implement it for PG13? It seems\nto me at this stage we don't have a clear understanding of what\npercentage of real-world cases will get impacted due to the new\nbehavior of hash aggregates. We want to provide some mechanism as a\nsafety net to avoid problems that users might face which is not a bad\nidea but what if we wait and see the real impact of this? Is it too\nbad to provide a GUC later in back-branch if we see users face such\nproblems quite often? 
I think the advantage of delaying it is that we\nmight see some real problems (like where hash aggregate is not a good\nchoice) which can be fixed via the costing model.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Sat, 27 Jun 2020 15:30:25 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Default setting for enable_hashagg_disk" }, { "msg_contents": "On Fri, Jun 26, 2020 at 05:24:36PM -0700, Peter Geoghegan wrote:\n>On Fri, Jun 26, 2020 at 4:59 PM Tomas Vondra\n><tomas.vondra@2ndquadrant.com> wrote:\n>> I agree larger work_mem for hashagg (and thus less spilling) may mean\n>> lower work_mem for some other nodes that are less sensitive to this.\n>> But I think this needs to be formulated as a cost-based decision,\n>> although I don't know how to do that for the reasons I explained before\n>> (bottom-up plan construction vs. distributing the memory budget).\n>\n>Why do you think that it needs to be formulated as a cost-based\n>decision? That's probably true of a scheme that allocates memory very\n>intelligently, but what about an approach that's slightly better than\n>work_mem?\n>\n\nWell, there are multiple ideas discussed in this (sub)thread, one of\nthem being a per-query memory limit. That requires decisions how much\nmemory should different nodes get, which I think would need to be\ncost-based.\n\n>What problems do you foresee (if any) with adding a hash_mem GUC that\n>gets used for both planning and execution for hash aggregate and hash\n>join nodes, in about the same way as work_mem is now?\n>\n\nOf course, a simpler scheme like this would not require that. And maybe\nintroducing hash_mem is a good idea - I'm not particularly opposed to\nthat, actually. 
But I think we should not introduce separate memory\nlimits for each node type, which was also mentioned earlier.\n\nThe problem of course is that hash_mem does not really solve the issue\ndiscussed at the beginning of this thread, i.e. regressions due to\nunderestimates and unexpected spilling at execution time.\n\nThe thread is getting a rather confusing mix of proposals how to fix\nthat for v13 and proposals how to improve our configuration of memory\nlimits :-(\n\n>> FWIW some databases already do something like this - SQL Server has\n>> something called \"memory grant\" which I think mostly does what you\n>> described here.\n>\n>Same is true of Oracle. But Oracle also has simple work_mem-like\n>settings for sorting and hashing. People don't really use them anymore,\n>but apparently it was once common for the DBA to explicitly give over\n>more memory to hashing -- much like the hash_mem setting I asked about.\n>IIRC the same is true of DB2.\n>\n\nInteresting. What is not entirely clear to me is how these databases\ndecide how much each node should get during planning. With the separate\nwork_mem-like settings it's fairly obvious, but how do they do that with\nthe global limit (either per-instance or per-query)?\n\n>> The difference between sort and hashagg spills is that for sorts\n>> there is no behavior change. Plans that did (not) spill in v12 will\n>> behave the same way on v13, modulo some random perturbation. For\n>> hashagg that's not the case - some queries that did not spill before\n>> will spill now.\n>>\n>> So even if the hashagg spills are roughly equal to sort spills, both\n>> are significantly more expensive than not spilling.\n>\n>Just to make sure we're on the same page: both are significantly more\n>expensive than a hash aggregate not spilling *specifically*. OTOH, a\n>group aggregate may not be much slower when it spills compared to an\n>in-memory sort group aggregate. 
It may even be noticeably faster, due\n>to caching effects, as you mentioned at one point upthread.\n>\n>This is the property that makes hash aggregate special, and justifies\n>giving it more memory than other nodes on a system-wide basis (the same\n>thing applies to hash join). This could even work as a multiplier of\n>work_mem.\n>\n\nYes, I agree.\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Sat, 27 Jun 2020 12:41:41 +0200", "msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Default setting for enable_hashagg_disk" }, { "msg_contents": "On Sat, Jun 27, 2020 at 3:00 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> I think the advantage of delaying it is that we\n> might see some real problems (like where hash aggregate is not a good\n> choice) which can be fixed via the costing model.\n\nI think any problem that might come up with the costing is best\nthought of as a distinct problem. This thread is mostly about the\nproblem of users getting fewer in-memory hash aggregates compared to a\nprevious release running the same application (though there has been\nsome discussion of the other problem, too [1], but it's thought to be\nless serious).\n\nThe problem is that affected users were theoretically never entitled\nto the performance they came to rely on, and yet there is good reason\nto think that hash aggregate really should be entitled to more memory.\nThey won't care that they were theoretically never entitled to that\nperformance, though -- they *liked* the fact that hash agg could\ncheat. And they'll dislike the fact that this cannot be corrected by\ntuning work_mem, since that affects all node types that consume\nwork_mem, not just hash aggregate -- that could cause OOMs for them.\n\nThere are two or three similar ideas under discussion that might fix\nthe problem. 
They all seem to involve admitting that hash aggregate's\n\"cheating\" might actually have been a good thing all along (even\nthough giving hash aggregate much much more memory than other nodes is\nterrible), and giving hash aggregate license to \"cheat openly\". Note\nthat the problem isn't exactly a problem with the hash aggregate\nspilling patch. You could think of the problem as a pre-existing issue\n-- a failure to give more memory to hash aggregate, which really\nshould be entitled to more memory. Jeff's patch just made the issue\nmore obvious.\n\n[1] https://postgr.es/m/20200624191433.5gnqgrxfmucexldm@alap3.anarazel.de\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Sun, 28 Jun 2020 17:40:16 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: Default setting for enable_hashagg_disk" }, { "msg_contents": "On Sat, Jun 27, 2020 at 3:41 AM Tomas Vondra\n<tomas.vondra@2ndquadrant.com> wrote:\n> Well, there are multiple ideas discussed in this (sub)thread, one of\n> them being a per-query memory limit. That requires decisions how much\n> memory should different nodes get, which I think would need to be\n> cost-based.\n\nA design like that probably makes sense. But it's way out of scope for\nPostgres 13, and not something that should be discussed further on\nthis thread IMV.\n\n> Of course, a simpler scheme like this would not require that. And maybe\n> introducing hash_mem is a good idea - I'm not particularly opposed to\n> that, actually. But I think we should not introduce separate memory\n> limits for each node type, which was also mentioned earlier.\n\nI had imagined that hash_mem would apply to hash join and hash\naggregate only. A GUC that either represents a multiple of work_mem,\nor an absolute work_mem-style KB value.\n\n> The problem of course is that hash_mem does not really solve the issue\n> discussed at the beginning of this thread, i.e. 
regressions due to\n> underestimates and unexpected spilling at execution time.\n\nLike Andres, I suspect that that's a smaller problem in practice. A\nhash aggregate that spills often has performance characteristics\nsomewhat like a group aggregate + sort, anyway. I'm worried about\ncases where an *in-memory* hash aggregate is naturally far far faster\nthan other strategies, and yet we can't use it -- despite the fact\nthat Postgres 12 could \"safely\" do so. (It probably doesn't matter\nwhether the slow plan that you get in Postgres 13 is a hash aggregate\nthat spills, or something else -- this is not really a costing\nproblem.)\n\nBesides, hash_mem *can* solve that problem to some extent. Other cases\n(cases where the estimates are so bad that hash_mem won't help) seem\nlike less of a concern to me. To some extent, that's the price you pay\nto avoid the risk of an OOM.\n\n> The thread is getting a rather confusing mix of proposals how to fix\n> that for v13 and proposals how to improve our configuration of memory\n> limits :-(\n\nAs I said to Amit in my last message, I think that all of the ideas\nthat are worth pursuing involve giving hash aggregate nodes license to\nuse more memory than other nodes. One variant involves doing so only\nat execution time, while the hash_mem idea involves formalizing and\ndocumenting that hash-based nodes are special -- and taking that into\naccount during both planning and execution.\n\n> Interesting. What is not entirely clear to me how do these databases\n> decide how much should each node get during planning. With the separate\n> work_mem-like settings it's fairly obvious, but how do they do that with\n> the global limit (either per-instance or per-query)?\n\nI don't know, but that seems like a much more advanced way of\napproaching the problem. 
It isn't in scope here.\n\nPerhaps I'm not considering some unintended consequence of the planner\ngiving hash-based nodes extra memory \"for free\" in the common case\nwhere hash_mem exceeds work_mem (by 2x, say). But my guess is that\nthat won't be a significant problem that biases the planner in some\nobviously undesirable way.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Sun, 28 Jun 2020 18:39:40 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: Default setting for enable_hashagg_disk" }, { "msg_contents": "On Sun, Jun 28, 2020 at 06:39:40PM -0700, Peter Geoghegan wrote:\n>On Sat, Jun 27, 2020 at 3:41 AM Tomas Vondra\n><tomas.vondra@2ndquadrant.com> wrote:\n>> Well, there are multiple ideas discussed in this (sub)thread, one of\n>> them being a per-query memory limit. That requires decisions how much\n>> memory should different nodes get, which I think would need to be\n>> cost-based.\n>\n>A design like that probably makes sense. But it's way out of scope for\n>Postgres 13, and not something that should be discussed further on\n>this thread IMV.\n>\n\n100% agree\n\n>> Of course, a simpler scheme like this would not require that. And maybe\n>> introducing hash_mem is a good idea - I'm not particularly opposed to\n>> that, actually. But I think we should not introduce separate memory\n>> limits for each node type, which was also mentioned earlier.\n>\n>I had imagined that hash_mem would apply to hash join and hash\n>aggregate only. A GUC that either represents a multiple of work_mem,\n>or an absolute work_mem-style KB value.\n>\n\nI'm not against having a hash_mem (and I'd vote for it to be a simple\nwork_mem-style value, not a multiple). 
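To make the "simple work_mem-style value" concrete, here is a minimal sketch of the semantics being discussed. This is a hypothetical illustration with invented names, not the actual PostgreSQL implementation: hash-based nodes consult their own, typically larger, limit, while every other node keeps using plain work_mem.

```python
# Hypothetical sketch of the hash_mem idea: a separate work_mem-style limit
# (in kB) that only hash-based nodes consult, applied the same way at plan
# time ("will the hash table fit?") and at execution time ("spill now?").
# Names are illustrative only; this is not the actual PostgreSQL API.

WORK_MEM_KB = 4096        # plain work_mem, used by sorts and everything else
HASH_MEM_KB = 16384       # separate, larger knob for hash-based nodes

HASH_BASED_NODES = {"HashAggregate", "HashJoin"}

def node_mem_budget_kb(node_type: str) -> int:
    if node_type in HASH_BASED_NODES:
        # A value below work_mem is treated as work_mem, as proposed upthread.
        return max(HASH_MEM_KB, WORK_MEM_KB)
    return WORK_MEM_KB
```

Under settings like these, a sort still spills past 4MB while a hash aggregate gets 16MB before spilling, i.e. hash-based nodes get "license to use more memory" without raising work_mem globally.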
Maybe we should have it, the\nargument to allow hashagg (and perhaps hashjoin) to use more memory than\nsome other nodes seems convincing to me.\n\nI'm just not sure which of the problems mentioned in this thread it\nactually addresses ...\n\n>> The problem of course is that hash_mem does not really solve the issue\n>> discussed at the beginning of this thread, i.e. regressions due to\n>> underestimates and unexpected spilling at execution time.\n>\n>Like Andres, I suspect that that's a smaller problem in practice. A\n>hash aggregate that spills often has performance characteristics\n>somewhat like a group aggregate + sort, anyway. I'm worried about\n>cases where an *in-memory* hash aggregate is naturally far far faster\n>than other strategies, and yet we can't use it -- despite the fact\n>that Postgres 12 could \"safely\" do so. (It probably doesn't matter\n>whether the slow plan that you get in Postgres 13 is a hash aggregate\n>that spills, or something else -- this is not really a costing\n>problem.)\n>\n\nNot sure I follow. Which cases do you mean when you say that 12 could\nsafely do them, but 13 won't? I see the following two cases:\n\n\na) Planner in 12 and 13 disagree about whether the hash table will fit\ninto work_mem.\n\nI don't quite see why this would be the case (given the same cardinality\nestimates etc.), though. That is, if 12 says \"will fit\" I'd expect 13 to\nend up with the same conclusion. But maybe 13 has higher per-tuple\noverhead or something? I know we set aside some memory for BufFiles, but\nnot when we expect the whole hash table to fit into memory.\n\n\nb) Planner believes the hash table will fit, due to underestimate.\n\nOn 12, we'd just let the hash table overflow, which may be a win when\nthere's enough RAM and the estimate is not \"too wrong\". But it may\neasily end with a sad OOM.\n\nOn 13, we'll just start spilling. 
True - people tend to use conservative\nwork_mem values exactly because of cases like this (which is somewhat\nfutile as the underestimate may be arbitrarily wrong) and also because\nthey don't know how many work_mem instances the query will use.\n\nSo yeah, I understand why people may not want to increase work_mem too\nmuch, and maybe hash_work would be a way to get the \"no spill\" behavior.\n\n\n>Besides, hash_mem *can* solve that problem to some extent. Other cases\n>(cases where the estimates are so bad that hash_mem won't help) seem\n>like less of a concern to me. To some extent, that's the price you pay\n>to avoid the risk of an OOM.\n>\n\nTrue, avoiding the risk of OOM has its cost.\n\n>> The thread is getting a rather confusing mix of proposals how to fix\n>> that for v13 and proposals how to improve our configuration of memory\n>> limits :-(\n>\n>As I said to Amit in my last message, I think that all of the ideas\n>that are worth pursuing involve giving hash aggregate nodes license to\n>use more memory than other nodes. One variant involves doing so only\n>at execution time, while the hash_mem idea involves formalizing and\n>documenting that hash-based nodes are special -- and taking that into\n>account during both planning and execution.\n>\n\nUnderstood. I mostly agree with this.\n\n>> Interesting. What is not entirely clear to me is how these databases\n>> decide how much each node should get during planning. With the separate\n>> work_mem-like settings it's fairly obvious, but how do they do that with\n>> the global limit (either per-instance or per-query)?\n>\n>I don't know, but that seems like a much more advanced way of\n>approaching the problem. 
But my guess is that\n>that won't be a significant problem that biases the planner in some\n>obviously undesirable way.\n>\n\nMy concern is how much more difficult these proposals would make it to\nreason about memory usage. Maybe not much, not sure.\n\nI certainly agree it may be beneficial to give more memory to hashagg at\nthe expense of other nodes.\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Mon, 29 Jun 2020 17:06:55 +0200", "msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Default setting for enable_hashagg_disk" }, { "msg_contents": "On Sun, Jun 28, 2020 at 05:40:16PM -0700, Peter Geoghegan wrote:\n> I think any problem that might come up with the costing is best\n> thought of as a distinct problem. This thread is mostly about the\n> problem of users getting fewer in-memory hash aggregates compared to a\n> previous release running the same application (though there has been\n> some discussion of the other problem, too [1], but it's thought to be\n> less serious).\n> \n> The problem is that affected users were theoretically never entitled\n> to the performance they came to rely on, and yet there is good reason\n> to think that hash aggregate really should be entitled to more memory.\n> They won't care that they were theoretically never entitled to that\n> performance, though -- they *liked* the fact that hash agg could\n> cheat. And they'll dislike the fact that this cannot be corrected by\n> tuning work_mem, since that affects all node types that consume\n> work_mem, not just hash aggregate -- that could cause OOMs for them.\n> \n> There are two or three similar ideas under discussion that might fix\n> the problem. 
They all seem to involve admitting that hash aggregate's\n> \"cheating\" might actually have been a good thing all along (even\n> though giving hash aggregate much much more memory than other nodes is\n> terrible), and giving hash aggregate license to \"cheat openly\". Note\n> that the problem isn't exactly a problem with the hash aggregate\n> spilling patch. You could think of the problem as a pre-existing issue\n> -- a failure to give more memory to hash aggregate, which really\n> should be entitled to more memory. Jeff's patch just made the issue\n> more obvious.\n\nIn thinking some more about this, I came out with two ideas. First, in\npre-PG 13, we didn't choose hash_agg if we thought it would spill, but\nif we misestimated and it used more work_mem, we allowed it. The effect\nof this is that if we were close, but it went over, we allowed it just\nfor hash_agg. Is this something we want to codify for all node types,\ni.e., choose a non-spill node type if we need a lot more than work_mem,\nbut then let work_mem be a soft limit if we do choose it, e.g., allow\n50% over work_mem in the executor for misestimation before spill? My\npoint is, do we want to use a lower work_mem for planning and a higher\none in the executor before spilling.\n\nMy second thought is from an earlier report that spilling is very\nexpensive, but smaller work_mem doesn't seem to hurt much. 
Would we\nachieve better overall performance by giving a few nodes a lot of memory\n(and not spill those), and other nodes very little, rather than having\nthem all be the same size, and all spill?\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n The usefulness of a cup is in its emptiness, Bruce Lee\n\n\n\n", "msg_date": "Mon, 29 Jun 2020 11:29:09 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: Default setting for enable_hashagg_disk" }, { "msg_contents": "On Mon, Jun 29, 2020 at 8:07 AM Tomas Vondra\n<tomas.vondra@2ndquadrant.com> wrote:\n> Not sure I follow. Which cases do you mean when you say that 12 could\n> safely do them, but 13 won't? I see the following two cases:\n\n> a) Planner in 12 and 13 disagree about whether the hash table will fit\n> into work_mem.\n>\n> I don't quite see why this would be the case (given the same cardinality\n> estimates etc.), though. That is, if 12 says \"will fit\" I'd expect 13 to\n> end up with the same conclusion. But maybe 13 has higher per-tuple\n> overhead or something? I know we set aside some memory for BufFiles, but\n> not when we expect the whole hash table to fit into memory.\n\nI have no reason to believe that the planner is any more or any less\nlikely to conclude that the hash table will fit in memory in v13 as\nthings stand (I don't know if the BufFile issue matters).\n\nIn general, grouping estimates probably aren't very good compared to\njoin estimates. I imagine that in either v12 or v13 the planner is\nlikely to incorrectly believe that it'll all fit in memory fairly\noften. v12 was much too permissive about what could happen. But v13 is\ntoo conservative.\n\n> b) Planner believes the hash table will fit, due to underestimate.\n>\n> On 12, we'd just let the hash table overflow, which may be a win when\n> there's enough RAM and the estimate is not \"too wrong\". 
But it may\n> easily end with a sad OOM.\n\nIt might end up with an OOM on v12 due to an underestimate -- but\nprobably not! The fact that a hash aggregate is faster than a group\naggregate ameliorates the higher memory usage. You might actually use\nless memory this way.\n\n> On 13, we'll just start spilling. True - people tend to use conservative\n> work_mem values exactly because of cases like this (which is somewhat\n> futile as the underestimate may be arbitrarily wrong) and also because\n> they don't know how many work_mem instances the query will use.\n>\n> So yeah, I understand why people may not want to increase work_mem too\n> much, and maybe hash_work would be a way to get the \"no spill\" behavior.\n\nAndres wanted to increase the amount of memory that could be used at\nexecution time, without changing planning. You could say that hash_mem\nis a more ambitious proposal than that. It's changing the behavior\nacross the board -- though in a way that makes sense anyway. It has\nthe additional benefit of making it more likely that an in-memory hash\naggregate will be used. That isn't a problem that we're obligated to\nsolve now, so this may seem odd. But if the more ambitious plan is\nactually easier to implement and support, why not pursue it?\n\nhash_mem seems a lot easier to explain and reason about than having\ndifferent work_mem budgets during planning and execution, which is\nclearly a kludge. 
hash_mem makes sense generally, and more or less\nsolves the problems raised on this thread.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Mon, 29 Jun 2020 10:20:14 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: Default setting for enable_hashagg_disk" }, { "msg_contents": "On Mon, Jun 29, 2020 at 10:20:14AM -0700, Peter Geoghegan wrote:\n> I have no reason to believe that the planner is any more or any less\n> likely to conclude that the hash table will fit in memory in v13 as\n> things stand (I don't know if the BufFile issue matters).\n> \n> In general, grouping estimates probably aren't very good compared to\n> join estimates. I imagine that in either v12 or v13 the planner is\n> likely to incorrectly believe that it'll all fit in memory fairly\n> often. v12 was much too permissive about what could happen. But v13 is\n> too conservative.\n\nFYI, we have improved planner statistics estimates for years, which must\nhave affected node spill behavior on many node types (except hash_agg),\nand don't remember any complaints about it.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n The usefulness of a cup is in its emptiness, Bruce Lee\n\n\n\n", "msg_date": "Mon, 29 Jun 2020 13:31:40 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: Default setting for enable_hashagg_disk" }, { "msg_contents": "On Mon, Jun 29, 2020 at 8:29 AM Bruce Momjian <bruce@momjian.us> wrote:\n> Is this something we want to codify for all node types,\n> i.e., choose a non-spill node type if we need a lot more than work_mem,\n> but then let work_mem be a soft limit if we do choose it, e.g., allow\n> 50% over work_mem in the executor for misestimation before spill? 
My\n> point is, do we want to use a lower work_mem for planning and a higher\n> one in the executor before spilling.\n\nAndres said something about doing that with hash aggregate, which I\ncan see an argument for, but I don't think that it would make sense\nwith most other nodes. In particular, sorts still perform almost as\nwell with only a fraction of the \"optimal\" memory.\n\n> My second thought is from an earlier report that spilling is very\n> expensive, but smaller work_mem doesn't seem to hurt much.\n\nIt's not really about the spilling itself IMV. It's the inability to\ndo hash aggregation in a single pass.\n\nYou can think of hashing (say for hash join or hash aggregate) as a\nstrategy that consists of a logical division followed by a physical\ncombination. Sorting (or sort merge join, or group agg), in contrast,\nconsists of a physical division and logical combination. As a\nconsequence, it can be a huge win to do everything in memory in the\ncase of hash aggregate. Whereas sort-based aggregation can sometimes\nbe slightly faster with external sorts due to CPU caching effects, and\nbecause an on-the-fly merge in tuplesort can output the first tuple\nbefore the tuples are fully sorted.\n\n> Would we\n> achieve better overall performance by giving a few nodes a lot of memory\n> (and not spill those), and other nodes very little, rather than having\n> them all be the same size, and all spill?\n\nIf the nodes that we give more memory to use it for a hash table, then yes.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Mon, 29 Jun 2020 10:36:28 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: Default setting for enable_hashagg_disk" }, { "msg_contents": "On Mon, Jun 29, 2020 at 10:20:14AM -0700, Peter Geoghegan wrote:\n>On Mon, Jun 29, 2020 at 8:07 AM Tomas Vondra\n><tomas.vondra@2ndquadrant.com> wrote:\n>> Not sure I follow. Which cases do you mean when you say that 12 could\n>> safely do them, but 13 won't? 
I see the following two cases:\n>\n>> a) Planner in 12 and 13 disagree about whether the hash table will fit\n>> into work_mem.\n>>\n>> I don't quite see why this would be the case (given the same cardinality\n>> estimates etc.), though. That is, if 12 says \"will fit\" I'd expect 13 to\n>> end up with the same conclusion. But maybe 13 has higher per-tuple\n>> overhead or something? I know we set aside some memory for BufFiles, but\n>> not when we expect the whole hash table to fit into memory.\n>\n>I have no reason to believe that the planner is any more or any less\n>likely to conclude that the hash table will fit in memory in v13 as\n>things stand (I don't know if the BufFile issue matters).\n>\n>In general, grouping estimates probably aren't very good compared to\n>join estimates. I imagine that in either v12 or v13 the planner is\n>likely to incorrectly believe that it'll all fit in memory fairly\n>often. v12 was much too permissive about what could happen. But v13 is\n>too conservative.\n>\n\nCan you give an example of what you mean by being too permissive or too\nconservative? Do you mean the possibility of unlimited memory usage in\nv12, and strict enforcement in v13?\n\nIMO enforcing the work_mem limit (in v13) is right in principle, but I\ndo understand the concerns about unexpected regressions compared to v12.\n\n>> b) Planner believes the hash table will fit, due to underestimate.\n>>\n>> On 12, we'd just let the hash table overflow, which may be a win when\n>> there's enough RAM and the estimate is not \"too wrong\". 
You might actually use\n>less memory this way.\n>\n\nI don't understand what you mean by \"less memory\" when the whole issue\nis significantly exceeding work_mem?\n\nI don't think the OOM is the only negative performance here - using a\nlot of memory also forces eviction of data from page cache (although\nwriting a lot of temporary files may have similar effect).\n\n>> On 13, we'll just start spilling. True - people tend to use conservative\n>> work_mem values exactly because of cases like this (which is somewhat\n>> futile as the underestimate may be arbitrarily wrong) and also because\n>> they don't know how many work_mem instances the query will use.\n>>\n>> So yeah, I understand why people may not want to increase work_mem too\n>> much, and maybe hash_work would be a way to get the \"no spill\" behavior.\n>\n>Andres wanted to increase the amount of memory that could be used at\n>execution time, without changing planning. You could say that hash_mem\n>is a more ambitious proposal than that. It's changing the behavior\n>across the board -- though in a way that makes sense anyway. It has\n>the additional benefit of making it more likely that an in-memory hash\n>aggregate will be used. That isn't a problem that we're obligated to\n>solve now, so this may seem odd. But if the more ambitious plan is\n>actually easier to implement and support, why not pursue it?\n>\n>hash_mem seems a lot easier to explain and reason about than having\n>different work_mem budgets during planning and execution, which is\n>clearly a kludge. hash_mem makes sense generally, and more or less\n>solves the problems raised on this thread.\n>\n\nI agree with this, and I'm mostly OK with having hash_mem. In fact, from\nthe proposals in this thread I like it the most - as long as it's used\nboth during planning and execution. It's a pretty clear solution.\n\nIt's not a perfect solution in the sense that it does not reintroduce\nthe v12 behavior perfectly (i.e. 
we'll still spill after reaching\nhash_mem) but that may be good enough.\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Mon, 29 Jun 2020 23:22:29 +0200", "msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Default setting for enable_hashagg_disk" }, { "msg_contents": "On Mon, Jun 29, 2020 at 01:31:40PM -0400, Bruce Momjian wrote:\n>On Mon, Jun 29, 2020 at 10:20:14AM -0700, Peter Geoghegan wrote:\n>> I have no reason to believe that the planner is any more or any less\n>> likely to conclude that the hash table will fit in memory in v13 as\n>> things stand (I don't know if the BufFile issue matters).\n>>\n>> In general, grouping estimates probably aren't very good compared to\n>> join estimates. I imagine that in either v12 or v13 the planner is\n>> likely to incorrectly believe that it'll all fit in memory fairly\n>> often. v12 was much too permissive about what could happen. But v13 is\n>> too conservative.\n>\n>FYI, we have improved planner statistics estimates for years, which must\n>have affected node spill behavior on many node types (except hash_agg),\n>and don't remember any complaints about it.\n>\n\nI think misestimates for GROUP BY are quite common and very hard to fix.\nFirstly, our ndistinct estimator may give pretty bad results depending\ne.g. on how the table is correlated.\n\nI've been running some TPC-H benchmarks, and for partsupp.ps_partkey our\nestimate was 4338776, when the actual value is 15000000, i.e. ~3.5x\nhigher. This was with statistics target increased to 1000. I can easily\nimagine even worse estimates with lower values.\n\nThis ndistinct estimator is used even for extended statistics, so that\ncan't quite save us. 
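This underestimation pattern is easy to reproduce. The sketch below is a simplified, standalone version of the Duj1 estimator (Haas & Stokes) that ANALYZE bases its ndistinct estimate on, run on synthetic data; it is not the actual analyze.c code. On a uniformly sampled all-distinct column the estimator extrapolates well, while on a skewed column it can undershoot the true distinct count by more than an order of magnitude:

```python
import random

def estimate_ndistinct(sample, total_rows):
    # Simplified Duj1 estimator (Haas & Stokes):
    #   D = n*d / (n - f1 + f1*n/N)
    # where n = sample size, N = table size, d = distinct values in the
    # sample, and f1 = number of values seen exactly once.
    n = len(sample)
    counts = {}
    for v in sample:
        counts[v] = counts.get(v, 0) + 1
    d = len(counts)
    f1 = sum(1 for c in counts.values() if c == 1)
    return n * d / (n - f1 + f1 * n / total_rows)

random.seed(1)
N = 1_000_000

# All-distinct column: everything in the sample is seen once, and the
# estimator extrapolates to roughly N, the right answer.
est_uniform = estimate_ndistinct(random.sample(range(N), 30_000), N)

# Skewed column: 1000 heavy values fill half the rows, a distinct long
# tail fills the other half (500,000 true distinct values). The heavy
# values drag f1 down, and the estimate lands far below 500,000.
skewed = [i % 1000 for i in range(N // 2)] + list(range(N // 2))
est_skewed = estimate_ndistinct(random.sample(skewed, 30_000), N)
```

An ndistinct underestimate of that size is exactly what makes the planner believe a hash table will fit in work_mem when it will not.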
Moreover, the grouping may be on top of a join, in\nwhich case using ndistinct coefficients may not be possible :-(\n\nSo I think this is a quite real problem ...\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Mon, 29 Jun 2020 23:33:19 +0200", "msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Default setting for enable_hashagg_disk" }, { "msg_contents": "On Mon, Jun 29, 2020 at 2:22 PM Tomas Vondra\n<tomas.vondra@2ndquadrant.com> wrote:\n> Can you give and example of what you mean by being too permissive or too\n> conservative? Do you mean the possibility of unlimited memory usage in\n> v12, and strict enforcement in v13?\n\nYes -- that's all I meant.\n\n> IMO enforcing the work_mem limit (in v13) is right in principle, but I\n> do understand the concerns about unexpected regressions compared to v12.\n\nYeah. Both of these two things are true at the same time.\n\n> I don't understand what you mean by \"less memory\" when the whole issue\n> is significantly exceeding work_mem?\n\nI was just reiterating what I said a few times already: Not using an\nin-memory hash aggregate when the amount of memory required is high\nbut not prohibitively high is penny wise, pound foolish. It's easy to\nimagine this actually using up more memory when an entire workload is\nconsidered. This observation does not apply to a system that only ever\nhas one active connection at a time, but who cares about that?\n\n> I don't think the OOM is the only negative performance here - using a\n> lot of memory also forces eviction of data from page cache (although\n> writing a lot of temporary files may have similar effect).\n\nTrue.\n\n> I agree with this, and I'm mostly OK with having hash_mem. In fact, from\n> the proposals in this thread I like it the most - as long as it's used\n> both during planning and execution. 
It's a pretty clear solution.\n\nGreat.\n\nIt's not trivial to write the patch, since there are a few tricky\ncases. For example, maybe there is some subtlety in nodeAgg.c with\nAGG_MIXED cases. Is there somebody else that knows that code better\nthan I do that wants to have a go at writing a patch?\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Mon, 29 Jun 2020 14:46:38 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: Default setting for enable_hashagg_disk" }, { "msg_contents": "On Mon, Jun 29, 2020 at 2:46 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> On Mon, Jun 29, 2020 at 2:22 PM Tomas Vondra\n> <tomas.vondra@2ndquadrant.com> wrote:\n> > I agree with this, and I'm mostly OK with having hash_mem. In fact, from\n> > the proposals in this thread I like it the most - as long as it's used\n> > both during planning and execution. It's a pretty clear solution.\n>\n> Great.\n>\n> It's not trivial to write the patch, since there are a few tricky\n> cases. For example, maybe there is some subtlety in nodeAgg.c with\n> AGG_MIXED cases.\n\nAttached is an attempt at this. I have not been particularly thorough,\nsince it is still not completely clear that the hash_mem proposal has\na serious chance of resolving the \"many users rely on hashagg\nexceeding work_mem, regardless of whether or not that is the intended\nbehavior in Postgres 12\" problem. But at least we have a patch now,\nand so have some idea of how invasive this will have to be. We also\nhave something to test.\n\nNote that I created a new open item for this \"maybe we need something\nlike a hash_mem GUC now\" problem today. To recap, this thread started\nout being a discussion about the enable_hashagg_disk GUC, which seems\nlike a distinct problem to me. 
It won't make much sense to return to\ndiscussing the original problem before we have a solution to this\nother problem (the problem that I propose to address by inventing\nhash_mem).\n\nAbout the patch:\n\nThe patch adds hash_mem, which is just another work_mem-like GUC that\nthe patch has us use in certain cases -- cases where the work area is\na hash table (but not when it's a sort, or some kind of bitset, or\nanything else). I still think that the new GUC should work as a\nmultiplier of work_mem, or something else along those lines, though\nfor now it's just an independent work_mem used for hashing. I bring it\nup again because I'm concerned about users that upgrade to Postgres 13\nincautiously, and find that hashing uses *less* memory than before.\nMany users probably get away with setting work_mem quite high across\nthe board. At the very least, hash_mem should be ignored when it's set\nto below work_mem (which isn't what the patch does).\n\nIt might have made more sense to call the new GUC hash_work_mem\ninstead of hash_mem. I don't feel strongly about the name. Again, this\nis just a starting point for further discussion.\n\n-- \nPeter Geoghegan", "msg_date": "Thu, 2 Jul 2020 19:05:48 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: Default setting for enable_hashagg_disk" }, { "msg_contents": "On Thu, Jul 02, 2020 at 07:05:48PM -0700, Peter Geoghegan wrote:\n> On Mon, Jun 29, 2020 at 2:46 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> > On Mon, Jun 29, 2020 at 2:22 PM Tomas Vondra <tomas.vondra@2ndquadrant.com> wrote:\n> > > I agree with this, and I'm mostly OK with having hash_mem. In fact, from\n> > > the proposals in this thread I like it the most - as long as it's used\n> > > both during planning and execution. It's a pretty clear solution.\n> >\n> > Great.\n> >\n> > It's not trivial to write the patch, since there are a few tricky\n> > cases. 
For example, maybe there is some subtlety in nodeAgg.c with\n> > AGG_MIXED cases.\n> \n> Attached is an attempt at this,\n\nThanks for putting it together, I agree that hash_mem seems to be an obvious\n\"escape hatch\" that generalizes existing GUCs and is independently useful.\n\n> anything else). I still think that the new GUC should work as a\n> multiplier of work_mem, or something else along those lines, though\n> for now it's just an independent work_mem used for hashing. I bring it\n> up again because I'm concerned about users that upgrade to Postgres 13\n> incautiously, and find that hashing uses *less* memory than before.\n> Many users probably get away with setting work_mem quite high across\n> the board. At the very least, hash_mem should be ignored when it's set\n> to below work_mem (which isn't what the patch does).\n\nI feel it should be the same as work_mem, as it's written, and not a multiplier.\n\nAnd actually I don't think a lower value should be ignored: \"mechanism not\npolicy\". Do we refuse atypical values of maintenance_work_mem < work_mem ?\n\nI assumed that hash_mem would default to -1, which would mean \"fall back to\nwork_mem\". We'd then advise users to increase it if they have an issue in v13\nwith performance of hashes spilled to disk. (And maybe in other cases, too.)\n\nI read the argument that hash tables are a better use of RAM than sort.\nHowever it seems like setting the default to greater than work_mem is a\nseparate change from providing the mechanism allowing the user to do so. I guess\nthe change in default is intended to mitigate the worst possible behavior\nchange someone might experience in v13 hashing, and might be expected to\nimprove \"out of the box\" performance. 
I'm not opposed to it, but it's not an\nessential part of the patch.\n\nIn nodeHash.c, you missed an underscore:\n+ * Target in-memory hashtable size is hashmem kilobytes.\n\n-- \nJustin\n\n\n", "msg_date": "Thu, 2 Jul 2020 21:46:49 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: Default setting for enable_hashagg_disk (hash_mem)" }, { "msg_contents": "On Thu, Jul 2, 2020 at 09:46:49PM -0500, Justin Pryzby wrote:\n> On Thu, Jul 02, 2020 at 07:05:48PM -0700, Peter Geoghegan wrote:\n> > anything else). I still think that the new GUC should work as a\n> > multiplier of work_mem, or something else along those lines, though\n> > for now it's just an independent work_mem used for hashing. I bring it\n> > up again because I'm concerned about users that upgrade to Postgres 13\n> > incautiously, and find that hashing uses *less* memory than before.\n> > Many users probably get away with setting work_mem quite high across\n> > the board. At the very least, hash_mem should be ignored when it's set\n> > to below work_mem (which isn't what the patch does).\n> \n> I feel it should same as work_mem, as it's written, and not a multiplier.\n> \n> And actually I don't think a lower value should be ignored: \"mechanism not\n> policy\". Do we refuse atypical values of maintenance_work_mem < work_mem ?\n> \n> I assumed that hash_mem would default to -1, which would mean \"fall back to\n> work_mem\". We'd then advise users to increase it if they have an issue in v13\n> with performance of hashes spilled to disk. (And maybe in other cases, too.)\n\nUh, with this patch, don't we really have sort_mem and hash_mem, but\nhash_mem default to sort_mem, or something like that. 
If hash_mem is a\nmultiplier, it would make more sense to keep the work_mem name.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n The usefulness of a cup is in its emptiness, Bruce Lee\n\n\n\n", "msg_date": "Thu, 2 Jul 2020 22:58:34 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: Default setting for enable_hashagg_disk (hash_mem)" }, { "msg_contents": "On Thu, Jul 2, 2020 at 10:58:34PM -0400, Bruce Momjian wrote:\n> On Thu, Jul 2, 2020 at 09:46:49PM -0500, Justin Pryzby wrote:\n> > On Thu, Jul 02, 2020 at 07:05:48PM -0700, Peter Geoghegan wrote:\n> > > anything else). I still think that the new GUC should work as a\n> > > multiplier of work_mem, or something else along those lines, though\n> > > for now it's just an independent work_mem used for hashing. I bring it\n> > > up again because I'm concerned about users that upgrade to Postgres 13\n> > > incautiously, and find that hashing uses *less* memory than before.\n> > > Many users probably get away with setting work_mem quite high across\n> > > the board. At the very least, hash_mem should be ignored when it's set\n> > > to below work_mem (which isn't what the patch does).\n> > \n> > I feel it should same as work_mem, as it's written, and not a multiplier.\n> > \n> > And actually I don't think a lower value should be ignored: \"mechanism not\n> > policy\". Do we refuse atypical values of maintenance_work_mem < work_mem ?\n> > \n> > I assumed that hash_mem would default to -1, which would mean \"fall back to\n> > work_mem\". We'd then advise users to increase it if they have an issue in v13\n> > with performance of hashes spilled to disk. (And maybe in other cases, too.)\n> \n> Uh, with this patch, don't we really have sort_mem and hash_mem, but\n> hash_mem default to sort_mem, or something like that. 
If hash_mem is a\n> multiplier, it would make more sense to keep the work_mem name.\n\nAlso, I feel this is all out of scope for PG 13, frankly. I think our\nonly option is to revert the hash spill entirely, and return to PG 12\nbehavior, if people are too worried about hash performance regression.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n The usefulness of a cup is in its emptiness, Bruce Lee\n\n\n\n", "msg_date": "Thu, 2 Jul 2020 23:00:01 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: Default setting for enable_hashagg_disk (hash_mem)" }, { "msg_contents": "On Thu, Jul 2, 2020 at 8:00 PM Bruce Momjian <bruce@momjian.us> wrote:\n> Also, I feel this is all out of scope for PG 13, frankly. I think our\n> only option is to revert the hash spill entirely, and return to PG 12\n> behavior, if people are too worried about hash performance regression.\n\nBut the problem isn't really the hashaggs-that-spill patch itself.\nRather, the problem is the way that work_mem is supposed to behave in\ngeneral, and the impact that that has on hash aggregate now that it\nhas finally been brought into line with every other kind of executor\nnode. There just isn't much reason to think that we should give the\nsame amount of memory to a groupagg + sort as a hash aggregate. The\npatch more or less broke an existing behavior that is itself\nofficially broken. That is, the problem that we're trying to fix here\nis only a problem to the extent that the previous scheme isn't really\noperating as intended (because grouping estimates are inherently very\nhard). A revert doesn't seem like it helps anyone.\n\nI accept that the idea of inventing hash_mem to fix this problem now\nis unorthodox. In a certain sense it solves problems beyond the\nproblems that we're theoretically obligated to solve now. 
But any\n\"more conservative\" approach that I can think of seems like a big\nmess.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Thu, 2 Jul 2020 20:35:40 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: Default setting for enable_hashagg_disk (hash_mem)" }, { "msg_contents": "On Thu, Jul 2, 2020 at 7:46 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> Thanks for putting it together, I agree that hash_mem seems to be an obvious\n> \"escape hatch\" that generalizes existing GUCs and independently useful.\n\nIt is independently useful. It's a natural consequence of \"being\nhonest\" about work_mem and hashing.\n\n> I feel it should same as work_mem, as it's written, and not a multiplier.\n>\n> And actually I don't think a lower value should be ignored: \"mechanism not\n> policy\". Do we refuse atypical values of maintenance_work_mem < work_mem ?\n\nI see your point, but AFAIK maintenance_work_mem was not retrofit like\nthis. It seems different. (Unless we add the -1 behavior, perhaps)\n\n> I assumed that hash_mem would default to -1, which would mean \"fall back to\n> work_mem\". We'd then advise users to increase it if they have an issue in v13\n> with performance of hashes spilled to disk. (And maybe in other cases, too.)\n\nYeah, this kind of -1 behavior could make sense.\n\n> I read the argument that hash tables are a better use of RAM than sort.\n> However it seems like setting the default to greater than work_mem is a\n> separate change than providing the mechanism allowing user to do so. I guess\n> the change in default is intended to mitigate the worst possible behavior\n> change someone might experience in v13 hashing, and might be expected to\n> improve \"out of the box\" performance. 
I'm not opposed to it, but it's not an\n> essential part of the patch.\n\nThat's true.\n\n> In nodeHash.c, you missed an underscore:\n> + * Target in-memory hashtable size is hashmem kilobytes.\n\nGot it; thanks.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Thu, 2 Jul 2020 20:56:29 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: Default setting for enable_hashagg_disk (hash_mem)" }, { "msg_contents": "On Thu, Jul 2, 2020 at 08:35:40PM -0700, Peter Geoghegan wrote:\n> But the problem isn't really the hashaggs-that-spill patch itself.\n> Rather, the problem is the way that work_mem is supposed to behave in\n> general, and the impact that that has on hash aggregate now that it\n> has finally been brought into line with every other kind of executor\n> node. There just isn't much reason to think that we should give the\n> same amount of memory to a groupagg + sort as a hash aggregate. The\n> patch more or less broke an existing behavior that is itself\n> officially broken. That is, the problem that we're trying to fix here\n> is only a problem to the extent that the previous scheme isn't really\n> operating as intended (because grouping estimates are inherently very\n> hard). A revert doesn't seem like it helps anyone.\n> \n> I accept that the idea of inventing hash_mem to fix this problem now\n> is unorthodox. In a certain sense it solves problems beyond the\n> problems that we're theoretically obligated to solve now. But any\n> \"more conservative\" approach that I can think of seems like a big\n> mess.\n\nWell, the bottom line is that we are designing features during beta.\nPeople are supposed to be testing PG 13 behavior during beta, including\noptimizer behavior. We don't even have a user report yet of a\nregression compared to PG 12, or one that can't be fixed by increasing\nwork_mem.\n\nIf we add a new behavior to PG 13, we then have the pre-PG 13 behavior,\nthe pre-patch behavior, and the post-patch behavior. 
How are people\nsupposed to test all of that? Add to that that some don't even feel we\nneed a new behavior, which is delaying any patch from being applied.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n The usefulness of a cup is in its emptiness, Bruce Lee\n\n\n\n", "msg_date": "Fri, 3 Jul 2020 10:08:08 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: Default setting for enable_hashagg_disk (hash_mem)" }, { "msg_contents": "On Fri, Jul 03, 2020 at 10:08:08AM -0400, Bruce Momjian wrote:\n> On Thu, Jul 2, 2020 at 08:35:40PM -0700, Peter Geoghegan wrote:\n> > But the problem isn't really the hashaggs-that-spill patch itself.\n> > Rather, the problem is the way that work_mem is supposed to behave in\n> > general, and the impact that that has on hash aggregate now that it\n> > has finally been brought into line with every other kind of executor\n> > node. There just isn't much reason to think that we should give the\n> > same amount of memory to a groupagg + sort as a hash aggregate. The\n> > patch more or less broke an existing behavior that is itself\n> > officially broken. That is, the problem that we're trying to fix here\n> > is only a problem to the extent that the previous scheme isn't really\n> > operating as intended (because grouping estimates are inherently very\n> > hard). A revert doesn't seem like it helps anyone.\n> > \n> > I accept that the idea of inventing hash_mem to fix this problem now\n> > is unorthodox. In a certain sense it solves problems beyond the\n> > problems that we're theoretically obligated to solve now. But any\n> > \"more conservative\" approach that I can think of seems like a big\n> > mess.\n> \n> Well, the bottom line is that we are designing features during beta.\n> People are supposed to be testing PG 13 behavior during beta, including\n> optimizer behavior. 
We don't even have a user report yet of a\n> regression compared to PG 12, or one that can't be fixed by increasing\n> work_mem.\n> \n> If we add a new behavior to PG 13, we then have the pre-PG 13 behavior,\n> the pre-patch behavior, and the post-patch behavior. How are people\n> supposed to test all of that? Add to that that some don't even feel we\n> need a new behavior, which is delaying any patch from being applied.\n\nIf we default hash_mem=-1, the post-patch behavior by default would be the same\nas the pre-patch behavior.\n\nActually, another reason it should be -1 is simply to reduce the minimum,\nessential number of GUCs everyone has to change or review on a new install of\na dedicated or nontrivial instance. shared_buffers, max_wal_size,\ncheckpoint_timeout, eff_cache_size, work_mem.\n\nI don't think anybody said it before, but now it occurs to me that one\nadvantage of making hash_mem a multiplier (I'm thinking of\nhash_mem_scale_factor) rather than an absolute is that one wouldn't need to\nremember to increase hash_mem every time they increase work_mem. Otherwise,\nthis is kind of a foot-gun: hash_mem would default to 16MB, and people\nexperiencing poor performance would increase work_mem to 256MB like they've\nbeen doing for decades, and see no effect. 
Or someone would increase work_mem\nfrom 4MB to 256MB, which exceeds the hash_mem default of 16MB, so then (if\nPeter has his way) hash_mem is ignored.\n\nDue to these behaviors, I'll retract my previous preference:\n| \"I feel it should be the same as work_mem, as it's written, and not a multiplier.\"\n\nI think the better ideas are:\n - hash_mem=-1\n - hash_mem_scale_factor=1 ?\n\nMaybe as a separate patch we'd set default hash_mem_scale_factor=4, possibly\nonly in master and not v13.\n\n-- \nJustin\n\n\n", "msg_date": "Fri, 3 Jul 2020 09:56:20 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: Default setting for enable_hashagg_disk (hash_mem)" }, { "msg_contents": "On 2020-Jul-03, Bruce Momjian wrote:\n\n> Well, the bottom line is that we are designing features during beta.\n\nWell, we're designing a way for users to interact with the new feature.\nThe feature itself is already in, and it works well in general terms. I\nexpect that the new behavior is a win in the majority of cases, and the\nproblem being discussed here will only manifest as a regression in\ncorner cases. (I don't have data to back this up, but if this weren't\nthe case we would have realized much earlier).\n\nIt seems to me we're designing a solution to a problem that was found\nduring testing, which seems perfectly acceptable to me. I don't see\ngrounds for reverting the behavior and I haven't seen anyone suggesting\nthat it would be an appropriate solution to the issue.\n\n> If we add a new behavior to PG 13, we then have the pre-PG 13 behavior,\n> the pre-patch behavior, and the post-patch behavior. How are people\n> supposed to test all of that?\n\nThey don't have to. 
We tell them that we added some new tweak for a new\npg13 feature in beta3 and that's it.\n\n> Add to that that some don't even feel we\n> need a new behavior, which is delaying any patch from being applied.\n\nIf we don't need any new behavior, then we would just close the open\nitem and call the current state good, no?\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Fri, 3 Jul 2020 15:50:02 -0400", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Default setting for enable_hashagg_disk (hash_mem)" }, { "msg_contents": "On Fri, Jul 3, 2020 at 7:38 PM Bruce Momjian <bruce@momjian.us> wrote:\n>\n> On Thu, Jul 2, 2020 at 08:35:40PM -0700, Peter Geoghegan wrote:\n> > But the problem isn't really the hashaggs-that-spill patch itself.\n> > Rather, the problem is the way that work_mem is supposed to behave in\n> > general, and the impact that that has on hash aggregate now that it\n> > has finally been brought into line with every other kind of executor\n> > node. There just isn't much reason to think that we should give the\n> > same amount of memory to a groupagg + sort as a hash aggregate. The\n> > patch more or less broke an existing behavior that is itself\n> > officially broken. That is, the problem that we're trying to fix here\n> > is only a problem to the extent that the previous scheme isn't really\n> > operating as intended (because grouping estimates are inherently very\n> > hard). A revert doesn't seem like it helps anyone.\n> >\n> > I accept that the idea of inventing hash_mem to fix this problem now\n> > is unorthodox. In a certain sense it solves problems beyond the\n> > problems that we're theoretically obligated to solve now. 
But any\n> > \"more conservative\" approach that I can think of seems like a big\n> > mess.\n>\n> We don't even have a user report yet of a\n> regression compared to PG 12, or one that can't be fixed by increasing\n> work_mem.\n>\n\nYeah, this is exactly the same point I have raised above. I feel we\nshould wait before designing any solution to match pre-13 behavior for\nhashaggs to see what percentage of users face problems related to this\nand how much is a problem for them to increase work_mem to avoid\nregression. Say, if only less than 1% of users face this problem and\nsome of them are happy by just increasing work_mem then we might not\nneed to do anything. OTOH, if 10% users face this problem and most of\nthem don't want to increase work_mem then it would be evident that we\nneed to do something about it and we can probably provide a guc at\nthat stage for them to revert to old behavior and do some advanced\nsolution in the master branch. I am not sure what is the right thing\nto do here but it seems to me we are designing a solution based on the\nassumption that we will have a lot of users who will be hit by this\nproblem and would be unhappy by the new behavior.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Sat, 4 Jul 2020 14:49:46 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Default setting for enable_hashagg_disk (hash_mem)" }, { "msg_contents": "On Sat, 2020-07-04 at 14:49 +0530, Amit Kapila wrote:\n> > We don't even have a user report yet of a\n> > regression compared to PG 12, or one that can't be fixed by\n> > increasing\n> > work_mem.\n> > \n> \n> Yeah, this is exactly the same point I have raised above. 
I feel we\n> should wait before designing any solution to match pre-13 behavior\n> for\n> hashaggs to see what percentage of users face problems related to\n> this\n> and how much is a problem for them to increase work_mem to avoid\n> regression.\n\nI agree that it's good to wait for actual problems. But the challenge\nis that we can't backport an added GUC. Are there other, backportable\nchanges we could potentially make if a lot of users have a problem with\nv13 after release? Or will any users who experience a problem need to\nwait for v14?\n\nI'm OK not having a GUC, but we need consensus around what our response\nwill be if a user experiences a regression. If our only answer is\n\"tweak X, Y, and Z; and if that doesn't work, wait for v14\" then I'd\nlike almost everyone to be on board with that. If we have some\nbackportable potential solutions, that gives us a little more\nconfidence that we can still get that user onto v13 (even if they have\nto wait for a point release).\n\nWithout some backportable potential solutions, I'm inclined to ship\nwith either one or two escape-hatch GUCs, with warnings that they\nshould be used as a last resort. Hopefully users will complain on the\nlists (so we understand the problem) before setting them.\n\nIt's not very principled, and we may be stuck with some cruft, but it\nmitigates the risk a lot. There's a good chance we can remove them\nlater, especially if it's part of a larger overhaul of\nwork_mem/hash_mem (which might happen fairly soon, given the interest\nin this thread), or if we change something about HashAgg that makes the\nGUCs harder to maintain.\n\nRegards,\n\tJeff Davis\n\n\n\n\n", "msg_date": "Sat, 04 Jul 2020 13:53:58 -0700", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": true, "msg_subject": "Re: Default setting for enable_hashagg_disk (hash_mem)" }, { "msg_contents": "On Sat, Jul 4, 2020 at 1:54 PM Jeff Davis <pgsql@j-davis.com> wrote:\n> I agree that it's good to wait for actual problems. 
But the challenge\n> is that we can't backport an added GUC. Are there other, backportable\n> changes we could potentially make if a lot of users have a problem with\n> v13 after release?\n\nI doubt that there are.\n\n> I'm OK not having a GUC, but we need consensus around what our response\n> will be if a user experiences a regression. If our only answer is\n> \"tweak X, Y, and Z; and if that doesn't work, wait for v14\" then I'd\n> like almost everyone to be on board with that.\n\nI'm practically certain that there will be users that complain about\nregressions. It's all but inevitable given that in general grouping\nestimates are often wrong by orders of magnitude.\n\n> Without some backportable potential solutions, I'm inclined to ship\n> with either one or two escape-hatch GUCs, with warnings that they\n> should be used as a last resort. Hopefully users will complain on the\n> lists (so we understand the problem) before setting them.\n\nWhere does that leave the hash_mem idea (or some other similar proposal)?\n\nI think that we should offer something like hash_mem that can work as\na multiple of work_mem, for the reason that Justin mentioned recently.\nThis can be justified as something that more or less maintains some\nkind of continuity with the old design.\n\nI think that it should affect hash join too, though I suppose that\nthat part might be controversial -- that is certainly more than an\nescape hatch for this particular problem. Any thoughts on that?\n\n> It's not very principled, and we may be stuck with some cruft, but it\n> mitigates the risk a lot. There's a good chance we can remove them\n> later, especially if it's part of a larger overhall of\n> work_mem/hash_mem (which might happen fairly soon, given the interest\n> in this thread), or if we change something about HashAgg that makes the\n> GUCs harder to maintain.\n\nThere are several reasons to get rid of work_mem entirely in the\nmedium to long term. 
Some relatively obvious, others less so.\n\nAn example in the latter category is \"hash teams\" [1]: a design that\nteaches multiple hash operations (e.g. a hash join and a hash\naggregate that hash on the same columns) to cooperate in processing\ntheir inputs. It's more or less the hashing equivalent of what are\nsometimes called \"interesting sort orders\" (e.g. cases where the same\nsort/sort order is used by both a merge join and a group aggregate).\nThe hash team controls spilling behavior for related hash nodes as a\nwhole. That's the most sensible way of thinking about the related hash\nnodes, to enable a slew of optimizations. For example, I think that it\nenables bushy plans with multiple hash joins that can have much lower\nhigh watermark memory consumption.\n\nThis hash teams business seems quite important in general, but it is\nfundamentally incompatible with the work_mem model, which supposes\nthat each node exists on its own in a vacuum. (I suspect you already\nknew about this, Jeff, but not everyone will.)\n\n[1] http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.114.3183&rep=rep1&type=pdf\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Sun, 5 Jul 2020 16:47:40 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: Default setting for enable_hashagg_disk (hash_mem)" }, { "msg_contents": "On Sun, Jul 5, 2020 at 2:24 AM Jeff Davis <pgsql@j-davis.com> wrote:\n>\n> On Sat, 2020-07-04 at 14:49 +0530, Amit Kapila wrote:\n> > > We don't even have a user report yet of a\n> > > regression compared to PG 12, or one that can't be fixed by\n> > > increasing\n> > > work_mem.\n> > >\n> >\n> > Yeah, this is exactly the same point I have raised above. 
I feel we\n> > should wait before designing any solution to match pre-13 behavior\n> > for\n> > hashaggs to see what percentage of users face problems related to\n> > this\n> > and how much is a problem for them to increase work_mem to avoid\n> > regression.\n>\n> I agree that it's good to wait for actual problems. But the challenge\n> is that we can't backport an added GUC.\n>\n\nIs it because we won't be able to edit existing postgresql.conf file\nor for some other reasons?\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 6 Jul 2020 15:59:29 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Default setting for enable_hashagg_disk (hash_mem)" }, { "msg_contents": "On Mon, 2020-07-06 at 15:59 +0530, Amit Kapila wrote:\n> I agree that it's good to wait for actual problems. But the\n> > challenge\n> > is that we can't backport an added GUC.\n> > \n> \n> Is it because we won't be able to edit existing postgresql.conf file\n> or for some other reasons?\n\nPerhaps \"can't\" was too strong of a word, but I think it would be\nunprecedented to introduce a GUC in a minor version. It could be a\nsource of confusion.\n\nIf others think that adding a GUC in a minor version would be\nacceptable, please let me know.\n\nRegards,\n\tJeff Davis\n\n\n\n\n", "msg_date": "Mon, 06 Jul 2020 20:48:41 -0700", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": true, "msg_subject": "Re: Default setting for enable_hashagg_disk (hash_mem)" }, { "msg_contents": "On Sun, 2020-07-05 at 16:47 -0700, Peter Geoghegan wrote:\n> Where does that leave the hash_mem idea (or some other similar\n> proposal)?\n\nhash_mem is acceptable to me if the consensus is moving toward that,\nbut I'm not excited about it.\n\nIt would be one thing if hash_mem was a nice clean solution. 
But it\ndoesn't seem like a clean solution to me; and it's likely that it will\nget in the way of the next person who tries to improve the work_mem\nsituation.\n\n> I think that we should offer something like hash_mem that can work as\n> a multiple of work_mem, for the reason that Justin mentioned\n> recently.\n> This can be justified as something that more or less maintains some\n> kind of continuity with the old design.\n\nDidn't Justin argue against using a multiplier?\nhttps://postgr.es/m/20200703024649.GJ4107@telsasoft.com\n\n> I think that it should affect hash join too, though I suppose that\n> that part might be controversial -- that is certainly more than an\n> escape hatch for this particular problem. Any thoughts on that?\n\nIf it's called hash_mem, then I guess it needs to affect HJ. If not, it\nshould have a different name.\n\n> There are several reasons to get rid of work_mem entirely in the\n> medium to long term. Some relatively obvious, others less so.\n\nAgreed.\n\nIt seems like the only argument against the escape hatch GUCs is that\nthey are cruft and we will end up stuck with them. But if we are\ndispensing with work_mem in a few releases, surely we'd need to\ndispense with hash_mem or the proposed escape-hatch GUCs anyway.\n\n> An example in the latter category is \"hash teams\" [1]: a design that\n> teaches multiple hash operations (e.g. a hash join and a hash\n> aggregate that hash on the same columns) to cooperate in processing\n> their inputs.\n\nCool! 
It would certainly be nice to share the partitioning work between\na HashAgg and a HJ.\n\nRegards,\n\tJeff Davis\n\n\n\n\n", "msg_date": "Mon, 06 Jul 2020 21:57:08 -0700", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": true, "msg_subject": "Re: Default setting for enable_hashagg_disk (hash_mem)" }, { "msg_contents": "On Mon, Jul 06, 2020 at 09:57:08PM -0700, Jeff Davis wrote:\n> > I think that we should offer something like hash_mem that can work as\n> > a multiple of work_mem, for the reason that Justin mentioned\n> > recently.\n> > This can be justified as something that more or less maintains some\n> > kind of continuity with the old design.\n> \n> Didn't Justin argue against using a multiplier?\n> https://postgr.es/m/20200703024649.GJ4107@telsasoft.com\n\nI recanted.\nhttps://www.postgresql.org/message-id/20200703145620.GK4107@telsasoft.com\n\n-- \nJustin\n\n\n", "msg_date": "Tue, 7 Jul 2020 00:02:39 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: Default setting for enable_hashagg_disk (hash_mem)" }, { "msg_contents": "On Tue, Jul 7, 2020 at 9:18 AM Jeff Davis <pgsql@j-davis.com> wrote:\n>\n> On Mon, 2020-07-06 at 15:59 +0530, Amit Kapila wrote:\n> > I agree that it's good to wait for actual problems. But the\n> > > challenge\n> > > is that we can't backport an added GUC.\n> > >\n> >\n> > Is it because we won't be able to edit existing postgresql.conf file\n> > or for some other reasons?\n>\n> Perhaps \"can't\" was too strong of a word, but I think it would be\n> unprecedented to introduce a GUC in a minor version.\n>\n\nI don't think this is true. We seem to have introduced three new guc\nvariables in a 9.3.3 minor release. See the following entry in 9.3.3\nrelease notes [1]: \"Create separate GUC parameters to control\nmultixact freezing.... 
Introduce new settings\nvacuum_multixact_freeze_min_age, vacuum_multixact_freeze_table_age,\nand autovacuum_multixact_freeze_max_age to control when to freeze\nmultixacts.\"\n\nApart from this, we have asked users to not only edit postgresql.conf\nfile but also update system catalogs. See the fix for \"Cope with the\nWindows locale named \"Norwegian (Bokmål)\" [2] in 9.4.1 release.\n\nThere are other instances where we also suggest users to set gucs,\ncreate new system objects (like views), perform DDL, DMLs, run REINDEX\non various indexes, etc. in the minor release.\n\n[1] - https://www.postgresql.org/docs/release/9.3.3/\n[2] - https://wiki.postgresql.org/wiki/Changes_To_Norwegian_Locale\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 7 Jul 2020 15:20:26 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Default setting for enable_hashagg_disk (hash_mem)" }, { "msg_contents": "On Tue, 7 Jul 2020 at 16:57, Jeff Davis <pgsql@j-davis.com> wrote:\n>\n> On Sun, 2020-07-05 at 16:47 -0700, Peter Geoghegan wrote:\n> > Where does that leave the hash_mem idea (or some other similar\n> > proposal)?\n>\n> hash_mem is acceptable to me if the consensus is moving toward that,\n> but I'm not excited about it.\n\nFWIW, I'm not a fan of the hash_mem idea. It was my impression that we\naimed to provide an escape hatch for people we have become accustomed\nto <= PG12 behaviour and hash_mem sounds like it's not that. Surely a\nGUC by that name would control what Hash Join does too? Otherwise, it\nwould be called hashagg_mem. I'd say changing the behaviour of Hash\njoin is not well aligned to the goal of allowing users to get\nsomething closer to what PG12 did.\n\nI know there has been talk over the years to improve how work_mem\nworks. I see Tomas mentioned memory grants on the other thread [1]. 
I\ndo imagine this is the long term solution to the problem where users\nmust choose very conservative values for work_mem. We're certainly not\ngoing to get that for PG13, so I do think what we need here is just a\nsimple escape hatch. I mentioned my thoughts in [2], so won't go over\nit again here. Once we've improved the situation in some future\nversion of postgres, perhaps along the lines of what Tomas mentioned,\nthen we can get rid of the escape hatch.\n\nHere are my reasons for not liking the hash_mem idea:\n\n1. if it also increases the amount of memory that Hash Join can use\nthen that makes the partition-wise hash join problem of hash_mem *\nnpartitions even bigger when users choose to set hash_mem higher than\nwork_mem to get Hash Agg doing what they're used to.\n2. Someone will one day ask for sort_mem and then materialize_mem.\nMaybe then cte_mem. Once those are done we might as well just add a\nGUC to control every executor node that uses work_mem.\n3. I'm working on a Result cache node [3]. It uses a hash table\ninternally. Should it constraint its memory consumption according to\nhash_mem or work_mem? It's not really that obvious to people that it\ninternally uses a hash table. \"Hash\" does not appear in the node name.\nDo people need to look that up in the documents?\n\nDavid\n\n[1] https://www.postgresql.org/message-id/20200626235850.gvl3lpfyeobu4evi@development\n[2] https://www.postgresql.org/message-id/CAApHDvqFZikXhAGW=UKZKq1_FzHy+XzmUzAJiNj6RWyTHH4UfA@mail.gmail.com\n[3] https://www.postgresql.org/message-id/CAApHDvrPcQyQdWERGYWx8J+2DLUNgXu+fOSbQ1UscxrunyXyrQ@mail.gmail.com\n\n\n", "msg_date": "Wed, 8 Jul 2020 00:54:44 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Default setting for enable_hashagg_disk (hash_mem)" }, { "msg_contents": "út 7. 7. 
2020 v 14:55 odesílatel David Rowley <dgrowleyml@gmail.com> napsal:\n\n> On Tue, 7 Jul 2020 at 16:57, Jeff Davis <pgsql@j-davis.com> wrote:\n> >\n> > On Sun, 2020-07-05 at 16:47 -0700, Peter Geoghegan wrote:\n> > > Where does that leave the hash_mem idea (or some other similar\n> > > proposal)?\n> >\n> > hash_mem is acceptable to me if the consensus is moving toward that,\n> > but I'm not excited about it.\n>\n> FWIW, I'm not a fan of the hash_mem idea. It was my impression that we\n> aimed to provide an escape hatch for people we have become accustomed\n> to <= PG12 behaviour and hash_mem sounds like it's not that. Surely a\n> GUC by that name would control what Hash Join does too? Otherwise, it\n> would be called hashagg_mem. I'd say changing the behaviour of Hash\n> join is not well aligned to the goal of allowing users to get\n> something closer to what PG12 did.\n>\n> I know there has been talk over the years to improve how work_mem\n> works. I see Tomas mentioned memory grants on the other thread [1]. I\n> do imagine this is the long term solution to the problem where users\n> must choose very conservative values for work_mem. We're certainly not\n> going to get that for PG13, so I do think what we need here is just a\n> simple escape hatch. I mentioned my thoughts in [2], so won't go over\n> it again here. Once we've improved the situation in some future\n> version of postgres, perhaps along the lines of what Tomas mentioned,\n> then we can get rid of the escape hatch.\n>\n> Here are my reasons for not liking the hash_mem idea:\n>\n> 1. if it also increases the amount of memory that Hash Join can use\n> then that makes the partition-wise hash join problem of hash_mem *\n> npartitions even bigger when users choose to set hash_mem higher than\n> work_mem to get Hash Agg doing what they're used to.\n> 2. Someone will one day ask for sort_mem and then materialize_mem.\n> Maybe then cte_mem. 
Once those are done we might as well just add a\n> GUC to control every executor node that uses work_mem.\n> 3. I'm working on a Result cache node [3]. It uses a hash table\n> internally. Should it constraint its memory consumption according to\n> hash_mem or work_mem? It's not really that obvious to people that it\n> internally uses a hash table. \"Hash\" does not appear in the node name.\n> Do people need to look that up in the documents?\n>\n\n+1\n\nI share your opinion.\n\n\n\n> David\n>\n> [1]\n> https://www.postgresql.org/message-id/20200626235850.gvl3lpfyeobu4evi@development\n> [2]\n> https://www.postgresql.org/message-id/CAApHDvqFZikXhAGW=UKZKq1_FzHy+XzmUzAJiNj6RWyTHH4UfA@mail.gmail.com\n> [3]\n> https://www.postgresql.org/message-id/CAApHDvrPcQyQdWERGYWx8J+2DLUNgXu+fOSbQ1UscxrunyXyrQ@mail.gmail.com\n>\n>\n", "msg_date": "Tue, 7 Jul 2020 15:16:43 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Default setting for enable_hashagg_disk (hash_mem)" }, { "msg_contents": "Hi,\n\nOn 2020-07-03 10:08:08 -0400, Bruce Momjian wrote:\n> Well, the bottom line is that we are designing features during beta.\n> People are supposed to be testing PG 13 behavior during beta, including\n> optimizer behavior.\n\nI think it makes no too much sense to plan invent something like\nhash_mem for v13, it's clearly too much work. That's a seperate\ndiscussion from having something like it for v14.\n\n\n> We don't even have a user report yet of a\n> regression compared to PG 12, or one that can't be fixed by increasing\n> work_mem.\n\nI posted a repro, and no you can't fix it by increasing work_mem without\nincreasing memory usage in the whole query / all queries.\n\n\n> If we add a new behavior to PG 13, we then have the pre-PG 13 behavior,\n> the pre-patch behavior, and the post-patch behavior. How are people\n> supposed to test all of that?\n\nI don't really buy this as a problem. It's not like the pre-13 behaviour\nwould be all new. It's how PG has behaved approximately forever.\n\n\nMy conclusion about this topic is that I think we'll be doing our users\na disservice by not providing an escape hatch, but that I also don't\nhave the energy / time to fight for it further. 
This is a long thread\nalready, and I sense little movement towards a conclusion.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Tue, 7 Jul 2020 10:12:16 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Default setting for enable_hashagg_disk (hash_mem)" }, { "msg_contents": "On Tue, Jul 7, 2020 at 5:55 AM David Rowley <dgrowleyml@gmail.com> wrote:\n> FWIW, I'm not a fan of the hash_mem idea. It was my impression that we\n> aimed to provide an escape hatch for people we have become accustomed\n> to <= PG12 behaviour and hash_mem sounds like it's not that.\n\nThe exact scope of the problem is unclear. If it was clear, then we'd\nbe a lot closer to a resolution than we seem to be. Determining the\nscope of the problem is the hardest part of the problem.\n\nAll that is ~100% clear now is that some users will experience what\nthey'll call a regression. Those users will be unhappy, even if and\nwhen they come to understand that technically they're just \"living\nwithin their means\" for the first time, and were theoretically not\nentitled to the performance from earlier versions all along. That's\nbad, and we should try our best to avoid or mitigate it.\n\nSometimes the more ambitious plan (in this case hash_mem) actually has\na greater chance of succeeding, despite solving more problems than the\nimmediate problem. I don't think that it's reasonable to hold that\nagainst my proposal. It may be that hash_mem is a bad idea based on\nthe costs and benefits, or the new risks, in which case it should be\nrejected. But if it's the best proposal on the table by a clear\nmargin, then it shouldn't be disqualified for not satisfying the\noriginal framing of the problem.\n\n> Surely a\n> GUC by that name would control what Hash Join does too? Otherwise, it\n> would be called hashagg_mem. 
I'd say changing the behaviour of Hash\n> join is not well aligned to the goal of allowing users to get\n> something closer to what PG12 did.\n\nMy tentative hash_mem proposal assumed that hash join would be\naffected alongside hash agg, in the obvious way. Yes, that's clearly\nbeyond the scope of the open item.\n\nThe history of some other database systems is instructive. At least a\ncouple of these systems had something like a work_mem/sort_mem GUC, as\nwell as a separate hash_mem-like GUC that only affects hashing. It's\nsloppy, but nevertheless better than completely ignoring the\nfundamental ways in which hashing really is special. This is a way of\npapering-over one of the main deficiencies of the general idea of a\nwork_mem style per-node allocation. Yes, that's pretty ugly.\n\nI think that work_mem would be a lot easier to tune if you assume that\nhash-based nodes don't exist (i.e. only merge joins and nestloop joins\nare available, plus group aggregate for aggregation). You don't need\nto do this as a thought experiment. That really was how things were up\nuntil about the mid-1980s, when increasing memory capacities made hash\njoin and hash agg in database systems feasible for the first time.\nHashing came after most of the serious work on cost-based optimizers\nhad already been done. This argues for treating hash-based nodes as\nspecial now, if only to extend work_mem beyond its natural life as a\npragmatic short-term measure. Separately, it argues for a *total\nrethink* of how memory is used in the executor in the long term -- it\nshouldn't be per-node in a few important cases (I'm thinking of the\n\"hash teams\" design I mentioned on this thread recently, which seems\nlike a fundamentally better way of doing it).\n\n> We're certainly not\n> going to get that for PG13, so I do think what we need here is just a\n> simple escape hatch. I mentioned my thoughts in [2], so won't go over\n> it again here. 
Once we've improved the situation in some future\n> version of postgres, perhaps along the lines of what Tomas mentioned,\n> then we can get rid of the escape hatch.\n\nIf it really has to be a simple escape hatch in Postgres 13, then I\ncould live with a hard disabling of spilling at execution time. That\nseems like the most important thing that is addressed by your\nproposal. I'm concerned that way too many users will have to use the\nescape hatch, and that that misses the opportunity to provide a\nsmoother experience.\n\n> Here are my reasons for not liking the hash_mem idea:\n\nI'm sure that your objections are valid to varying degrees. But they\ncould almost be thought of as problems with work_mem itself. I am\ntrying to come up with a practical way of ameliorating the downsides\nof work_mem. I don't for a second imagine that this won't create new\nproblems. I think that it's the least worst thing right now. I have my\nmisgivings.\n\n--\nPeter Geoghegan\n\n\n", "msg_date": "Tue, 7 Jul 2020 12:24:51 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: Default setting for enable_hashagg_disk (hash_mem)" }, { "msg_contents": "On 2020-Jul-07, Amit Kapila wrote:\n\n> I don't think this is true. We seem to have introduced three new guc\n> variables in a 9.3.3 minor release.\n\nYeah, backporting GUCs is not a big deal. Sure, the GUC won't appear in\npostgresql.conf files generated by initdb prior to the release that\nintroduces it. But users that need it can just edit their .confs and\nadd the appropriate line, or just do ALTER SYSTEM after the minor\nupgrade. 
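\n\n(For illustration only — the mechanics being described are the standard ones for any GUC. A minimal sketch, assuming the backported setting existed; hash_mem is only the proposal under discussion in this thread, not a released PostgreSQL setting:)\n\n

```sql
-- Sketch only: 'hash_mem' is the hypothetical GUC proposed in this thread;
-- it does not exist in released PostgreSQL. The ALTER SYSTEM mechanics shown
-- are the standard ones that would apply to any GUC added in a minor release.
ALTER SYSTEM SET hash_mem = '256MB';   -- written to postgresql.auto.conf
SELECT pg_reload_conf();               -- signal the server to re-read config
```

\n\n(Users who do not need the setting would simply never add the line, and the default would keep behavior unchanged across the minor upgrade, as noted below.)\n\n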
For people that don't need it, it would have a reasonable\ndefault (probably work_mem, so that behavior doesn't change on the minor\nupgrade).\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Tue, 7 Jul 2020 16:18:21 -0400", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Default setting for enable_hashagg_disk (hash_mem)" }, { "msg_contents": "On Tue, Jul 7, 2020 at 1:18 PM Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\n> Yeah, backporting GUCs is not a big deal. Sure, the GUC won't appear in\n> postgresql.conf files generated by initdb prior to the release that\n> introduces it. But users that need it can just edit their .confs and\n> add the appropriate line, or just do ALTER SYSTEM after the minor\n> upgrade.\n\nI don't buy that argument myself. At a minimum, if we do it then we\nought to feel bad about it. It should be rare.\n\nThe fact that you can have a replica on an earlier point release\nenforces the idea that it ought to be broadly compatible. Technically\nusers are not guaranteed that this will work, just like there are no\nguarantees about WAL compatibility across point releases. We\nnevertheless tacitly provide a \"soft\" guarantee that we won't break\nWAL -- and that we won't add entirely new GUCs in a point release.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Tue, 7 Jul 2020 13:37:15 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: Default setting for enable_hashagg_disk (hash_mem)" }, { "msg_contents": "On 2020-Jul-07, Peter Geoghegan wrote:\n\n> On Tue, Jul 7, 2020 at 1:18 PM Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\n> > Yeah, backporting GUCs is not a big deal. Sure, the GUC won't appear in\n> > postgresql.conf files generated by initdb prior to the release that\n> > introduces it. 
But users that need it can just edit their .confs and\n> > add the appropriate line, or just do ALTER SYSTEM after the minor\n> > upgrade.\n> \n> I don't buy that argument myself. At a minimum, if we do it then we\n> ought to feel bad about it. It should be rare.\n\nJudging history, it's pretty clear that it *is* rare. I'm not\nsuggesting we do it now. I'm just contesting the assertion that it's\nimpossible.\n\n> The fact that you can have a replica on an earlier point release\n> enforces the idea that it ought to be broadly compatible.\n\nA replica without hash_mem is not going to fail if the primary is\nupgraded to a version with hash_mem, so I'm not sure this argument\nmeans anything in this case. In any case, when we add WAL message types\nin minor releases, users are suggested to upgrade the replicas first; if\nthey fail to do so, the replicas shut down when they reach a WAL point\nwhere the primary emitted the new message. Generally speaking, we *don't*\npromise that running a replica with an older minor always works, though\nobviously it does work most of the time.\n\n> Technically\n> users are not guaranteed that this will work, just like there are no\n> guarantees about WAL compatibility across point releases. We\n> nevertheless tacitly provide a \"soft\" guarantee that we won't break\n> WAL -- and that we won't add entirely new GUCs in a point release.\n\nAgreed, we do provide those guarantees.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Tue, 7 Jul 2020 16:53:00 -0400", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Default setting for enable_hashagg_disk (hash_mem)" }, { "msg_contents": "On Tue, Jul 7, 2020 at 10:12 AM Andres Freund <andres@anarazel.de> wrote:\n> I think it makes no too much sense to plan invent something like\n> hash_mem for v13, it's clearly too much work. 
That's a seperate\n> discussion from having something like it for v14.\n\nCan you explain why you believe that to be the case? It seems quite\npossible that there is some subtlety that I missed in grouping sets or\nsomething like that. I would like to know the specifics, if there are\nany specifics.\n\n> My conclusion about this topic is that I think we'll be doing our users\n> a disservice by not providing an escape hatch, but that I also don't\n> have the energy / time to fight for it further. This is a long thread\n> already, and I sense little movement towards a conclusion.\n\nAn escape hatch seems necessary. I accept that a hard disabling of\nspilling at execution time meets that standard, and that may be enough\nfor Postgres 13. But I am concerned that an uncomfortably large\nproportion of our users will end up needing this. (Perhaps I should\nsay a large proportion of the subset of users that might be affected\neither way. You get the idea.)\n\nI have to wonder if this escape hatch is an escape hatch for our\nusers, or an escape hatch for us. There is a difference.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Tue, 7 Jul 2020 14:03:16 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: Default setting for enable_hashagg_disk (hash_mem)" }, { "msg_contents": "On Wed, 8 Jul 2020 at 07:25, Peter Geoghegan <pg@bowt.ie> wrote:\n>\n> On Tue, Jul 7, 2020 at 5:55 AM David Rowley <dgrowleyml@gmail.com> wrote:\n> > We're certainly not\n> > going to get that for PG13, so I do think what we need here is just a\n> > simple escape hatch. I mentioned my thoughts in [2], so won't go over\n> > it again here. Once we've improved the situation in some future\n> > version of postgres, perhaps along the lines of what Tomas mentioned,\n> > then we can get rid of the escape hatch.\n>\n> If it really has to be a simple escape hatch in Postgres 13, then I\n> could live with a hard disabling of spilling at execution time. 
That\n> seems like the most important thing that is addressed by your\n> proposal. I'm concerned that way too many users will have to use the\n> escape hatch, and that that misses the opportunity to provide a\n> smoother experience.\n\nYeah. It's a valid concern. I'd rather nobody would ever have to exit\nthrough the escape hatch either. I don't think anyone here actually\nwants that to happen. It's only been proposed to allow users a method\nto escape the new behaviour and get back what they're used to.\n\nI think the smoother experience will come in some future version of\nPostgreSQL with generally better memory management for work_mem all\nround. It's certainly been talked about enough and I don't think\nanyone here disagrees that there is a problem with N being unbounded\nwhen it comes to N * work_mem.\n\nI'd really like to see this thread move forward to a solution and I'm\nnot sure how best to do that. I started by reading back over both this\nthread and the original one and tried to summarise what people have\nsuggested.\n\nI understand some people did change their minds along the way, so I\nmay have made some mistakes. I could have assumed the latest mindset\noverruled, but it was harder to determine that due to the thread being\nsplit.\n\nFor hash_mem = Justin [16], PeterG [15], Tomas [7]\nhash_mem out of scope for PG13 = Bruce [8], Andres [9]\nWait for reports from users = Amit [10]\nEscape hatch that can be removed later when we get something better =\nJeff [11], David [12], Pavel [13], Andres [14], Justin [1]\nAdd enable_hashagg_spill = Tom [2] (I'm unclear on this proposal. Does\nit affect the planner or executor or both?)\nMaybe do nothing until we see how things go during beta = Bruce [3]\nJust let users set work_mem = Alvaro [4] (I think he changed his mind\nafter Andres pointed out that changes other nodes in the plan too)\nSwap enable_hashagg for a GUC that specifies when spilling should\noccur. 
-1 means work_mem = Robert [17], Amit [18]\nhash_mem does not solve the problem = Tomas [6]\n\nDavid\n\n[1] https://www.postgresql.org/message-id/20200624031443.GV4107@telsasoft.com\n[2] https://www.postgresql.org/message-id/2214502.1593019796@sss.pgh.pa.us\n[3] https://www.postgresql.org/message-id/20200625182512.GC12486@momjian.us\n[4] https://www.postgresql.org/message-id/20200625224422.GA9653@alvherre.pgsql\n[5] https://www.postgresql.org/message-id/CAA4eK1K0cgk_8hRyxsvppgoh_Z-NY+UZTcFWB2we6baJ9DXCQw@mail.gmail.com\n[6] https://www.postgresql.org/message-id/20200627104141.gq7d3hm2tvoqgjjs@development\n[7] https://www.postgresql.org/message-id/20200629212229.n3afgzq6xpxrr4cu@development\n[8] https://www.postgresql.org/message-id/20200703030001.GD26235@momjian.us\n[9] https://www.postgresql.org/message-id/20200707171216.jqxrld2jnxwf5ozv@alap3.anarazel.de\n[10] https://www.postgresql.org/message-id/CAA4eK1KfPi6iz0hWxBLZzfVOG_NvOVJL=9UQQirWLpaN=kANTQ@mail.gmail.com\n[11] https://www.postgresql.org/message-id/8bff2e4e8020c3caa16b61a46918d21b573eaf78.camel@j-davis.com\n[12] https://www.postgresql.org/message-id/CAApHDvqFZikXhAGW=UKZKq1_FzHy+XzmUzAJiNj6RWyTHH4UfA@mail.gmail.com\n[13] https://www.postgresql.org/message-id/CAFj8pRBf1w4ndz-ynd+mUpTfiZfbs7+CPjc4ob8v9d3X0MscCg@mail.gmail.com\n[14] https://www.postgresql.org/message-id/20200624191433.5gnqgrxfmucexldm@alap3.anarazel.de\n[15] https://www.postgresql.org/message-id/CAH2-WzmD+i1pG6rc1+Cjc4V6EaFJ_qSuKCCHVnH=oruqD-zqow@mail.gmail.com\n[16] https://www.postgresql.org/message-id/20200703024649.GJ4107@telsasoft.com\n[17] https://www.postgresql.org/message-id/CA+TgmobyV9+T-Wjx-cTPdQuRCgt1THz1mL3v1NXC4m4G-H6Rcw@mail.gmail.com\n[18] https://www.postgresql.org/message-id/CAA4eK1K0cgk_8hRyxsvppgoh_Z-NY+UZTcFWB2we6baJ9DXCQw@mail.gmail.com\n\n\n", "msg_date": "Wed, 8 Jul 2020 13:57:45 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Default setting for enable_hashagg_disk 
(hash_mem)" }, { "msg_contents": "On Wed, Jul 8, 2020 at 7:28 AM David Rowley <dgrowleyml@gmail.com> wrote:\n>\n>\n> I'd really like to see this thread move forward to a solution and I'm\n> not sure how best to do that. I started by reading back over both this\n> thread and the original one and tried to summarise what people have\n> suggested.\n>\n\nThanks, I think this might help us in reaching some form of consensus\nby seeing what most people prefer.\n\n> I understand some people did change their minds along the way, so I\n> may have made some mistakes. I could have assumed the latest mindset\n> overruled, but it was harder to determine that due to the thread being\n> split.\n>\n\n\n> For hash_mem = Justin [16], PeterG [15], Tomas [7]\n> hash_mem out of scope for PG13 = Bruce [8], Andres [9]\n>\n\n+1 for hash_mem out of scope for PG13. Apart from the reasons you\nhave mentioned above, the other reason is if this is a way to allow\nusers to get a smooth experience for hash aggregates, then I think the\nidea proposed by Robert is not yet ruled out and we should see which\none is better. OTOH, if we want to see this as a way to give smooth\nexperience for current use cases for hash aggregates and improve the\nsituation for hash joins as well then I think this seems to be a new\nbehavior which should be discussed for PG14. Having said that, I am\nnot saying this is not a good idea but just I don't think we should\npursue it for PG13.\n\n> Wait for reports from users = Amit [10]\n\nI think this is mostly inline with Bruce is intending to say (\"Maybe\ndo nothing until we see how things go during beta\"). So, probably we\ncan club the votes.\n\n> Escape hatch that can be removed later when we get something better =\n> Jeff [11], David [12], Pavel [13], Andres [14], Justin [1]\n> Add enable_hashagg_spill = Tom [2] (I'm unclear on this proposal. 
Does\n> it affect the planner or executor or both?)\n> Maybe do nothing until we see how things go during beta = Bruce [3]\n> Just let users set work_mem = Alvaro [4] (I think he changed his mind\n> after Andres pointed out that changes other nodes in the plan too)\n> Swap enable_hashagg for a GUC that specifies when spilling should\n> occur. -1 means work_mem = Robert [17], Amit [18]\n> hash_mem does not solve the problem = Tomas [6]\n>\n\n[1] - https://www.postgresql.org/message-id/CA+TgmobyV9+T-Wjx-cTPdQuRCgt1THz1mL3v1NXC4m4G-H6Rcw@mail.gmail.com\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 8 Jul 2020 08:56:31 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Default setting for enable_hashagg_disk (hash_mem)" }, { "msg_contents": "Greetings,\n\n* Alvaro Herrera (alvherre@2ndquadrant.com) wrote:\n> On 2020-Jun-25, Andres Freund wrote:\n> \n> > >My point here is that maybe we don't need to offer a GUC to explicitly\n> > >turn spilling off; it seems sufficient to let users change work_mem so\n> > >that spilling will naturally not occur. Why do we need more?\n> > \n> > That's not really a useful escape hatch, because I'll often lead to\n> > other nodes using more memory.\n> \n> Ah -- other nodes in the same query -- you're right, that's not good.\n\nIt's exactly how the system has been operating for, basically, forever,\nfor everything. Yes, it'd be good to have a way to manage the\noverall amount of memory that a query is allowed to use but that's a\nhuge change and inventing some new 'hash_mem' or some such GUC doesn't\nstrike me as a move in the right direction- are we going to have\nsort_mem next? What if having one large hash table for aggregation\nwould be good, but having the other aggregate use a lot of memory would\nrun the system out of memory? 
Yes, we need to do better, but inventing\nnew node_mem GUCs isn't the direction to go in.\n\nThat HashAgg previously didn't care that it was going wayyyyy over\nwork_mem was, if anything, a bug. Inventing new GUCs late in the\ncycle like this under duress seems like a *really* bad idea. Yes,\npeople are going to have to adjust work_mem if they want these queries\nto continue using a ton of memory to run when the planner didn't think\nit'd actually take that much memory- but then, in lots of the kinds of\ncases that I think you're worrying about, the stats aren't actually that\nfar off and people did increase work_mem to get the HashAgg plan in the\nfirst place.\n\nI'm also in support of having enable_hashagg_disk set to true as the\ndefault, just like all of the other enable_*'s.\n\nThanks,\n\nStephen", "msg_date": "Wed, 8 Jul 2020 10:00:37 -0400", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: Default setting for enable_hashagg_disk" }, { "msg_contents": "On Wed, 2020-07-08 at 10:00 -0400, Stephen Frost wrote:\n> That HashAgg previously didn't care that it was going wayyyyy over\n> work_mem was, if anything, a bug.\n\nI think we all agree about that, but some people may be depending on\nthat bug.\n\n> Inventing new GUCs late in the\n> cycle like this under duress seems like a *really* bad idea.\n\nAre you OK with escape-hatch GUCs that allow the user to opt for v12\nbehavior in the event that they experience a regression?\n\nThe one for the planner is already there, and it looks like we need one\nfor the executor as well (to tell HashAgg to ignore the memory limit\njust like v12).\n\nRegards,\n\tJeff Davis\n\n\n\n\n", "msg_date": "Wed, 08 Jul 2020 23:47:43 -0700", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": true, "msg_subject": "Re: Default setting for enable_hashagg_disk" }, { "msg_contents": "Greetings,\n\n* Jeff Davis (pgsql@j-davis.com) wrote:\n> On Wed, 2020-07-08 at 10:00 -0400, Stephen Frost 
wrote:\n> > That HashAgg previously didn't care that it was going wayyyyy over\n> > work_mem was, if anything, a bug.\n> \n> I think we all agree about that, but some people may be depending on\n> that bug.\n\nThat's why we don't make these kinds of changes in a minor release and\ninstead have major releases.\n\n> > Inventing new GUCs late in the\n> > cycle like this under duress seems like a *really* bad idea.\n> \n> Are you OK with escape-hatch GUCs that allow the user to opt for v12\n> behavior in the event that they experience a regression?\n\nThe enable_* options aren't great, and the one added for this is even\nstranger since it's an 'enable' option for a particular capability of a\nnode rather than just a costing change for a node, but I feel like\npeople generally understand that they shouldn't be messing with the\nenable_* options and that they're not really intended for end users.\n\n> The one for the planner is already there, and it looks like we need one\n> for the executor as well (to tell HashAgg to ignore the memory limit\n> just like v12).\n\nNo, ignoring the limit set was, as agreed above, a bug, and I don't\nthink it makes sense to add some new user tunable for this. If folks\nwant to let HashAgg use more memory then they can set work_mem higher,\njust the same as if they want a Sort node to use more memory or a\nHashJoin. 
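[Editor's note: the work_mem tuning route described in this exchange can be done per session or even per query today; the statements below are purely illustrative, with hypothetical table and values not taken from this thread.]

```sql
-- Session-wide: every node (HashAgg, Sort, HashJoin, ...) planned in this
-- session gets the larger per-node budget.
SET work_mem = '64MB';

-- Or scope the change to a single transaction so other queries keep the
-- server default; SET LOCAL reverts automatically at COMMIT/ROLLBACK.
BEGIN;
SET LOCAL work_mem = '256MB';
SELECT a, count(*) FROM t GROUP BY a;  -- hypothetical query
COMMIT;
```

SET LOCAL is the usual way to give one known-expensive aggregate a bigger memory budget without raising the limit for everything else.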
Yes, that comes with potential knock-on effects about other\nnodes (possibly) using more memory but that's pretty well understood for\nall the other cases and I don't think that it makes sense to have a\nspecial case for HashAgg when the only justification is that \"well, you\nsee, it used to have this bug, so...\".\n\nThanks,\n\nStephen", "msg_date": "Thu, 9 Jul 2020 10:03:30 -0400", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: Default setting for enable_hashagg_disk" }, { "msg_contents": "On Thu, Jul 9, 2020 at 7:03 AM Stephen Frost <sfrost@snowman.net> wrote:\n> > The one for the planner is already there, and it looks like we need one\n> > for the executor as well (to tell HashAgg to ignore the memory limit\n> > just like v12).\n>\n> No, ignoring the limit set was, as agreed above, a bug, and I don't\n> think it makes sense to add some new user tunable for this.\n\nIt makes more sense than simply ignoring what our users will see as a\nsimple regression. (Though I still lean towards fixing the problem by\nintroducing hash_mem, which at least tries to fix the problem head\non.)\n\n> If folks\n> want to let HashAgg use more memory then they can set work_mem higher,\n> just the same as if they want a Sort node to use more memory or a\n> HashJoin. Yes, that comes with potential knock-on effects about other\n> nodes (possibly) using more memory but that's pretty well understood for\n> all the other cases and I don't think that it makes sense to have a\n> special case for HashAgg when the only justification is that \"well, you\n> see, it used to have this bug, so...\".\n\nThat's not the only justification. The other justification is that\nit's generally reasonable to prefer giving hash aggregate more memory.\nThis would even be true in a world where all grouping estimates were\nsomehow magically accurate. 
These two justifications coincide in a way\nthat may seem a bit too convenient to truly be an accident of history.\nAnd if they do: I agree. It's no accident.\n\nIt seems likely that we have been \"complicit\" in enabling\n\"applications that live beyond their means\", work_mem-wise. We knew\nthat hash aggregate had this \"bug\" forever, and yet we were reasonably\nhappy to have it be the common case for years. It's very fast, and\ndidn't actually explode most of the time (even though grouping\nestimates are often pretty poor). Hash agg was and is the common case.\nYes, we were concerned about the risk of OOM for many years, but it\nwas considered a risk worth taking. We knew what the trade-off was. We\nnever quite admitted it, but what does it matter?\n\nOur own tacit attitude towards hash agg + work_mem mirrors that of our\nusers (or at least the users that will be affected by this issue, of\nwhich there will be plenty). Declaring this behavior a bug with no\nredeeming qualities now seems a bit rich.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Thu, 9 Jul 2020 15:32:20 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: Default setting for enable_hashagg_disk" }, { "msg_contents": "Greetings,\n\n* Peter Geoghegan (pg@bowt.ie) wrote:\n> On Thu, Jul 9, 2020 at 7:03 AM Stephen Frost <sfrost@snowman.net> wrote:\n> > > The one for the planner is already there, and it looks like we need one\n> > > for the executor as well (to tell HashAgg to ignore the memory limit\n> > > just like v12).\n> >\n> > No, ignoring the limit set was, as agreed above, a bug, and I don't\n> > think it makes sense to add some new user tunable for this.\n> \n> It makes more sense than simply ignoring what our users will see as a\n> simple regression. 
(Though I still lean towards fixing the problem by\n> introducing hash_mem, which at least tries to fix the problem head\n> on.)\n\nThe presumption that this will always end up resulting in a regression\nreally doesn't seem sensible to me. We could rip out the logic in Sort\nthat spills to disk and see how much faster it gets- as long as we don't\nactually run out of memory, but that's kind of the entire point of\nhaving some kind of limit on the amount of memory we use, isn't it?\n\n> > If folks\n> > want to let HashAgg use more memory then they can set work_mem higher,\n> > just the same as if they want a Sort node to use more memory or a\n> > HashJoin. Yes, that comes with potential knock-on effects about other\n> > nodes (possibly) using more memory but that's pretty well understood for\n> > all the other cases and I don't think that it makes sense to have a\n> > special case for HashAgg when the only justification is that \"well, you\n> > see, it used to have this bug, so...\".\n> \n> That's not the only justification. The other justification is that\n> it's generally reasonable to prefer giving hash aggregate more memory.\n\nSure, and it's generally reasonably to prefer giving Sorts more memory\ntoo... as long as you've got it available.\n\n> This would even be true in a world where all grouping estimates were\n> somehow magically accurate. These two justifications coincide in a way\n> that may seem a bit too convenient to truly be an accident of history.\n> And if they do: I agree. It's no accident.\n\nI disagree that the lack of HashAgg's ability to spill to disk was\nbecause, for this one particular node, we should always just give it\nhowever much memory it needs, regardless of if it's anywhere near how\nmuch we thought it'd need or not.\n\n> It seems likely that we have been \"complicit\" in enabling\n> \"applications that live beyond their means\", work_mem-wise. 
We knew\n> that hash aggregate had this \"bug\" forever, and yet we were reasonably\n> happy to have it be the common case for years. It's very fast, and\n> didn't actually explode most of the time (even though grouping\n> estimates are often pretty poor). Hash agg was and is the common case.\n\nI disagree that we were reasonably happy with this bug or that it\nsomehow makes sense to retain it. HashAgg is certainly commonly used,\nbut that's not really relevant- it's still going to be used quite a bit,\nit's just that, now, when our estimates are far wrong, we won't just\ngobble up all the memory available and instead will spill to disk- just\nlike we do with the other nodes.\n\n> Yes, we were concerned about the risk of OOM for many years, but it\n> was considered a risk worth taking. We knew what the trade-off was. We\n> never quite admitted it, but what does it matter?\n\nThis is not some well designed feature of HashAgg that had a lot of\nthought put into it, whereby the community agreed that we should just\nlet it be and hope no one noticed or got bit by it- I certainly have\nmanaged to kill servers by a HashAgg gone bad and I seriously doubt I'm\nalone in that.\n\n> Our own tacit attitude towards hash agg + work_mem mirrors that of our\n> users (or at least the users that will be affected by this issue, of\n> which there will be plenty). Declaring this behavior a bug with no\n> redeeming qualities now seems a bit rich.\n\nNo, I disagree entirely.\n\nThanks,\n\nStephen", "msg_date": "Thu, 9 Jul 2020 18:58:40 -0400", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: Default setting for enable_hashagg_disk" }, { "msg_contents": "On Thu, Jul 9, 2020 at 3:58 PM Stephen Frost <sfrost@snowman.net> wrote:\n> > That's not the only justification. 
The other justification is that\n> > it's generally reasonable to prefer giving hash aggregate more memory.\n>\n> Sure, and it's generally reasonably to prefer giving Sorts more memory\n> too... as long as you've got it available.\n\nDid you actually read any of the discussion?\n\nThe value of doing a hash aggregate all in memory is generally far\ngreater than the value of doing a sort all in memory. They're just\nvery different situations, owing to the fundamental laws-of-physics\nprinciples that apply in each case. Even to the extent that sometimes\nan external sort can actually be slightly *faster* (it's not really\nsupposed to be, but it is). Try it out yourself.\n\nI'm not going to repeat this in full again. The thread is already long enough.\n\n> > This would even be true in a world where all grouping estimates were\n> > somehow magically accurate. These two justifications coincide in a way\n> > that may seem a bit too convenient to truly be an accident of history.\n> > And if they do: I agree. It's no accident.\n>\n> I disagree that the lack of HashAgg's ability to spill to disk was\n> because, for this one particular node, we should always just give it\n> however much memory it needs, regardless of if it's anywhere near how\n> much we thought it'd need or not.\n\nThis is a straw man.\n\nIt's possible to give hash agg an amount of memory that exceeds\nwork_mem, but is less than infinity. That's more or less what I\npropose to enable by inventing a new hash_mem GUC, in fact.\n\nThere is also the separate \"escape hatch\" idea that David Rowley\nproposed, that I consider to be a plausible way of resolving the\nproblem. That wouldn't \"always give it [hash agg] however much memory\nit asks for\", either. 
It would only do that when the setting indicated\nthat hash agg should be given however much memory it asks for.\n\n> I disagree that we were reasonably happy with this bug or that it\n> somehow makes sense to retain it.\n\nWhile we're far from resolving this open item, I think that you'll\nfind that most people agree that it's reasonable to think of hash agg\nas special -- at least in some contexts. The central disagreement\nseems to be on the question of how to maintain some kind of continuity\nwith the old behavior, how ambitious our approach should be in\nPostgres 13, etc.\n\n> > Yes, we were concerned about the risk of OOM for many years, but it\n> > was considered a risk worth taking. We knew what the trade-off was. We\n> > never quite admitted it, but what does it matter?\n>\n> This is not some well designed feature of HashAgg that had a lot of\n> thought put into it, whereby the community agreed that we should just\n> let it be and hope no one noticed or got bit by it- I certainly have\n> managed to kill servers by a HashAgg gone bad and I seriously doubt I'm\n> alone in that.\n\nI was talking about the evolutionary pressures that led to this\ncurious state of affairs, where hashagg's overuse of memory was often\nactually quite desirable. I understand that it also sometimes causes\nOOMs, and that OOMs are bad. 
Both beliefs are compatible, just as a\ndesign that takes both into account is possible.\n\nIf it isn't practical to do that in Postgres 13, then an escape hatch\nis highly desirable, if not essential.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Thu, 9 Jul 2020 16:42:25 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: Default setting for enable_hashagg_disk" }, { "msg_contents": "On Thu, Jul 9, 2020 at 3:58 PM Stephen Frost <sfrost@snowman.net> wrote:\n\n> > > If folks\n> > > want to let HashAgg use more memory then they can set work_mem higher,\n> > > just the same as if they want a Sort node to use more memory or a\n> > > HashJoin. Yes, that comes with potential knock-on effects about other\n> > > nodes (possibly) using more memory but that's pretty well understood\n> for\n> > > all the other cases and I don't think that it makes sense to have a\n> > > special case for HashAgg when the only justification is that \"well, you\n> > > see, it used to have this bug, so...\".\n> >\n> > That's not the only justification. The other justification is that\n> > it's generally reasonable to prefer giving hash aggregate more memory.\n>\n> Sure, and it's generally reasonably to prefer giving Sorts more memory\n> too... as long as you've got it available.\n>\n\nLooking at the docs for work_mem it was decided to put \"such as\" before\n\"sort\" and \"hash table\" even though the rest of the paragraph then only\ntalks about those two. Are there other things possible that warrant the\n\"such as\" qualifier or can we write \"specifically, a sort, or a hash table\"?\n\nFor me, as a user that doesn't presently need to deal with all this, I'd\nrather have a multiplier GUC for max_hash_work_mem_units defaulting to\nsomething like 4. The planner would then use that multiple. 
We've closed\nthe \"bug\" while still giving me a region of utility that emulates the v12\nreality without me touching anything, or even being aware of the bug that\nis being fixed.\n\nI cannot see myself wanting to globally revert to v12 behavior on the\nexecution side as the OOM side-effect is definitely more unpleasant than\nslowed queries. If I have to go into a specific query anyway I'd go for a\nmeasured change on the work_mem or multiplier rather than choosing to\nconsume as much memory as needed.\n\nDavid J.", "msg_date": "Thu, 9 Jul 2020 16:57:22 -0700", "msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Default setting for enable_hashagg_disk" }, { "msg_contents": "Greetings,\n\n* Peter Geoghegan (pg@bowt.ie) wrote:\n> On Thu, Jul 9, 2020 at 3:58 PM Stephen Frost <sfrost@snowman.net> wrote:\n> > > That's not the only justification. The other justification is that\n> > > it's generally reasonable to prefer giving hash aggregate more memory.\n> >\n> > Sure, and it's generally reasonably to prefer giving Sorts more memory\n> > too... as long as you've got it available.\n> \n> The value of doing a hash aggregate all in memory is generally far\n> greater than the value of doing a sort all in memory. They're just\n> very different situations, owing to the fundamental laws-of-physics\n> principles that apply in each case. Even to the extent that sometimes\n> an external sort can actually be slightly *faster* (it's not really\n> supposed to be, but it is). Try it out yourself.\n\nI didn't, and don't, think it particularly relevant to the discussion,\nbut if you don't like the comparison to Sort then we could compare it to\na HashJoin instead- the point is that, yes, if you are willing to give\nmore memory to a given operation, it's going to go faster, and the way\nusers tell us that they'd like the query to use more memory is already\nwell-defined and understood to be through work_mem. 
We do not do them a\nservice by ignoring that.\n\n> > > This would even be true in a world where all grouping estimates were\n> > > somehow magically accurate. These two justifications coincide in a way\n> > > that may seem a bit too convenient to truly be an accident of history.\n> > > And if they do: I agree. It's no accident.\n> >\n> > I disagree that the lack of HashAgg's ability to spill to disk was\n> > because, for this one particular node, we should always just give it\n> > however much memory it needs, regardless of if it's anywhere near how\n> > much we thought it'd need or not.\n> \n> This is a straw man.\n\nIt's really not- the system has been quite intentionally designed, and\ndocumented, to work within the constraints given to it (even if they're\nnot very well defined, this is still the case) and this particular node\ndidn't. That isn't a feature.\n\n> It's possible to give hash agg an amount of memory that exceeds\n> work_mem, but is less than infinity. That's more or less what I\n> propose to enable by inventing a new hash_mem GUC, in fact.\n\nWe already have a GUC that we've documented and explained to users that\nis there specifically to control this exact thing, and that's work_mem.\nHow would we document this? \"work_mem is used to control the amount of\nmemory a given node can consider using- oh, except for this one\nparticular kind of node called a HashAgg, then you have to use this\nother variable; no, there's no other node-specific tunable like that,\nand no, you can't control how much memory is used for a given HashAgg or\nfor a given node\". Sure, I'm going over the top here to show my point,\nbut I don't think I'm far from the mark on how this would look.\n\n> There is also the separate \"escape hatch\" idea that David Rowley\n> proposed, that I consider to be a plausible way of resolving the\n> problem. That wouldn't \"always give it [hash agg] however much memory\n> it asks for\", either. 
It would only do that when the setting indicated\n> that hash agg should be given however much memory it asks for.\n\nWhere's the setting for HashJoin or for Sort, to do the same thing?\nWould we consider it sensible to set everything to \"use as much memory\nas you want?\" I disagree with this notion that HashAgg is so very\nspecial that it must have an independent set of tunables like this.\n\n> > I disagree that we were reasonably happy with this bug or that it\n> > somehow makes sense to retain it.\n> \n> While we're far from resolving this open item, I think that you'll\n> find that most people agree that it's reasonable to think of hash agg\n> as special -- at least in some contexts. The central disagreement\n> seems to be on the question of how to maintain some kind of continuity\n> with the old behavior, how ambitious our approach should be in\n> Postgres 13, etc.\n\nThe old behavior was buggy and we are providing quite enough continuity\nthrough the fact that we've got major versions which will be maintained\nfor the next 5 years that folks can run as they test out newer versions.\nInventing hacks to preserve bug-compatibility across major versions is\nnot a good direction to go in.\n\n> > > Yes, we were concerned about the risk of OOM for many years, but it\n> > > was considered a risk worth taking. We knew what the trade-off was. We\n> > > never quite admitted it, but what does it matter?\n> >\n> > This is not some well designed feature of HashAgg that had a lot of\n> > thought put into it, whereby the community agreed that we should just\n> > let it be and hope no one noticed or got bit by it- I certainly have\n> > managed to kill servers by a HashAgg gone bad and I seriously doubt I'm\n> > alone in that.\n> \n> I was talking about the evolutionary pressures that led to this\n> curious state of affairs, where hashagg's overuse of memory was often\n> actually quite desirable. I understand that it also sometimes causes\n> OOMs, and that OOMs are bad. 
Both beliefs are compatible, just as a\n> design that takes both into account is possible.\n\nI don't agree that evolution of the system led us to have a HashAgg node\nthat overused memory- certainly it wasn't intentional as it only\nhappened when we thought it wouldn't based on what information we had at\nplan time, a fact that has certainly led a lot of folks to increase\nwork_mem to get the HashAgg that they wanted and who, most likely, won't\nactually end up being hit by this at all.\n\n> If it isn't practical to do that in Postgres 13, then an escape hatch\n> is highly desirable, if not essential.\n\nWe have a parameter which already drives this and which users are\nwelcome to (and quite often do) tune. I disagree that anything further\nis either essential or particularly desirable.\n\nI'm really rather astounded at the direction this has been going in.\n\nThanks,\n\nStephen", "msg_date": "Thu, 9 Jul 2020 20:08:42 -0400", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: Default setting for enable_hashagg_disk" }, { "msg_contents": "On Thu, Jul 09, 2020 at 06:58:40PM -0400, Stephen Frost wrote:\n> * Peter Geoghegan (pg@bowt.ie) wrote:\n> > On Thu, Jul 9, 2020 at 7:03 AM Stephen Frost <sfrost@snowman.net> wrote:\n> > It makes more sense than simply ignoring what our users will see as a\n> > simple regression. (Though I still lean towards fixing the problem by\n> > introducing hash_mem, which at least tries to fix the problem head\n> > on.)\n> \n> The presumption that this will always end up resulting in a regression\n> really doesn't seem sensible to me.\n\nNobody said \"always\" - we're concerned about a fraction of workloads which\nregress, badly affecting only a small fraction of users.\n\nMaybe pretend that Jeff implemented something called CashAgg, which does\neverything HashAgg does but implemented from scratch. 
Users would be able to\ntune it or disable it, and we could talk about removing HashAgg for the next 3\nyears. But instead we decide to remove HashAgg right now since it's redundant.\nThat's a bad way to transition from an old behavior to a new one. It's scary\nbecause it imposes a burden, rather than offering a new option without also\ntaking away the old one.\n\n> > That's not the only justification. The other justification is that\n> > it's generally reasonable to prefer giving hash aggregate more memory.\n> \n> Sure, and it's generally reasonably to prefer giving Sorts more memory\n> too... as long as you've got it available.\n\nI believe he meant:\n\"it's generally reasonable to prefer giving hash aggregate more memory THAN OTHER NODES\"\n\nReferencing:\nhttps://www.postgresql.org/message-id/CAH2-Wz=YEMOeXdAPwZo7uriR5KPsf_RGuMHvk3HvLDVksdrwHg@mail.gmail.com\nhttps://www.postgresql.org/message-id/CAH2-Wznd_wL+Q3sUjLN3o5F6Q5AvHSTYOozPAei2QfuYDSd4fw@mail.gmail.com\nhttps://www.postgresql.org/message-id/CAH2-Wz=osB4oi_nH8MnosYhVVSNOm5q3=exGe-b3q6gWOgf98w@mail.gmail.com\n\nPeter's patch and David's competing proposal aim to provide a kind of \"soft\"\ntransition, where the old behavior is still possible (whether default or not).\n\nDavid's proposal (enable_hashagg=neverspill/soft) [0] isn't implemented, but\nwould allow returning *exactly* to the pre-13 behavior (modulo other planner\nchanges). 
That's a bit of a kludge and we'd hope to eventually remove the\nneverspill/soft stuff, returning enable_* to boolean.\n\nWhereas Peter's hash_mem patch allows returning to the pre-13 behavior if the\nmem required for hashagg is between work_mem and hash_mem; but, by definition,\nnot if the required mem exceeds hash_mem, in which case it would spill to disk.\nThis has the advantage that it's independently useful and not a transitional\nkludge.\n\n-- \nJustin\n\n[0] Bruce proposed some \"naming things\" amendments to David's idea:\nhttps://www.postgresql.org/message-id/CAApHDvqFZikXhAGW=UKZKq1_FzHy+XzmUzAJiNj6RWyTHH4UfA@mail.gmail.com\nhttps://www.postgresql.org/message-id/20200625171756.GB12486@momjian.us\n\n\n", "msg_date": "Thu, 9 Jul 2020 19:18:52 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: Default setting for enable_hashagg_disk" }, { "msg_contents": "On Thu, Jul 9, 2020 at 5:08 PM Stephen Frost <sfrost@snowman.net> wrote:\n> I didn't, and don't, think it particularly relevant to the discussion,\n> but if you don't like the comparison to Sort then we could compare it to\n> a HashJoin instead- the point is that, yes, if you are willing to give\n> more memory to a given operation, it's going to go faster, and the way\n> users tell us that they'd like the query to use more memory is already\n> well-defined and understood to be through work_mem. We do not do them a\n> service by ignoring that.\n\nThe hash_mem design (as it stands) would affect both hash join and\nhash aggregate. I believe that it makes most sense to have hash-based\nnodes under the control of a single GUC. I believe that this\ngranularity will cause the least problems. It certainly represents a\ntrade-off.\n\nwork_mem is less of a problem with hash join, primarily because join\nestimates are usually a lot better than grouping estimates. 
But it is\nnevertheless something that it makes sense to put in the same\nconceptual bucket as hash aggregate, pending a future root and branch\nredesign of work_mem.\n\n> > This is a straw man.\n>\n> It's really not- the system has been quite intentionally designed, and\n> documented, to work within the constraints given to it (even if they're\n> not very well defined, this is still the case) and this particular node\n> didn't. That isn't a feature.\n\nI don't think that it was actually designed, so much as it evolved --\nat least in this particular respect. But it hardly matters now.\n\n> We already have a GUC that we've documented and explained to users that\n> is there specifically to control this exact thing, and that's work_mem.\n> How would we document this?\n\nhash_mem would probably work as a multiplier of work_mem when negated,\nor as an absolute KB value, like work_mem. It would apply to nodes\nthat use hashing, currently defined as hash agg and hash join. We\nmight make the default -2, meaning twice whatever work_mem was (David\nJohnson suggested 4x just now, which seems a little on the aggressive\nside to me).\n\nYes, that is a new burden for users that need to tune work_mem.\nSimilar settings exist in other DB systems (or did before they finally\nreplaced the equivalent of work_mem with something fundamentally\nbetter). All of the choices on the table have significant downsides.\n\nNobody can claim the mantle of prudent conservative by proposing that\nwe do nothing here. To do so is to ignore predictable significant\nnegative consequences for our users. That much isn't really in\nquestion. I'm pretty sure that Andres, Robert, David Rowley, Alvaro,\nJustin, and Tomas will all agree with that statement (I'm sure that\nI'm forgetting somebody else, though). 
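[Editor's note: to make the negated-multiplier semantics Peter sketches above concrete, here is a minimal model in Python. This is a hypothetical helper written for this summary, not PostgreSQL source, and the exact rules were still under discussion in the thread.]

```python
def effective_hash_mem(work_mem_kb: int, hash_mem: int) -> int:
    """Resolve the proposed hash_mem setting to an effective limit in KB.

    A negative value acts as a multiplier of work_mem (so the suggested
    default of -2 means twice whatever work_mem is), while a positive
    value is an absolute limit in KB, just like work_mem itself.
    """
    if hash_mem < 0:
        return work_mem_kb * -hash_mem
    return hash_mem

# With work_mem = 4MB (4096 KB) and the suggested default of -2,
# hash-based nodes (hash agg, hash join) get an 8MB budget.
print(effective_hash_mem(4096, -2))
# An absolute setting of 65536 KB caps hash nodes at 64MB regardless
# of work_mem.
print(effective_hash_mem(4096, 65536))
```

Under this model, users who never touch hash_mem still get a hash budget that scales with work_mem, which is the continuity argument for the multiplier form.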
If this seems strange or\nunlikely, then look back over the thread.\n\n> Where's the setting for HashJoin or for Sort, to do the same thing?\n> Would we consider it sensible to set everything to \"use as much memory\n> as you want?\" I disagree with this notion that HashAgg is so very\n> special that it must have an independent set of tunables like this.\n\nRegardless of what we do now, the fact is that the economic case for\ngiving hash agg more memory (relative to most other executor nodes)\nwhen the system as a whole is short on memory is very strong. It does\nnot follow that the current hash_mem proposal is the best way forward\nnow, of course, but I don't see why you don't at least agree with me\nabout that much. It seems rather obvious to me.\n\n> The old behavior was buggy and we are providing quite enough continuity\n> through the fact that we've got major versions which will be maintained\n> for the next 5 years that folks can run as they test out newer versions.\n> Inventing hacks to preserve bug-compatibility across major versions is\n> not a good direction to go in.\n\nLike I said, the escape hatch GUC is not my preferred solution. But at\nleast it acknowledges the problem. I don't think that anyone (or\nanyone else) believes that work_mem doesn't have serious limitations.\n\n> We have a parameter which already drives this and which users are\n> welcome to (and quite often do) tune. 
I disagree that anything further\n> is either essential or particularly desirable.\n\nThis is a user hostile attitude.\n\n> I'm really rather astounded at the direction this has been going in.\n\nWhy?\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Thu, 9 Jul 2020 18:21:07 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: Default setting for enable_hashagg_disk" }, { "msg_contents": "On Thu, 2020-07-09 at 19:18 -0500, Justin Pryzby wrote:\n> Maybe pretend that Jeff implemented something called CashAgg, which\n> does\n> everything HashAgg does but implemented from scratch. Users would be\n> able to\n> tune it or disable it, and we could talk about removing HashAgg for\n> the next 3\n> years.\n\nThat's kind of what we'd have if we had the two escape-hatch GUCs.\nDefault gives new behavior, changing the GUCs would give the v12\nbehavior.\n\nIn principle, Stephen is right: the v12 behavior is a bug, lots of\npeople are unhappy about it, it causes real problems, and it would not\nbe acceptable if proposed today. Otherwise I wouldn't have spent the\ntime to fix it. \n\nSimilarly, potential regressions are not the \"fault\" of my feature --\nthey are the fault of the limitations of work_mem, the limitations of\nthe planner, the wrong expectations from customers, or just\nhappenstance. \n\nBut at a certain point, I have to weigh the potential anger of\ncustomers hitting regressions versus the potential anger of hackers\nseeing a couple extra GUCs. I have to say that I am more worried about\nthe former.\n\nIf there is some more serious consequence of adding a GUC that I missed\nin this thread, please let me know. 
Otherwise, I intend to commit a new\nGUC shortly that will enable users to bypass work_mem for HashAgg, just\nas in v12.\n\nRegards,\n\tJeff Davis\n\n\n\n\n", "msg_date": "Fri, 10 Jul 2020 01:29:18 -0700", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": true, "msg_subject": "Re: Default setting for enable_hashagg_disk" }, { "msg_contents": "Greetings,\n\n* Justin Pryzby (pryzby@telsasoft.com) wrote:\n> On Thu, Jul 09, 2020 at 06:58:40PM -0400, Stephen Frost wrote:\n> > * Peter Geoghegan (pg@bowt.ie) wrote:\n> > > On Thu, Jul 9, 2020 at 7:03 AM Stephen Frost <sfrost@snowman.net> wrote:\n> > > It makes more sense than simply ignoring what our users will see as a\n> > > simple regression. (Though I still lean towards fixing the problem by\n> > > introducing hash_mem, which at least tries to fix the problem head\n> > > on.)\n> > \n> > The presumption that this will always end up resulting in a regression\n> > really doesn't seem sensible to me.\n> \n> Nobody said \"always\" - we're concerned about a fraction of workloads which\n> regress, badly affecting only only a small fraction of users.\n\nAnd those workloads would be addressed by increasing work_mem, no? Why\nare we inventing something new here for something that'll only impact a\nsmall fraction of users in a small fraction of cases and where there's\nalready a perfectly workable way to address the issue?\n\n> Maybe pretend that Jeff implemented something called CashAgg, which does\n> everything HashAgg does but implemented from scratch. Users would be able to\n> tune it or disable it, and we could talk about removing HashAgg for the next 3\n> years. But instead we decide to remove HashAgg right now since it's redundant.\n> That's a bad way to transition from an old behavior to a new one. It's scary\n> because it imposes a burden, rather than offering a new option without also\n> taking away the old one.\n\nWe already have enable_hashagg. Users are free to disable it. 
This\nmakes it also respect work_mem- allowing users to tune that value to\nadjust how much memory HashAgg actually uses.\n\n> > > That's not the only justification. The other justification is that\n> > > it's generally reasonable to prefer giving hash aggregate more memory.\n> > \n> > Sure, and it's generally reasonably to prefer giving Sorts more memory\n> > too... as long as you've got it available.\n> \n> I believe he meant:\n> \"it's generally reasonable to prefer giving hash aggregate more memory THAN OTHER NODES\"\n\nIf we were developing a wholistic view of memory usage, with an overall\ncap on how much memory is allowed to be used for a query, then that\nwould be an interesting thing to consider and discuss. That's not what\nany of this is.\n\nThanks,\n\nStephen", "msg_date": "Fri, 10 Jul 2020 09:43:51 -0400", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: Default setting for enable_hashagg_disk" }, { "msg_contents": "Greetings,\n\n* Peter Geoghegan (pg@bowt.ie) wrote:\n> On Thu, Jul 9, 2020 at 5:08 PM Stephen Frost <sfrost@snowman.net> wrote:\n> > I didn't, and don't, think it particularly relevant to the discussion,\n> > but if you don't like the comparison to Sort then we could compare it to\n> > a HashJoin instead- the point is that, yes, if you are willing to give\n> > more memory to a given operation, it's going to go faster, and the way\n> > users tell us that they'd like the query to use more memory is already\n> > well-defined and understood to be through work_mem. We do not do them a\n> > service by ignoring that.\n> \n> The hash_mem design (as it stands) would affect both hash join and\n> hash aggregate. I believe that it makes most sense to have hash-based\n> nodes under the control of a single GUC. I believe that this\n> granularity will cause the least problems. 
It certainly represents a\n> trade-off.\n\nSo, now this has moved from being a hack to deal with a possible\nregression for a small number of users due to new behavior in one node,\nto a change that has impacts on other nodes that hadn't been changed,\nall happening during beta.\n\nNo, I don't agree with this. Now is not the time for designing new\nfeatures for v13.\n\n> work_mem is less of a problem with hash join, primarily because join\n> estimates are usually a lot better than grouping estimates. But it is\n> nevertheless something that it makes sense to put in the same\n> conceptual bucket as hash aggregate, pending a future root and branch\n> redesign of work_mem.\n\nI'm still not thrilled with the 'hash_mem' kind of idea as it's going in\nthe wrong direction because what's actually needed is a way to properly\nconsider and track overall memory usage- a redesign of work_mem (or some\nnew parameter, but it wouldn't be 'hash_mem') as you say, but all of\nthis discussion should be targeting v14.\n\n> Like I said, the escape hatch GUC is not my preferred solution. But at\n> least it acknowledges the problem. I don't think that anyone (or\n> anyone else) believes that work_mem doesn't have serious limitations.\n\nwork_mem obviously has serious limitations, but that's not novel or new\nor unexpected by anyone.\n\n> > We have a parameter which already drives this and which users are\n> > welcome to (and quite often do) tune. I disagree that anything further\n> > is either essential or particularly desirable.\n> \n> This is a user hostile attitude.\n\nI don't find that argument convincing, at all.\n\n> > I'm really rather astounded at the direction this has been going in.\n> \n> Why?\n\nDue to the fact that we're in beta and now is not the time to be\nredesigning this feature. 
What Jeff implemented was done in a way that\nfollows the existing structure for how all of the other nodes work and\nhow HashAgg was *intended* to work (as in- if we thought the HashAgg\nwould go over work_mem, we wouldn't pick it and would do a GroupAgg\ninstead). If there's bugs in his implementation (which I doubt, but it\ncan happen, of course) then that'd be useful to discuss and look at\nfixing, but this discussion isn't appropriate for beta.\n\nThanks,\n\nStephen", "msg_date": "Fri, 10 Jul 2020 10:17:14 -0400", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: Default setting for enable_hashagg_disk" }, { "msg_contents": "Greetings,\n\n* Jeff Davis (pgsql@j-davis.com) wrote:\n> In principle, Stephen is right: the v12 behavior is a bug, lots of\n> people are unhappy about it, it causes real problems, and it would not\n> be acceptable if proposed today. Otherwise I wouldn't have spent the\n> time to fix it. \n> \n> Similarly, potential regressions are not the \"fault\" of my feature --\n> they are the fault of the limitations of work_mem, the limitations of\n> the planner, the wrong expectations from customers, or just\n> happenstance. \n\nExactly.\n\n> But at a certain point, I have to weigh the potential anger of\n> customers hitting regressions versus the potential anger of hackers\n> seeing a couple extra GUCs. I have to say that I am more worried about\n> the former.\n\nWe work, quite intentionally, to avoid having a billion knobs that\npeople have to understand and to tune. 
Yes, we could create a bunch of\nnew GUCs to change all kinds of behavior, and we could add hints while\nwe're at it, but there's been quite understandable and good pressure\nagainst doing so because much of the point of this database system is\nthat it should be figuring out the best plan on its own and within the\nconstraints that users have configured.\n\n> If there is some more serious consequence of adding a GUC that I missed\n> in this thread, please let me know. Otherwise, I intend to commit a new\n> GUC shortly that will enable users to bypass work_mem for HashAgg, just\n> as in v12.\n\nI don't think this thread has properly considered that every new GUC,\nevery additional knob that we create, increases the complexity of the\nsystem for users to have to deal with and, in some sense, creates a\nfailure of ours to be able to just figure out what the right answer\nis. For such a small set of users, who somehow have a problem with a\nSort taking up more memory but are fine with HashAgg doing so, I don't\nthink the requirement is met that this is a large enough issue to\nwarrant a new GUC. Users who are actually hit by this in a negative way\nhave an option- increase work_mem to reflect what was actually happening\nalready. I seriously doubt that we'd get tons of users complaining\nabout that answer or asking us to have something separate from that, and\nwe'd avoid adding some new GUC that has to be explained to every new\nuser to the system and complicate the documentation that explains how\nwork_mem works.\n\nThanks,\n\nStephen", "msg_date": "Fri, 10 Jul 2020 10:34:15 -0400", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: Default setting for enable_hashagg_disk" }, { "msg_contents": "On Fri, Jul 10, 2020 at 7:17 AM Stephen Frost <sfrost@snowman.net> wrote:\n> > The hash_mem design (as it stands) would affect both hash join and\n> > hash aggregate. 
I believe that it makes most sense to have hash-based\n> > nodes under the control of a single GUC. I believe that this\n> > granularity will cause the least problems. It certainly represents a\n> > trade-off.\n>\n> So, now this has moved from being a hack to deal with a possible\n> regression for a small number of users due to new behavior in one node,\n> to a change that has impacts on other nodes that hadn't been changed,\n> all happening during beta.\n\nThe common goal is ameliorating or avoiding predictable negative\nconsequences for our users. One proposal is an ambitious and\ncomprehensive way of dealing with that, that certainly has unique\nrisks. The other is much less ambitious, and clearly a kludge -- but\nit's also much less risky. The discussion hasn't really moved at all.\n\n> I'm still not thrilled with the 'hash_mem' kind of idea as it's going in\n> the wrong direction because what's actually needed is a way to properly\n> consider and track overall memory usage- a redesign of work_mem (or some\n> new parameter, but it wouldn't be 'hash_mem') as you say, but all of\n> this discussion should be targeting v14.\n\nIt's certainly possible that hash_mem is too radical, and yet not\nradical enough -- in any timeframe (i.e. a total redesign of work_mem\nis the only thing that will be acceptable). I don't understand why you\nrefuse to engage with the idea at all, though. The mere fact that\nhash_mem could in theory fix this problem comprehensively *usefully\nframes the problem*. This is the kind of issue where developing a\nshared understanding is very important.\n\nAndres said to me privately that hash_mem could be a good idea, even\nthough he opposes it as a fix to the open item for Postgres 13. I\nunderstand that proposing such a thing during beta is controversial,\nwhatever the specifics are. It is a proposal made in the spirit of\ntrying to move things forward. 
Hand wringing about ignoring the\ncommunity's process is completely counterproductive.\n\nThere are about 3 general approaches to addressing this problem, and\nhash_mem is one of them. Am I not allowed to point that out? I have\nbeen completely open and honest about the risks.\n\n> > Like I said, the escape hatch GUC is not my preferred solution. But at\n> > least it acknowledges the problem. I don't think that anyone (or\n> > anyone else) believes that work_mem doesn't have serious limitations.\n>\n> work_mem obviously has serious limitations, but that's not novel or new\n> or unexpected by anyone.\n\nIn your other email from this morning, you wrote:\n\n\"And those workloads would be addressed by increasing work_mem, no?\nWhy are we inventing something new here for something that'll only\nimpact a small fraction of users in a small fraction of cases and\nwhere there's already a perfectly workable way to address the issue?\"\n\nWhich one is it?\n\n> > > I'm really rather astounded at the direction this has been going in.\n> >\n> > Why?\n>\n> Due to the fact that we're in beta and now is not the time to be\n> redesigning this feature.\n\nDid you read the discussion?\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Fri, 10 Jul 2020 10:26:27 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: Default setting for enable_hashagg_disk" }, { "msg_contents": "Peter Geoghegan <pg@bowt.ie> writes:\n> On Fri, Jul 10, 2020 at 7:17 AM Stephen Frost <sfrost@snowman.net> wrote:\n>> Due to the fact that we're in beta and now is not the time to be\n>> redesigning this feature.\n\n> Did you read the discussion?\n\nBeta is when we fix problems that testing exposes in new features.\nObviously, we'd rather not design new APIs at this point, but if it's\nthe only reasonable way to resolve a problem, that's what we've got\nto do. 
I don't think anyone is advocating for reverting the hashagg\nspill feature, and \"do nothing\" is not an attractive option either.\nOn the other hand, it's surely too late to engage in any massive\nredesigns such as some of this thread has speculated about.\n\nI looked over Peter's patch in [1], and it seems generally pretty\nsane to me, though I concur with the idea that it'd be better to\ndefine the GUC as a multiplier for work_mem. (For one thing, we could\nthen easily limit it to be at least 1.0, ensuring sanity; also, if\nwork_mem does eventually become more dynamic than it is now, we might\nstill be able to salvage this knob as something useful. Or if not,\nwe just rip it out.) So my vote is for moving in that direction.\n\n\t\t\tregards, tom lane\n\n[1] https://www.postgresql.org/message-id/CAH2-WzmD%2Bi1pG6rc1%2BCjc4V6EaFJ_qSuKCCHVnH%3DoruqD-zqow%40mail.gmail.com\n\n\n", "msg_date": "Fri, 10 Jul 2020 13:46:57 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Default setting for enable_hashagg_disk" }, { "msg_contents": "On Fri, 2020-07-10 at 13:46 -0400, Tom Lane wrote:\n> I looked over Peter's patch in [1], and it seems generally pretty\n> sane to me, though I concur with the idea that it'd be better to\n> define the GUC as a multiplier for work_mem. (For one thing, we\n> could\n> then easily limit it to be at least 1.0, ensuring sanity; also, if\n> work_mem does eventually become more dynamic than it is now, we might\n> still be able to salvage this knob as something useful. Or if not,\n> we just rip it out.) 
So my vote is for moving in that direction.\n\nIn that case, I will hold off on my \"escape-hatch\" GUC.\n\nRegards,\n\tJeff Davis\n\n\n\n\n", "msg_date": "Fri, 10 Jul 2020 11:34:44 -0700", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": true, "msg_subject": "Re: Default setting for enable_hashagg_disk" }, { "msg_contents": "On Fri, Jul 10, 2020 at 10:47 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> I looked over Peter's patch in [1], and it seems generally pretty\n> sane to me, though I concur with the idea that it'd be better to\n> define the GUC as a multiplier for work_mem. (For one thing, we could\n> then easily limit it to be at least 1.0, ensuring sanity; also, if\n> work_mem does eventually become more dynamic than it is now, we might\n> still be able to salvage this knob as something useful. Or if not,\n> we just rip it out.) So my vote is for moving in that direction.\n\nCool. I agree that it makes sense to constrain the effective value to\nbe at least work_mem in all cases.\n\nWith that in mind, I propose that this new GUC have the following\ncharacteristics:\n\n* It should be named \"hash_mem_multiplier\", a floating point GUC\n(somewhat like bgwriter_lru_multiplier).\n\n* The default value is 2.0.\n\n* The minimum allowable value is 1.0, to protect users from\naccidentally giving less memory to hash-based nodes.\n\n* The maximum allowable value is 100.0, to protect users from\naccidentally setting hash_mem_multiplier to a value intended to work\nlike a work_mem-style KB value (you can't provide an absolute value\nlike that directly). This maximum is absurdly high.\n\nI think that it's possible that a small number of users will find it\nuseful to set the value of hash_mem_multiplier as high as 5.0. 
That is\na very aggressive value, but one that could still make sense with\ncertain workloads.\n\nThoughts?\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Fri, 10 Jul 2020 14:00:00 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: Default setting for enable_hashagg_disk" }, { "msg_contents": "On 2020-Jul-10, Peter Geoghegan wrote:\n\n> * The maximum allowable value is 100.0, to protect users from\n> accidentally setting hash_mem_multiplier to a value intended to work\n> like a work_mem-style KB value (you can't provide an absolute value\n> like that directly). This maximum is absurdly high.\n> \n> I think that it's possible that a small number of users will find it\n> useful to set the value of hash_mem_multiplier as high as 5.0. That is\n> a very aggressive value, but one that could still make sense with\n> certain workloads.\n\nI'm not sure about this bit; sounds a bit like what has been qualified\nas \"nannyism\" elsewhere. Suppose I want to give a hash table 2GB of\nmemory for whatever reason. If my work_mem is default (4MB) then I\ncannot possibly achieve that without altering both settings.\n\nSo I propose that maybe we do want a maximum value, but if so it should\nbe higher than what you propose. 
I think 10000 is acceptable in that it\ndoesn't get in the way.\n\nAnother point is that if you specify a unit for the multiplier (which is\nwhat users are likely to do for larger values), it'll fail anyway, so\nI'm not sure this is such a terrible problem.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Fri, 10 Jul 2020 17:10:26 -0400", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Default setting for enable_hashagg_disk" }, { "msg_contents": "On Fri, Jul 10, 2020 at 11:34 AM Jeff Davis <pgsql@j-davis.com> wrote:\n> On Fri, 2020-07-10 at 13:46 -0400, Tom Lane wrote:\n> > I looked over Peter's patch in [1], and it seems generally pretty\n> > sane to me, though I concur with the idea that it'd be better to\n> > define the GUC as a multiplier for work_mem. (For one thing, we\n> > could\n> > then easily limit it to be at least 1.0, ensuring sanity; also, if\n> > work_mem does eventually become more dynamic than it is now, we might\n> > still be able to salvage this knob as something useful. Or if not,\n> > we just rip it out.) So my vote is for moving in that direction.\n>\n> In that case, I will hold off on my \"escape-hatch\" GUC.\n\nIt now seems likely that the hash_mem/hash_mem_multiplier proposal has\nthe support it needs to get into Postgres 13. Assuming that the\nproposal doesn't lose momentum, then it's about time to return to the\noriginal question you posed at the start of the thread:\n\nWhat should we do with the hashagg_avoid_disk_plan GUC (formerly known\nas the enable_hashagg_disk GUC), if anything?\n\nI myself think that there is a case to be made for removing it\nentirely. But if we keep it then we should also not change the\ndefault. In other words, by default the planner should *not* try to\navoid hash aggs that spill.
AFAICT there is no particular reason to be\nconcerned about that now, since nobody has expressed any concerns\nabout any of the possibly-relevant cost models. That said, I don't\nfeel strongly about this hashagg_avoid_disk_plan question. It seems\n*much* less important.\n\n--\nPeter Geoghegan\n\n\n", "msg_date": "Fri, 10 Jul 2020 14:24:21 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: Default setting for enable_hashagg_disk" }, { "msg_contents": "On Fri, Jul 10, 2020 at 2:10 PM Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\n> I'm not sure about this bit; sounds a bit like what has been qualified\n> as \"nannyism\" elsewhere. Suppose I want to give a hash table 2GB of\n> memory for whatever reason. If my work_mem is default (4MB) then I\n> cannot possibly achieve that without altering both settings.\n>\n> So I propose that maybe we do want a maximum value, but if so it should\n> be higher than what you propose. I think 10000 is acceptable in that it\n> doesn't get in the way.\n\nThat's a good point.\n\nI amend my proposal: the maximum allowable value of\nhash_mem_multiplier should be 10000.0 (i.e., ten thousand times\nwhatever work_mem is set to, which is subject to the existing work_mem\nsizing restrictions).\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Fri, 10 Jul 2020 14:27:14 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: Default setting for enable_hashagg_disk" }, { "msg_contents": "Alvaro Herrera <alvherre@2ndquadrant.com> writes:\n> On 2020-Jul-10, Peter Geoghegan wrote:\n>> * The maximum allowable value is 100.0, to protect users from\n>> accidentally setting hash_mem_multiplier to a value intended to work\n>> like a work_mem-style KB value (you can't provide an absolute value\n>> like that directly). This maximum is absurdly high.\n\n> I'm not sure about this bit; sounds a bit like what has been qualified\n> as \"nannyism\" elsewhere. 
Suppose I want to give a hash table 2GB of\n> memory for whatever reason. If my work_mem is default (4MB) then I\n> cannot possibly achieve that without altering both settings.\n> So I propose that maybe we do want a maximum value, but if so it should\n> be higher than what you propose. I think 10000 is acceptable in that it\n> doesn't get in the way.\n\nI was kind of thinking 1000 as the limit ;-). In any case, the code\nwill need to internally clamp the product to not exceed whatever the\nwork_mem physical limit is these days.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 10 Jul 2020 17:28:15 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Default setting for enable_hashagg_disk" }, { "msg_contents": "Peter Geoghegan <pg@bowt.ie> writes:\n> It now seems likely that the hash_mem/hash_mem_multiplier proposal has\n> the support it needs to get into Postgres 13. Assuming that the\n> proposal doesn't lose momentum, then it's about time to return to the\n> original question you posed at the start of the thread:\n\n> What should we do with the hashagg_avoid_disk_plan GUC (formerly known\n> as the enable_hashagg_disk GUC), if anything?\n\n> I myself think that there is a case to be made for removing it\n> entirely.\n\n+0.5 or so for removing it. It seems too confusing and dubiously\nuseful.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 10 Jul 2020 17:30:41 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Default setting for enable_hashagg_disk" }, { "msg_contents": "Greetings,\n\n* Peter Geoghegan (pg@bowt.ie) wrote:\n> On Fri, Jul 10, 2020 at 7:17 AM Stephen Frost <sfrost@snowman.net> wrote:\n> > > The hash_mem design (as it stands) would affect both hash join and\n> > > hash aggregate. I believe that it makes most sense to have hash-based\n> > > nodes under the control of a single GUC. I believe that this\n> > > granularity will cause the least problems. 
It certainly represents a\n> > > trade-off.\n> >\n> > So, now this has moved from being a hack to deal with a possible\n> > regression for a small number of users due to new behavior in one node,\n> > to a change that has impacts on other nodes that hadn't been changed,\n> > all happening during beta.\n> \n> The common goal is ameliorating or avoiding predictable negative\n> consequences for our users. One proposal is an ambitious and\n> comprehensive way of dealing with that, that certainly has unique\n> risks. The other is much less ambitious, and clearly a kludge -- but\n> it's also much less risky. The discussion hasn't really moved at all.\n\nNeither seem to be going in a direction which looks appropriate for a\nbeta-time change, particuarly given that none of this is *actually* new\nterritory. Having to increase work_mem to get a HashAgg or HashJoin\nthat hasn't got a bunch of batches is routine and while it'd be nicer if\nPG had a way to, overall, manage memory usage to stay within some\nparticular value, we don't. When we start going in that direction it'll\nbe interesting to discuss how much we should favor trying to do an\nin-memory HashAgg by reducing the amount of memory allocated to a Sort\nnode.\n\n> > I'm still not thrilled with the 'hash_mem' kind of idea as it's going in\n> > the wrong direction because what's actually needed is a way to properly\n> > consider and track overall memory usage- a redesign of work_mem (or some\n> > new parameter, but it wouldn't be 'hash_mem') as you say, but all of\n> > this discussion should be targeting v14.\n> \n> It's certainly possible that hash_mem is too radical, and yet not\n> radical enough -- in any timeframe (i.e. a total redesign of work_mem\n> is the only thing that will be acceptable). I don't understand why you\n> refuse to engage with the idea at all, though. The mere fact that\n> hash_mem could in theory fix this problem comprehensively *usefully\n> frames the problem*. 
This is the kind of issue where developing a\n> shared understanding is very important.\n\nI don't see hash_mem as being any kind of proper fix- it's just punting\nto the user saying \"we can't figure this out, how about you do it\" and,\nworse, it's in conflict with how we already ask the user that question.\nTurning it into a multiplier doesn't change that either.\n\n> Andres said to me privately that hash_mem could be a good idea, even\n> though he opposes it as a fix to the open item for Postgres 13. I\n> understand that proposing such a thing during beta is controversial,\n> whatever the specifics are. It is a proposal made in the spirit of\n> trying to move things forward. Hand wringing about ignoring the\n> community's process is completely counterproductive.\n\nI disagree that caring about the fact that we're in beta is\ncounterproductive. Saying that we should ignore that we're in beta\nisn't appropriate and I could say that's counterproductive- though that\nhardly seems to be helpful, so I wonder why that comment was chosen.\n\n> There are about 3 general approaches to addressing this problem, and\n> hash_mem is one of them. Am I not allowed to point that out? I have\n> been completely open and honest about the risks.\n\nI don't think I said at any point that you weren't allowed to suggest\nsomething. I do think, and continue to feel, that not enough\nconsideration is being given to the fact that we're well past the point\nwhere this kind of development should be happening- and commenting on\nhow leveling that concern at your proposed solution is mere 'hand\nwringing' certainly doesn't reduce my feeling that we're being far too\ncavalier with this.\n\n> > > Like I said, the escape hatch GUC is not my preferred solution. But at\n> > > least it acknowledges the problem. 
I don't think that anyone (or\n> > > anyone else) believes that work_mem doesn't have serious limitations.\n> >\n> > work_mem obviously has serious limitations, but that's not novel or new\n> > or unexpected by anyone.\n> \n> In your other email from this morning, you wrote:\n> \n> \"And those workloads would be addressed by increasing work_mem, no?\n> Why are we inventing something new here for something that'll only\n> impact a small fraction of users in a small fraction of cases and\n> where there's already a perfectly workable way to address the issue?\"\n> \n> Which one is it?\n\nUh, it's clearly both. Those two statements are not contractictory at\nall- I agree that work_mem isn't good, and it has limitations, but this\nisn't one of those- people can increase work_mem and get the same\nHashAgg they got before and have it use all that memory just as it did\nbefore if they want to.\n\n> > > > I'm really rather astounded at the direction this has been going in.\n> > >\n> > > Why?\n> >\n> > Due to the fact that we're in beta and now is not the time to be\n> > redesigning this feature.\n> \n> Did you read the discussion?\n\nThis is not productive to the discussion. I'd ask that you stop.\n\nNothing of what you've said thus far has shown me that there were\nmaterial bits of the discussion that I've missed. 
No, that other people\nfeel differently or have made comments supporting one thing or another\nisn't what I would consider material- I'm as allowed my opinions as much\nas others, even when I disagree with the majority (or so claimed anyhow-\nI've not gone back to count, but I don't claim it to be otherwise\neither).\n\nThanks,\n\nStephen", "msg_date": "Fri, 10 Jul 2020 17:50:13 -0400", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: Default setting for enable_hashagg_disk" }, { "msg_contents": "Greetings,\n\n* Tom Lane (tgl@sss.pgh.pa.us) wrote:\n> Peter Geoghegan <pg@bowt.ie> writes:\n> > It now seems likely that the hash_mem/hash_mem_multiplier proposal has\n> > the support it needs to get into Postgres 13. Assuming that the\n> > proposal doesn't lose momentum, then it's about time to return to the\n> > original question you posed at the start of the thread:\n> \n> > What should we do with the hashagg_avoid_disk_plan GUC (formerly known\n> > as the enable_hashagg_disk GUC), if anything?\n> \n> > I myself think that there is a case to be made for removing it\n> > entirely.\n> \n> +0.5 or so for removing it. 
It seems too confusing and dubiously\n> useful.\n\nI agree that it shouldn't exist.\n\nThanks,\n\nStephen", "msg_date": "Fri, 10 Jul 2020 17:52:02 -0400", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: Default setting for enable_hashagg_disk" }, { "msg_contents": "Stephen Frost <sfrost@snowman.net> writes:\n> I don't see hash_mem as being any kind of proper fix- it's just punting\n> to the user saying \"we can't figure this out, how about you do it\" and,\n> worse, it's in conflict with how we already ask the user that question.\n> Turning it into a multiplier doesn't change that either.\n\nHave you got a better proposal that is reasonably implementable for v13?\n(I do not accept the argument that \"do nothing\" is a better proposal.)\n\nI agree that hash_mem is a stopgap, whether it's a multiplier or no,\nbut at this point it seems difficult to avoid inventing a stopgap.\nGetting rid of the process-global work_mem setting is a research project,\nand one I wouldn't even count on having results from for v14. In the\nmeantime, it seems dead certain that there are applications for which\nthe current behavior will be problematic. 
hash_mem seems like a cleaner\nand more useful stopgap than the \"escape hatch\" approach, at least to me.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 10 Jul 2020 17:59:59 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Default setting for enable_hashagg_disk" }, { "msg_contents": "On Fri, Jul 10, 2020 at 2:50 PM Stephen Frost <sfrost@snowman.net> wrote:\n> Nothing of what you've said thus far has shown me that there were\n> material bits of the discussion that I've missed.\n\nMaybe that's just because you missed those bits too?\n\n> No, that other people\n> feel differently or have made comments supporting one thing or another\n> isn't what I would consider material- I'm as allowed my opinions as much\n> as others, even when I disagree with the majority (or so claimed anyhow-\n> I've not gone back to count, but I don't claim it to be otherwise\n> either).\n\nYou are of course entitled to your opinion.\n\nThe problem we're trying to address here is paradoxical, in a certain\nsense. The HashAggs-that-spill patch is somehow not at fault on the\none hand, but on the other hand has created this urgent need to\nameliorate what is for all intents and purposes a regression.\nEverything is intertwined. Yes -- this *is* weird! And, I admit that\nthe hash_mem proposal is unorthodox, even ugly -- in fact, I've said\nwords to that effect on perhaps a dozen occasions at this point. This\nis also weird.\n\nI pointed out that my hash_mem proposal was popular because it seemed\nlike it might save time. When I see somebody I know proposing\nsomething strange, my first thought is \"why are they proposing that?\".\nI might only realize some time later that there are special\ncircumstances that make the proposal much more reasonable than it\nseemed at first (maybe even completely reasonable). 
There is no\ninherent reason why other people supporting the proposal makes it more\nvalid, but in general it does suggest that special circumstances might\napply. It guides me in the direction of looking for and understanding\nwhat they might be sooner.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Fri, 10 Jul 2020 16:10:53 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: Default setting for enable_hashagg_disk" }, { "msg_contents": "On Sat, 11 Jul 2020 at 10:00, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Stephen Frost <sfrost@snowman.net> writes:\n> > I don't see hash_mem as being any kind of proper fix- it's just punting\n> > to the user saying \"we can't figure this out, how about you do it\" and,\n> > worse, it's in conflict with how we already ask the user that question.\n> > Turning it into a multiplier doesn't change that either.\n>\n> Have you got a better proposal that is reasonably implementable for v13?\n> (I do not accept the argument that \"do nothing\" is a better proposal.)\n>\n> I agree that hash_mem is a stopgap, whether it's a multiplier or no,\n> but at this point it seems difficult to avoid inventing a stopgap.\n> Getting rid of the process-global work_mem setting is a research project,\n> and one I wouldn't even count on having results from for v14. In the\n> meantime, it seems dead certain that there are applications for which\n> the current behavior will be problematic. 
hash_mem seems like a cleaner\n> and more useful stopgap than the \"escape hatch\" approach, at least to me.\n\nIf we're going to end up going down the route of something like\nhash_mem for PG13, wouldn't it be better to have something more like\nhashagg_mem that only adjusts the memory limits for Hash Agg only?\n\nStephen mentions in [1] that:\n> Users who are actually hit by this in a negative way\n> have an option- increase work_mem to reflect what was actually happening\n> already.\n\nPeter is not a fan of that idea, which can only be due to the fact\nthat will also increase the maximum memory consumption allowed by\nother nodes in the plan too. My concern is that if we do hash_mem and\nhave that control the memory allowances for Hash Joins and Hash Aggs,\nthen that solution is just as good as Stephen's idea when the plan\nonly contains Hash Joins and Hash Aggs.\n\nAs much as I do want to see us get something to allow users some\nreasonable way to get the same performance as they're used to, I'm\nconcerned that giving users something that works for many of the use\ncases is not really going to be as good as giving them something that\nworks in all their use cases. A user who has a partitioned table\nwith a good number of partitions and partition-wise joins enabled\nmight not like it if their Hash Join plan suddenly consumes hash_mem *\nnPartitions when they've set hash_mem to 10x of work_mem due to some\nother plan that requires that to maintain PG12's performance in PG13.\n If that user is unable to adjust hash_mem due to that then they're\nnot going to be very satisfied that we've added hash_mem to allow\ntheir query to perform as well as it did in PG12. 
They'll be at the\nsame OOM risk that they were exposed to in PG12 if they were to\nincrease hash_mem here.\n\nDavid\n\n[1] https://www.postgresql.org/message-id/20200710143415.GJ12375@tamriel.snowman.net\n\n\n", "msg_date": "Sat, 11 Jul 2020 12:16:33 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Default setting for enable_hashagg_disk" }, { "msg_contents": "On Fri, Jul 10, 2020 at 5:16 PM David Rowley <dgrowleyml@gmail.com> wrote:\n\n> Stephen mentions in [1] that:\n> > Users who are actually hit by this in a negative way\n> > have an option- increase work_mem to reflect what was actually happening\n> > already.\n>\n> Peter is not a fan of that idea, which can only be due to the fact\n> that will also increase the maximum memory consumption allowed by\n> other nodes in the plan too.\n\n\nThat isn't the only reason for me - the main advantage of hash_mem is that\nwe get to set a default to some multiple greater than 1.0 so that an\nupgrade to v13 has a region where behavior similar to v12 is effectively\nmaintained. I have no feel for whether that should be 2.0, 4.0, or\nsomething else, but 2.0 seemed small and I chose to use a power of 2.\n\nMy concern is that if we do hash_mem and\n> have that control the memory allowances for Hash Joins and Hash Aggs,\n> then that solution is just as good as Stephen's idea when the plan\n> only contains Hash Joins and Hash Aggs.\n>\n> As much as I do want to see us get something to allow users some\n> reasonable way to get the same performance as they're used to, I'm\n> concerned that giving users something that works for many of the use\n> cases is not really going to be as good as giving them something that\n> works in all their use cases. 
A user who has a partitioned table\n> with a good number of partitions and partition-wise joins enabled\n> might not like it if their Hash Join plan suddenly consumes hash_mem *\n> nPartitions when they've set hash_mem to 10x of work_mem due to some\n> other plan that requires that to maintain PG12's performance in PG13.\n>\n\nI don't know enough about the hash join dynamic to comment there but if an\nadmin goes in and changes the system default to 10x in lieu of a targeted\nfix for a query that actually needs work_mem to be increased to 10 times\nits current value to work properly I'd say that would be a poor decision.\nAbsent hash_mem they wouldn't update work_mem on their system to 10x its\ncurrent value in order to upgrade to v13, they'd set work_mem for that\nquery specifically. The same should happen here.\n\nFrankly, if admins are on top of their game and measuring and monitoring\nquery performance and memory consumption they would be able to operate in\nour \"do nothing\" mode by setting the default for hash_mem to 1.0 and just\ndole out memory via work_mem as they have always done. Though setting\nhash_mem to 10x for that single query would reduce their risk of OOM (none\nof the work_mem consulting nodes would be increased) so having the GUC\nwould be a net win should they avail themselves of it.\n\nThe multiplier seems strictly better than \"rely on work_mem alone, i.e., do\nnothing\"; the detracting factor being one more GUC. Even if one wants to\nargue the solution is ugly or imperfect the current state seems worse and a\nmore perfect option doesn't seem worth waiting for. The multiplier won't\nmake every single upgrade a non-event but it provides a more than\nsufficient amount of control and in the worse case can be effectively\nignored by setting it to 1.0.\n\nIs there some reason to think that having this multiplier with a\nconservative default of 2.0 would cause an actual problem - and would that\nscenario have likely caused an OOM anyway in v12? 
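To put rough numbers on the partition-wise scenario above (a toy sketch only; the multiplier semantics and every figure here are assumptions drawn from this discussion, not PostgreSQL source):

```python
# Toy model of the proposed multiplier scheme (an assumption from this
# thread, not actual PostgreSQL code). Each hash table is allowed
# work_mem * multiplier, and a partition-wise hash join can hold one
# hash table per partition at the same time.

def hash_table_budget_kb(work_mem_kb: int, hash_mem_multiplier: float) -> int:
    """Allowance for a single hash table, in kilobytes."""
    return int(work_mem_kb * hash_mem_multiplier)

def worst_case_partitionwise_kb(work_mem_kb: int, hash_mem_multiplier: float,
                                n_partitions: int) -> int:
    """Worst case: every per-partition hash join filled to its limit."""
    return hash_table_budget_kb(work_mem_kb, hash_mem_multiplier) * n_partitions

# 4MB work_mem, the 10x multiplier from the example, 32 partitions:
# 4096 * 10 * 32 = 1310720 kB, roughly 1.25GB for one join's hash tables.
```

That linear growth with the partition count is the trade-off in question either way; the multiplier just makes it explicit.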
Given that \"work_mem can\nbe used many many times in a single query\" I'm having trouble imagining\nsuch a problem.\n\nDavid J.", "msg_date": "Fri, 10 Jul 2020 17:47:03 -0700", "msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Default setting for enable_hashagg_disk" }, { "msg_contents": "On Sat, 11 Jul 2020 at 12:47, David G. Johnston\n<david.g.johnston@gmail.com> wrote:\n> The multiplier seems strictly better than \"rely on work_mem alone, i.e., do nothing\"; the detracting factor being one more GUC. Even if one wants to argue the solution is ugly or imperfect the current state seems worse and a more perfect option doesn't seem worth waiting for. The multiplier won't make every single upgrade a non-event but it provides a more than sufficient amount of control and in the worse case can be effectively ignored by setting it to 1.0.\n\nMy argument wasn't related to if the new GUC should be a multiplier of\nwork_mem or an absolute amount of memory. The point I was trying to\nmake was that the solution to add a GUC to allow users to increase the\nmemory Hash Join and Hash Agg for plans which don't contain any other\nnodes types that use work_mem is the same as doing nothing. As of\ntoday, those people could just increase work_mem. If we get hash_mem\nor some variant that is a multiplier of work_mem, then that user is in\nexactly the same situation for that plan. i.e there's no ability to\nincrease the memory allowances for Hash Agg alone.\n\nIf we have to have a new GUC, my preference would be hashagg_mem,\nwhere -1 means use work_mem and a value between 64 and MAX_KILOBYTES\nwould mean use that value. We'd need some sort of check hook to\ndisallow 0-63. I really am just failing to comprehend why we're\ncontemplating changing the behaviour of Hash Join here. 
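Spelled out, the hashagg_mem semantics just described would look roughly like this (a sketch in Python rather than the C a real GUC would use; the hook details, and MAX_KILOBYTES mirroring guc.h's INT_MAX / 1024, are illustrative assumptions, not PostgreSQL source):

```python
# Illustrative sketch of the hashagg_mem proposal above -- not actual
# PostgreSQL source. -1 falls back to work_mem; any other value is an
# absolute allowance in kilobytes, with 0-63 rejected by a check hook.

MAX_KILOBYTES = (2**31 - 1) // 1024  # assumed to mirror guc.h's INT_MAX / 1024

def hashagg_mem_check_hook(value: int) -> bool:
    """GUC check hook: accept -1 or 64..MAX_KILOBYTES, disallow 0-63."""
    return value == -1 or 64 <= value <= MAX_KILOBYTES

def hashagg_allowance_kb(hashagg_mem: int, work_mem_kb: int) -> int:
    """Memory limit a Hash Aggregate node would plan against, in kilobytes."""
    if not hashagg_mem_check_hook(hashagg_mem):
        raise ValueError("hashagg_mem must be -1 or in 64..MAX_KILOBYTES")
    return work_mem_kb if hashagg_mem == -1 else hashagg_mem
```

That is, -1 keeps today's behaviour, and any other value adjusts the allowance for Hash Agg alone, leaving Hash Join untouched.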
Of course, I\nunderstand that that node type also uses a hash table, but why does\nthat give it the right to be involved in a change that we're making to\ntry and give users the ability to avoid possible regressions with Hash\nAgg?\n\nDavid\n\n\n", "msg_date": "Sat, 11 Jul 2020 13:19:39 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Default setting for enable_hashagg_disk" }, { "msg_contents": "On Fri, Jul 10, 2020 at 6:19 PM David Rowley <dgrowleyml@gmail.com> wrote:\n\n> If we have to have a new GUC, my preference would be hashagg_mem,\n> where -1 means use work_mem and a value between 64 and MAX_KILOBYTES\n> would mean use that value. We'd need some sort of check hook to\n> disallow 0-63. I really am just failing to comprehend why we're\n> contemplating changing the behaviour of Hash Join here.\n\n\nIf we add a setting that defaults to work_mem then the benefit is severely\nreduced. You still have to modify individual queries, but the change can\nsimply be more targeted than changing work_mem alone. I truly desire to\nhave whatever we do provide that ability as well as a default value that is\ngreater than the current work_mem value - which in v12 was being ignored\nand thus production usages saw memory consumption greater than work_mem.\nOnly a multiplier does this. A multiplier-only solution fixes the problem\nat hand. A multiplier-or-memory solution adds complexity but provides\nflexibility. 
If adding that flexibility is straight-forward I don't see\nany serious downside other than the complexity of having the meaning of a\nsingle GUC's value dependent upon its magnitude.\n\nOf course, I\n> understand that that node type also uses a hash table, but why does\n> that give it the right to be involved in a change that we're making to\n> try and give users the ability to avoid possible regressions with Hash\n> Agg?\n>\n\nIf Hash Join isn't affected by the \"was allowed to use unlimited amounts of\nexecution memory but now isn't\" change then it probably should continue to\nconsult work_mem instead of being changed to use the calculated value\n(work_mem x multiplier).\n\nDavid J.", "msg_date": "Fri, 10 Jul 2020 18:35:43 -0700", "msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Default setting for enable_hashagg_disk" }, { "msg_contents": "On Sat, 11 Jul 2020 at 13:36, David G. Johnston\n<david.g.johnston@gmail.com> wrote:\n> If we add a setting that defaults to work_mem then the benefit is severely reduced.  You still have to modify individual queries, but the change can simply be more targeted than changing work_mem alone.\n\nI think the idea is that this is an escape hatch to allow users to get\nsomething closer to what PG12 did, but only if they really need it.  I\ncan't quite understand why we need to leave the escape hatch open and\npush them halfway through it.  I find escape hatches are best left\nclosed until you really have no choice but to use them.\n\nDavid\n\n\n", "msg_date": "Sat, 11 Jul 2020 13:43:28 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Default setting for enable_hashagg_disk" }, { "msg_contents": "On Fri, Jul 10, 2020 at 6:43 PM David Rowley <dgrowleyml@gmail.com> wrote:\n\n> On Sat, 11 Jul 2020 at 13:36, David G. Johnston\n> <david.g.johnston@gmail.com> wrote:\n> > If we add a setting that defaults to work_mem then the benefit is\n> severely reduced. 
You still have to modify individual queries, but the\n> change can simply be more targeted than changing work_mem alone.\n>\n> I think the idea is that this is an escape hatch to allow users to get\n> something closer to what PG12 did, but only if they really need it. I\n> can't quite understand why we need to leave the escape hatch open and\n> push them halfway through it. I find escape hatches are best left\n> closed until you really have no choice but to use them.\n>\n>\nThe escape hatch dynamic is \"the user senses a problem, goes into their\nquery, and modifies some GUCs to make the problem go away\". As a user\nI'd much rather have the odds of my needing to use that escape hatch\nreduced - especially if that reduction can be done without risk and without\nany action on my part.\n\nIt's like having someone in a box right now, and then turning up the heat.\nWe can give them an opening to get out of the box if they need it but we\ncan also give them A/C. For some the A/C may be unnecessary, but also not\nharmful, while a smaller group will stay in the margin, while for the\nothers it's not enough and use the opening (which they would have done\nanyway without the A/C).\n\nDavid J.", "msg_date": "Fri, 10 Jul 2020 18:53:23 -0700", "msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Default setting for enable_hashagg_disk" }, { "msg_contents": "On Fri, Jul 10, 2020 at 6:19 PM David Rowley <dgrowleyml@gmail.com> wrote:\n> If we get hash_mem\n> or some variant that is a multiplier of work_mem, then that user is in\n> exactly the same situation for that plan. i.e there's no ability to\n> increase the memory allowances for Hash Agg alone.\n\nThat's true, of course.\n\n> If we have to have a new GUC, my preference would be hashagg_mem,\n> where -1 means use work_mem and a value between 64 and MAX_KILOBYTES\n> would mean use that value. We'd need some sort of check hook to\n> disallow 0-63. I really am just failing to comprehend why we're\n> contemplating changing the behaviour of Hash Join here.\n\nI don't understand why parititonwise hash join consumes work_mem in\nthe way it does. I assume that the reason is something like \"because\nthat behavior was the easiest to explain\", or perhaps \"people that use\npartitioning ought to be able to tune their database well\". 
Or even\n\"this design avoids an epic pgsql-hackers thread, because of course\nevery hash table should get its own work_mem\".\n\n> Of course, I\n> understand that that node type also uses a hash table, but why does\n> that give it the right to be involved in a change that we're making to\n> try and give users the ability to avoid possible regressions with Hash\n> Agg?\n\nIt doesn't, exactly. The idea of hash_mem came from similar settings\nin another database system that you'll have heard of, that affect all\nnodes that use a hash table. I read about this long ago, and thought\nthat it might make sense to do something similar as a way to improving\nwork_mem (without replacing it with something completely different to\nenable things like the \"hash teams\" design, which should be the long\nterm goal). It's unusual that it took this hashaggs-that-spill issue\nto make the work_mem situation come to a head, and it's unusual that\nthe proposal on the table doesn't just target hash agg. But it's not\n*that* unusual.\n\nI believe that it makes sense on balance to lump together hash\naggregate and hash join, with the expectation that the user might want\nto tune them for the system as a whole. This is not an escape hatch --\nit's something that adds granularity to how work_mem can be tuned in a\nway that makes sense (but doesn't make perfect sense). It doesn't\nreflect reality, but I think that it comes closer to reflecting\nreality than other variations that I can think of, including your\nhashagg_mem compromise proposal (which is still much better than plain\nwork_mem). In short, hash_mem is relatively conceptually clean, and\ndoesn't unduly burden the user.\n\nI understand that you only want to add an escape hatch, which is what\nhashagg_mem still amounts to. There are negative consequences to the\nsetting affecting hash join, which I am not unconcerned about. On the\nother hand, hashagg_mem is an escape hatch, and that's ugly in a way\nthat hash_mem isn't. 
I'm also concerned about that.\n\nIn the end, I think that the \"hash_mem vs. hashagg_mem\" question is\nfundamentally a matter of opinion.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Fri, 10 Jul 2020 19:02:27 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: Default setting for enable_hashagg_disk" }, { "msg_contents": "On Sat, 11 Jul 2020 at 14:02, Peter Geoghegan <pg@bowt.ie> wrote:\n>\n> On Fri, Jul 10, 2020 at 6:19 PM David Rowley <dgrowleyml@gmail.com> wrote:\n> > If we have to have a new GUC, my preference would be hashagg_mem,\n> > where -1 means use work_mem and a value between 64 and MAX_KILOBYTES\n> > would mean use that value. We'd need some sort of check hook to\n> > disallow 0-63. I really am just failing to comprehend why we're\n> > contemplating changing the behaviour of Hash Join here.\n>\n> I don't understand why parititonwise hash join consumes work_mem in\n> the way it does. I assume that the reason is something like \"because\n> that behavior was the easiest to explain\", or perhaps \"people that use\n> partitioning ought to be able to tune their database well\". Or even\n> \"this design avoids an epic pgsql-hackers thread, because of course\n> every hash table should get its own work_mem\".\n\nhmm yeah. It's unfortunate, but I'm not sure how I'd have implemented\nit differently. The problem is made worse by the fact that we'll only\nrelease the memory for the hash table during ExecEndHashJoin(). If the\nplanner had some ability to provide the executor with knowledge that\nthe node would never be rescanned, then the executor could release the\nmemory for the hash table after the join is complete. 
For now, we'll\nneed to live with the fact that an Append containing many children\ndoing hash joins will mean holding onto all that memory until the\nexecutor is shutdown :-(\n\nThere's room to make improvements there, for sure, but not for PG13.\n\n> > Of course, I\n> > understand that that node type also uses a hash table, but why does\n> > that give it the right to be involved in a change that we're making to\n> > try and give users the ability to avoid possible regressions with Hash\n> > Agg?\n>\n> It doesn't, exactly. The idea of hash_mem came from similar settings\n> in another database system that you'll have heard of, that affect all\n> nodes that use a hash table. I read about this long ago, and thought\n> that it might make sense to do something similar as a way to improving\n> work_mem\n\nIt sounds interesting, but it also sounds like a new feature\npost-beta. Perhaps it's better we minimise the scope of the change to\nbe a minimal fix just for the behaviour we predict some users might\nnot like.\n\nDavid\n\n\n", "msg_date": "Sat, 11 Jul 2020 17:00:22 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Default setting for enable_hashagg_disk" }, { "msg_contents": "Greetings,\n\n* Tom Lane (tgl@sss.pgh.pa.us) wrote:\n> Stephen Frost <sfrost@snowman.net> writes:\n> > I don't see hash_mem as being any kind of proper fix- it's just punting\n> > to the user saying \"we can't figure this out, how about you do it\" and,\n> > worse, it's in conflict with how we already ask the user that question.\n> > Turning it into a multiplier doesn't change that either.\n> \n> Have you got a better proposal that is reasonably implementable for v13?\n> (I do not accept the argument that \"do nothing\" is a better proposal.)\n> \n> I agree that hash_mem is a stopgap, whether it's a multiplier or no,\n> but at this point it seems difficult to avoid inventing a stopgap.\n> Getting rid of the process-global work_mem setting is a 
research project,\n> and one I wouldn't even count on having results from for v14. In the\n> meantime, it seems dead certain that there are applications for which\n> the current behavior will be problematic. hash_mem seems like a cleaner\n> and more useful stopgap than the \"escape hatch\" approach, at least to me.\n\nHave we heard from people running actual applications where there is a\nproblem with raising work_mem to simply match what's already happening\nwith the v12 behavior?\n\nSure, there's been some examples on this thread of people who know the\nbackend well showing how the default work_mem will cause the v13 HashAgg\nto spill to disk when given a query which has poor estimates, and that's\nslower than v12 where it ignored work_mem and used a bunch of memory,\nbut it was also shown that raising work_mem addresses that issue and\nbrings v12 and v13 back in line.\n\nThere was a concern raised that other nodes might then use more memory-\nbut there's nothing new there, if you wanted to avoid batching with a\nHashJoin in v12 you'd have exactly the same issue, and yet folks raise\nwork_mem all the time to address this, and to get that HashAgg plan in\nthe first place too when the estimates aren't so far off.\n\nThere now seems to be some suggestions that not only should we have a\nnew GUC, but we should default to having it not be equal to work_mem (or\n1.0 or whatever) and instead be higher, to be *twice* or larger whatever\nthe existing work_mem setting is- meaning that people whose systems are\nworking just fine and have good estimates that represent their workload\nand who get the plans they want may then start seeing differences and\nincreased memory utilization in places that they *don't* want that, all\nbecause we're scared that someone, somewhere, might see a regression due\nto HashAgg spilling to disk.\n\nSo, no, I don't agree that 'do nothing' (except ripping out the one GUC\nthat was already added) is a worse proposal than adding another 
work_mem\nlike thing that's only for some node types. There's no way that we'd\neven be considering such an approach during the regular development\ncycle either- there would be calls for a proper wholistic view, at least\nto the point where every node type that could possibly allocate a\nreasonable chunk of memory would be covered.\n\nThanks,\n\nStephen", "msg_date": "Sat, 11 Jul 2020 10:22:13 -0400", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: Default setting for enable_hashagg_disk" }, { "msg_contents": "David Rowley <dgrowleyml@gmail.com> writes:\n> hmm yeah. It's unfortunate, but I'm not sure how I'd have implemented\n> it differently. The problem is made worse by the fact that we'll only\n> release the memory for the hash table during ExecEndHashJoin(). If the\n> planner had some ability to provide the executor with knowledge that\n> the node would never be rescanned, then the executor could release the\n> memory for the hash table after the join is complete.\n\nEXEC_FLAG_REWIND seems to fit the bill already?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 11 Jul 2020 10:27:02 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Default setting for enable_hashagg_disk" }, { "msg_contents": "On Sat, Jul 11, 2020 at 7:22 AM Stephen Frost <sfrost@snowman.net> wrote:\n\n> There now seems to be some suggestions that not only should we have a\n> new GUC, but we should default to having it not be equal to work_mem (or\n> 1.0 or whatever) and instead by higher, to be *twice* or larger whatever\n> the existing work_mem setting is- meaning that people whose systems are\n> working just fine and have good estimates that represent their workload\n> and who get the plans they want may then start seeing differences and\n> increased memory utilization in places that they *don't* want that, all\n> because we're scared that someone, somewhere, might see a regression due\n> to HashAgg 
spilling to disk.\n>\n\nIf that increased memory footprint allows the planner to give me a better\nplan with faster execution and with no OOM I'd be very happy that this\nchange happened. While having a more flexible memory allocation framework\nis not a primary goal in and of itself it is a nice side-effect. I'm not\ngoing to say \"let's only set work_mem to 32MB instead of 48MB so I can\navoid this faster HashAgg node and instead execute a nested loop (or\nwhatever)\". More probable is the user whose current nested loop plan is\nfast enough and doesn't even realize that with a bit more memory they could\nget an HashAgg that performs 15% faster. For them this is a win on its\nface.\n\nI don't believe this negatively impacts the super-admin in our user-base\nand is a decent win for the average and below average admin.\n\nDo we really have an issue with plans being chosen while having access to\nmore memory being slower than plans chosen while having less memory?\n\nThe main risk here is that we choose for a user to consume more memory than\nthey expected and they report OOM issues to us. We tell them to set this\nnew GUC to 1.0. But that implies they are getting many non-HashAgg plans\nproduced when with a bit more memory those HashAgg plans would have been\nchosen. If they get those faster plans without OOM it's a win, if it OOMs\nit's a loss. I'm feeling optimistic here and we'll get considerably more\nwins than losses. How loss-averse do we need to be here though? Note we\ncan give the upgrading user advance notice of our loss-aversion level and\nthey can simply disagree and set it to 1.0 and/or perform more thorough\ntesting. 
So being optimistic feels like the right choice.\n\nDavid J.", "msg_date": "Sat, 11 Jul 2020 09:02:43 -0700", "msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Default setting for enable_hashagg_disk" }, { "msg_contents": "On Sat, Jul 11, 2020 at 7:22 AM Stephen Frost <sfrost@snowman.net> wrote:\n> * Tom Lane (tgl@sss.pgh.pa.us) wrote:\n> > Have you got a better proposal that is reasonably implementable for v13?\n> > (I do not accept the argument that \"do nothing\" is a better proposal.)\n\n> So, no, I don't agree that 'do nothing' (except ripping out the one GUC\n> that was already added) is a worse proposal than adding another work_mem\n> like thing that's only for some nodes types.\n\nThe question was \"Have you got a better proposal that is reasonably\nimplementable for v13?\".\n\nThis is anecdotal, but just today somebody on Twitter reported\n*increasing* work_mem to stop getting OOMs from group aggregate +\nsort:\n\nhttps://twitter.com/theDressler/status/1281942941133615104\n\nIt was possible to fix the problem in this instance, since evidently\nthere wasn't anything else that really did try to consume ~5 GB of\nwork_mem memory. Evidently the memory isn't available in any general\nsense, so there are no OOMs now. Nevertheless, we can expect OOMs on\nthis server just as soon as there is a real need to do a ~5GB sort,\nregardless of anything else.\n\nI don't think that this kind of perverse effect is uncommon. 
Hash\naggregate can naturally be far faster than group agg + sort, Hash agg\ncan naturally use a lot less memory in many cases, and we have every\nreason to think that grouping estimates are regularly totally wrong.\nYou're significantly underestimating the risk.\n\n--\nPeter Geoghegan\n\n\n", "msg_date": "Sat, 11 Jul 2020 09:49:43 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: Default setting for enable_hashagg_disk" }, { "msg_contents": "On Sat, Jul 11, 2020 at 09:49:43AM -0700, Peter Geoghegan wrote:\n>On Sat, Jul 11, 2020 at 7:22 AM Stephen Frost <sfrost@snowman.net> wrote:\n>> * Tom Lane (tgl@sss.pgh.pa.us) wrote:\n>> > Have you got a better proposal that is reasonably implementable for v13?\n>> > (I do not accept the argument that \"do nothing\" is a better proposal.)\n>\n>> So, no, I don't agree that 'do nothing' (except ripping out the one GUC\n>> that was already added) is a worse proposal than adding another work_mem\n>> like thing that's only for some nodes types.\n>\n>The question was \"Have you got a better proposal that is reasonably\n>implementable for v13?\".\n>\n>This is anecdotal, but just today somebody on Twitter reported\n>*increasing* work_mem to stop getting OOMs from group aggregate +\n>sort:\n>\n>https://twitter.com/theDressler/status/1281942941133615104\n>\n>It was possible to fix the problem in this instance, since evidently\n>there wasn't anything else that really did try to consume ~5 GB of\n>work_mem memory. Evidently the memory isn't available in any general\n>sense, so there are no OOMs now. Nevertheless, we can expect OOMs on\n>this server just as soon as there is a real need to do a ~5GB sort,\n>regardless of anything else.\n>\n\nI find that example rather suspicious. I mean, what exactly in the\nGroupAgg plan would consume this memory? 
Surely it'd have to be some\nnode below the grouping, but sort shouldn't do that, no?\n\nSeems strange.\n\n>I don't think that this kind of perverse effect is uncommon. Hash\n>aggregate can naturally be far faster than group agg + sort, Hash agg\n>can naturally use a lot less memory in many cases, and we have every\n>reason to think that grouping estimates are regularly totally wrong.\n>You're significantly underestimating the risk.\n>\n\nI agree grouping estimates are often quite off, and I kinda agree with\nintroducing hash_mem (or at least with the concept that hashing is more\nsensitive to amount of memory than sort). Not sure it's the right escape\nhatch to the hashagg spill problem, but maybe it is.\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services \n\n\n", "msg_date": "Sun, 12 Jul 2020 01:23:47 +0200", "msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Default setting for enable_hashagg_disk" }, { "msg_contents": "On Fri, Jul 10, 2020 at 10:00 PM David Rowley <dgrowleyml@gmail.com> wrote:\n> hmm yeah. It's unfortunate, but I'm not sure how I'd have implemented\n> it differently. The problem is made worse by the fact that we'll only\n> release the memory for the hash table during ExecEndHashJoin(). If the\n> planner had some ability to provide the executor with knowledge that\n> the node would never be rescanned, then the executor could release the\n> memory for the hash table after the join is complete. 
For now, we'll\n> need to live with the fact that an Append containing many children\n> doing hash joins will mean holding onto all that memory until the\n> executor is shutdown :-(\n>\n> There's room to make improvements there, for sure, but not for PG13.\n\nI think that we're stuck with the idea that partitionwise join uses up\nto one work_mem allocation per partition until we deprecate work_mem\nas a concept.\n\nAnyway, I only talked about partitionwise join because that was your\nexample. I could just as easily have picked on parallel hash join\ninstead, which is something that I was involved in myself (kind of).\nThis is more or less a negative consequence of the incremental\napproach we have taken here, which is a collective failure.\n\nI have difficulty accepting that something like hash_mem_multiplier\ncannot be accepted because it risks making the consequence of\nquestionable designs even worse. The problem remains that the original\nassumption just isn't very robust, and isn't something that the user\nhas granular control over. In general it makes sense that a design in\na stable branch is assumed to be the norm that new things need to\nrespect, and not the other way around. But there must be some limit to\nhow far that's taken.\n\n> It sounds interesting, but it also sounds like a new feature\n> post-beta. Perhaps it's better we minimise the scope of the change to\n> be a minimal fix just for the behaviour we predict some users might\n> not like.\n\nThat's an understandable interpretation of the\nhash_mem/hash_mem_multiplier proposal on the table, and yet one that I\ndisagree with. I consider it highly desirable to have a GUC that can\nbe tuned in a generic and high level way, on general principle. We\ndon't really do escape hatches, and I'd rather avoid adding one now\n(though it's far preferable to doing nothing, which I consider totally\nout of the question).\n\nPursuing what you called hashagg_mem is a compromise that will make\nneither of us happy. 
It seems like an escape hatch by another name. I\nwould rather just go with your original proposal instead, especially\nif that's the only thing that'll resolve the problem in front of us.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Sat, 11 Jul 2020 16:46:04 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: Default setting for enable_hashagg_disk" }, { "msg_contents": "On Sat, Jul 11, 2020 at 4:23 PM Tomas Vondra\n<tomas.vondra@2ndquadrant.com> wrote:\n> I find that example rather suspicious. I mean, what exactly in the\n> GroupAgg plan would consume this memory? Surely it'd have to be some\n> node below the grouping, but sort shouldn't do that, no?\n>\n> Seems strange.\n\nWell, I imagine hash aggregate manages to use much less memory than\nthe equivalent groupagg's sort, even though to the optimizer it\nappears as if hash agg should end up using more memory (which is not\nallowed by the optimizer when it exceeds work_mem, regardless of\nwhether or not it's faster). It may also be relevant that Hash agg can\nuse less memory simply by being faster. Going faster could easily\nreduce the memory usage for the system as a whole, even when you\nassume individual group agg nodes use more memory for as long as they\nrun. So in-memory hash agg is effectively less memory hungry.\n\nIt's not a great example of a specific case that we'd regress by not\nhaving hash_mem/hash_mem_multiplier. It's an overestimate where older\nreleases accidentally got a bad, slow plan, not an underestimate where\nolder releases \"lived beyond their means but got away with it\" by\ngetting a good, fast plan. ISTM that the example is a good example of\nthe strange dynamics involved.\n\n> I agree grouping estimates are often quite off, and I kinda agree with\n> introducing hash_mem (or at least with the concept that hashing is more\n> sensitive to amount of memory than sort). 
Not sure it's the right escape\n> hatch to the hashagg spill problem, but maybe it is.\n\nThe hash_mem/hash_mem_multiplier proposal aims to fix the problem\ndirectly, and not be an escape hatch, because we don't like escape\nhatches. I think that that probably fixes many or most of the problems\nin practice, at least assuming that the admin is willing to tune it.\nBut a small number of remaining installations may still need a \"true\"\nescape hatch. There is an argument for having both, though I hope that\nthe escape hatch can be avoided.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Sat, 11 Jul 2020 17:08:31 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: Default setting for enable_hashagg_disk" }, { "msg_contents": "On Sat, Jul 11, 2020 at 09:02:43AM -0700, David G. Johnston wrote:\n>On Sat, Jul 11, 2020 at 7:22 AM Stephen Frost <sfrost@snowman.net> wrote:\n>\n>> There now seems to be some suggestions that not only should we have a\n>> new GUC, but we should default to having it not be equal to work_mem (or\n>> 1.0 or whatever) and instead by higher, to be *twice* or larger whatever\n>> the existing work_mem setting is- meaning that people whose systems are\n>> working just fine and have good estimates that represent their workload\n>> and who get the plans they want may then start seeing differences and\n>> increased memory utilization in places that they *don't* want that, all\n>> because we're scared that someone, somewhere, might see a regression due\n>> to HashAgg spilling to disk.\n>>\n>\n>If that increased memory footprint allows the planner to give me a better\n>plan with faster execution and with no OOM I'd be very happy that this\n>change happened. While having a more flexible memory allocation framework\n>is not a primary goal in and of itself it is a nice side-effect. 
I'm not\n>going to say \"let's only set work_mem to 32MB instead of 48MB so I can\n>avoid this faster HashAgg node and instead execute a nested loop (or\n>whatever)\". More probable is the user whose current nested loop plan is\n>fast enough and doesn't even realize that with a bit more memory they could\n>get an HashAgg that performs 15% faster. For them this is a win on its\n>face.\n>\n>I don't believe this negatively impacts the super-admin in our user-base\n>and is a decent win for the average and below average admin.\n>\n>Do we really have an issue with plans being chosen while having access to\n>more memory being slower than plans chosen while having less memory?\n>\n>The main risk here is that we choose for a user to consume more memory than\n>they expected and they report OOM issues to us. We tell them to set this\n>new GUC to 1.0. But that implies they are getting many non-HashAgg plans\n>produced when with a bit more memory those HashAgg plans would have been\n>chosen. If they get those faster plans without OOM it's a win, if it OOMs\n>it's a loss. I'm feeling optimistic here and we'll get considerably more\n>wins than losses. How loss-averse do we need to be here though? Npte we\n>can give the upgrading user advance notice of our loss-aversion level and\n>they can simply disagree and set it to 1.0 and/or perform more thorough\n>testing. So being optimistic feels like the right choice.\n>\n\nI don't know, but one of the main arguments against simply suggesting\npeople to bump up work_mem (if they're hit by the hashagg spill in v13)\nwas that it'd increase overall memory usage for them. It seems strange\nto then propose a new GUC set to a default that would result in higher\nmemory usage *for everyone*.\n\nOf course, having such GUC with a default a multiple of work_mem might\nbe a win overall - or maybe not. 
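To make the arithmetic behind the default concrete, here is a toy sketch (illustrative Python, not PostgreSQL source; the GUC semantics are as proposed in this thread):

```python
# Toy model of the proposed GUC interaction -- illustrative only, not
# PostgreSQL code. Hash-based nodes would be capped at
# work_mem * hash_mem_multiplier rather than at plain work_mem.

WORK_MEM_DEFAULT_KB = 4 * 1024  # stock work_mem default is 4MB

def hash_mem_kb(work_mem_kb, hash_mem_multiplier):
    """Memory ceiling for hash-based executor nodes under the proposal."""
    return int(work_mem_kb * hash_mem_multiplier)

# A 1.0 default changes nothing for anyone:
assert hash_mem_kb(WORK_MEM_DEFAULT_KB, 1.0) == WORK_MEM_DEFAULT_KB

# A 2.0 default doubles the ceiling of every hash node for everyone,
# which is exactly the objection about memory usage *for everyone*:
assert hash_mem_kb(WORK_MEM_DEFAULT_KB, 2.0) == 2 * WORK_MEM_DEFAULT_KB
```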
I don't have a very good idea how many\npeople will get bitten by this, and how many will get speedup (and how\nsignificant the speedup would be).\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services \n\n\n", "msg_date": "Sun, 12 Jul 2020 02:30:37 +0200", "msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Default setting for enable_hashagg_disk" }, { "msg_contents": "Tomas Vondra <tomas.vondra@2ndquadrant.com> writes:\n> I don't know, but one of the main arguments against simply suggesting\n> people to bump up work_mem (if they're hit by the hashagg spill in v13)\n> was that it'd increase overall memory usage for them. It seems strange\n> to then propose a new GUC set to a default that would result in higher\n> memory usage *for everyone*.\n\nIt seems like a lot of the disagreement here is focused on Peter's\nproposal to make hash_mem_multiplier default to 2.0. But it doesn't\nseem to me that that's a critical element of the proposal. 
Why not just\nmake it default to 1.0, thus keeping the default behavior identical\nto what it is now?\n\nIf we find that's a poor default, we can always change it later;\nbut it seems to me that the evidence for a higher default is\na bit thin at this point.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 11 Jul 2020 20:47:54 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Default setting for enable_hashagg_disk" }, { "msg_contents": "I would be okay with a default of 1.0.\n\nPeter Geoghegan\n(Sent from my phone)", "msg_date": "Sat, 11 Jul 2020 19:34:48 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: Default setting for enable_hashagg_disk" }, { "msg_contents": "On Sun, Jul 12, 2020 at 2:27 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> David Rowley <dgrowleyml@gmail.com> writes:\n> > hmm yeah. It's unfortunate, but I'm not sure how I'd have implemented\n> > it differently. The problem is made worse by the fact that we'll only\n> > release the memory for the hash table during ExecEndHashJoin(). 
If the\n> > planner had some ability to provide the executor with knowledge that\n> > the node would never be rescanned, then the executor could release the\n> > memory for the hash table after the join is complete.\n>\n> EXEC_FLAG_REWIND seems to fit the bill already?\n\nFWIW I have a patch that does exactly that, which I was planning to\nsubmit for CF2 along with some other patches that estimate and measure\npeak executor memory usage.\n\n\n", "msg_date": "Sun, 12 Jul 2020 14:46:10 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Default setting for enable_hashagg_disk" }, { "msg_contents": "On Sat, Jul 11, 2020 at 5:47 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Tomas Vondra <tomas.vondra@2ndquadrant.com> writes:\n> > I don't know, but one of the main arguments against simply suggesting\n> > people to bump up work_mem (if they're hit by the hashagg spill in v13)\n> > was that it'd increase overall memory usage for them. It seems strange\n> > to then propose a new GUC set to a default that would result in higher\n> > memory usage *for everyone*.\n>\n> It seems like a lot of the disagreement here is focused on Peter's\n> proposal to make hash_mem_multiplier default to 2.0. But it doesn't\n> seem to me that that's a critical element of the proposal. 
Why not just\n> make it default to 1.0, thus keeping the default behavior identical\n> to what it is now?\n>\n\nIf we don't default it to something other than 1.0 we might as well just\nmake it memory units and let people decide precisely what they want to use\ninstead of adding the complexity of a multiplier.\n\n\n> If we find that's a poor default, we can always change it later;\n> but it seems to me that the evidence for a higher default is\n> a bit thin at this point.\n>\n\nSo \"your default is 1.0 unless you installed the new database on or after\n13.4 in which case it's 2.0\"?\n\nI'd rather have it be just memory units defaulting to -1 meaning \"use\nwork_mem\".  In the unlikely scenario we decide post-release to want a\nmultiplier > 1.0 we can add the GUC with that default at that point.  The\nmultiplier would want to be ignored if hash_mem is set to anything other\nthan -1.\n\nDavid J.", "msg_date": "Sat, 11 Jul 2020 21:28:52 -0700", "msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Default setting for enable_hashagg_disk" }, { "msg_contents": "\"David G. Johnston\" <david.g.johnston@gmail.com> writes:\n> On Sat, Jul 11, 2020 at 5:47 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> It seems like a lot of the disagreement here is focused on Peter's\n>> proposal to make hash_mem_multiplier default to 2.0.  But it doesn't\n>> seem to me that that's a critical element of the proposal.  Why not just\n>> make it default to 1.0, thus keeping the default behavior identical\n>> to what it is now?\n\n> If we don't default it to something other than 1.0 we might as well just\n> make it memory units and let people decide precisely what they want to use\n> instead of adding the complexity of a multiplier.\n\nNot sure how that follows?  The advantage of a multiplier is that it\ntracks whatever people might do to work_mem automatically.  In general\nI'd view work_mem as the base value that people twiddle to control\nexecutor memory consumption. 
Having to also twiddle this other value\ndoesn't seem especially user-friendly.\n\n>> If we find that's a poor default, we can always change it later;\n>> but it seems to me that the evidence for a higher default is\n>> a bit thin at this point.\n\n> So \"your default is 1.0 unless you installed the new database on or after\n> 13.4 in which case it's 2.0\"?\n\nWhat else would be new? See e.g. 848ae330a. (Note I'm not suggesting\nthat we'd change it in a minor release.)\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sun, 12 Jul 2020 00:37:08 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Default setting for enable_hashagg_disk" }, { "msg_contents": "On Saturday, July 11, 2020, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> \"David G. Johnston\" <david.g.johnston@gmail.com> writes:\n> > On Sat, Jul 11, 2020 at 5:47 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >> It seems like a lot of the disagreement here is focused on Peter's\n> >> proposal to make hash_mem_multiplier default to 2.0. But it doesn't\n> >> seem to me that that's a critical element of the proposal. Why not just\n> >> make it default to 1.0, thus keeping the default behavior identical\n> >> to what it is now?\n>\n> > If we don't default it to something other than 1.0 we might as well just\n> > make it memory units and let people decide precisely what they want to\n> use\n> > instead of adding the complexity of a multiplier.\n>\n> Not sure how that follows? The advantage of a multiplier is that it\n> tracks whatever people might do to work_mem automatically.\n\n\n>\nI was thinking that setting -1 would basically do that.\n\n\n> In general\n> I'd view work_mem as the base value that people twiddle to control\n> executor memory consumption. Having to also twiddle this other value\n> doesn't seem especially user-friendly.\n\n\nI’ll admit I don’t have a feel for what is or is not user-friendly when\nsetting these GUCs in a session to override the global defaults. 
But as\nfar as the global defaults I say it’s a wash between (32mb, -1) -> (32mb,\n48mb) and (32mb, 1.0) -> (32mb, 1.5)\n\nIf you want 96mb for the session/query hash setting it to 96mb is\ninvariant, while setting it to 3.0 means it can change in the future if the\nsystem work_mem changes.  Knowing the multiplier is 1.5 and choosing 64mb\nfor work_mem in the session is possible but also mutable and has\nside-effects.  If the user is going to set both values to make it invariant\nwe are back to it being a wash.\n\nI don’t believe using a multiplier will promote better comprehension for\nwhy this setting exists compared to “-1 means use work_mem but you can\noverride a subset if you want.”\n\nIs having a session level memory setting be mutable something we want to\nintroduce?\n\nIs it more user-friendly?\n\n>> If we find that's a poor default, we can always change it later;\n> >> but it seems to me that the evidence for a higher default is\n> >> a bit thin at this point.\n>\n> > So \"your default is 1.0 unless you installed the new database on or after\n> > 13.4 in which case it's 2.0\"?\n>\n> What else would be new?  See e.g. 848ae330a.  (Note I'm not suggesting\n> that we'd change it in a minor release.)\n>\n\nMinor release update is what I had thought, and to an extent was making\npossible by not using the multiplier upfront.\n\nI agree options are wide open come v14 and beyond.\n\nDavid J.", "msg_date": "Sat, 11 Jul 2020 22:26:22 -0700", "msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Default setting for enable_hashagg_disk" }, { "msg_contents": "On Sat, Jul 11, 2020 at 08:47:54PM -0400, Tom Lane wrote:\n>Tomas Vondra <tomas.vondra@2ndquadrant.com> writes:\n>> I don't know, but one of the main arguments against simply suggesting\n>> people to bump up work_mem (if they're hit by the hashagg spill in v13)\n>> was that it'd increase overall memory usage for them. It seems strange\n>> to then propose a new GUC set to a default that would result in higher\n>> memory usage *for everyone*.\n>\n>It seems like a lot of the disagreement here is focused on Peter's\n>proposal to make hash_mem_multiplier default to 2.0. But it doesn't\n>seem to me that that's a critical element of the proposal. Why not just\n>make it default to 1.0, thus keeping the default behavior identical\n>to what it is now?\n>\n>If we find that's a poor default, we can always change it later;\n>but it seems to me that the evidence for a higher default is\n>a bit thin at this point.\n>\n\nYou're right, I was specifically pushing against that aspect of the\nproposal. 
Sorry for not making that clearer, I assumed it's clear from\nthe context of this (sub)thread.\n\nI agree making it 1.0 (or equal to work_mem, if it's not a multiplier)\nby default, but allowing it to be increased if needed would address most\nof the spilling issues.\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services \n\n\n", "msg_date": "Sun, 12 Jul 2020 14:30:43 +0200", "msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Default setting for enable_hashagg_disk" }, { "msg_contents": "On Sat, Jul 11, 2020 at 10:26:22PM -0700, David G. Johnston wrote:\n>On Saturday, July 11, 2020, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n>> \"David G. Johnston\" <david.g.johnston@gmail.com> writes:\n>> > On Sat, Jul 11, 2020 at 5:47 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> >> It seems like a lot of the disagreement here is focused on Peter's\n>> >> proposal to make hash_mem_multiplier default to 2.0. But it doesn't\n>> >> seem to me that that's a critical element of the proposal. Why not just\n>> >> make it default to 1.0, thus keeping the default behavior identical\n>> >> to what it is now?\n>>\n>> > If we don't default it to something other than 1.0 we might as well just\n>> > make it memory units and let people decide precisely what they want to\n>> use\n>> > instead of adding the complexity of a multiplier.\n>>\n>> Not sure how that follows? The advantage of a multiplier is that it\n>> tracks whatever people might do to work_mem automatically.\n>\n>\n>>\n>I was thinking that setting -1 would basically do that.\n>\n\nI think Tom meant that the multiplier would automatically track any\nchanges to work_mem, and adjust the hash_mem accordingly. 
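The tracking-versus-units distinction can be spelled out as a toy sketch (illustrative Python, not PostgreSQL code; the `-1` semantics are the ones proposed upthread):

```python
# Why a multiplier tracks work_mem while an absolute hash_mem does not
# (toy model only -- not PostgreSQL code).

def hash_budget_multiplier(work_mem_kb, multiplier):
    # multiplier form: always derived from the current work_mem
    return int(work_mem_kb * multiplier)

def hash_budget_units(work_mem_kb, hash_mem_kb):
    # units form: -1 means "use work_mem", anything else is a fixed value
    return work_mem_kb if hash_mem_kb == -1 else hash_mem_kb

# Admin later raises work_mem from 32MB to 64MB:
assert hash_budget_multiplier(64 * 1024, 1.5) == 96 * 1024   # follows along
assert hash_budget_units(64 * 1024, 48 * 1024) == 48 * 1024  # left behind
assert hash_budget_units(64 * 1024, -1) == 64 * 1024         # tracks, but only at 1.0x
```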
With -1 (and\nthe GUC in units) you could only keep it exactly equal to work_mem, but\nthen as soon as you change it you'd have to update both.\n\n>> In general\n>> I'd view work_mem as the base value that people twiddle to control\n>> executor memory consumption. Having to also twiddle this other value\n>> doesn't seem especially user-friendly.\n>\n>\n>I’ll admit I don’t have a feel for what is or is not user-friendly when\n>setting these GUCs in a session to override the global defaults. But as\n>far as the global defaults I say it’s a wash between (32mb, -1) -> (32mb,\n>48mb) and (32mb, 1.0) -> (32mb, 1.5)\n>\n>If you want 96mb for the session/query hash setting it to 96mb is\n>invariant, whilesetting it to 3.0 means it can change in the future if the\n>system work_mem changes. Knowing the multiplier is 1.5 and choosing 64mb\n>for work_mem in the session is possible but also mutable and has\n>side-effects. If the user is going to set both values to make it invariant\n>we are back to it being a wash.\n>\n>I don’t believe using a multiplier will promote better comprehension for\n>why this setting exists compared to “-1 means use work_mem but you can\n>override a subset if you want.”\n>\n>Is having a session level memory setting be mutable something we want to\n>introduce?\n>\n>Is it more user-friendly?\n>\n\nI still think it should be in simple units, TBH. We already have\nsomewhat similar situation with cost parameters, where we often say that\nseq_page_cost = 1.0 is the baseline for the other cost parameters, yet\nwe have not coded that as multipliers.\n\n>>> If we find that's a poor default, we can always change it later;\n>> >> but it seems to me that the evidence for a higher default is\n>> >> a bit thin at this point.\n>>\n>> > So \"your default is 1.0 unless you installed the new database on or after\n>> > 13.4 in which case it's 2.0\"?\n>>\n>> What else would be new? See e.g. 848ae330a. 
(Note I'm not suggesting\n>> that we'd change it in a minor release.)\n>>\n>\n>Minor release update is what I had thought, and to an extent was making\n>possible by not using the multiplier upfront.\n>\n>I agree options are wide open come v14 and beyond.\n>\n>David J.\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services \n\n\n", "msg_date": "Sun, 12 Jul 2020 14:36:48 +0200", "msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Default setting for enable_hashagg_disk" }, { "msg_contents": "On Sat, Jul 11, 2020 at 3:30 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Stephen Frost <sfrost@snowman.net> writes:\n> > I don't see hash_mem as being any kind of proper fix- it's just punting\n> > to the user saying \"we can't figure this out, how about you do it\" and,\n> > worse, it's in conflict with how we already ask the user that question.\n> > Turning it into a multiplier doesn't change that either.\n>\n> Have you got a better proposal that is reasonably implementable for v13?\n> (I do not accept the argument that \"do nothing\" is a better proposal.)\n>\n> I agree that hash_mem is a stopgap, whether it's a multiplier or no,\n> but at this point it seems difficult to avoid inventing a stopgap.\n> Getting rid of the process-global work_mem setting is a research project,\n> and one I wouldn't even count on having results from for v14. In the\n> meantime, it seems dead certain that there are applications for which\n> the current behavior will be problematic.\n>\n\nIf this is true then certainly it adds more weight to the argument for\nhaving a solution like hash_mem or some other escape-hatch. I know it\nwould be difficult to get the real-world data but why not try TPC-H or\nsimilar workloads at a few different scale_factor/size? 
I was\nchecking some old TPC-H results I had, and I found that many\nof the plans were using Finalize GroupAggregate and Partial\nGroupAggregate kinds of plans; there were a few where I saw Partial\nHashAggregate being used, but on a random check it appears that\nGroupAggregate is used more often. It could be that after\nparallelism GroupAggregate plans are getting preference but I am not\nsure about this. However, even if that is not true, I think after the\nparallel aggregates the memory-related thing is taken care of to some\nextent automatically because I think after that each worker doing\npartial aggregation can be allowed to consume work_mem memory. So,\nprobably the larger aggregates which are going to give better\nperformance by consuming more memory would already be parallelized and\nwould have given the desired results. Now, allowing aggregates to use\nmore memory via hash_mem kind of thing is beneficial in non-parallel\ncases but for cases where parallelism is used it could be worse\nbecause now each worker will be entitled to use more memory.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 13 Jul 2020 14:39:16 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Default setting for enable_hashagg_disk" }, { "msg_contents": "On 2020-04-07 20:20, Jeff Davis wrote:\n> Now that we have Disk-based Hash Aggregation, there are a lot more\n> situations where the planner can choose HashAgg. The\n> enable_hashagg_disk GUC, if set to true, chooses HashAgg based on\n> costing. If false, it only generates a HashAgg path if it thinks it\n> will fit in work_mem, similar to the old behavior (though it will now\n> spill to disk if the planner was wrong about it fitting in work_mem).\n> The current default is true.\n\nI have an anecdote that might be related to this discussion.\n\nI was running an unrelated benchmark suite. 
With PostgreSQL 12, one \nquery ran out of memory. With PostgreSQL 13, the same query instead ran \nout of disk space. I bisected this to the introduction of disk-based \nhash aggregation. Of course, the very point of that feature is to \neliminate the out of memory and make use of disk space instead. But \nrunning out of disk space is likely to be a worse experience than \nrunning out of memory. Also, while it's relatively easy to limit memory \nuse both in PostgreSQL and in the kernel, it is difficult or impossible \nto limit disk space use in a similar way.\n\nI don't have a solution or proposal here, I just want to mention this as \na possibility and suggest that we look out for similar experiences.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Mon, 13 Jul 2020 13:51:42 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Default setting for enable_hashagg_disk" }, { "msg_contents": "On Mon, 13 Jul 2020 at 23:51, Peter Eisentraut\n<peter.eisentraut@2ndquadrant.com> wrote:\n> I have an anecdote that might be related to this discussion.\n>\n> I was running an unrelated benchmark suite. With PostgreSQL 12, one\n> query ran out of memory. With PostgreSQL 13, the same query instead ran\n> out of disk space. I bisected this to the introduction of disk-based\n> hash aggregation. Of course, the very point of that feature is to\n> eliminate the out of memory and make use of disk space instead. But\n> running out of disk space is likely to be a worse experience than\n> running out of memory. 
Also, while it's relatively easy to limit memory\n> use both in PostgreSQL and in the kernel, it is difficult or impossible\n> to limit disk space use in a similar way.\n\nIsn't that what temp_file_limit is for?\n\nDavid\n\n\n", "msg_date": "Tue, 14 Jul 2020 00:16:45 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Default setting for enable_hashagg_disk" }, { "msg_contents": "Greetings,\n\n* Peter Geoghegan (pg@bowt.ie) wrote:\n> On Sat, Jul 11, 2020 at 7:22 AM Stephen Frost <sfrost@snowman.net> wrote:\n> > * Tom Lane (tgl@sss.pgh.pa.us) wrote:\n> > > Have you got a better proposal that is reasonably implementable for v13?\n> > > (I do not accept the argument that \"do nothing\" is a better proposal.)\n> \n> > So, no, I don't agree that 'do nothing' (except ripping out the one GUC\n> > that was already added) is a worse proposal than adding another work_mem\n> > like thing that's only for some nodes types.\n> \n> The question was \"Have you got a better proposal that is reasonably\n> implementable for v13?\".\n> \n> This is anecdotal, but just today somebody on Twitter reported\n> *increasing* work_mem to stop getting OOMs from group aggregate +\n> sort:\n> \n> https://twitter.com/theDressler/status/1281942941133615104\n\nYes, increasing work_mem isn't unusual, at all. What that tweet shows\nthat I don't think folks who are suggesting things like setting this\nfactor to 2.0 is that people may have a work_mem configured in the\ngigabytes- meaning that a 2.0 value would result in a work_mem of 5GB\nand a hash_mem of 10GB. 
Now, I'm all for telling people to review their\nconfigurations between major versions, but that's a large difference\nthat's going to be pretty deeply hidden in a 'multiplier' setting.\n\nI'm still wholly unconvinced that we need such a setting, just to be\nclear, but I don't think there's any way it'd be reasonable to have it\nset to something other than \"whatever work_mem is\" by default- and it\nneeds to actually be \"what work_mem is\" and not \"have the same default\nvalue\" or *everyone* would have to configure it.\n\n> It was possible to fix the problem in this instance, since evidently\n> there wasn't anything else that really did try to consume ~5 GB of\n> work_mem memory. Evidently the memory isn't available in any general\n> sense, so there are no OOMs now. Nevertheless, we can expect OOMs on\n> this server just as soon as there is a real need to do a ~5GB sort,\n> regardless of anything else.\n\nEh? That's not at all what it looks like- they were getting OOM's\nbecause they set work_mem to be higher than the actual amount of memory\nthey had and the Sort before the GroupAgg was actually trying to use all\nthat memory. The HashAgg ended up not needing that much memory because\nthe aggregated set wasn't actually that large. If anything, this shows\nexactly what Jeff's fine work here is (hopefully) going to give us- the\noption to plan a HashAgg in such cases, since we can accept spilling to\ndisk if we end up underestimate, or take advantage of that HashAgg\nbeing entirely in memory if we overestimate.\n\n> I don't think that this kind of perverse effect is uncommon. Hash\n> aggregate can naturally be far faster than group agg + sort, Hash agg\n> can naturally use a lot less memory in many cases, and we have every\n> reason to think that grouping estimates are regularly totally wrong.\n\nI'm confused as to why we're talking about the relative performance of a\nHashAgg vs. 
a Sort+GroupAgg- of course the HashAgg is going to be faster\nif it's got enough memory, but that's a constraint we have to consider\nand deal with because, otherwise, the query can end up failing and\npotentially impacting other queries or activity on the system, including\nresulting in the entire database system falling over due to the OOM\nKiller firing and killing a process and the database ending up\nrestarting and going through crash recovery, which is going to be quite\na bit worse than performance maybe not being great.\n\n> You're significantly underestimating the risk.\n\nOf... what? That we'll end up getting worse performance because we\nunderestimated the size of the result set and we end up spilling to\ndisk with the HashAgg? I think I'm giving that risk the amount of\nconcern it deserves- which is, frankly, not very much. Users who run\ninto that issue, as this tweet *also* showed, are familiar with work_mem\nand can tune it to address that. This reaction to demand a new GUC to\nbreak up work_mem into pieces strikes me as unjustified, and doing so\nduring beta makes it that much worse.\n\nHaving looked back, I'm not sure that I'm really in the minority\nregarding the proposal to add this at this time either- there's been a\nfew different comments that it's too late for v13 and/or that we should\nsee if we actually end up with users seriously complaining about the\nlack of a separate way to specify the memory for a given node type,\nand/or that if we're going to do this then we should have a broader set\nof options covering other nodes types too, all of which are positions\nthat I agree with.\n\nThanks,\n\nStephen", "msg_date": "Mon, 13 Jul 2020 09:13:42 -0400", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: Default setting for enable_hashagg_disk" }, { "msg_contents": "On 2020-07-13 14:16, David Rowley wrote:\n> Isn't that what temp_file_limit is for?\n\nYeah, I guess that is so rarely used that I had 
forgotten about it. So \nmaybe that is also something that more users will want to be aware of.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Mon, 13 Jul 2020 15:13:46 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Default setting for enable_hashagg_disk" }, { "msg_contents": "On Mon, Jul 13, 2020 at 01:51:42PM +0200, Peter Eisentraut wrote:\n>On 2020-04-07 20:20, Jeff Davis wrote:\n>>Now that we have Disk-based Hash Aggregation, there are a lot more\n>>situations where the planner can choose HashAgg. The\n>>enable_hashagg_disk GUC, if set to true, chooses HashAgg based on\n>>costing. If false, it only generates a HashAgg path if it thinks it\n>>will fit in work_mem, similar to the old behavior (though it wlil now\n>>spill to disk if the planner was wrong about it fitting in work_mem).\n>>The current default is true.\n>\n>I have an anecdote that might be related to this discussion.\n>\n>I was running an unrelated benchmark suite. With PostgreSQL 12, one \n>query ran out of memory. With PostgreSQL 13, the same query instead \n>ran out of disk space. I bisected this to the introduction of \n>disk-based hash aggregation. Of course, the very point of that \n>feature is to eliminate the out of memory and make use of disk space \n>instead. But running out of disk space is likely to be a worse \n>experience than running out of memory. Also, while it's relatively \n>easy to limit memory use both in PostgreSQL and in the kernel, it is \n>difficult or impossible to limit disk space use in a similar way.\n>\n\nWhy is running out of disk space worse experience than running out of\nmemory?\n\nSure, it'll take longer and ultimately the query fails (and if it fills\nthe device used by the WAL then it may also cause shutdown of the main\ninstance due to inability to write WAL). 
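For reference, the temp-space guards involved here might look like this (a sketch only — the tablespace name and location are illustrative, and the directory must already exist and be owned by the server user):

```sql
-- Cap per-session temporary file usage (accepts memory units):
ALTER SYSTEM SET temp_file_limit = '20GB';

-- Put temp files on a separate device so a runaway spill
-- cannot fill the volume holding the WAL:
CREATE TABLESPACE scratch LOCATION '/mnt/scratch/pg_tmp';
ALTER SYSTEM SET temp_tablespaces = 'scratch';

SELECT pg_reload_conf();
```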
But that can be prevented by\nmoving the temp tablespace and/or setting the temp file limit, as\nalready mentioned.\n\nWith OOM, if the kernel OOM killer decides to act, it may easily bring\ndown the instance too, and there are far fewer options to prevent that.\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services \n\n\n", "msg_date": "Mon, 13 Jul 2020 16:11:59 +0200", "msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Default setting for enable_hashagg_disk" }, { "msg_contents": "On Tue, 14 Jul 2020 at 01:13, Stephen Frost <sfrost@snowman.net> wrote:\n> Yes, increasing work_mem isn't unusual, at all. What that tweet shows\n> that I don't think folks who are suggesting things like setting this\n> factor to 2.0 is that people may have a work_mem configured in the\n> gigabytes- meaning that a 2.0 value would result in a work_mem of 5GB\n> and a hash_mem of 10GB. Now, I'm all for telling people to review their\n> configurations between major versions, but that's a large difference\n> that's going to be pretty deeply hidden in a 'multiplier' setting.\n\nI think Peter seems to be fine with setting the default to 1.0, per [0].\n\nThis thread did split off a while back into \"Default setting for\nenable_hashagg_disk (hash_mem)\"; I did try to summarise who sits\nwhere on this in [19].\n\nI think it would be good if we could try to move towards getting\nconsensus here rather than reiterating our arguments over and over.\n\nUpdated summary:\n* For hash_mem = Tomas [7], Justin [16]\n* For hash_mem_multiplier with a default > 1.0 = DavidG [21]\n* For hash_mem_multiplier with default = 1.0 = PeterG [15][0], Tom [20][24]\n* hash_mem out of scope for PG13 = Bruce [8], Andres [9]\n* hashagg_mem default to -1 meaning use work_mem = DavidR [23] (2nd preference)\n* Escape hatch that can be removed later when we get something better\n= Jeff [11], DavidR 
[12], Pavel [13], Andres [14], Justin [1]\n* Add enable_hashagg_spill = Tom [2] (I'm unclear on this proposal.\nDoes it affect the planner or executor or both?) (updated opinion in\n[20])\n* Maybe do nothing until we see how things go during beta = Bruce [3], Amit [10]\n* Just let users set work_mem = Stephen [21], Alvaro [4] (Alvaro\nchanged his mind after Andres pointed out that changes other nodes in\nthe plan too [25])\n* Swap enable_hashagg for a GUC that specifies when spilling should\noccur. -1 means work_mem = Robert [17], Amit [18]\n* hash_mem does not solve the problem = Tomas [6] (changed his mind in [7])\n\nPerhaps people who have managed to follow this thread but not chip in\nyet can reply quoting the option above that they'd be voting for. Or\nif you're ok changing your mind to some option that has more votes\nthan the one your name is already against. That might help move this\nalong.\n\nDavid\n\n[0] https://www.postgresql.org/message-id/CAH2-Wz=VV6EKFGUJDsHEqyvRk7pCO36BvEoF5sBQry_O6R2=nw@mail.gmail.com\n[1] https://www.postgresql.org/message-id/20200624031443.GV4107@telsasoft.com\n[2] https://www.postgresql.org/message-id/2214502.1593019796@sss.pgh.pa.us\n[3] https://www.postgresql.org/message-id/20200625182512.GC12486@momjian.us\n[4] https://www.postgresql.org/message-id/20200625224422.GA9653@alvherre.pgsql\n[5] https://www.postgresql.org/message-id/CAA4eK1K0cgk_8hRyxsvppgoh_Z-NY+UZTcFWB2we6baJ9DXCQw@mail.gmail.com\n[6] https://www.postgresql.org/message-id/20200627104141.gq7d3hm2tvoqgjjs@development\n[7] https://www.postgresql.org/message-id/20200629212229.n3afgzq6xpxrr4cu@development\n[8] https://www.postgresql.org/message-id/20200703030001.GD26235@momjian.us\n[9] https://www.postgresql.org/message-id/20200707171216.jqxrld2jnxwf5ozv@alap3.anarazel.de\n[10] https://www.postgresql.org/message-id/CAA4eK1KfPi6iz0hWxBLZzfVOG_NvOVJL=9UQQirWLpaN=kANTQ@mail.gmail.com\n[11] 
https://www.postgresql.org/message-id/8bff2e4e8020c3caa16b61a46918d21b573eaf78.camel@j-davis.com\n[12] https://www.postgresql.org/message-id/CAApHDvqFZikXhAGW=UKZKq1_FzHy+XzmUzAJiNj6RWyTHH4UfA@mail.gmail.com\n[13] https://www.postgresql.org/message-id/CAFj8pRBf1w4ndz-ynd+mUpTfiZfbs7+CPjc4ob8v9d3X0MscCg@mail.gmail.com\n[14] https://www.postgresql.org/message-id/20200624191433.5gnqgrxfmucexldm@alap3.anarazel.de\n[15] https://www.postgresql.org/message-id/CAH2-WzmD+i1pG6rc1+Cjc4V6EaFJ_qSuKCCHVnH=oruqD-zqow@mail.gmail.com\n[16] https://www.postgresql.org/message-id/20200703024649.GJ4107@telsasoft.com\n[17] https://www.postgresql.org/message-id/CA+TgmobyV9+T-Wjx-cTPdQuRCgt1THz1mL3v1NXC4m4G-H6Rcw@mail.gmail.com\n[18] https://www.postgresql.org/message-id/CAA4eK1K0cgk_8hRyxsvppgoh_Z-NY+UZTcFWB2we6baJ9DXCQw@mail.gmail.com\n[19] https://www.postgresql.org/message-id/CAApHDvrP1FiEv4AQL2ZscbHi32W+Gp01j+qnhwou7y7p-QFj_w@mail.gmail.com\n[20] https://www.postgresql.org/message-id/2107841.1594403217@sss.pgh.pa.us\n[21] https://www.postgresql.org/message-id/20200710141714.GI12375@tamriel.snowman.net\n[22] https://www.postgresql.org/message-id/CAKFQuwa2gwLa0b%2BmQv5r5A_Q0XWsA2%3D1zQ%2BZ5m4pQprxh-aM4Q%40mail.gmail.com\n[23] https://www.postgresql.org/message-id/CAApHDvpxbHHP566rRjJWgnfS0YOxR53EZTz5LHH-jcEKvqdj4g@mail.gmail.com\n[24] https://www.postgresql.org/message-id/2463591.1594514874@sss.pgh.pa.us\n[25] https://www.postgresql.org/message-id/20200625225853.GA11137%40alvherre.pgsql\n\n\n", "msg_date": "Tue, 14 Jul 2020 02:25:31 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Default setting for enable_hashagg_disk" }, { "msg_contents": "On Tue, 2020-07-14 at 02:25 +1200, David Rowley wrote:\n> Updated summary:\n> * For hash_mem = Tomas [7], Justin [16]\n> * For hash_mem_multiplier with a default > 1.0 = DavidG [21]\n> * For hash_mem_multiplier with default = 1.0 = PeterG [15][0], Tom\n> [20][24]\n\nI am OK with these options, 
but I still prefer a simple escape hatch.\n\n> * Maybe do nothing until we see how things go during beta = Bruce\n> [3], Amit [10]\n> * Just let users set work_mem = Stephen [21], Alvaro [4] (Alvaro\n> changed his mind after Andres pointed out that changes other nodes in\n> the plan too [25])\n\nI am not on board with these options.\n\nRegards,\n\tJeff Davis\n\n\n\n\n", "msg_date": "Mon, 13 Jul 2020 07:51:03 -0700", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": true, "msg_subject": "Re: Default setting for enable_hashagg_disk" }, { "msg_contents": "On 2020-07-13 16:11, Tomas Vondra wrote:\n> Why is running out of disk space worse experience than running out of\n> memory?\n> \n> Sure, it'll take longer and ultimately the query fails (and if it fills\n> the device used by the WAL then it may also cause shutdown of the main\n> instance due to inability to write WAL). But that can be prevented by\n> moving the temp tablespace and/or setting the temp file limit, as\n> already mentioned.\n> \n> With OOM, if the kernel OOM killer decides to act, it may easily bring\n> down the instance too, and there are much less options to prevent that.\n\nWell, that's an interesting point. Depending on the OS setup, by \ndefault an out-of-memory condition might actually be worse if the OOM killer \nstrikes in an unfortunate way. 
That didn't happen to me in my tests, so \nthe OS must have been configured differently by default.\n\nSo maybe a lesson here is that just like we have been teaching users to \nadjust the OOM killer, we have to teach them now that setting the temp \nfile limit might become more important.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Mon, 13 Jul 2020 17:12:30 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Default setting for enable_hashagg_disk" }, { "msg_contents": "On Mon, Jul 13, 2020 at 6:13 AM Stephen Frost <sfrost@snowman.net> wrote:\n> Yes, increasing work_mem isn't unusual, at all.\n\nIt's unusual as a way of avoiding OOMs!\n\n> Eh? That's not at all what it looks like- they were getting OOM's\n> because they set work_mem to be higher than the actual amount of memory\n> they had and the Sort before the GroupAgg was actually trying to use all\n> that memory. The HashAgg ended up not needing that much memory because\n> the aggregated set wasn't actually that large. 
If anything, this shows\n> exactly what Jeff's fine work here is (hopefully) going to give us- the\n> option to plan a HashAgg in such cases, since we can accept spilling to\n> disk if we end up underestimate, or take advantage of that HashAgg\n> being entirely in memory if we overestimate.\n\nI very specifically said that it wasn't a case where something like\nhash_mem would be expected to make all the difference.\n\n> Having looked back, I'm not sure that I'm really in the minority\n> regarding the proposal to add this at this time either- there's been a\n> few different comments that it's too late for v13 and/or that we should\n> see if we actually end up with users seriously complaining about the\n> lack of a separate way to specify the memory for a given node type,\n> and/or that if we're going to do this then we should have a broader set\n> of options covering other nodes types too, all of which are positions\n> that I agree with.\n\nBy proposing to do nothing at all, you are very clearly in a small\nminority. While (for example) I might have debated the details with\nDavid Rowley a lot recently, and you couldn't exactly say that we're\nin agreement, our two positions are nevertheless relatively close\ntogether.\n\nAFAICT, the only other person that has argued that we should do\nnothing (have no new GUC) is Bruce, which was a while ago now. 
(Amit\nsaid something similar, but has since softened his opinion [1]).\n\n[1] https://postgr.es/m/CAA4eK1+KMSQuOq5Gsj-g-pYec_8zgGb4K=xRznbCccnaumFqSA@mail.gmail.com\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Mon, 13 Jul 2020 09:20:32 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: Default setting for enable_hashagg_disk" }, { "msg_contents": "On 2020-Jul-13, Jeff Davis wrote:\n\n> On Tue, 2020-07-14 at 02:25 +1200, David Rowley wrote:\n> > Updated summary:\n> > * For hash_mem = Tomas [7], Justin [16]\n> > * For hash_mem_multiplier with a default > 1.0 = DavidG [21]\n> > * For hash_mem_multiplier with default = 1.0 = PeterG [15][0], Tom\n> > [20][24]\n> \n> I am OK with these options, but I still prefer a simple escape hatch.\n\nI'm in favor of hash_mem_multiplier. 
I think a >1 default is more\n> sensible than =1 in the long run, but if strategic vote is what we're\n> doing, then I support the =1 option.\n\nFWIW, I also think that we'll eventually end up with >1 default.\nBut the evidence to support that isn't really there yet, so\nI'm good with 1.0 default to start with.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 13 Jul 2020 14:38:23 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Default setting for enable_hashagg_disk" }, { "msg_contents": "On Mon, Jul 13, 2020 at 12:47:36PM -0400, Alvaro Herrera wrote:\n> On 2020-Jul-13, Jeff Davis wrote:\n> \n> > On Tue, 2020-07-14 at 02:25 +1200, David Rowley wrote:\n> > > Updated summary:\n> > > * For hash_mem = Tomas [7], Justin [16]\n> > > * For hash_mem_multiplier with a default > 1.0 = DavidG [21]\n> > > * For hash_mem_multiplier with default = 1.0 = PeterG [15][0], Tom\n> > > [20][24]\n> > \n> > I am OK with these options, but I still prefer a simple escape hatch.\n> \n> I'm in favor of hash_mem_multiplier. 
I think a >1 default is more\n> sensible than =1 in the long run, but if strategic vote is what we're\n> doing, then I support the =1 option.\n\nI recanted and support hash_mem_multiplier (or something supporting that\nbehavior, even if it also supports an absolute/scalar value).\nhttps://www.postgresql.org/message-id/20200703145620.GK4107@telsasoft.com\n\n1.0 (or -1) is fine, possibly to be >= 1.0 in master at a later date.\n\n-- \nJustin\n\n\n", "msg_date": "Mon, 13 Jul 2020 13:43:16 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: Default setting for enable_hashagg_disk" }, { "msg_contents": "On Mon, Jul 13, 2020 at 7:25 AM David Rowley <dgrowleyml@gmail.com> wrote:\n> I think it would be good if we could try to move towards getting\n> consensus here rather than reiterating our arguments over and over.\n\n+1\n\n> Updated summary:\n> * For hash_mem = Tomas [7], Justin [16]\n> * For hash_mem_multiplier with a default > 1.0 = DavidG [21]\n> * For hash_mem_multiplier with default = 1.0 = PeterG [15][0], Tom [20][24]\n> * hash_mem out of scope for PG13 = Bruce [8], Andres [9]\n> * hashagg_mem default to -1 meaning use work_mem = DavidR [23] (2nd preference)\n> * Escape hatch that can be removed later when we get something better\n> = Jeff [11], DavidR [12], Pavel [13], Andres [14], Justin [1]\n> * Add enable_hashagg_spill = Tom [2] (I'm unclear on this proposal.\n> Does it affect the planner or executor or both?) (updated opinion in\n> [20])\n> * Maybe do nothing until we see how things go during beta = Bruce [3], Amit [10]\n> * Just let users set work_mem = Stephen [21], Alvaro [4] (Alvaro\n> changed his mind after Andres pointed out that changes other nodes in\n> the plan too [25])\n> * Swap enable_hashagg for a GUC that specifies when spilling should\n> occur. 
-1 means work_mem = Robert [17], Amit [18]\n> * hash_mem does not solve the problem = Tomas [6] (changed his mind in [7])\n\nI don't think that hashagg_mem needs to be considered here, because\nyou were the only one that spoke out in favor of that idea, and it's\nyour second preference in any case (maybe Tom was in favor of such a\nthing at one point, but he clearly favors hash_mem/hash_mem_multiplier\nnow so it hardly matters). I don't think that hashagg_mem represents a\nmeaningful compromise between the escape hatch and\nhash_mem/hash_mem_multiplier in any case. (I would *prefer* the escape\nhatch to hashagg_mem, since at least the escape hatch is an \"honest\"\nescape hatch.)\n\nISTM that there are three basic approaches to resolving this open item\nthat remain:\n\n1. Do nothing.\n\n2. Add an escape hatch.\n\n3. Add hash_mem/hash_mem_multiplier.\n\nMany people (e.g., Tom, Jeff, you, Andres, myself) have clearly\nindicated that doing nothing is simply a non-starter. It's not just\nthat it doesn't get a lot of votes -- it's something that is strongly\nopposed. We can rule it out right away.\n\nThis is where it gets harder. Many of us have views that won't\neasily fit into buckets. For example, even though I myself proposed\nhash_mem/hash_mem_multiplier, I've said that I can live with the\nescape hatch. Similarly, Jeff favors the escape hatch, but has said\nthat he can live with hash_mem/hash_mem_multiplier. 
And, Andres said\nto me privately that he thinks that hash_mem could be a good idea,\neven though he opposes it now due to release management\nconsiderations.\n\nEven still, I think that it's possible to divide people into two camps\non this without grossly misrepresenting anybody.\n\nPrimarily in favor of escape hatch:\n\nJeff,\nDavidR,\nPavel,\nAndres,\nRobert ??,\nAmit ??\n\nPrimarily in favor of hash_mem/hash_mem_multiplier:\n\nPeterG,\nTom,\nAlvaro,\nTomas,\nJustin,\nDavidG,\nJonathan Katz\n\nThere are clear problems with this summary, including for example the\nfact that Robert weighed in before the hash_mem/hash_mem_multiplier\nproposal was even on the table. What he actually said about it [1]\nseems closer to hash_mem, so I feel that putting him in that bucket is\na conservative assumption on my part. Same goes for Amit, who warmed\nto the idea of hash_mem_multiplier recently. (Though I probably got\nsome detail wrong, in which case please correct me.)\n\nISTM that there is a majority of opinion in favor of\nhash_mem/hash_mem_multiplier. If you assume that I have this wrong,\nand that we're simply deadlocked, then it becomes a matter for the\nRMT. 
I strongly doubt that that changes the overall outcome, since\nthis year's RMT members happen to all be in favor of the\nhash_mem/hash_mem_multiplier proposal on an individual basis.\n\n[1] https://www.postgresql.org/message-id/CA+TgmobyV9+T-Wjx-cTPdQuRCgt1THz1mL3v1NXC4m4G-H6Rcw@mail.gmail.com\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Mon, 13 Jul 2020 11:50:10 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: Default setting for enable_hashagg_disk" }, { "msg_contents": "On Mon, Jul 13, 2020 at 11:50 AM Peter Geoghegan <pg@bowt.ie> wrote:\n\n>\n> Primarily in favor of escape hatch:\n>\n> Jeff,\n> DavidR,\n> Pavel,\n> Andres,\n> Robert ??,\n> Amit ??\n>\n>\nTo be clear, by \"escape hatch\" you mean \"add a GUC that instructs the\nPostgreSQL executor to ignore hash_mem when deciding whether to spill the\ncontents of the hash table to disk - IOW to never spill the contents of a\nhash table to disk\"? If so that seems separate from whether to add a\nhash_mem GUC to provide finer grained control - people may well want both.\n\nPrimarily in favor of hash_mem/hash_mem_multiplier:\n>\n> PeterG,\n> Tom,\n> Alvaro,\n> Tomas,\n> Justin,\n> DavidG,\n> Jonathan Katz\n>\n>\nI would prefer DavidJ as an abbreviation - my middle initial can be dropped\nwhen referring to me.\n\nDavid J.\n\nOn Mon, Jul 13, 2020 at 11:50 AM Peter Geoghegan <pg@bowt.ie> wrote:\nPrimarily in favor of escape hatch:\n\nJeff,\nDavidR,\nPavel,\nAndres,\nRobert ??,\nAmit ??\nTo be clear, by \"escape hatch\" you mean \"add a GUC that instructs the PostgreSQL executor to ignore hash_mem when deciding whether to spill the contents of the hash table to disk - IOW to never spill the contents of a hash table to disk\"?  
If so that seems separate from whether to add a hash_mem GUC to provide finer grained control - people may well want both.\n\nPrimarily in favor of hash_mem/hash_mem_multiplier:\n>\n> PeterG,\n> Tom,\n> Alvaro,\n> Tomas,\n> Justin,\n> DavidG,\n> Jonathan Katz\n>\n>\nI would prefer DavidJ as an abbreviation - my middle initial can be dropped\nwhen referring to me.\n\nDavid J.", "msg_date": "Mon, 13 Jul 2020 12:57:27 -0700", "msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Default setting for enable_hashagg_disk" }, { "msg_contents": "On Mon, Jul 13, 2020 at 12:57 PM David G. Johnston\n<david.g.johnston@gmail.com> wrote:\n> To be clear, by \"escape hatch\" you mean \"add a GUC that instructs the PostgreSQL executor to ignore hash_mem when deciding whether to spill the contents of the hash table to disk - IOW to never spill the contents of a hash table to disk\"?\n\nYes, that's what that means.\n\n> If so that seems separate from whether to add a hash_mem GUC to provide finer grained control - people may well want both.\n\nThey might want the escape hatch too, as an additional measure, but my\nassumption is that anybody in favor of the\nhash_mem/hash_mem_multiplier proposal takes that position because they\nthink that it's the principled solution. That's the kind of subtlety\nthat is bound to get lost when summarizing general sentiment at a high\nlevel. In any case no individual has seriously argued that there is a\nsimultaneous need for both -- at least not yet.\n\nThis thread is already enormous, and very hard to keep up with. I'm\ntrying to draw a line under the discussion. For my part, I have\ncompromised on the important question of the default value of\nhash_mem_multiplier -- I am writing a new version of the patch that\nmakes the default 1.0 (i.e. 
no behavioral changes by default).\n\n> I would prefer DavidJ as an abbreviation - my middle initial can be dropped when referring to me.\n\nSorry about that.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Mon, 13 Jul 2020 13:12:35 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: Default setting for enable_hashagg_disk" }, { "msg_contents": "\"David G. Johnston\" <david.g.johnston@gmail.com> writes:\n> To be clear, by \"escape hatch\" you mean \"add a GUC that instructs the\n> PostgreSQL executor to ignore hash_mem when deciding whether to spill the\n> contents of the hash table to disk - IOW to never spill the contents of a\n> hash table to disk\"? If so that seems separate from whether to add a\n> hash_mem GUC to provide finer grained control - people may well want both.\n\nIf we define the problem narrowly as \"allow people to get back exactly\nthe pre-v13 behavior\", then yeah you'd need an escape hatch of that\nsort. We should not, however, forget that the pre-v13 behavior is\npretty darn problematic. It's hard to see why anyone would really\nwant to get back exactly \"never spill even if it leads to OOM\".\n\nThe proposals for allowing a higher-than-work_mem, but not infinite,\nspill boundary seem to me to be a reasonable way to accommodate cases\nwhere the old behavior is accidentally preferable to what v13 does\nright now. Moreover, such a knob seems potentially useful in its\nown right, at least as a stopgap until we figure out how to generalize\nor remove work_mem. (Which might be a long time.)\n\nI'm not unalterably opposed to providing an escape hatch of the other\nsort, but right now I think the evidence for needing it isn't there.\nIf we get field complaints that can't be resolved with the \"raise the\nspill threshold by X\" approach, we could reconsider. 
But that approach\nseems a whole lot less brittle than \"raise the spill threshold to\ninfinity\", so I think we should start with the former type of fix.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 13 Jul 2020 16:20:28 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Default setting for enable_hashagg_disk" }, { "msg_contents": "On Tue, Jul 7, 2020 at 04:18:21PM -0400, Alvaro Herrera wrote:\n> On 2020-Jul-07, Amit Kapila wrote:\n> \n> > I don't think this is true. We seem to have introduced three new guc\n> > variables in a 9.3.3 minor release.\n> \n> Yeah, backporting GUCs is not a big deal. Sure, the GUC won't appear in\n> postgresql.conf files generated by initdb prior to the release that\n> introduces it. But users that need it can just edit their .confs and\n> add the appropriate line, or just do ALTER SYSTEM after the minor\n> upgrade. For people that don't need it, it would have a reasonable\n> default (probably work_mem, so that behavior doesn't change on the minor\n> upgrade).\n\nI am creating a new thread to discuss the question raised by Alvaro of\nhow many ALTER SYSTEM settings are lost during major upgrades. Do we\nproperly document that users should migrate their postgresql.conf _and_\npostgresql.auto.conf files during major upgrades? I personally never\nthought of this until now.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n The usefulness of a cup is in its emptiness, Bruce Lee\n\n\n\n", "msg_date": "Mon, 13 Jul 2020 19:58:49 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "ALTER SYSTEM between upgrades" }, { "msg_contents": "On Mon, Jul 13, 2020 at 9:50 PM Peter Geoghegan <pg@bowt.ie> wrote:\n>\n> On Mon, Jul 13, 2020 at 6:13 AM Stephen Frost <sfrost@snowman.net> wrote:\n> > Yes, increasing work_mem isn't unusual, at all.\n>\n> It's unusual as a way of avoiding OOMs!\n>\n> > Eh? 
That's not at all what it looks like- they were getting OOM's\n> > because they set work_mem to be higher than the actual amount of memory\n> > they had and the Sort before the GroupAgg was actually trying to use all\n> > that memory. The HashAgg ended up not needing that much memory because\n> > the aggregated set wasn't actually that large. If anything, this shows\n> > exactly what Jeff's fine work here is (hopefully) going to give us- the\n> > option to plan a HashAgg in such cases, since we can accept spilling to\n> > disk if we end up underestimate, or take advantage of that HashAgg\n> > being entirely in memory if we overestimate.\n>\n> I very specifically said that it wasn't a case where something like\n> hash_mem would be expected to make all the difference.\n>\n> > Having looked back, I'm not sure that I'm really in the minority\n> > regarding the proposal to add this at this time either- there's been a\n> > few different comments that it's too late for v13 and/or that we should\n> > see if we actually end up with users seriously complaining about the\n> > lack of a separate way to specify the memory for a given node type,\n> > and/or that if we're going to do this then we should have a broader set\n> > of options covering other nodes types too, all of which are positions\n> > that I agree with.\n>\n> By proposing to do nothing at all, you are very clearly in a small\n> minority. While (for example) I might have debated the details with\n> David Rowley a lot recently, and you couldn't exactly say that we're\n> in agreement, our two positions are nevertheless relatively close\n> together.\n>\n> AFAICT, the only other person that has argued that we should do\n> nothing (have no new GUC) is Bruce, which was a while ago now. (Amit\n> said something similar, but has since softened his opinion [1]).\n>\n\nTo be clear, my vote for PG13 is not to do anything till we have clear\nevidence of regressions. 
In the email you quoted, I was trying to say\nthat due to parallelism we might not have the problem for which we are\nplanning to provide an escape-hatch or hash_mem GUC. I think the\nreason for the delay in getting to the agreement is that there is no\nclear evidence for the problem (user-reported cases or results of some\nbenchmarks like TPC-H) unless I have missed something.\n\nHaving said that, I understand that we have to reach some conclusion\nto close this open item and if the majority of people are in-favor of\nescape-hatch or hash_mem solution then we have to do one of those.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 14 Jul 2020 08:08:14 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Default setting for enable_hashagg_disk" }, { "msg_contents": "> On 14 Jul 2020, at 01:58, Bruce Momjian <bruce@momjian.us> wrote:\n\n> I am creating a new thread to discuss the question raised by Alvaro of\n> how many ALTER SYSTEM settings are lost during major upgrades. Do we\n> properly document that users should migrate their postgresql.conf _and_\n> postgresql.auto.conf files during major upgrades? I personally never\n> thought of this until now.\n\nTransferring postgresql.conf is discussed to some degree in the documentation\nfor pg_upgrade:\n\n 11. Restore pg_hba.conf\n\tIf you modified pg_hba.conf, restore its original settings. It might\n\talso be necessary to adjust other configuration files in the new\n\tcluster to match the old cluster, e.g. postgresql.conf.\n\n.. as well as upgrading via pg_dumpall:\n\n 4. Restore your previous pg_hba.conf and any postgresql.conf\n modifications.\n\nOne can argue whether those bulletpoints are sufficient for stressing the\nimportance, but it's at least mentioned. 
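As a purely illustrative aside on the restore steps quoted above: carrying configuration over after a major upgrade amounts to copying a handful of files from the old data directory into the new one. The helper and file list below are assumptions for the example, not a supported tool, and postgresql.auto.conf (written by ALTER SYSTEM) belongs on any such list alongside postgresql.conf and pg_hba.conf precisely because it is easy to forget.

```c
#include <assert.h>
#include <stdio.h>

/* Files worth carrying over after a major upgrade (illustrative list).
 * postgresql.auto.conf holds ALTER SYSTEM settings and is easy to forget. */
static const char *config_files[] = {
    "postgresql.conf",
    "postgresql.auto.conf",
    "pg_hba.conf",
};

/* Copy one file byte-for-byte: returns 0 on success, -1 if the source
 * cannot be read (e.g. the old cluster never used ALTER SYSTEM, so
 * postgresql.auto.conf does not exist and can simply be skipped). */
static int
copy_config_file(const char *oldpath, const char *newpath)
{
    FILE *in = fopen(oldpath, "rb");
    FILE *out;
    int c;

    if (in == NULL)
        return -1;
    out = fopen(newpath, "wb");
    if (out == NULL)
    {
        fclose(in);
        return -1;
    }
    while ((c = fgetc(in)) != EOF)
        fputc(c, out);
    fclose(out);
    fclose(in);
    return 0;
}
```

Any files pulled in through include directives in postgresql.conf would need the same treatment.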
There is however no mention of\npostgresql.auto.conf which clearly isn't helping anyone, so we should fix that.\n\nTaking that a step further, maybe we should mention additional config files\nwhich could be included via include directives? There are tools out there who\navoid changing the users postgresql.conf by injecting an include directive\ninstead; they might've placed the included file alongside postgresql.conf.\n\ncheers ./daniel\n\n", "msg_date": "Tue, 14 Jul 2020 12:52:23 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: ALTER SYSTEM between upgrades" }, { "msg_contents": "On Mon, Jul 13, 2020 at 2:50 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> Primarily in favor of escape hatch:\n>\n> Jeff,\n> DavidR,\n> Pavel,\n> Andres,\n> Robert ??,\n> Amit ??\n>\n> Primarily in favor of hash_mem/hash_mem_multiplier:\n>\n> PeterG,\n> Tom,\n> Alvaro,\n> Tomas,\n> Justin,\n> DavidG,\n> Jonathan Katz\n>\n> There are clear problems with this summary, including for example the\n> fact that Robert weighed in before the hash_mem/hash_mem_multiplier\n> proposal was even on the table. What he actually said about it [1]\n> seems closer to hash_mem, so I feel that putting him in that bucket is\n> a conservative assumption on my part. Same goes for Amit, who warmed\n> to the idea of hash_mem_multiplier recently. (Though I probably got\n> some detail wrong, in which case please correct me.)\n\nMy view is:\n\n- I thought the problem we were trying to solve here was that, in v12,\nif the planner thinks that your hashagg will fit in memory when really\nit doesn't, you will get good performance because we'll cheat; in v13,\nyou'll get VERY bad performance because we won't.\n\n- So, if hash_mem_multiplier affects both planning and execution, it\ndoesn't really solve the problem. Neither does adjusting the existing\nwork_mem setting. Imagine that you have two queries. 
The planner\nthinks Q1 will use 1GB of memory for a HashAgg but it will actually\nneed 2GB. It thinks Q2 will use 1.5GB for a HashAgg but it will\nactually need 3GB. If you plan using a 1GB memory limit, Q1 will pick\na HashAgg and perform terribly when it spills. Q2 will pick a\nGroupAggregate which will be OK but not great. If you plan with a 2GB\nmemory limit, Q1 will pick a HashAgg and will not spill so now it will\nbe in great shape. But Q2 will pick a HashAgg and then spill so it\nwill stink. Oops.\n\n- An escape hatch that prevents spilling at execution time *does*\nsolve this problem, but now suppose we add a Q3 which the planner\nthinks will use 512MB of memory but at execution time it will actually\nconsume 512GB due to the row count estimate being 1024x off. So if you\nenable the escape hatch to get back to a situation where Q1 and Q2\nboth perform acceptably, then Q3 makes your system OOM.\n\n- If you were to instead introduce a GUC like what I proposed before,\nwhich allows the execution-time memory usage to exceed what was\nplanned, but only by a certain margin, then you can set\nhash_mem_execution_overrun_multiplier_thingy=2.5 and call it a day.\nNow, no matter how you set work_mem, you're fine. Depending on the\nvalue you choose for work_mem, you may get group aggregates for some\nof the queries. If you set it large enough that you get hash\naggregates, then Q1 and Q2 will avoid spilling (which works but is\nslow) because the overrun is less than 2x. Q3 will spill, so you won't\nOOM. Wahoo!\n\n- I do agree in general that it makes more sense to allow\nhash_work_mem > sort_work_mem, and even to make that the default.\nAllowing the same budget for both is unreasonable, because I think we\nhave good evidence that inadequate memory has a severe impact on\nhashing operations but usually only a fairly mild effect on sorting\noperations, except in the case where the underrun is severe. 
That is,\nif you need 1GB of memory for a sort and you only get 768MB, the\nslowdown is much much less severe than if the same thing happens for a\nhash. If you have 10MB of memory, both are going to suck, but that's\nkinda unavoidable.\n\n- If you hold my feet to the fire and ask me to choose between a\nBoolean escape hatch (rather than a multiplier-based one) and\nhash_mem_multiplier, gosh, I don't know. I guess the Boolean escape\nhatch? I mean it's a pretty bad solution, but at least if I have that\nI can get both Q1 and Q2 to perform well at the same time, and I guess\nI'm no worse off than I was in v12. The hash_mem_multiplier thing,\nassuming it affects both planning and execution, seems like a very\ngood idea in general, but I guess I don't see how it helps with this\nproblem.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Tue, 14 Jul 2020 15:46:44 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Default setting for enable_hashagg_disk" }, { "msg_contents": "On Tue, Jul 14, 2020 at 12:46 PM Robert Haas <robertmhaas@gmail.com> wrote:\n> - I thought the problem we were trying to solve here was that, in v12,\n> if the planner thinks that your hashagg will fit in memory when really\n> it doesn't, you will get good performance because we'll cheat; in v13,\n> you'll get VERY bad performance because we won't.\n\nThat is the problem we started out with. I propose to solve a broader\nproblem that I believe mostly encompasses the original problem (it's\nan \"inventor's paradox\" situation). Although the exact degree to which\nit truly addresses the original problem will vary across\ninstallations, I believe that it will go a very long way towards\ncutting down on problems for users upgrading to Postgres 13 generally.\n\n> - So, if hash_mem_multiplier affects both planning and execution, it\n> doesn't really solve the problem. 
Neither does adjusting the existing\n> work_mem setting. Imagine that you have two queries. The planner\n> thinks Q1 will use 1GB of memory for a HashAgg but it will actually\n> need 2GB. It thinks Q2 will use 1.5GB for a HashAgg but it will\n> actually need 3GB. If you plan using a 1GB memory limit, Q1 will pick\n> a HashAgg and perform terribly when it spills. Q2 will pick a\n> GroupAggregate which will be OK but not great. If you plan with a 2GB\n> memory limit, Q1 will pick a HashAgg and will not spill so now it will\n> be in great shape. But Q2 will pick a HashAgg and then spill so it\n> will stink. Oops.\n\nMaybe I missed your point here. The problem is not so much that we'll\nget HashAggs that spill -- there is nothing intrinsically wrong with\nthat. While it's true that the I/O pattern is not as sequential as a\nsimilar group agg + sort, that doesn't seem like the really important\nfactor here. The really important factor is that in-memory HashAggs\ncan be blazingly fast relative to *any* alternative strategy -- be it\na HashAgg that spills, or a group aggregate + sort that doesn't spill,\nwhatever. We're mostly concerned about keeping the one available fast\nstrategy than we are about getting a new, generally slow strategy.\n\nThere will be no problems at all unless and until we're short on\nmemory, because you can just increase work_mem and everything works\nout, regardless of the details. Obviously the general problems we\nanticipate only crop up when increasing work_mem stops being a viable\nDBA strategy.\n\nBy teaching the system to have at least a crude appreciation of the\nvalue of memory when hashing vs when sorting, the system is often able\nto give much more memory to Hash aggs (and hash joins). 
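As an illustrative aside (invented numbers, not code from any patch), the point about giving hash-based nodes more memory reduces to a small fit check: a hash table that busts a plain work_mem budget can still be expected to stay in memory once the budget is scaled by a multiplier.

```c
#include <assert.h>
#include <stdbool.h>

/* Illustrative fit check: is a hash table with the given estimated
 * size expected to stay in memory, given that hash-based nodes get
 * work_mem scaled by hash_mem_multiplier? (All sizes in kilobytes;
 * the names are borrowed from the proposal, the logic is invented
 * for this example.) */
static bool
hash_table_fits(double est_size_kb, int work_mem_kb,
                double hash_mem_multiplier)
{
    double hash_budget_kb = (double) work_mem_kb * hash_mem_multiplier;

    return est_size_kb <= hash_budget_kb;
}
```

With work_mem at 1GB (1048576 kB), a 1.5GB hash table is expected to spill at a multiplier of 1.0 but stays in memory at 2.0, while sorts remain capped at the plain 1GB budget.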
Increasing\nhash_mem_multiplier (maybe while also decreasing work_mem) will be\nbeneficial when we take memory from things that don't really need so\nmuch, like sorts (or even CTE tuplestores) -- we reduce the memory\npressure without paying a commensurate price in system throughput\n(maybe even only a very small hit). As a bonus, everything going\nfaster may actually *reduce* the memory usage for the system as a\nwhole, even as individual queries use more memory.\n\nUnder this scheme, it may well not matter that you cannot cheat\n(Postgres 12 style) anymore, because you'll be able to use the memory\nthat is available sensibly -- regardless of whether or not the group\nestimates are very good (we have to have more than zero faith in the\nestimates -- they can be bad without being terrible). Maybe no amount\nof tuning can ever restore the desirable Postgres 12 performance\ncharacteristics you came to rely on, but remaining \"regressions\" are\nprobably cases where the user was flying pretty close to the sun\nOOM-wise all along. They may have been happy with Postgres 12, but at\na certain point that really is something that you have to view as a\nfool's paradise, even if like me you happen to be a memory Keynesian.\n\nReally big outliers tend to be rare and therefore something that the\nuser can afford to have go slower. It's the constant steady stream of\nmedium-sized hash aggs that we mostly need to worry about. To the\nextent that that's true, hash_mem_multiplier addresses the problem on\nthe table.\n\n> - An escape hatch that prevents spilling at execution time *does*\n> solve this problem, but now suppose we add a Q3 which the planner\n> thinks will use 512MB of memory but at execution time it will actually\n> consume 512GB due to the row count estimate being 1024x off. So if you\n> enable the escape hatch to get back to a situation where Q1 and Q2\n> both perform acceptably, then Q3 makes your system OOM.\n\nRight. 
Nothing stops these two things from being true at the same time.\n\n> - If you were to instead introduce a GUC like what I proposed before,\n> which allows the execution-time memory usage to exceed what was\n> planned, but only by a certain margin, then you can set\n> hash_mem_execution_overrun_multiplier_thingy=2.5 and call it a day.\n> Now, no matter how you set work_mem, you're fine. Depending on the\n> value you choose for work_mem, you may get group aggregates for some\n> of the queries. If you set it large enough that you get hash\n> aggregates, then Q1 and Q2 will avoid spilling (which works but is\n> slow) because the overrun is less than 2x. Q3 will spill, so you won't\n> OOM. Wahoo!\n\nBut we'll have to live with that kludge for a long time, and haven't\nnecessarily avoided any risk compared to the hash_mem_multiplier\nalternative. I think that having a shadow memory limit for the\nexecutor is pretty ugly.\n\nI'm trying to come up with a setting that can sensibly be tuned at the\nsystem level. Not an escape hatch, which seems worth avoiding.\nAdmittedly, this is not without its downsides.\n\n> - If you hold my feet to the fire and ask me to choose between a\n> Boolean escape hatch (rather than a multiplier-based one) and\n> hash_mem_multiplier, gosh, I don't know. I guess the Boolean escape\n> hatch? I mean it's a pretty bad solution, but at least if I have that\n> I can get both Q1 and Q2 to perform well at the same time, and I guess\n> I'm no worse off than I was in v12.\n\nFortunately you don't have to choose. Doing both together might make\nsense, to cover any remaining user apps that still experience problems\nafter tuning hash_mem_multiplier. 
We can take a wait and see approach\nto this, as Tom suggested recently.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Tue, 14 Jul 2020 15:49:40 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: Default setting for enable_hashagg_disk" }, { "msg_contents": "On Mon, Jul 13, 2020 at 9:47 AM Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\n> I'm in favor of hash_mem_multiplier. I think a >1 default is more\n> sensible than =1 in the long run, but if strategic vote is what we're\n> doing, then I support the =1 option.\n\nAttached is a WIP patch implementing hash_mem_multiplier, with 1.0 as\nthe GUC's default value (i.e. the patch introduces no behavioral\nchanges by default). The first patch in the series renames some local\nvariables whose name is made ambiguous by the second, main patch.\n\nSince the patch doesn't add a new work_mem-style GUC, but existing\nconsumers of work_mem expect something like that, the code is\nstructured in a way that allows the planner and executor to pretend\nthat there really is a work_mem-style GUC called hash_mem, which they\ncan determine the value of by calling the get_hash_mem() function.\nThis seemed like the simplest approach overall. I placed the\nget_hash_mem() function in nodeHash.c, which is a pretty random place\nfor it. If anybody has any better ideas about where it should live,\nplease say so.\n\nISTM that the planner changes are where there's most likely to be\nproblems. Reviewers should examine consider_groupingsets_paths() in\ndetail.\n\n--\nPeter Geoghegan", "msg_date": "Tue, 14 Jul 2020 21:12:35 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: Default setting for enable_hashagg_disk" }, { "msg_contents": "On Tue, 2020-07-14 at 21:12 -0700, Peter Geoghegan wrote:\n> Attached is a WIP patch implementing hash_mem_multiplier, with 1.0 as\n> the GUC's default value (i.e. 
The first patch in the series renames some local\n> variables whose name is made ambiguous by the second, main patch.\n\nThe idea is growing on me a bit. It doesn't give exactly v12 behavior,\nbut it does offer another lever that might tackle a lot of the\npractical cases. If I were making the decision alone, I'd still choose\nthe escape hatch based on simplicity, but I'm fine with this approach\nas well.\n\nThe patch itself looks reasonable to me. I don't see a lot of obvious\ndangers, but perhaps someone would like to take a closer look at the\nplanner changes as you suggest.\n\nRegards,\n\tJeff Davis\n\n\n\n\n", "msg_date": "Fri, 17 Jul 2020 17:13:15 -0700", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": true, "msg_subject": "Re: Default setting for enable_hashagg_disk" }, { "msg_contents": "On Fri, Jul 17, 2020 at 5:13 PM Jeff Davis <pgsql@j-davis.com> wrote:\n> The idea is growing on me a bit. It doesn't give exactly v12 behavior,\n> but it does offer another lever that might tackle a lot of the\n> practical cases.\n\nCool.\n\n> If I were making the decision alone, I'd still choose\n> the escape hatch based on simplicity, but I'm fine with this approach\n> as well.\n\nThere is also the separate question of what to do about the\nhashagg_avoid_disk_plan GUC (this is a separate open item that\nrequires a separate resolution). Tom leans slightly towards removing\nit now. Is your position about the same as before?\n\n> The patch itself looks reasonable to me. I don't see a lot of obvious\n> dangers, but perhaps someone would like to take a closer look at the\n> planner changes as you suggest.\n\nIt would be good to get further input on the patch from somebody else,\nparticularly the planner aspects.\n\nMy intention is to commit the patch myself. I was the primary advocate\nfor hash_mem_multiplier, so it seems as if I should own it. 
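As an aside for readers skimming the WIP patch description a few messages up: the get_hash_mem() helper it mentions is not shown in the thread, but its essence can be sketched in a few lines of C. The variable declarations and exact signature are assumptions for illustration; in the actual patch the GUCs live in the server's GUC machinery, with work_mem expressed in kilobytes.

```c
#include <assert.h>
#include <stddef.h>

/* Stand-ins for the real GUC variables (work_mem is in kilobytes). */
static int work_mem = 4096;              /* 4MB */
static double hash_mem_multiplier = 1.0; /* default: no behavior change */

/*
 * Sketch of a get_hash_mem()-style helper: hash-based planner and
 * executor code asks this function for its memory budget instead of
 * reading work_mem directly, so the rest of the system can act as
 * though a separate work_mem-style "hash_mem" GUC existed.
 */
static size_t
get_hash_mem(void)
{
    double hash_mem = (double) work_mem * hash_mem_multiplier;

    return (size_t) hash_mem;   /* still expressed in kilobytes */
}
```

Raising hash_mem_multiplier above 1.0 then widens the budget for hash aggregates and hash joins without touching what sorts receive.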
(You may\nhave noticed that I just pushed the preparatory\nlocal-variable-renaming patch, to get that piece out of the way.)\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Fri, 17 Jul 2020 18:38:02 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: Default setting for enable_hashagg_disk" }, { "msg_contents": "On Fri, 2020-07-17 at 18:38 -0700, Peter Geoghegan wrote:\n> There is also the separate question of what to do about the\n> hashagg_avoid_disk_plan GUC (this is a separate open item that\n> requires a separate resolution). Tom leans slightly towards removing\n> it now. Is your position about the same as before?\n\nYes, I think we should have that GUC (hashagg_avoid_disk_plan) for at\nleast one release.\n\nClearly, a lot of plans will change. For any GROUP BY where there are a\nlot of groups, there was only one choice in v12 and now there are two\nchoices in v13. Obviously I think most of those changes will be for the\nbetter, but some regressions are bound to happen. Giving users some\ntime to adjust, and for us to tune the cost model based on user\nfeedback, seems prudent.\n\nAre there other examples of widespread changes in plans where we\n*didn't* have a GUC? There are many GUCs for controlling parallelism,\nJIT, etc.\n\nRegards,\n\tJeff Davis\n\n\n\n\n", "msg_date": "Sat, 18 Jul 2020 11:16:26 -0700", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": true, "msg_subject": "Re: Default setting for enable_hashagg_disk" }, { "msg_contents": "Jeff Davis <pgsql@j-davis.com> writes:\n> On Fri, 2020-07-17 at 18:38 -0700, Peter Geoghegan wrote:\n>> There is also the separate question of what to do about the\n>> hashagg_avoid_disk_plan GUC (this is a separate open item that\n>> requires a separate resolution). Tom leans slightly towards removing\n>> it now. 
Is your position about the same as before?\n\n> Yes, I think we should have that GUC (hashagg_avoid_disk_plan) for at\n> least one release.\n\nYou're being optimistic about it being possible to remove a GUC once\nwe ship it. That seems to be a hard sell most of the time.\n\nI'm honestly a bit baffled about the level of fear being expressed\naround this feature. We have *frequently* made changes that would\nchange query plans, perhaps not 100.00% for the better, and never\nbefore have we had this kind of bikeshedding about whether it was\nnecessary to be able to turn it off. I think the entire discussion\nis way out ahead of any field evidence that we need such a knob.\nIn the absence of evidence, our default position ought to be to\nkeep it simple, not to accumulate backwards-compatibility kluges.\n\n(The only reason I'm in favor of heap_mem[_multiplier] is that it\nseems like it might be possible to use it to get *better* plans\nthan before. I do not see it as a backwards-compatibility knob.)\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 18 Jul 2020 14:30:43 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Default setting for enable_hashagg_disk" }, { "msg_contents": "On Sat, Jul 18, 2020 at 11:30 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Jeff Davis <pgsql@j-davis.com> writes:\n> > Yes, I think we should have that GUC (hashagg_avoid_disk_plan) for at\n> > least one release.\n>\n> You're being optimistic about it being possible to remove a GUC once\n> we ship it. That seems to be a hard sell most of the time.\n\nYou've said that you're +0.5 on removing this GUC, while Jeff seems to\nbe about -0.5 (at least that's my take). It's hard to see a way\ntowards closing out the hashagg_avoid_disk_plan open item if that's\nour starting point.\n\nThe \"do we need to keep hashagg_avoid_disk_plan?\" question is\nfundamentally a value judgement IMV. I believe that you both\nunderstand each other's perspectives. 
I also suspect that no pragmatic\ncompromise will be possible -- we can either have the\nhashagg_avoid_disk_plan GUC or not have it. ISTM that we're\ndeadlocked, at least in a technical or procedural sense.\n\nDoes that understanding seem accurate to you both?\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Sat, 18 Jul 2020 14:17:37 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: Default setting for enable_hashagg_disk" }, { "msg_contents": "On Sat, 2020-07-18 at 14:30 -0400, Tom Lane wrote:\n> You're being optimistic about it being possible to remove a GUC once\n> we ship it. That seems to be a hard sell most of the time.\n\nIf nothing else, a repeat of this thread in a year or two to discuss\nremoving a GUC doesn't seem appealing.\n\n> I think the entire discussion\n> is way out ahead of any field evidence that we need such a knob.\n> In the absence of evidence, our default position ought to be to\n> keep it simple, not to accumulate backwards-compatibility kluges.\n\nFair enough. I think that was where Stephen and Amit were coming from,\nas well.\n\nWhat is your opinion about pessimizing the HashAgg disk costs (not\naffecting HashAgg plans expected to stay in memory)? Tomas Vondra\npresented some evidence that Sort had some better IO patterns in some\ncases that weren't easily reflected in a principled way in the cost\nmodel.\n\nThat would lessen the number of changed plans, but we could easily\nremove the pessimization without controversy later if it turned out to\nbe unnecessary, or if we further optimize HashAgg IO.\n\nRegards,\n\tJeff Davis\n\n\n\n\n", "msg_date": "Sat, 18 Jul 2020 15:04:52 -0700", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": true, "msg_subject": "Re: Default setting for enable_hashagg_disk" }, { "msg_contents": "Jeff Davis <pgsql@j-davis.com> writes:\n> What is your opinion about pessimizing the HashAgg disk costs (not\n> affecting HashAgg plans expected to stay in memory)? 
Tomas Vondra\n> presented some evidence that Sort had some better IO patterns in some\n> cases that weren't easily reflected in a principled way in the cost\n> model.\n\nHm, was that in some other thread? I didn't find any such info\nin a quick look through this one.\n\n> That would lessen the number of changed plans, but we could easily\n> remove the pessimization without controversy later if it turned out to\n> be unnecessary, or if we further optimize HashAgg IO.\n\nTrying to improve our cost models under-the-hood seems like a\nperfectly reasonable activity to me.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 18 Jul 2020 21:15:33 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Default setting for enable_hashagg_disk" }, { "msg_contents": "Greetings,\n\n* Tom Lane (tgl@sss.pgh.pa.us) wrote:\n> Jeff Davis <pgsql@j-davis.com> writes:\n> > On Fri, 2020-07-17 at 18:38 -0700, Peter Geoghegan wrote:\n> >> There is also the separate question of what to do about the\n> >> hashagg_avoid_disk_plan GUC (this is a separate open item that\n> >> requires a separate resolution). Tom leans slightly towards removing\n> >> it now. Is your position about the same as before?\n> \n> > Yes, I think we should have that GUC (hashagg_avoid_disk_plan) for at\n> > least one release.\n> \n> You're being optimistic about it being possible to remove a GUC once\n> we ship it. That seems to be a hard sell most of the time.\n\nAgreed.\n\n> I'm honestly a bit baffled about the level of fear being expressed\n> around this feature. We have *frequently* made changes that would\n> change query plans, perhaps not 100.00% for the better, and never\n> before have we had this kind of bikeshedding about whether it was\n> necessary to be able to turn it off. 
I think the entire discussion\n> is way out ahead of any field evidence that we need such a knob.\n> In the absence of evidence, our default position ought to be to\n> keep it simple, not to accumulate backwards-compatibility kluges.\n\n+100\n\n> (The only reason I'm in favor of heap_mem[_multiplier] is that it\n> seems like it might be possible to use it to get *better* plans\n> than before. I do not see it as a backwards-compatibility knob.)\n\nI still don't think a hash_mem-type thing is really the right direction\nto go in, even if making a distinction between memory used for sorting\nand memory used for hashing is, and I'm of the general opinion that we'd\nbe thinking about doing something better and more appropriate- except\nfor the fact that we're talking about adding this in during beta.\n\nIn other words, if we'd stop trying to shoehorn something in, which\nwe're doing because we're in beta, we'd very likely be talking about all\nof this in a very different way and probably be contemplating something\nlike a query_mem that provides for an overall memory limit and which\nfavors memory for hashing over memory for sorting, etc.\n\nThanks,\n\nStephen", "msg_date": "Sun, 19 Jul 2020 07:38:37 -0400", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: Default setting for enable_hashagg_disk" }, { "msg_contents": "Stephen Frost <sfrost@snowman.net> writes:\n> In other words, if we'd stop trying to shoehorn something in, which\n> we're doing because we're in beta, we'd very likely be talking about all\n> of this in a very different way and probably be contemplating something\n> like a query_mem that provides for an overall memory limit and which\n> favors memory for hashing over memory for sorting, etc.\n\nEven if we were at the start of the dev cycle rather than its end,\nI'm not sure I agree. Yes, replacing work_mem with some more-holistic\napproach would be great. 
But that's a research project, one that\nwe can't be sure will yield fruit on any particular schedule. (Seeing\nthat we've understood this to be a problem for *decades*, I would tend\nto bet on a longer not shorter time frame for a solution.)\n\nI think that if we are worried about hashagg-spill behavior in the near\nterm, we have to have some fix that's not conditional on solving that\nvery large problem. The only other practical alternative is \"do\nnothing for v13\", and I agree with the camp that doesn't like that.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sun, 19 Jul 2020 10:43:49 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Default setting for enable_hashagg_disk" }, { "msg_contents": "On Sun, Jul 19, 2020 at 4:38 AM Stephen Frost <sfrost@snowman.net> wrote:\n\n> > (The only reason I'm in favor of heap_mem[_multiplier] is that it\n> > seems like it might be possible to use it to get *better* plans\n> > than before. I do not see it as a backwards-compatibility knob.)\n>\n> I still don't think a hash_mem-type thing is really the right direction\n> to go in, even if making a distinction between memory used for sorting\n> and memory used for hashing is, and I'm of the general opinion that we'd\n> be thinking about doing something better and more appropriate- except\n> for the fact that we're talking about adding this in during beta.\n>\n> In other words, if we'd stop trying to shoehorn something in, which\n> we're doing because we're in beta, we'd very likely be talking about all\n> of this in a very different way and probably be contemplating something\n> like a query_mem that provides for an overall memory limit and which\n> favors memory for hashing over memory for sorting, etc.\n>\n\nAt minimum we'd need a patch we would be happy with dropping in should\nthere be user complaints. 
And once this conversation ends with that in\nhand I have my doubts whether there will be interest, or even project\ndesirability, in working toward a \"better\" solution should this one prove\nitself \"good enough\".  And as it seems unlikely that this patch would\nforeclose on other promising solutions, combined with there being a\nnon-trivial behavioral change that we've made, I think we might\nas well just deploy whatever short-term solution we come up with now.\n\nAs for hashagg_avoid_disk_plan...\n\nThe physical processes we are modelling here:\n1. Processing D amount of records takes M amount of memory\n2. Processing D amount of records in-memory takes T time per record while\ndoing the same on-disk takes V time per record\n3. Processing D amount of records via some other plan has an effective cost\nU\n4. V >> T (is strictly greater than)\n5. Having chosen a value for M that ensures T it is still possible for V to\nend up used\n\nThus:\n\nIf we get D wrong the user can still tweak the system by changing the\nhash_mem_multiplier (this is strictly better than v12 which used work_mem)\n\nSetting hashagg_avoid_disk_plan = off provides a means to move V infinitely\nfar away from T (set to on by default, off reverts to v12 behavior).\n\nThere is no way for the user to move V's relative position toward T (n/a in\nv12)\n\nThe only way to move T is to make it infinitely large by setting\nenable_hashagg = off (same as in v12)\n\nIs hashagg_disk_cost_multiplier = [0.0, 1,000,000,000.0] i.e., (T *\nhashagg_disk_cost_multiplier == V) doable?\n\nIt has a nice symmetry with hash_mem_multiplier and can move V both toward\nand away from T.  To the extent T is tunable or not in v12 it can remain\nthe same in v13.\n\nDavid J.\n", "msg_date": "Sun, 19 Jul 2020 09:23:41 -0700", "msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Default setting for enable_hashagg_disk" }, { "msg_contents": "On Sat, 2020-07-18 at 21:15 -0400, Tom Lane wrote:\n> Jeff Davis <pgsql@j-davis.com> writes:\n> > What is your opinion about pessimizing the HashAgg disk costs (not\n> > affecting HashAgg plans expected to stay in memory)? 
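A toy rendering of the hashagg_disk_cost_multiplier idea floated upthread, tying the spill cost V to the in-memory cost T. Everything here — the function, the constants, and the GUC semantics — is an illustrative assumption, not planner code:

```python
# Toy model of the proposed cost shape: T = per-record cost in memory,
# V = per-record cost once spilling, with V tied to T by a hypothetical
# hashagg_disk_cost_multiplier GUC (T * multiplier == V).  Groups and
# records are conflated here for brevity; all values are illustrative.

def hashagg_cost(records, work_mem_bytes, bytes_per_group,
                 t_cost=0.01, hashagg_disk_cost_multiplier=2.0):
    groups_in_memory = work_mem_bytes // bytes_per_group
    in_mem = min(records, groups_in_memory)
    spilled = records - in_mem
    v_cost = t_cost * hashagg_disk_cost_multiplier
    return in_mem * t_cost + spilled * v_cost
```

A multiplier of 1.0 reproduces a planner that never penalizes spilling; a very large value approximates hashagg_avoid_disk_plan = on.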
I didn't find any such info\n> in a quick look through this one.\n\n\nhttps://www.postgresql.org/message-id/2df2e0728d48f498b9d6954b5f9080a34535c385.camel%40j-davis.com\n\nRegards,\n\tJeff Davis\n\n\n\n\n", "msg_date": "Sun, 19 Jul 2020 14:17:15 -0700", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": true, "msg_subject": "Re: Default setting for enable_hashagg_disk" }, { "msg_contents": "On Sat, Jul 18, 2020 at 3:04 PM Jeff Davis <pgsql@j-davis.com> wrote:\n> > I think the entire discussion\n> > is way out ahead of any field evidence that we need such a knob.\n> > In the absence of evidence, our default position ought to be to\n> > keep it simple, not to accumulate backwards-compatibility kluges.\n>\n> Fair enough. I think that was where Stephen and Amit were coming from,\n> as well.\n\n> That would lessen the number of changed plans, but we could easily\n> remove the pessimization without controversy later if it turned out to\n> be unnecessary, or if we further optimize HashAgg IO.\n\nDoes this mean that we've reached a final conclusion on\nhashagg_avoid_disk_plan for Postgres 13, which is that it should be\nremoved? If so, I'd appreciate it if you took care of it. I don't\nthink that we need to delay its removal until the details of the\nHashAgg cost pessimization are finalized. (I expect that that will be\ntotally uncontroversial.)\n\nThanks\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Sun, 19 Jul 2020 14:23:11 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: Default setting for enable_hashagg_disk" }, { "msg_contents": "On Sun, Jul 19, 2020 at 02:17:15PM -0700, Jeff Davis wrote:\n>On Sat, 2020-07-18 at 21:15 -0400, Tom Lane wrote:\n>> Jeff Davis <pgsql@j-davis.com> writes:\n>> > What is your opinion about pessimizing the HashAgg disk costs (not\n>> > affecting HashAgg plans expected to stay in memory)? 
Tomas Vondra\n>> > presented some evidence that Sort had some better IO patterns in\n>> > some\n>> > cases that weren't easily reflected in a principled way in the cost\n>> > model.\n>>\n>> Hm, was that in some other thread? I didn't find any such info\n>> in a quick look through this one.\n>\n>\n>https://www.postgresql.org/message-id/2df2e0728d48f498b9d6954b5f9080a34535c385.camel%40j-davis.com\n>\n\nFWIW the two messages to look at are these two:\n\n1) report with initial data\nhttps://www.postgresql.org/message-id/20200519151202.u2p2gpiawoaznsv2%40development\n\n2) updated stats, with the block pre-allocation and tlist projection\nhttps://www.postgresql.org/message-id/20200521001255.kfaihp3afv6vy6uq%40development\n\nBut I'm not convinced we actually need to tweak the costing - we've\nended up fixing two things, and I think a lot of the differences in I/O\npatterns disappeared thanks to this.\n\nFor sort, the stats of request sizes look like this:\n\n type | bytes | count | pct\n ------+---------+-------+-------\n RA | 131072 | 26034 | 59.92\n RA | 16384 | 6160 | 14.18\n RA | 8192 | 3636 | 8.37\n RA | 32768 | 3406 | 7.84\n RA | 65536 | 3270 | 7.53\n RA | 24576 | 361 | 0.83\n ...\n W | 1310720 | 8070 | 34.26\n W | 262144 | 1213 | 5.15\n W | 524288 | 1056 | 4.48\n W | 1056768 | 689 | 2.93\n W | 786432 | 292 | 1.24\n W | 802816 | 199 | 0.84\n ...\n\nAnd for the hashagg, it looks like this:\n\n type | bytes | count | pct\n ------+---------+--------+--------\n RA | 131072 | 200816 | 70.93\n RA | 8192 | 23640 | 8.35\n RA | 16384 | 19324 | 6.83\n RA | 32768 | 19279 | 6.81\n RA | 65536 | 19273 | 6.81\n ...\n W | 1310720 | 18000 | 65.91\n W | 524288 | 2074 | 7.59\n W | 1048576 | 660 | 2.42\n W | 8192 | 409 | 1.50\n W | 786432 | 354 | 1.30\n ...\n\nso it's actually a tad better than sort, because larger proportion of\nboth reads and writes is in larger chunks (reads 128kB, writes 1280kB).\nI think the device had default read-ahead setting, which I assume\nexplains the 
128kB.\n\nFor the statistics of deltas between requests - for sort\n\n type | block_delta | count | pct\n ------+-------------+-------+-------\n RA | 256 | 13432 | 30.91\n RA | 16 | 3291 | 7.57\n RA | 32 | 3272 | 7.53\n RA | 64 | 3266 | 7.52\n RA | 128 | 2877 | 6.62\n RA | 1808 | 1278 | 2.94\n RA | -2320 | 483 | 1.11\n RA | 28928 | 386 | 0.89\n ...\n W | 2560 | 7856 | 33.35\n W | 2064 | 4921 | 20.89\n W | 2080 | 586 | 2.49\n W | 30960 | 300 | 1.27\n W | 2160 | 253 | 1.07\n W | 1024 | 248 | 1.05\n ...\n\nand for hashagg:\n\n type | block_delta | count | pct\n ------+-------------+--------+-------\n RA | 256 | 180955 | 63.91\n RA | 32 | 19274 | 6.81\n RA | 64 | 19273 | 6.81\n RA | 128 | 19264 | 6.80\n RA | 16 | 19203 | 6.78\n RA | 30480 | 9835 | 3.47\n\nAt first this might look worse than sort, but 256 sectors matches the\n128kB from the request size stats, and it's good match (64% vs. 70%).\n\n\nThere's a minor problem here, though - these stats were collected before\nwe fixed the tlist issue, so hashagg was spilling about 10x the amount\nof data compared to sort+groupagg. So maybe that's the first thing we\nshould do, before contemplating changes to the costing - collecting\nfresh data. I can do that, if needed.\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services \n\n\n", "msg_date": "Mon, 20 Jul 2020 02:48:27 +0200", "msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Default setting for enable_hashagg_disk" }, { "msg_contents": "Tomas Vondra <tomas.vondra@2ndquadrant.com> writes:\n> There's a minor problem here, though - these stats were collected before\n> we fixed the tlist issue, so hashagg was spilling about 10x the amount\n> of data compared to sort+groupagg. So maybe that's the first thing we\n> should do, before contemplating changes to the costing - collecting\n> fresh data. I can do that, if needed.\n\n+1. 
I'm not sure if we still need to do anything, but we definitely\ncan't tell on the basis of data that doesn't reliably reflect what\nthe code does now.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 20 Jul 2020 09:17:21 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Default setting for enable_hashagg_disk" }, { "msg_contents": "On Mon, Jul 20, 2020 at 09:17:21AM -0400, Tom Lane wrote:\n>Tomas Vondra <tomas.vondra@2ndquadrant.com> writes:\n>> There's a minor problem here, though - these stats were collected before\n>> we fixed the tlist issue, so hashagg was spilling about 10x the amount\n>> of data compared to sort+groupagg. So maybe that's the first thing we\n>> should do, before contemplating changes to the costing - collecting\n>> fresh data. I can do that, if needed.\n>\n>+1. I'm not sure if we still need to do anything, but we definitely\n>can't tell on the basis of data that doesn't reliably reflect what\n>the code does now.\n>\n\nOK, will do. The hardware is busy doing something else at the moment,\nbut I'll do the tests and report results in a couple days.\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services \n\n\n", "msg_date": "Mon, 20 Jul 2020 19:25:39 +0200", "msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Default setting for enable_hashagg_disk" }, { "msg_contents": "On Tue, Jul 14, 2020 at 03:49:40PM -0700, Peter Geoghegan wrote:\n> Maybe I missed your point here. The problem is not so much that we'll\n> get HashAggs that spill -- there is nothing intrinsically wrong with\n> that. While it's true that the I/O pattern is not as sequential as a\n> similar group agg + sort, that doesn't seem like the really important\n> factor here. 
The really important factor is that in-memory HashAggs\n> can be blazingly fast relative to *any* alternative strategy -- be it\n> a HashAgg that spills, or a group aggregate + sort that doesn't spill,\n> whatever. We're mostly concerned about keeping the one available fast\n> strategy than we are about getting a new, generally slow strategy.\n\nDo we have any data that in-memory HashAggs are \"blazingly fast relative\nto *any* alternative strategy?\" The data I have tested myself and what\nI saw from Tomas was that spilling sort or spilling hash are both 2.5x\nslower. Are we sure the quoted statement is true?\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n The usefulness of a cup is in its emptiness, Bruce Lee\n\n\n\n", "msg_date": "Tue, 21 Jul 2020 16:30:46 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: Default setting for enable_hashagg_disk" }, { "msg_contents": "On Tue, Jul 14, 2020 at 6:49 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> Maybe I missed your point here. The problem is not so much that we'll\n> get HashAggs that spill -- there is nothing intrinsically wrong with\n> that. While it's true that the I/O pattern is not as sequential as a\n> similar group agg + sort, that doesn't seem like the really important\n> factor here. The really important factor is that in-memory HashAggs\n> can be blazingly fast relative to *any* alternative strategy -- be it\n> a HashAgg that spills, or a group aggregate + sort that doesn't spill,\n> whatever. We're mostly concerned about keeping the one available fast\n> strategy than we are about getting a new, generally slow strategy.\n\nI don't know; it depends. Like, if the less-sequential I/O pattern\nthat is caused by a HashAgg is not really any slower than a\nSort+GroupAgg, then whatever. 
The planner might as well try a HashAgg\n- because it will be fast if it stays in memory - and if it doesn't\nwork out, we've lost little by trying. But if a Sort+GroupAgg is\nnoticeably faster than a HashAgg that ends up spilling, then there is\na potential regression. I thought we had evidence that this was a real\nproblem, but if that's not the case, then I think we're fine as far as\nv13 goes.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Wed, 22 Jul 2020 15:42:03 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Default setting for enable_hashagg_disk" }, { "msg_contents": "On Tue, Jul 21, 2020 at 1:30 PM Bruce Momjian <bruce@momjian.us> wrote:\n> On Tue, Jul 14, 2020 at 03:49:40PM -0700, Peter Geoghegan wrote:\n> > Maybe I missed your point here. The problem is not so much that we'll\n> > get HashAggs that spill -- there is nothing intrinsically wrong with\n> > that. While it's true that the I/O pattern is not as sequential as a\n> > similar group agg + sort, that doesn't seem like the really important\n> > factor here. The really important factor is that in-memory HashAggs\n> > can be blazingly fast relative to *any* alternative strategy -- be it\n> > a HashAgg that spills, or a group aggregate + sort that doesn't spill,\n> > whatever. We're mostly concerned about keeping the one available fast\n> > strategy than we are about getting a new, generally slow strategy.\n>\n> Do we have any data that in-memory HashAggs are \"blazingly fast relative\n> to *any* alternative strategy?\" The data I have tested myself and what\n> I saw from Tomas was that spilling sort or spilling hash are both 2.5x\n> slower. Are we sure the quoted statement is true?\n\nI admit that I was unclear in the remarks you quote here. I placed too\nmuch emphasis on the precise cross-over point at which a hash agg that\ndidn't spill in Postgres 12 spills now. 
That was important to Andres,\nwho was concerned about the added I/O, especially with things like\ncloud providers [1] -- it's not desirable to go from no I/O to lots of\nI/O when upgrading, regardless of how fast your disks for temp files\nare. But that was not the point I was trying to make (though it's a\ngood point, and one that I agree with).\n\nI'll take another shot at it. I'll use with Andres' test case in [1].\nSpecifically this query (I went with this example because it was\nconvenient):\n\nSELECT a, array_agg(b) FROM (SELECT generate_series(1, 10000)) a(a),\n(SELECT generate_series(1, 10000)) b(b) GROUP BY a HAVING\narray_length(array_agg(b), 1) = 0;\n\nThe planner generally prefers a hashagg here, though it's not a\nparticularly sympathetic case for hash agg. For one thing the input to\nthe sort is already sorted. For another, there isn't skew. But the\nplanner seems to have it right, at least when everything fits in\nmemory, because that takes ~17.6 seconds with a group agg + sort vs\n~13.2 seconds with an in-memory hash agg. Importantly, hash agg's peak\nmemory usage is 1443558kB (once we get to the point that no spilling\nis required), whereas for the sort we're using 7833229kB for the\nquicksort. Don't forget that in-memory hash agg is using ~5.4x less\nmemory in this case on account of the way hash agg represents things.\nIt's faster, and much much more efficient once you take a holistic\nview (i.e. something like work done per second per KB of memory).\n\nClearly the precise \"query operation spills\" cross-over point isn't\nthat relevant to query execution time (on my server with a fast nvme\nSSD), because if I give the sort 95% - 99% of the memory it needs to\nbe an in-memory quicksort then it makes a noticeable difference, but\nnot a huge difference. I get one big run and one tiny run in the\ntuplesort. 
The query itself takes ~23.4 seconds -- higher than 17.6\nseconds, but not all that much higher considering we have to write and\nread ~7GB of data. If I try to do approximately the same thing with\nhash agg (give it very slightly less than optimal memory) I find that\nthe difference is smaller -- it takes ~14.1 seconds (up from ~13.2\nseconds). It looks like my original remarks are totally wrong so far,\nbecause it's as if the performance hit is entirely explainable as the\nextra temp file I/O (right down to the fact that hash agg takes a\nsmaller hit because it has much less to write out to disk). But let's\nkeep going.\n\n= Sort vs Hash =\n\nWe'll focus on how the group agg + sort case behaves as we take memory\naway. What I notice is that it literally doesn't matter how much\nmemory I take away any more (now that the sort has started to spill).\nI said that it was ~23.4 seconds when we have two runs, but if I keep\ntaking memory away so that we get 10 runs it takes 23.2 seconds. If\nthere are 36 runs it takes 22.8 seconds. And if there are 144 runs\n(work_mem is 50MB, down from the \"optimal\" required for the sort to be\ninternal, ~7GB) then it takes 21.9 seconds. So it gets slightly\nfaster, not slower. We really don't need very much memory to do the\nsort in one pass, and it pretty much doesn't matter how many runs we\nneed to merge provided it doesn't get into the thousands, which is\nquite rare (when random I/O from multiple passes finally starts to\nbite).\n\nNow for hash agg -- this is where it gets interesting. If we give it\nabout half the memory it needs (work_mem 700MB) we still have 4\nbatches and it hardly changes -- it takes 19.8 seconds, which is\nslower than the 4 batch case that took 14.1 seconds but not that\nsurprising. 300MB still gets 4 batches which now takes ~23.5 seconds.\n200MB gets 2424 batches and takes ~27.7 seconds -- a big jump! With\n100MB it takes ~31.1 seconds (3340 batches). 50MB it's ~32.8 seconds\n(3591 batches). 
With 5MB it's ~33.8 seconds, and bizarrely has a drop\nin the number of batches to only 1028. If I put it down to 1MB it's\n~40.7 seconds and has 5604 batches (the number of batches goes up\nagain). (And yes, the planner still chooses a hash agg when work_mem\nis only 1MB.)\n\n= Observations =\n\nHash aggs that have lots of memory (which could still be somewhat less\nthan all the memory they could possibly make use of) *are*\nsignificantly faster in general, and particularly when you consider\nmemory efficiency. They tend to use less memory but be much more\nsensitive to memory availability. And, we see really sharp\ndiscontinuities at certain points as memory is taken away: weird\nbehavior around the number of batches, etc. Group agg + sort, in\ncontrast, is slower initially/with more memory but remarkably\ninsensitive to how much memory it gets, and remarkably predictable\noverall (it actually gets a bit faster with less memory here, but I\nthink it's fair to assume no change once the tuplesort can merge all\nthe runs produced in one merge pass).\n\nThe important point here for me is that a hash agg will almost always\nbenefit from more memory (until everything fits in a single batch and\nwe don't spill at all) -- even a small additional amount consistently\nmakes a difference. Whereas there is a huge \"memory availability\nrange\" for the sort where it just does not make a bit of difference.\nWe are highly incentivized to give hash agg more memory in general,\nbecause it's bound to be faster that way (and usually uses *less*\nmemory, as we see here). But it's not just faster -- it's also more\npredictable and insensitive to an underestimate of the number of\ngroupings. It can therefore ameliorate the problem we have here with\nusers depending on Postgres 12 fast in-memory hash aggregates, without\nany escape hatch kludges.\n\nI don't think that the hash_mem_multiplier patch deserves \"extra\ncredit\" for being generally useful. 
But I also don't think that it\nshould be punished for it.\n\n[1] https://postgr.es/m/20200625203629.7m6yvut7eqblgmfo@alap3.anarazel.de\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Wed, 22 Jul 2020 19:54:26 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: Default setting for enable_hashagg_disk" }, { "msg_contents": "On Fri, Jul 17, 2020 at 5:13 PM Jeff Davis <pgsql@j-davis.com> wrote:\n> The patch itself looks reasonable to me. I don't see a lot of obvious\n> dangers, but perhaps someone would like to take a closer look at the\n> planner changes as you suggest.\n\nAttached is v3 of the hash_mem_multiplier patch series, which now has\na preparatory patch that removes hashagg_avoid_disk_plan. What do you\nthink of this approach, Jeff?\n\nIt seems as if removing hashagg_avoid_disk_plan will necessitate\nremoving various old bits of planner.c that were concerned with\navoiding hash aggs that spill (the bits that hashagg_avoid_disk_plan\nskips in the common case where it's turned off). This makes v3-0001-*\na bit trickier than I imagined it would have to be. At least it lowers\nthe footprint of the hash_mem_multiplier code added by v3-0002-*\n(compared to the last version of the patch).\n\nI find the partial group paths stuff added to planner.c by commit\n4f15e5d09de rather confusing (that commit was preparatory work for the\nmain feature commit e2f1eb0e). Hopefully the\nhash_mem_multiplier-removal patch didn't get anything wrong in this\narea. Perhaps Robert can comment on this as the committer of record\nfor partition-wise grouping/aggregation.\n\nI would like to commit this patch series by next week, and close out\nthe two relevant open items. Separately, I suspect that we'll also\nneed to update the cost model for hash aggs that spill, but that now\nseems like a totally unrelated matter. I'm waiting to hear back from\nTomas about that. 
Tomas?\n\nThanks\n--\nPeter Geoghegan", "msg_date": "Thu, 23 Jul 2020 16:32:07 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: Default setting for enable_hashagg_disk" }, { "msg_contents": "On Mon, Jul 20, 2020 at 07:25:39PM +0200, Tomas Vondra wrote:\n>On Mon, Jul 20, 2020 at 09:17:21AM -0400, Tom Lane wrote:\n>>Tomas Vondra <tomas.vondra@2ndquadrant.com> writes:\n>>>There's a minor problem here, though - these stats were collected before\n>>>we fixed the tlist issue, so hashagg was spilling about 10x the amount\n>>>of data compared to sort+groupagg. So maybe that's the first thing we\n>>>should do, before contemplating changes to the costing - collecting\n>>>fresh data. I can do that, if needed.\n>>\n>>+1. I'm not sure if we still need to do anything, but we definitely\n>>can't tell on the basis of data that doesn't reliably reflect what\n>>the code does now.\n>>\n>\n>OK, will do. The hardware is busy doing something else at the moment,\n>but I'll do the tests and report results in a couple days.\n>\n\nHi,\n\nSo let me share some fresh I/O statistics collected on the current code\nusing iosnoop. I've done the tests on two different machines using the\n\"aggregate part\" of TPC-H Q17, i.e. essentially this:\n\n SELECT * FROM (\n SELECT\n l_partkey AS agg_partkey,\n 0.2 * avg(l_quantity) AS avg_quantity\n FROM lineitem GROUP BY l_partkey OFFSET 1000000000\n ) part_agg;\n\nThe OFFSET is there just to ensure we don't need to send anything to\nthe client, etc.\n\nOn the first machine (i5-2500k, 8GB RAM) data was located on a RAID\nof SSD devices, and the temp tablespace was placed on a separate SSD\ndevice. This makes it easy to isolate the I/O requests related to the\nspilling, which is the interesting thing. The data set here is scale\n32GB, most of it in the lineitem table.\n\nOn the second machine (xeon e5-2620, 64GB RAM) was using scale 75GB,\nand I've done two different tests. 
First, the data was on a SATA RAID\nwhile the temp tablespace was on a NVMe SSD - this allows isolating\nthe I/O requests just like on the first machine, but the durations are\nnot very interesting because the SATA RAID is the bottleneck. Then I've\nswitched the locations (data on SSD, temp files on SATA RAID), which\ngives us some interesting query durations but the multiple devices make\nit difficult to analyze the I/O patterns. So I'll present patterns from\nthe first setup and timings from the second one, hopefully it's not\ncompletely bogus.\n\nIn all cases I've ran the query with a range of work_mem values and\nenable_sort/enable_hashagg settings, and enabled/disabled parallelism,\ncollecting the iosnoop data, query durations, information about cost\nand disk usage. Attached are the explain plans, summary of iosnoop\nstats etc.\n\nI also have a couple observations about hashagg vs. groupagg, and the\nrecent hashagg fixes.\n\n\n1) hashagg vs. groupagg\n\nIf you look at the query duration charts comparing hashagg and groupagg,\nyou can see that in both cases the hashagg is stable and mostly not\ndependent on work_mem. It initially (for low work_mem values) wins, but\nthe sort+groupagg gradually improves and eventually gets faster.\n\nNote: This does not include work_mem large enough to eliminate the need\nfor spilling, which would probably make hashagg much faster.\n\nFor the parallel case the difference is much smaller and groupagg gets\nfaster much sooner. This is probably due to the large number of groups\nin this particular data set.\n\nNow, the I/O patterns - if you look into the iosnoop summaries, there\nare two tables for each config - block stats (request sizes) and delta\nstats (gaps between requests). 
These tables need to be interpreted in\ncombination - ideally, the blocks should be larger and the gaps should\nmatch the block size.\n\nIIRC it was suggested hashagg does more random I/O than sort, but I\ndon't think the iosnoop data really show that - in fact, the requests\ntend to be larger than for sort, and the deltas match the request sizes\nbetter I think. At least for lower work_mem values. With larger values\nit kinda inverts and sort gets more sequential, but I don't think the\ndifference is very big.\n\nAlso, had it been more random it'd be very obvious from durations with\ntemp tablespace on the SATA RAID, I think.\n\nSo I'm not sure we need to tweak the hashagg costing for this reason.\n\n\n2) hashagg vs. CP_SMALL_TLIST vs. groupagg\n\nI was a bit puzzled because the hashagg timings seemed higher compared\nto the last runs with the CP_SMALL_TLIST fix (which was now reverted\nand replaced by projection right before spilling). But the explanation\nis pretty simple - we spill significantly more data than with the\nCP_SMALL_TLIST patch. And what's also interesting is that in both cases\nwe spill much more data than sort.\n\nThis is illustrated on the \"disk usage\" charts, but let me show some\nnumbers here. These are the \"Disk Usage\" values from explain analyze\n(measured in GB):\n\n 2MB 4MB 8MB 64MB 256MB\n -----------------------------------------------------------\n hash 6.71 6.70 6.73 6.44 5.81\n hash CP_SMALL_TLIST 5.28 5.26 5.24 5.04 4.54\n sort 3.41 3.41 3.41 3.57 3.45\n\nSo sort writes ~3.4GB of data, give or take. But hashagg/master writes\nalmost 6-7GB of data, i.e. almost twice as much. Meanwhile, with the\noriginal CP_SMALL_TLIST we'd write \"only\" ~5GB of data. 
That's still\nmuch more than the 3.4GB of data written by sort (which has to spill\neverything, while hashagg only spills rows not covered by the groups\nthat fit into work_mem).\n\nI initially assumed this is due to writing the hash value to the tapes,\nand the rows are fairly narrow (only about 40B per row), so a 4B hash\ncould make a difference - but certainly not this much. Moreover, that\ndoes not explain the difference between master and the now-reverted\nCP_SMALL_TLIST, I think.\n\n\n3) costing\n\nWhat I find really surprising is the costing - despite writing about\ntwice as much data, the hashagg cost is estimated to be much lower than\nthe sort. For example on the i5 machine, the hashagg cost is ~10M, while\nsort cost is almost 42M. Despite using almost twice as much disk. And\nthe costing is exactly the same for master and the CP_SMALL_TLIST.\n\nI was wondering if this might be due to random_page_cost being too low\nor something, but I very much doubt that. Firstly - this is on SSDs,\nso I really don't want it very high. Secondly, increasing random_page\ncost actually increases both costs.\n\nSo I'm wondering why the hashagg cost is so low, but I haven't looked\ninto that yet.\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Fri, 24 Jul 2020 03:22:48 +0200", "msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Default setting for enable_hashagg_disk" }, { "msg_contents": "On Thu, Jul 23, 2020 at 6:22 PM Tomas Vondra\n<tomas.vondra@2ndquadrant.com> wrote:\n> So let me share some fresh I/O statistics collected on the current code\n> using iosnoop. I've done the tests on two different machines using the\n> \"aggregate part\" of TPC-H Q17, i.e. 
essentially this:\n>\n> SELECT * FROM (\n> SELECT\n> l_partkey AS agg_partkey,\n> 0.2 * avg(l_quantity) AS avg_quantity\n> FROM lineitem GROUP BY l_partkey OFFSET 1000000000\n> ) part_agg;\n>\n> The OFFSET is there just to ensure we don't need to send anything to\n> the client, etc.\n\nThanks for testing this.\n\n> So sort writes ~3.4GB of data, give or take. But hashagg/master writes\n> almost 6-7GB of data, i.e. almost twice as much. Meanwhile, with the\n> original CP_SMALL_TLIST we'd write \"only\" ~5GB of data. That's still\n> much more than the 3.4GB of data written by sort (which has to spill\n> everything, while hashagg only spills rows not covered by the groups\n> that fit into work_mem).\n\nWhat I find when I run your query (with my own TPC-H DB that is\nsmaller than what you used here -- 59,986,052 lineitem tuples) is that\nthe sort required about 7x more memory than the hash agg to do\neverything in memory: 4,384,711KB for the quicksort vs 630,801KB peak\nhash agg memory usage. I'd be surprised if the ratio was very\ndifferent for you -- but can you check?\n\nI think that there is something pathological about this spill\nbehavior, because it sounds like the precise opposite of what you\nmight expect when you make a rough extrapolation of what disk I/O will\nbe based on the memory used in no-spill cases (as reported by EXPLAIN\nANALYZE).\n\n> What I find really surprising is the costing - despite writing about\n> twice as much data, the hashagg cost is estimated to be much lower than\n> the sort. For example on the i5 machine, the hashagg cost is ~10M, while\n> sort cost is almost 42M. Despite using almost twice as much disk. And\n> the costing is exactly the same for master and the CP_SMALL_TLIST.\n\nThat does make it sound like the costs of the hash agg aren't being\nrepresented. 
I suppose it isn't clear if this is a costing issue\nbecause it isn't clear if the execution time performance itself is\npathological or is instead something that must be accepted as the cost\nof spilling the hash agg in a general kind of way.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Thu, 23 Jul 2020 19:33:45 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: Default setting for enable_hashagg_disk" }, { "msg_contents": "On Thu, Jul 23, 2020 at 07:33:45PM -0700, Peter Geoghegan wrote:\n>On Thu, Jul 23, 2020 at 6:22 PM Tomas Vondra\n><tomas.vondra@2ndquadrant.com> wrote:\n>> So let me share some fresh I/O statistics collected on the current code\n>> using iosnoop. I've done the tests on two different machines using the\n>> \"aggregate part\" of TPC-H Q17, i.e. essentially this:\n>>\n>> SELECT * FROM (\n>> SELECT\n>> l_partkey AS agg_partkey,\n>> 0.2 * avg(l_quantity) AS avg_quantity\n>> FROM lineitem GROUP BY l_partkey OFFSET 1000000000\n>> ) part_agg;\n>>\n>> The OFFSET is there just to ensure we don't need to send anything to\n>> the client, etc.\n>\n>Thanks for testing this.\n>\n>> So sort writes ~3.4GB of data, give or take. But hashagg/master writes\n>> almost 6-7GB of data, i.e. almost twice as much. Meanwhile, with the\n>> original CP_SMALL_TLIST we'd write \"only\" ~5GB of data. That's still\n>> much more than the 3.4GB of data written by sort (which has to spill\n>> everything, while hashagg only spills rows not covered by the groups\n>> that fit into work_mem).\n>\n>What I find when I run your query (with my own TPC-H DB that is\n>smaller than what you used here -- 59,986,052 lineitem tuples) is that\n>the sort required about 7x more memory than the hash agg to do\n>everything in memory: 4,384,711KB for the quicksort vs 630,801KB peak\n>hash agg memory usage. 
I'd be surprised if the ratio was very\n>different for you -- but can you check?\n>\n\nI can check, but it's not quite clear to me what we are looking for.\nIncrease work_mem until there's no need to spill in either case?\n\n>I think that there is something pathological about this spill\n>behavior, because it sounds like the precise opposite of what you\n>might expect when you make a rough extrapolation of what disk I/O will\n>be based on the memory used in no-spill cases (as reported by EXPLAIN\n>ANALYZE).\n>\n\nMaybe, not sure what exactly you think is pathological? The trouble is\nhashagg has to spill input tuples but the memory used in the no-spill case\nrepresents aggregated groups, so I'm not sure how you could extrapolate\nfrom that ...\n\nFWIW one more suspicious thing that I forgot to mention is the behavior\nof the \"planned partitions\" depending on work_mem, which looks like\nthis:\n\n    2MB    Planned Partitions: 64     HashAgg Batches: 4160\n    4MB    Planned Partitions: 128    HashAgg Batches: 16512\n    8MB    Planned Partitions: 256    HashAgg Batches: 21488\n   64MB    Planned Partitions: 32     HashAgg Batches: 2720\n  256MB    Planned Partitions: 8      HashAgg Batches: 8\n\nI'd expect the number of planned partitions to decrease (slowly) as\nwork_mem increases, but it seems to increase initially. Seems a bit\nstrange, but maybe it's expected.\n\n>> What I find really surprising is the costing - despite writing about\n>> twice as much data, the hashagg cost is estimated to be much lower than\n>> the sort. For example on the i5 machine, the hashagg cost is ~10M, while\n>> sort cost is almost 42M. Despite using almost twice as much disk. And\n>> the costing is exactly the same for master and the CP_SMALL_TLIST.\n>\n>That does make it sound like the costs of the hash agg aren't being\n>represented. 
I suppose it isn't clear if this is a costing issue\n>because it isn't clear if the execution time performance itself is\n>pathological or is instead something that must be accepted as the cost\n>of spilling the hash agg in a general kind of way.\n>\n\nNot sure, but I think we need to spill roughly as much as sort, so it\nseems a bit strange that (a) we're spilling 2x as much data and yet the\ncost is so much lower.\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services \n\n\n", "msg_date": "Fri, 24 Jul 2020 10:40:47 +0200", "msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Default setting for enable_hashagg_disk" }, { "msg_contents": "On Fri, Jul 24, 2020 at 10:40:47AM +0200, Tomas Vondra wrote:\n>On Thu, Jul 23, 2020 at 07:33:45PM -0700, Peter Geoghegan wrote:\n>>On Thu, Jul 23, 2020 at 6:22 PM Tomas Vondra\n>><tomas.vondra@2ndquadrant.com> wrote:\n>>>So let me share some fresh I/O statistics collected on the current code\n>>>using iosnoop. I've done the tests on two different machines using the\n>>>\"aggregate part\" of TPC-H Q17, i.e. essentially this:\n>>>\n>>> SELECT * FROM (\n>>> SELECT\n>>> l_partkey AS agg_partkey,\n>>> 0.2 * avg(l_quantity) AS avg_quantity\n>>> FROM lineitem GROUP BY l_partkey OFFSET 1000000000\n>>> ) part_agg;\n>>>\n>>>The OFFSET is there just to ensure we don't need to send anything to\n>>>the client, etc.\n>>\n>>Thanks for testing this.\n>>\n>>>So sort writes ~3.4GB of data, give or take. But hashagg/master writes\n>>>almost 6-7GB of data, i.e. almost twice as much. Meanwhile, with the\n>>>original CP_SMALL_TLIST we'd write \"only\" ~5GB of data. 
That's still\n>>>much more than the 3.4GB of data written by sort (which has to spill\n>>>everything, while hashagg only spills rows not covered by the groups\n>>>that fit into work_mem).\n>>\n>>What I find when I run your query (with my own TPC-H DB that is\n>>smaller than what you used here -- 59,986,052 lineitem tuples) is that\n>>the sort required about 7x more memory than the hash agg to do\n>>everything in memory: 4,384,711KB for the quicksort vs 630,801KB peak\n>>hash agg memory usage. I'd be surprised if the ratio was very\n>>different for you -- but can you check?\n>>\n>\n>I can check, but it's not quite clear to me what are we looking for?\n>Increase work_mem until there's no need to spill in either case?\n>\n\nFWIW the hashagg needs about 4775953kB and the sort 33677586kB. So yeah,\nthat's about 7x more. I think that's probably built into the TPC-H data\nset. It'd be easy to construct cases with much higher/lower factors.\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Fri, 24 Jul 2020 15:18:52 +0200", "msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Default setting for enable_hashagg_disk" }, { "msg_contents": "On Thu, Jul 23, 2020 at 9:22 PM Tomas Vondra\n<tomas.vondra@2ndquadrant.com> wrote:\n> 2MB 4MB 8MB 64MB 256MB\n> -----------------------------------------------------------\n> hash 6.71 6.70 6.73 6.44 5.81\n> hash CP_SMALL_TLIST 5.28 5.26 5.24 5.04 4.54\n> sort 3.41 3.41 3.41 3.57 3.45\n>\n> So sort writes ~3.4GB of data, give or take. But hashagg/master writes\n> almost 6-7GB of data, i.e. almost twice as much. Meanwhile, with the\n> original CP_SMALL_TLIST we'd write \"only\" ~5GB of data. 
That's still\n> much more than the 3.4GB of data written by sort (which has to spill\n> everything, while hashagg only spills rows not covered by the groups\n> that fit into work_mem).\n>\n> I initially assumed this is due to writing the hash value to the tapes,\n> and the rows are fairly narrow (only about 40B per row), so a 4B hash\n> could make a difference - but certainly not this much. Moreover, that\n> does not explain the difference between master and the now-reverted\n> CP_SMALL_TLIST, I think.\n\nThis is all really good analysis, I think, but this seems like the key\nfinding. It seems like we don't really understand what's actually\ngetting written. Whether we use hash or sort doesn't seem like it\nshould have this kind of impact on how much data gets written, and\nwhether we use CP_SMALL_TLIST or project when needed doesn't seem like\nit should matter like this either.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Fri, 24 Jul 2020 11:18:48 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Default setting for enable_hashagg_disk" }, { "msg_contents": "On Fri, Jul 24, 2020 at 11:18:48AM -0400, Robert Haas wrote:\n>On Thu, Jul 23, 2020 at 9:22 PM Tomas Vondra\n><tomas.vondra@2ndquadrant.com> wrote:\n>> 2MB 4MB 8MB 64MB 256MB\n>> -----------------------------------------------------------\n>> hash 6.71 6.70 6.73 6.44 5.81\n>> hash CP_SMALL_TLIST 5.28 5.26 5.24 5.04 4.54\n>> sort 3.41 3.41 3.41 3.57 3.45\n>>\n>> So sort writes ~3.4GB of data, give or take. But hashagg/master writes\n>> almost 6-7GB of data, i.e. almost twice as much. Meanwhile, with the\n>> original CP_SMALL_TLIST we'd write \"only\" ~5GB of data. 
That's still\n>> much more than the 3.4GB of data written by sort (which has to spill\n>> everything, while hashagg only spills rows not covered by the groups\n>> that fit into work_mem).\n>>\n>> I initially assumed this is due to writing the hash value to the tapes,\n>> and the rows are fairly narrow (only about 40B per row), so a 4B hash\n>> could make a difference - but certainly not this much. Moreover, that\n>> does not explain the difference between master and the now-reverted\n>> CP_SMALL_TLIST, I think.\n>\n>This is all really good analysis, I think, but this seems like the key\n>finding. It seems like we don't really understand what's actually\n>getting written. Whether we use hash or sort doesn't seem like it\n>should have this kind of impact on how much data gets written, and\n>whether we use CP_SMALL_TLIST or project when needed doesn't seem like\n>it should matter like this either.\n>\n\nI think for CP_SMALL_TLIST at least some of the extra data can be\nattributed to writing the hash value along with the tuple, which sort\nobviously does not do. With the 32GB data set (the i5 machine), there\nare ~20M rows in the lineitem table, and with 4B hash values that's\nabout 732MB of extra data. 
That's about 50% of the difference\nbetween sort and CP_SMALL_TLIST, and I'd dare to speculate the other 50%\nis due to LogicalTape internals (pointers to the next block, etc.)\n\nThe question is why master has 2x the overhead of CP_SMALL_TLIST, if\nit's meant to write the same set of columns etc.\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Fri, 24 Jul 2020 18:01:47 +0200", "msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Default setting for enable_hashagg_disk" }, { "msg_contents": "On Fri, Jul 24, 2020 at 8:19 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> This is all really good analysis, I think, but this seems like the key\n> finding. It seems like we don't really understand what's actually\n> getting written. Whether we use hash or sort doesn't seem like it\n> should have this kind of impact on how much data gets written, and\n> whether we use CP_SMALL_TLIST or project when needed doesn't seem like\n> it should matter like this either.\n\nIsn't this more or less the expected behavior in the event of\npartitions that are spilled recursively? The cases that Tomas tested\nwere mostly cases where work_mem was tiny relative to the data being\naggregated.\n\nThe following is an extract from commit 1f39bce0215 showing some stuff\nadded to the beginning of nodeAgg.c:\n\n+ * We also specify a min and max number of partitions per spill. Too few might\n+ * mean a lot of wasted I/O from repeated spilling of the same tuples. 
Too\n+ * many will result in lots of memory wasted buffering the spill files (which\n+ * could instead be spent on a larger hash table).\n+ */\n+#define HASHAGG_PARTITION_FACTOR 1.50\n+#define HASHAGG_MIN_PARTITIONS 4\n+#define HASHAGG_MAX_PARTITIONS 1024\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Fri, 24 Jul 2020 11:03:54 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: Default setting for enable_hashagg_disk" }, { "msg_contents": "On Fri, Jul 24, 2020 at 1:40 AM Tomas Vondra\n<tomas.vondra@2ndquadrant.com> wrote:\n> Maybe, not sure what exactly you think is pathological? The trouble is\n> hashagg has to spill input tuples but the memory used in no-spill case\n> represents aggregated groups, so I'm not sure how you could extrapolate\n> from that ...\n\nYeah, but when hash agg enters spill mode it will continue to advance\nthe transition states for groups already in the hash table, which\ncould be quite a significant effect. The peak memory usage for an\nequivalent no-spill hash agg is therefore kind of related to the\namount of I/O needed for spilling.\n\nI suppose that you mostly tested cases where memory was in very short\nsupply, where that breaks down completely. Perhaps it wasn't helpful\nfor me to bring that factor into this discussion -- it's not as if\nthere is any doubt that hash agg is spilling a lot more here in any\ncase.\n\n> Not sure, but I think we need to spill roughly as much as sort, so it\n> seems a bit strange that (a) we're spilling 2x as much data and yet the\n> cost is so much lower.\n\nISTM that the amount of I/O that hash agg performs can vary *very*\nwidely for the same data. This is mostly determined by work_mem, but\nthere are second order effects. OTOH, the amount of I/O that a sort\nmust do is practically fixed. 
You can quibble with that\ncharacterisation a bit because of multi-pass sorts, but not really --\nmulti-pass sorts are generally quite rare.\n\nI think that we need a more sophisticated cost model for this in\ncost_agg(). Maybe the \"pages_written\" calculation could be pessimized.\nHowever, it doesn't seem like this is precisely an issue with I/O\ncosts.\n\n--\nPeter Geoghegan\n\n\n", "msg_date": "Fri, 24 Jul 2020 11:31:23 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: Default setting for enable_hashagg_disk" }, { "msg_contents": "On Fri, Jul 24, 2020 at 11:03:54AM -0700, Peter Geoghegan wrote:\n>On Fri, Jul 24, 2020 at 8:19 AM Robert Haas <robertmhaas@gmail.com> wrote:\n>> This is all really good analysis, I think, but this seems like the key\n>> finding. It seems like we don't really understand what's actually\n>> getting written. Whether we use hash or sort doesn't seem like it\n>> should have this kind of impact on how much data gets written, and\n>> whether we use CP_SMALL_TLIST or project when needed doesn't seem like\n>> it should matter like this either.\n>\n>Isn't this more or less the expected behavior in the event of\n>partitions that are spilled recursively? The case that Tomas tested\n>were mostly cases where work_mem was tiny relative to the data being\n>aggregated.\n>\n>The following is an extract from commit 1f39bce0215 showing some stuff\n>added to the beginning of nodeAgg.c:\n>\n>+ * We also specify a min and max number of partitions per spill. Too few might\n>+ * mean a lot of wasted I/O from repeated spilling of the same tuples. Too\n>+ * many will result in lots of memory wasted buffering the spill files (which\n>+ * could instead be spent on a larger hash table).\n>+ */\n>+#define HASHAGG_PARTITION_FACTOR 1.50\n>+#define HASHAGG_MIN_PARTITIONS 4\n>+#define HASHAGG_MAX_PARTITIONS 1024\n>\n\nMaybe, but we're nowhere close to these limits. 
See this table which I\nposted earlier:\n\n    2MB    Planned Partitions: 64     HashAgg Batches: 4160\n    4MB    Planned Partitions: 128    HashAgg Batches: 16512\n    8MB    Planned Partitions: 256    HashAgg Batches: 21488\n   64MB    Planned Partitions: 32     HashAgg Batches: 2720\n  256MB    Planned Partitions: 8      HashAgg Batches: 8\n\nThis is from the non-parallel runs on the i5 machine with the 32GB data set,\nthe first column is work_mem. We're nowhere near the 1024 limit, and the\ncardinality estimates are pretty good.\n\nOTOH the number of batches is much higher, so clearly there was some\nrecursive spilling happening. 
What I find strange is that this grows\n> with work_mem and only starts dropping after 64MB.\n\nCould that be caused by clustering in the data?\n\nIf the input data is in totally random order then we have a good\nchance of never having to spill skewed \"common\" values. That is, we're\nbound to encounter common values before entering spill mode, and so\nthose common values will continue to be usefully aggregated until\nwe're done with the initial groups (i.e. until the in-memory hash\ntable is cleared in order to process spilled input tuples). This is\ngreat because the common values get aggregated without ever spilling,\nand most of the work is done before we even begin with spilled tuples.\n\nIf, on the other hand, the common values are concentrated together in\nthe input...\n\nAssuming that I have this right, then I would also expect simply\nhaving more memory to ameliorate the problem. If you only have/need 4\nor 8 partitions then you can fit a higher proportion of the total\nnumber of groups for the whole dataset in the hash table (at the point\nwhen you first enter spill mode). I think it follows that the \"nailed\"\nhash table entries/groupings will \"better characterize\" the dataset as\na whole.\n\n> Also, how could the amount of I/O be almost constant in all these cases?\n> Surely more recursive spilling should do more I/O, but the Disk Usage\n> reported by explain analyze does not show anything like ...\n\nNot sure, but might that just be because of the fact that logtape.c\ncan recycle disk space?\n\nAs I said in my last e-mail, it's pretty reasonable to assume that the\nvast majority of external sorts are one-pass. It follows that disk\nusage can be thought of as almost the same thing as total I/O for\ntuplesort. But the same heuristic isn't reasonable when thinking about\nhash agg. Hash agg might write out much less data than the total\nmemory used for the equivalent \"peak optimal nospill\" hash agg case --\nor much more. 
(Again, reiterating what I said in my last e-mail.)\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Fri, 24 Jul 2020 12:55:01 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: Default setting for enable_hashagg_disk" }, { "msg_contents": "On Fri, 2020-07-24 at 21:16 +0200, Tomas Vondra wrote:\n> Surely more recursive spilling should do more I/O, but the Disk Usage\n> reported by explain analyze does not show anything like ...\n\nI suspect that's because of disk reuse in logtape.c. \n\nRegards,\n\tJeff Davis\n\n\n\n\n", "msg_date": "Sat, 25 Jul 2020 09:38:03 -0700", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": true, "msg_subject": "Re: Default setting for enable_hashagg_disk" }, { "msg_contents": "On Fri, Jul 24, 2020 at 12:55 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> Could that be caused by clustering in the data?\n>\n> If the input data is in totally random order then we have a good\n> chance of never having to spill skewed \"common\" values. That is, we're\n> bound to encounter common values before entering spill mode, and so\n> those common values will continue to be usefully aggregated until\n> we're done with the initial groups (i.e. until the in-memory hash\n> table is cleared in order to process spilled input tuples). This is\n> great because the common values get aggregated without ever spilling,\n> and most of the work is done before we even begin with spilled tuples.\n>\n> If, on the other hand, the common values are concentrated together in\n> the input...\n\nI still don't know if that was a factor in your example, but I can\nclearly demonstrate that the clustering of data can matter a lot to\nhash aggs in Postgres 13. 
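The effect described above can be mimicked with a toy model (a speculative sketch, not the attached test case -- the group counts, the capacity, and `simulate_spill` itself are invented for illustration): a group that claims a hash-table slot before memory fills keeps aggregating in place, while every tuple of a non-resident group has to be spilled.

```python
import random

def simulate_spill(keys, capacity):
    # Toy model of spilling hash aggregation: a group that claims a
    # hash-table slot before the table fills keeps aggregating in
    # memory; every tuple of a non-resident group is spilled to disk.
    table, spilled = set(), 0
    for key in keys:
        if key in table:
            continue                 # aggregated in place, no I/O
        elif len(table) < capacity:
            table.add(key)           # new group, still fits
        else:
            spilled += 1             # group not resident -> spill tuple
    return spilled

# 90 rare groups of 10 rows each, plus one common group with 1000 rows.
rows = [g for g in range(90) for _ in range(10)] + [999] * 1000

clustered = rows[:]                  # the common value arrives last
shuffled = rows[:]
random.seed(42)
random.shuffle(shuffled)

print(simulate_spill(clustered, capacity=50))  # 1400: the common group never gets a slot
print(simulate_spill(shuffled, capacity=50))   # far fewer: the common group is resident early
```

With uniformly sized groups the two orderings spill about the same amount; it is the skewed, late-arriving common values that make clustered input expensive in this model.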
I attach a contrived example where it makes\na *huge* difference.\n\nI find that the sorted version of the aggregate query takes\nsignificantly longer to finish, and has the following spill\ncharacteristics:\n\n\"Peak Memory Usage: 205086kB Disk Usage: 2353920kB HashAgg Batches: 2424\"\n\nNote that the planner doesn't expect any partitions here, but we still\nget 2424 batches -- so the planner seems to get it totally wrong.\nOTOH, the same query against a randomized version of the same data (no\nlonger in sorted order, no clustering) works perfectly with the same\nwork_mem (200MB):\n\n\"Peak Memory Usage: 1605334kB\"\n\nHash agg avoids spilling entirely (so the planner gets it right this\ntime around). It even uses notably less memory.\n\n-- \nPeter Geoghegan", "msg_date": "Sat, 25 Jul 2020 09:39:50 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: Default setting for enable_hashagg_disk" }, { "msg_contents": "On Sat, Jul 25, 2020 at 9:39 AM Peter Geoghegan <pg@bowt.ie> wrote:\n> \"Peak Memory Usage: 1605334kB\"\n>\n> Hash agg avoids spilling entirely (so the planner gets it right this\n> time around). 
It even uses notably less memory.\n\nI guess that this is because the reported memory usage doesn't reflect\nthe space used for transition state, which is presumably most of the\ntotal -- array_agg() is used in the query.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Sat, 25 Jul 2020 10:07:37 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: Default setting for enable_hashagg_disk" }, { "msg_contents": "On Fri, 2020-07-24 at 10:40 +0200, Tomas Vondra wrote:\n> FWIW one more suspicious thing that I forgot to mention is the\n> behavior\n> of the \"planned partitions\" depending on work_mem, which looks like\n> this:\n> \n> 2MB Planned Partitions: 64 HashAgg Batches: 4160\n> 4MB Planned Partitions: 128 HashAgg Batches: 16512\n> 8MB Planned Partitions: 256 HashAgg Batches: 21488\n> 64MB Planned Partitions: 32 HashAgg Batches: 2720\n> 256MB Planned Partitions: 8 HashAgg Batches: 8\n> \n> I'd expect the number of planned partitions to decrease (slowly) as\n> work_mem increases, but it seems to increase initially. Seems a bit\n> strange, but maybe it's expected.\n\nThe space for open-partition buffers is also limited to about 25% of\nmemory. Each open partition takes BLCKSZ memory, so those numbers are\nexactly what I'd expect (64*8192 = 512kB).\n\nThere's also another effect at work that can cause the total number of\nbatches to be higher for larger work_mem values: when we do recurse, we\nagain need to estimate the number of partitions needed. Right now, we\noverestimate the number of partitions needed (to be conservative),\nwhich leads to a wider fan-out and lots of tiny partitions, and\ntherefore more batches.\n\nI think we can improve this by using something like a HyperLogLog on\nthe hash values of the spilled tuples to get a better estimate for the\nnumber of groups (and therefore the number of partitions) that we need\nwhen we recurse, which would reduce the number of overall batches at\nhigher work_mem settings. 
But I didn't get a chance to implement that\nyet.\n\nRegards,\n\tJeff Davis\n\n\n\n\n", "msg_date": "Sat, 25 Jul 2020 10:23:43 -0700", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": true, "msg_subject": "Re: Default setting for enable_hashagg_disk" }, { "msg_contents": "On Thu, 2020-07-23 at 19:33 -0700, Peter Geoghegan wrote:\n> That does make it sound like the costs of the hash agg aren't being\n> represented. I suppose it isn't clear if this is a costing issue\n> because it isn't clear if the execution time performance itself is\n> pathological or is instead something that must be accepted as the\n> cost\n> of spilling the hash agg in a general kind of way.\n\nI have a feeling that this is mostly a costing problem. Sort uses its\nmemory in two different phases:\n\n 1. when writing the sorted runs, it needs the memory to hold the run\nbefore sorting it, and only a single buffer for the output tape; and\n 2. when merging, it needs a lot of read buffers\n\nBut in HashAgg, it needs to hold all of the groups in memory *at the\nsame time* as it needs a lot of output buffers (one for each\npartition). This doesn't matter a lot at high values of work_mem,\nbecause the buffers will only be 8MB at most.\n\nI did attempt to cost this properly: hash_agg_set_limits() takes into\naccount the memory the partitions will use, and the remaining memory is\nwhat's used in cost_agg(). But there's a lot of room for error in\nthere.\n\nIf someone sees an obvious error in the costing, please let me know.\nOtherwise, I think it will just take some time to make it better\nreflect reality in a variety of cases. For v13, and we will either need\nto live with it, or pessimize the costing for HashAgg until we get it\nright.\n\nMany costing issues can deal with a lot of slop -- e.g. HashJoin vs\nMergeJoin -- because a small factor often doesn't make the difference\nbetween plans. 
But HashAgg and Sort are more competitive with each\nother, so costing needs to be more precise.\n\nRegards,\n\tJeff Davis\n\n\n\n\n", "msg_date": "Sat, 25 Jul 2020 10:40:50 -0700", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": true, "msg_subject": "Re: Default setting for enable_hashagg_disk" }, { "msg_contents": "On Sat, Jul 25, 2020 at 10:23 AM Jeff Davis <pgsql@j-davis.com> wrote:\n> There's also another effect at work that can cause the total number of\n> batches to be higher for larger work_mem values: when we do recurse, we\n> again need to estimate the number of partitions needed. Right now, we\n> overestimate the number of partitions needed (to be conservative),\n> which leads to a wider fan-out and lots of tiny partitions, and\n> therefore more batches.\n\nWhat worries me a bit is the sharp discontinuities when spilling with\nsignificantly less work_mem than the \"optimal\" amount. For example,\nwith Tomas' TPC-H query (against my smaller TPC-H dataset), I find\nthat setting work_mem to 6MB looks like this:\n\n -> HashAggregate (cost=2700529.47..3020654.22 rows=1815500\nwidth=40) (actual time=21039.788..32278.703 rows=2000000 loops=1)\n Output: lineitem.l_partkey, (0.2 * avg(lineitem.l_quantity))\n Group Key: lineitem.l_partkey\n Planned Partitions: 128 Peak Memory Usage: 6161kB Disk\nUsage: 2478080kB HashAgg Batches: 128\n\n(And we have a sensible looking number of batches that match the\nnumber of planned partitions with higher work_mem settings, too.)\n\nHowever, if I set work_mem to 5MB (or less), it looks like this:\n\n -> HashAggregate (cost=2700529.47..3020654.22 rows=1815500\nwidth=40) (actual time=20849.490..37027.533 rows=2000000 loops=1)\n Output: lineitem.l_partkey, (0.2 * avg(lineitem.l_quantity))\n Group Key: lineitem.l_partkey\n Planned Partitions: 128 Peak Memory Usage: 5393kB Disk\nUsage: 2482152kB HashAgg Batches: 11456\n\nSo the number of partitions is still 128, but the number of batches\nexplodes to 11,456 all at once. 
My guess that this is because the\nrecursive hash aggregation misbehaves in a self-similar fashion once a\ncertain tipping point has been reached. I expect that the exact nature\nof that tipping point is very complicated, and generally dependent on\nthe dataset, clustering, etc. But I don't think that this kind of\neffect will be uncommon.\n\n(FWIW this example requires ~620MB work_mem to complete without\nspilling at all -- so it's kind of extreme, though not quite as\nextreme as many of the similar test results from Tomas.)\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Sat, 25 Jul 2020 11:05:02 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: Default setting for enable_hashagg_disk" }, { "msg_contents": "On Sat, 2020-07-25 at 11:05 -0700, Peter Geoghegan wrote:\n> What worries me a bit is the sharp discontinuities when spilling with\n> significantly less work_mem than the \"optimal\" amount. For example,\n> with Tomas' TPC-H query (against my smaller TPC-H dataset), I find\n> that setting work_mem to 6MB looks like this:\n\n...\n\n> Planned Partitions: 128 Peak Memory Usage: 6161kB Disk\n> Usage: 2478080kB HashAgg Batches: 128\n\n...\n\n> Planned Partitions: 128 Peak Memory Usage: 5393kB Disk\n> Usage: 2482152kB HashAgg Batches: 11456\n\n...\n\n> My guess that this is because the\n> recursive hash aggregation misbehaves in a self-similar fashion once\n> a\n> certain tipping point has been reached.\n\nIt looks like it might be fairly easy to use HyperLogLog as an\nestimator for the recursive step. 
That should reduce the\noverpartitioning, which I believe is the cause of this discontinuity.\n\nIt's not clear to me that overpartitioning is a real problem in this\ncase -- but I think the fact that it's causing confusion is enough\nreason to see if we can fix it.\n\nRegards,\n\tJeff Davis\n\n\n\n\n", "msg_date": "Sat, 25 Jul 2020 13:10:36 -0700", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": true, "msg_subject": "Re: Default setting for enable_hashagg_disk" }, { "msg_contents": "On Sat, Jul 25, 2020 at 1:10 PM Jeff Davis <pgsql@j-davis.com> wrote:\n> On Sat, 2020-07-25 at 11:05 -0700, Peter Geoghegan wrote:\n> > What worries me a bit is the sharp discontinuities when spilling with\n> > significantly less work_mem than the \"optimal\" amount. For example,\n> > with Tomas' TPC-H query (against my smaller TPC-H dataset), I find\n> > that setting work_mem to 6MB looks like this:\n>\n> ...\n>\n> > Planned Partitions: 128 Peak Memory Usage: 6161kB Disk\n> > Usage: 2478080kB HashAgg Batches: 128\n>\n> ...\n>\n> > Planned Partitions: 128 Peak Memory Usage: 5393kB Disk\n> > Usage: 2482152kB HashAgg Batches: 11456\n\n> It's not clear to me that overpartitioning is a real problem in this\n> case -- but I think the fact that it's causing confusion is enough\n> reason to see if we can fix it.\n\nI'm not sure about that either.\n\nFWIW I notice that when I reduce work_mem a little further (to 3MB)\nwith the same query, the number of partitions is still 128, while the\nnumber of run time batches is 16,512 (an increase from 11,456 from 6MB\nwork_mem). I notice that 16512/128 is 129, which hints at the nature\nof what's going on with the recursion. 
I guess it would be ideal if\nthe growth in batches was more gradual as I subtract memory.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Sat, 25 Jul 2020 13:27:19 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: Default setting for enable_hashagg_disk" }, { "msg_contents": "On Sat, 2020-07-25 at 13:27 -0700, Peter Geoghegan wrote:\n> It's not clear to me that overpartitioning is a real problem in\n> > this\n> > case -- but I think the fact that it's causing confusion is enough\n> > reason to see if we can fix it.\n> \n> I'm not sure about that either.\n> \n> FWIW I notice that when I reduce work_mem a little further (to 3MB)\n> with the same query, the number of partitions is still 128, while the\n> number of run time batches is 16,512 (an increase from 11,456 from\n> 6MB\n> work_mem). I notice that 16512/128 is 129, which hints at the nature\n> of what's going on with the recursion. I guess it would be ideal if\n> the growth in batches was more gradual as I subtract memory.\n\nI wrote a quick patch to use HyperLogLog to estimate the number of\ngroups contained in a spill file. It seems to reduce the\noverpartitioning effect, and is a more principled approach than what I\nwas doing before.\n\nIt does seem to hurt the runtime slightly when spilling to disk in some\ncases. 
I haven't narrowed down whether this is because we end up\nrecursing multiple times, or if it's just more efficient to\noverpartition, or if the cost of doing the HLL itself is significant.\n\nRegards,\n\tJeff Davis", "msg_date": "Sat, 25 Jul 2020 16:56:51 -0700", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": true, "msg_subject": "Re: Default setting for enable_hashagg_disk" }, { "msg_contents": "On Sat, Jul 25, 2020 at 10:07:37AM -0700, Peter Geoghegan wrote:\n>On Sat, Jul 25, 2020 at 9:39 AM Peter Geoghegan <pg@bowt.ie> wrote:\n>> \"Peak Memory Usage: 1605334kB\"\n>>\n>> Hash agg avoids spilling entirely (so the planner gets it right this\n>> time around). It even uses notably less memory.\n>\n>I guess that this is because the reported memory usage doesn't reflect\n>the space used for transition state, which is presumably most of the\n>total -- array_agg() is used in the query.\n>\n\nI'm not sure what you mean by \"reported memory usage doesn't reflect the\nspace used for transition state\"? Surely it does include that, we've\nbuilt the memory accounting stuff pretty much exactly to do that.\n\nI think it's pretty clear what's happening - in the sorted case there's\nonly a single group getting new values at any moment, so when we decide\nto spill we'll only add rows to that group and everything else will be\nspilled to disk.\n\nIn the unsorted case however we manage to initialize all groups in the\nhash table, but at that point the groups are tiny an fit into work_mem.\nAs we process more and more data the groups grow, but we can't evict\nthem - at the moment we don't have that capability. 
So we end up\nprocessing everything in memory, but significantly exceeding work_mem.\n\n\nFWIW all my tests are done on the same TPC-H data set clustered by\nl_shipdate (so probably random with respect to other columns).\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Sun, 26 Jul 2020 02:05:09 +0200", "msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Default setting for enable_hashagg_disk" }, { "msg_contents": "On Sat, Jul 25, 2020 at 5:05 PM Tomas Vondra\n<tomas.vondra@2ndquadrant.com> wrote:\n> I'm not sure what you mean by \"reported memory usage doesn't reflect the\n> space used for transition state\"? Surely it does include that, we've\n> built the memory accounting stuff pretty much exactly to do that.\n>\n> I think it's pretty clear what's happening - in the sorted case there's\n> only a single group getting new values at any moment, so when we decide\n> to spill we'll only add rows to that group and everything else will be\n> spilled to disk.\n\nRight.\n\n> In the unsorted case however we manage to initialize all groups in the\n> hash table, but at that point the groups are tiny an fit into work_mem.\n> As we process more and more data the groups grow, but we can't evict\n> them - at the moment we don't have that capability. So we end up\n> processing everything in memory, but significantly exceeding work_mem.\n\nwork_mem was set to 200MB, which is more than the reported \"Peak\nMemory Usage: 1605334kB\". 
So either the random case significantly\nexceeds work_mem and the \"Peak Memory Usage\" accounting is wrong\n(because it doesn't report this excess), or the random case really\ndoesn't exceed work_mem but has a surprising advantage over the sorted\ncase.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Sat, 25 Jul 2020 17:13:00 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: Default setting for enable_hashagg_disk" }, { "msg_contents": "On Sat, Jul 25, 2020 at 4:56 PM Jeff Davis <pgsql@j-davis.com> wrote:\n> I wrote a quick patch to use HyperLogLog to estimate the number of\n> groups contained in a spill file. It seems to reduce the\n> overpartitioning effect, and is a more principled approach than what I\n> was doing before.\n\nThis pretty much fixes the issue that I observed with overparitioning.\nAt least in the sense that the number of partitions grows more\npredictably -- even when the number of partitions planned is reduced\nthe change in the number of batches seems smooth-ish. It \"looks nice\".\n\n> It does seem to hurt the runtime slightly when spilling to disk in some\n> cases. I haven't narrowed down whether this is because we end up\n> recursing multiple times, or if it's just more efficient to\n> overpartition, or if the cost of doing the HLL itself is significant.\n\nI'm glad that this better principled approach is possible. It's hard\nto judge how much of a problem this really is, though. We'll need to\nthink about this aspect some more.\n\nThanks\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Sat, 25 Jul 2020 17:31:49 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: Default setting for enable_hashagg_disk" }, { "msg_contents": "On Sat, Jul 25, 2020 at 5:31 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> I'm glad that this better principled approach is possible. It's hard\n> to judge how much of a problem this really is, though. 
We'll need to\n> think about this aspect some more.\n\nBTW, your HLL patch ameliorates the problem with my extreme \"sorted vs\nrandom input\" test case from this morning [1] (the thing that I just\ndiscussed with Tomas). Without the HLL patch the sorted case had 2424\nbatches. With the HLL patch it has 20. That at least seems like a\nnotable improvement.\n\n[1] https://postgr.es/m/CAH2-Wz=ur7MQKpaUZJP=Adtg0TPMx5M_WoNE=ke2vUU=amdjPQ@mail.gmail.com\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Sat, 25 Jul 2020 17:52:11 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: Default setting for enable_hashagg_disk" }, { "msg_contents": "On Sat, Jul 25, 2020 at 05:13:00PM -0700, Peter Geoghegan wrote:\n>On Sat, Jul 25, 2020 at 5:05 PM Tomas Vondra\n><tomas.vondra@2ndquadrant.com> wrote:\n>> I'm not sure what you mean by \"reported memory usage doesn't reflect the\n>> space used for transition state\"? Surely it does include that, we've\n>> built the memory accounting stuff pretty much exactly to do that.\n>>\n>> I think it's pretty clear what's happening - in the sorted case there's\n>> only a single group getting new values at any moment, so when we decide\n>> to spill we'll only add rows to that group and everything else will be\n>> spilled to disk.\n>\n>Right.\n>\n>> In the unsorted case however we manage to initialize all groups in the\n>> hash table, but at that point the groups are tiny an fit into work_mem.\n>> As we process more and more data the groups grow, but we can't evict\n>> them - at the moment we don't have that capability. So we end up\n>> processing everything in memory, but significantly exceeding work_mem.\n>\n>work_mem was set to 200MB, which is more than the reported \"Peak\n>Memory Usage: 1605334kB\". So either the random case significantly\n\nThat's 1.6GB, if I read it right. 
Which is more than 200MB ;-)\n\n>exceeds work_mem and the \"Peak Memory Usage\" accounting is wrong\n>(because it doesn't report this excess), or the random case really\n>doesn't exceed work_mem but has a surprising advantage over the sorted\n>case.\n>\n>-- \n>Peter Geoghegan\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Sun, 26 Jul 2020 20:34:06 +0200", "msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Default setting for enable_hashagg_disk" }, { "msg_contents": "On Sun, Jul 26, 2020 at 11:34 AM Tomas Vondra\n<tomas.vondra@2ndquadrant.com> wrote:\n> That's 1.6GB, if I read it right. Which is more than 200MB ;-)\n\nSigh. That solves that \"mystery\": the behavior that my sorted vs\nrandom example exhibited is a known limitation in hash aggs that spill\n(and an acceptable one). The memory usage is reported on accurately by\nEXPLAIN ANALYZE.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Mon, 27 Jul 2020 08:38:21 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: Default setting for enable_hashagg_disk" }, { "msg_contents": "On 2020-Jul-23, Peter Geoghegan wrote:\n\n> Attached is v3 of the hash_mem_multiplier patch series, which now has\n> a preparatory patch that removes hashagg_avoid_disk_plan.\n\nI notice you put the prototype for get_hash_mem in nodeHash.h. This\nwould be fine if not for the fact that optimizer needs to call the\nfunction too, which means now optimizer have to include executor headers\n-- not a great thing. I'd move the prototype elsewhere to avoid this,\nand I think miscadmin.h is a decent place for the prototype, next to\nwork_mem and m_w_m. 
It remains strange to have the function in executor\nimplementation, but I don't offhand see a better place, so maybe it's\nokay where it is.\n\nOther than that admittedly trivial complaint, I found nothing to\ncomplain about in this patch.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Mon, 27 Jul 2020 13:30:29 -0400", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Default setting for enable_hashagg_disk" }, { "msg_contents": "On Mon, Jul 27, 2020 at 10:30 AM Alvaro Herrera\n<alvherre@2ndquadrant.com> wrote:\n> On 2020-Jul-23, Peter Geoghegan wrote:\n> I notice you put the prototype for get_hash_mem in nodeHash.h. This\n> would be fine if not for the fact that optimizer needs to call the\n> function too, which means now optimizer have to include executor headers\n> -- not a great thing. I'd move the prototype elsewhere to avoid this,\n> and I think miscadmin.h is a decent place for the prototype, next to\n> work_mem and m_w_m.\n\nThe location of get_hash_mem() is awkward, but there is no obvious alternative.\n\nAre you proposing that I just put the prototype in miscadmin.h, while\nleaving the implementation where it is (in nodeHash.c)? Maybe that\nsounds like an odd question, but bear in mind that the natural place\nto put the implementation of a function declared in miscadmin.h is\neither utils/init/postinit.c or utils/init/miscinit.c -- moving the\nimplementation of get_hash_mem() to either of those two files seems\nworse to me.\n\nThat said, there is an existing oddball case in miscadmin.h, right at\nthe end -- the two functions whose implementation is in\naccess/transam/xlog.c. So I can see an argument for adding another\noddball case (i.e. 
moving the prototype to the end of miscadmin.h\nwithout changing anything else).\n\n> Other than that admittedly trivial complaint, I found nothing to\n> complain about in this patch.\n\nGreat. Thanks for the review.\n\nMy intention is to commit hash_mem_multiplier on Wednesday. We need to\nmove on from this, and get the release out the door.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Mon, 27 Jul 2020 11:00:39 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: Default setting for enable_hashagg_disk" }, { "msg_contents": "On 2020-Jul-27, Peter Geoghegan wrote:\n\n> On Mon, Jul 27, 2020 at 10:30 AM Alvaro Herrera\n> <alvherre@2ndquadrant.com> wrote:\n> > On 2020-Jul-23, Peter Geoghegan wrote:\n> > I notice you put the prototype for get_hash_mem in nodeHash.h. This\n> > would be fine if not for the fact that optimizer needs to call the\n> > function too, which means now optimizer have to include executor headers\n> > -- not a great thing. I'd move the prototype elsewhere to avoid this,\n> > and I think miscadmin.h is a decent place for the prototype, next to\n> > work_mem and m_w_m.\n> \n> The location of get_hash_mem() is awkward,\n\nYes.\n\n> but there is no obvious alternative.\n\nAgreed.\n\n> Are you proposing that I just put the prototype in miscadmin.h, while\n> leaving the implementation where it is (in nodeHash.c)?\n\nYes, that's in the part of my reply you didn't quote:\n\n: It remains strange to have the function in executor\n: implementation, but I don't offhand see a better place, so maybe it's\n: okay where it is.\n\n> [...] 
moving the implementation of get_hash_mem() to either of those\n> two files seems worse to me.\n\nSure.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Mon, 27 Jul 2020 14:24:36 -0400", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Default setting for enable_hashagg_disk" }, { "msg_contents": "On Mon, Jul 27, 2020 at 11:24 AM Alvaro Herrera\n<alvherre@2ndquadrant.com> wrote:\n> > Are you proposing that I just put the prototype in miscadmin.h, while\n> > leaving the implementation where it is (in nodeHash.c)?\n>\n> Yes, that's in the part of my reply you didn't quote:\n>\n> : It remains strange to have the function in executor\n> : implementation, but I don't offhand see a better place, so maybe it's\n> : okay where it is.\n\nGot it.\n\nI tried putting the prototype in miscadmin.h, and I now agree that\nthat's the best way to do it -- that's how I do it in the attached\nrevision. No other changes.\n\nThe v4-0001-Remove-hashagg_avoid_disk_plan-GUC.patch changes are\nsurprisingly complicated. It would be nice if you could take a look at\nthat aspect (or confirm that it's included in your review).\n\n-- \nPeter Geoghegan", "msg_date": "Mon, 27 Jul 2020 11:30:39 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: Default setting for enable_hashagg_disk" }, { "msg_contents": "On 2020-Jul-27, Peter Geoghegan wrote:\n\n> The v4-0001-Remove-hashagg_avoid_disk_plan-GUC.patch changes are\n> surprisingly complicated. It would be nice if you could take a look at\n> that aspect (or confirm that it's included in your review).\n\nI think you mean \"it replaces surprisingly complicated code with\nstraightforward code\". Right? 
Because in the previous code, there was\na lot of effort going into deciding whether the path needed to be\ngenerated; the new code just generates the path always.\n\nSimilarly the code to decide allow_hash in create_distinct_path, which\nused to be nontrivial, could (if you wanted) be simplified down to a\nsingle boolean condition. Previously, it was nontrivial only because\nit needed to consider memory usage -- not anymore.\n\nBut maybe you're talking about something more subtle that I'm just too\nblind to see.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Mon, 27 Jul 2020 15:52:32 -0400", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Default setting for enable_hashagg_disk" }, { "msg_contents": "On Mon, Jul 27, 2020 at 12:52 PM Alvaro Herrera\n<alvherre@2ndquadrant.com> wrote:\n> On 2020-Jul-27, Peter Geoghegan wrote:\n> > The v4-0001-Remove-hashagg_avoid_disk_plan-GUC.patch changes are\n> > surprisingly complicated. It would be nice if you could take a look at\n> > that aspect (or confirm that it's included in your review).\n>\n> I think you mean \"it replaces surprisingly complicated code with\n> straightforward code\". Right? Because in the previous code, there was\n> a lot of effort going into deciding whether the path needed to be\n> generated; the new code just generates the path always.\n\nYes, that's what I meant.\n\nIt's a bit tricky. For example, I have removed a redundant\n\"cheapest_total_path != NULL\" test in create_partial_grouping_paths()\n(two, actually). But these two tests were always redundant. I have to\nwonder if I missed the point. Though it seems likely that that was\njust an accident. 
Accretions of code over time made the code work like\nthat; nothing more.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Mon, 27 Jul 2020 13:01:11 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: Default setting for enable_hashagg_disk" }, { "msg_contents": "On Mon, 2020-07-27 at 11:30 -0700, Peter Geoghegan wrote:\n> The v4-0001-Remove-hashagg_avoid_disk_plan-GUC.patch changes are\n> surprisingly complicated. It would be nice if you could take a look\n> at\n> that aspect (or confirm that it's included in your review).\n\nI noticed that one of the conditionals, \"cheapest_total_path != NULL\",\nwas already redundant with the outer conditional before your patch. I\nguess that was just a mistake which your patch corrects along the way?\n\nAnyway, the patch looks good to me. We can have a separate discussion\nabout pessimizing the costing, if necessary.\n\nRegards,\n\tJeff Davis\n\n\n\n\n", "msg_date": "Mon, 27 Jul 2020 17:10:41 -0700", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": true, "msg_subject": "Re: Default setting for enable_hashagg_disk" }, { "msg_contents": "On Mon, Jul 27, 2020 at 5:10 PM Jeff Davis <pgsql@j-davis.com> wrote:\n> I noticed that one of the conditionals, \"cheapest_total_path != NULL\",\n> was already redundant with the outer conditional before your patch. I\n> guess that was just a mistake which your patch corrects along the way?\n\nMakes sense.\n\n> Anyway, the patch looks good to me. 
We can have a separate discussion\n> about pessimizing the costing, if necessary.\n\nPushed the hashagg_avoid_disk_plan patch -- thanks!\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Mon, 27 Jul 2020 17:55:16 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: Default setting for enable_hashagg_disk" }, { "msg_contents": "On Sat, 2020-07-25 at 17:52 -0700, Peter Geoghegan wrote:\n> BTW, your HLL patch ameliorates the problem with my extreme \"sorted\n> vs\n> random input\" test case from this morning [1] (the thing that I just\n> discussed with Tomas). Without the HLL patch the sorted case had 2424\n> batches. With the HLL patch it has 20. That at least seems like a\n> notable improvement.\n\nCommitted.\n\nThough I did notice some overhead for spilled-but-still-in-memory cases\ndue to addHyperLogLog() itself. It seems that it can be mostly\neliminated with [1], though I'll wait to see if there's an objection\nbecause that would affect other users of HLL.\n\nRegards,\n\tJeff Davis\n\n[1] \nhttps://www.postgresql.org/message-id/17068336d300fab76dd6131cbe1996df450dde38.camel@j-davis.com\n\n\n\n\n", "msg_date": "Wed, 29 Jul 2020 10:13:23 -0700", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": true, "msg_subject": "Re: Default setting for enable_hashagg_disk" }, { "msg_contents": "On Mon, Jul 27, 2020 at 5:55 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> Pushed the hashagg_avoid_disk_plan patch -- thanks!\n\nPushed the hash_mem_multiplier patch as well -- thanks again!\n\nAs I've said before, I am not totally opposed to adding a true escape\nhatch. That has not proven truly necessary just yet. 
For now, my\nworking assumption is that the problem on the table has been\naddressed.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Wed, 29 Jul 2020 17:40:08 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: Default setting for enable_hashagg_disk" }, { "msg_contents": "On Tue, Jul 14, 2020 at 12:52:23PM +0200, Daniel Gustafsson wrote:\n> > On 14 Jul 2020, at 01:58, Bruce Momjian <bruce@momjian.us> wrote:\n> \n> > I am creating a new thread to discuss the question raised by Alvaro of\n> > how many ALTER SYSTEM settings are lost during major upgrades. Do we\n> > properly document that users should migrate their postgresql.conf _and_\n> > postgresql.auto.conf files during major upgrades? I personally never\n> > thought of this until now.\n> \n> Transferring postgresql.conf is discussed to some degree in the documentation\n> for pg_upgrade:\n> \n> 11. Restore pg_hba.conf\n> \tIf you modified pg_hba.conf, restore its original settings. It might\n> \talso be necessary to adjust other configuration files in the new\n> \tcluster to match the old cluster, e.g. postgresql.conf.\n> \n> .. as well as upgrading via pg_dumpall:\n> \n> 4. Restore your previous pg_hba.conf and any postgresql.conf\n> modifications.\n> \n> One can argue whether those bulletpoints are sufficient for stressing the\n> importance, but it's at least mentioned. There is however no mention of\n> postgresql.auto.conf which clearly isn't helping anyone, so we should fix that.\n> \n> Taking that a step further, maybe we should mention additional config files\n> which could be included via include directives? 
There are tools out there who\n> avoid changing the users postgresql.conf by injecting an include directive\n> instead; they might've placed the included file alongside postgresql.conf.\n\nI have developed the attached pg_upgrade doc patch to address this.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n The usefulness of a cup is in its emptiness, Bruce Lee", "msg_date": "Tue, 25 Aug 2020 14:59:50 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: ALTER SYSTEM between upgrades" }, { "msg_contents": "> On 25 Aug 2020, at 21:30, Bruce Momjian <bruce@momjian.us> wrote:\n> \n> On Tue, Jul 14, 2020 at 12:52:23PM +0200, Daniel Gustafsson wrote:\n>>>> On 14 Jul 2020, at 01:58, Bruce Momjian <bruce@momjian.us> wrote:\n>>> \n>>> I am creating a new thread to discuss the question raised by Alvaro of\n>>> how many ALTER SYSTEM settings are lost during major upgrades. Do we\n>>> properly document that users should migrate their postgresql.conf _and_\n>>> postgresql.auto.conf files during major upgrades? I personally never\n>>> thought of this until now.\n>> \n>> Transferring postgresql.conf is discussed to some degree in the documentation\n>> for pg_upgrade:\n>> \n>> 11. Restore pg_hba.conf\n>> If you modified pg_hba.conf, restore its original settings. It might\n>> also be necessary to adjust other configuration files in the new\n>> cluster to match the old cluster, e.g. postgresql.conf.\n>> \n>> .. as well as upgrading via pg_dumpall:\n>> \n>> 4. Restore your previous pg_hba.conf and any postgresql.conf\n>> modifications.\n>> \n>> One can argue whether those bulletpoints are sufficient for stressing the\n>> importance, but it's at least mentioned. 
There is however no mention of\n>> postgresql.auto.conf which clearly isn't helping anyone, so we should fix that.\n>> \n>> Taking that a step further, maybe we should mention additional config files\n>> which could be included via include directives? There are tools out there who\n>> avoid changing the users postgresql.conf by injecting an include directive\n>> instead; they might've placed the included file alongside postgresql.conf.\n> \n> I have developed the attached pg_upgrade doc patch to address this.\n\nLGTM, thanks!\n\ncheers ./daniel\n\n", "msg_date": "Wed, 26 Aug 2020 00:22:25 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: ALTER SYSTEM between upgrades" }, { "msg_contents": "On Wed, Aug 26, 2020 at 12:22:25AM +0200, Daniel Gustafsson wrote:\n> >> One can argue whether those bulletpoints are sufficient for stressing the\n> >> importance, but it's at least mentioned. There is however no mention of\n> >> postgresql.auto.conf which clearly isn't helping anyone, so we should fix that.\n> >> \n> >> Taking that a step further, maybe we should mention additional config files\n> >> which could be included via include directives? There are tools out there who\n> >> avoid changing the users postgresql.conf by injecting an include directive\n> >> instead; they might've placed the included file alongside postgresql.conf.\n> > \n> > I have developed the attached pg_upgrade doc patch to address this.\n> \n> LGTM, thanks!\n\nPatch applied,\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n The usefulness of a cup is in its emptiness, Bruce Lee\n\n\n\n", "msg_date": "Mon, 31 Aug 2020 17:36:40 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: ALTER SYSTEM between upgrades" } ]
[ { "msg_contents": "Support FETCH FIRST WITH TIES\n\nWITH TIES is an option to the FETCH FIRST N ROWS clause (the SQL\nstandard's spelling of LIMIT), where you additionally get rows that\ncompare equal to the last of those N rows by the columns in the\nmandatory ORDER BY clause.\n\nThere was a proposal by Andrew Gierth to implement this functionality in\na more powerful way that would yield more features, but the other patch\nhad not been finished at this time, so we decided to use this one for\nnow in the spirit of incremental development.\n\nAuthor: Surafel Temesgen <surafel3000@gmail.com>\nReviewed-by: Álvaro Herrera <alvherre@alvh.no-ip.org>\nReviewed-by: Tomas Vondra <tomas.vondra@2ndquadrant.com>\nDiscussion: https://postgr.es/m/CALAY4q9ky7rD_A4vf=FVQvCGngm3LOes-ky0J6euMrg=_Se+ag@mail.gmail.com\nDiscussion: https://postgr.es/m/87o8wvz253.fsf@news-spur.riddles.org.uk\n\nBranch\n------\nmaster\n\nDetails\n-------\nhttps://git.postgresql.org/pg/commitdiff/357889eb17bb9c9336c4f324ceb1651da616fe57\n\nModified Files\n--------------\ndoc/src/sgml/ref/select.sgml | 15 +--\nsrc/backend/catalog/sql_features.txt | 2 +-\nsrc/backend/executor/nodeLimit.c | 160 +++++++++++++++++++++++++++---\nsrc/backend/nodes/copyfuncs.c | 7 ++\nsrc/backend/nodes/equalfuncs.c | 2 +\nsrc/backend/nodes/outfuncs.c | 7 ++\nsrc/backend/nodes/readfuncs.c | 6 ++\nsrc/backend/optimizer/plan/createplan.c | 45 ++++++++-\nsrc/backend/optimizer/plan/planner.c | 1 +\nsrc/backend/optimizer/util/pathnode.c | 2 +\nsrc/backend/parser/analyze.c | 21 ++--\nsrc/backend/parser/gram.y | 117 +++++++++++++++++-----\nsrc/backend/parser/parse_clause.c | 15 ++-\nsrc/backend/utils/adt/ruleutils.c | 27 ++++--\nsrc/include/catalog/catversion.h | 2 +-\nsrc/include/nodes/execnodes.h | 5 +\nsrc/include/nodes/nodes.h | 13 +++\nsrc/include/nodes/parsenodes.h | 2 +\nsrc/include/nodes/pathnodes.h | 1 +\nsrc/include/nodes/plannodes.h | 5 +\nsrc/include/optimizer/pathnode.h | 1 +\nsrc/include/optimizer/planmain.h | 5 
+-\nsrc/include/parser/parse_clause.h | 3 +-\nsrc/test/regress/expected/limit.out | 167 ++++++++++++++++++++++++++++++++\nsrc/test/regress/sql/limit.sql | 48 +++++++++\n25 files changed, 610 insertions(+), 69 deletions(-)", "msg_date": "Tue, 07 Apr 2020 20:25:43 +0000", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": true, "msg_subject": "pgsql: Support FETCH FIRST WITH TIES" }, { "msg_contents": "Hi Álvaro,\n\nOn Tue, Apr 7, 2020 at 10:28 PM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n>\n> Support FETCH FIRST WITH TIES\n>\n> WITH TIES is an option to the FETCH FIRST N ROWS clause (the SQL\n> standard's spelling of LIMIT), where you additionally get rows that\n> compare equal to the last of those N rows by the columns in the\n> mandatory ORDER BY clause.\n>\n> There was a proposal by Andrew Gierth to implement this functionality in\n> a more powerful way that would yield more features, but the other patch\n> had not been finished at this time, so we decided to use this one for\n> now in the spirit of incremental development.\n>\n> Author: Surafel Temesgen <surafel3000@gmail.com>\n> Reviewed-by: Álvaro Herrera <alvherre@alvh.no-ip.org>\n> Reviewed-by: Tomas Vondra <tomas.vondra@2ndquadrant.com>\n> Discussion: https://postgr.es/m/CALAY4q9ky7rD_A4vf=FVQvCGngm3LOes-ky0J6euMrg=_Se+ag@mail.gmail.com\n> Discussion: https://postgr.es/m/87o8wvz253.fsf@news-spur.riddles.org.uk\n>\n> Branch\n> ------\n> master\n>\n> Details\n> -------\n> https://git.postgresql.org/pg/commitdiff/357889eb17bb9c9336c4f324ceb1651da616fe57\n>\n> Modified Files\n> --------------\n> doc/src/sgml/ref/select.sgml | 15 +--\n> src/backend/catalog/sql_features.txt | 2 +-\n> src/backend/executor/nodeLimit.c | 160 +++++++++++++++++++++++++++---\n> src/backend/nodes/copyfuncs.c | 7 ++\n> src/backend/nodes/equalfuncs.c | 2 +\n> src/backend/nodes/outfuncs.c | 7 ++\n> src/backend/nodes/readfuncs.c | 6 ++\n> src/backend/optimizer/plan/createplan.c | 45 ++++++++-\n> 
src/backend/optimizer/plan/planner.c | 1 +\n> src/backend/optimizer/util/pathnode.c | 2 +\n> src/backend/parser/analyze.c | 21 ++--\n> src/backend/parser/gram.y | 117 +++++++++++++++++-----\n> src/backend/parser/parse_clause.c | 15 ++-\n> src/backend/utils/adt/ruleutils.c | 27 ++++--\n> src/include/catalog/catversion.h | 2 +-\n> src/include/nodes/execnodes.h | 5 +\n> src/include/nodes/nodes.h | 13 +++\n> src/include/nodes/parsenodes.h | 2 +\n> src/include/nodes/pathnodes.h | 1 +\n> src/include/nodes/plannodes.h | 5 +\n> src/include/optimizer/pathnode.h | 1 +\n> src/include/optimizer/planmain.h | 5 +-\n> src/include/parser/parse_clause.h | 3 +-\n> src/test/regress/expected/limit.out | 167 ++++++++++++++++++++++++++++++++\n> src/test/regress/sql/limit.sql | 48 +++++++++\n> 25 files changed, 610 insertions(+), 69 deletions(-)\n>\n\nFTR I now get the following when compiling with -Wimplicit-fallthrough -Werror:\n\nnodeLimit.c: In function ‘ExecLimit’:\nnodeLimit.c:136:7: error: this statement may fall through\n[-Werror=implicit-fallthrough=]\n 136 | if (ScanDirectionIsForward(direction))\n | ^\nnodeLimit.c:216:3: note: here\n 216 | case LIMIT_WINDOWEND_TIES:\n | ^~~~\n\nIt seems that this fall-through comment:\n\n[...]\n else\n {\n node->lstate = LIMIT_WINDOWEND_TIES;\n /* fall-through */\n }\n[...]\n\nIs not recognized by my compiler (gcc (Gentoo 9.3.0 p1) 9.3.0). If\nthat's something we should fix, PFA a naive patch for that.", "msg_date": "Sat, 11 Apr 2020 15:38:18 +0200", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pgsql: Support FETCH FIRST WITH TIES" }, { "msg_contents": "Julien Rouhaud <rjuju123@gmail.com> writes:\n> On Tue, Apr 7, 2020 at 10:28 PM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n>> Support FETCH FIRST WITH TIES\n\n> FTR I now get the following when compiling with -Wimplicit-fallthrough -Werror:\n\nYeah, assorted buildfarm animals are mentioning that too. 
I wonder\nif we should add that to the default warning options selected by\nconfigure? I don't remember if that's been discussed before.\n\n> It seems that this fall-through comment:\n> /* fall-through */\n> Is not recognized by my compiler (gcc (Gentoo 9.3.0 p1) 9.3.0). If\n> that's something we should fix, PFA a naive patch for that.\n\nHmm, I feel like this logic is baroque enough to need more of a rewrite\nthan that :-(. But not sure exactly what would be better, so your\npatch seems reasonable for now. The comments could use some help too.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 11 Apr 2020 14:47:34 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: pgsql: Support FETCH FIRST WITH TIES" }, { "msg_contents": "On Sat, Apr 11, 2020 at 02:47:34PM -0400, Tom Lane wrote:\n> Julien Rouhaud <rjuju123@gmail.com> writes:\n> > On Tue, Apr 7, 2020 at 10:28 PM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n> >> Support FETCH FIRST WITH TIES\n> \n> > FTR I now get the following when compiling with -Wimplicit-fallthrough -Werror:\n> \n> Yeah, assorted buildfarm animals are mentioning that too. I wonder\n> if we should add that to the default warning options selected by\n> configure? I don't remember if that's been discussed before.\n> \n\nI'm all for it. It seems like a trap easy to catch up early, and we do want to\nenforce it anyway. I'm attaching a simple patch for that if needed, hopefully\nwith the correct autoconf version.\n\n> > It seems that this fall-through comment:\n> > /* fall-through */\n> > Is not recognized by my compiler (gcc (Gentoo 9.3.0 p1) 9.3.0). If\n> > that's something we should fix, PFA a naive patch for that.\n> \n> Hmm, I feel like this logic is baroque enough to need more of a rewrite\n> than that :-(. But not sure exactly what would be better, so your\n> patch seems reasonable for now. 
The comments could use some help too.\n\nYes I just checked the state machine to make sure that the fallthrough was\nexpected, but the comments are now way better, thanks!", "msg_date": "Sun, 12 Apr 2020 10:18:25 +0200", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Add \"-Wimplicit-fallthrough\" to default flags (was Re: pgsql:\n Support FETCH FIRST WITH TIES)" }, { "msg_contents": "Julien Rouhaud <rjuju123@gmail.com> writes:\n> On Sat, Apr 11, 2020 at 02:47:34PM -0400, Tom Lane wrote:\n>> Yeah, assorted buildfarm animals are mentioning that too. I wonder\n>> if we should add that to the default warning options selected by\n>> configure? I don't remember if that's been discussed before.\n\n> I'm all for it. It seems like a trap easy to catch up early, and we do want to\n> enforce it anyway. I'm attaching a simple patch for that if needed, hopefully\n> with the correct autoconf version.\n\nPoking around in the archives, it seems like the only previous formal\nproposal to add -Wimplicit-fallthrough was in the context of a much\nmore aggressive proposal to make a lot of non-Wall warnings into\nerrors [1], which people did not like.\n\n-Wimplicit-fallthrough does have some issues, eg it seems that it's\napplied at a stage where gcc hasn't yet figured out that elog(ERROR)\ndoesn't return, so you need to add breaks after those. But we had\nsort of agreed that we could have it on-by-default in one relevant\ndiscussion [2], and then failed to pull the trigger.\n\nIf we do this, I suggest we use -Wimplicit-fallthrough=4, which\nuses a more-restrictive-than-default definition of how a \"fallthrough\"\ncomment can be spelled. The default, per the gcc manual, is\n\n * -Wimplicit-fallthrough=3 case sensitively matches one of the\n following regular expressions:\n\n *<\"-fallthrough\">\n *<\"@fallthrough@\">\n *<\"lint -fallthrough[ \\t]*\">\n *<\"[ \\t.!]*(ELSE,? |INTENTIONAL(LY)? 
)?FALL(S |\n |-)?THR(OUGH|U)[ \\t.!]*(-[^\\n\\r]*)?\">\n *<\"[ \\t.!]*(Else,? |Intentional(ly)? )?Fall((s |\n |-)[Tt]|t)hr(ough|u)[ \\t.!]*(-[^\\n\\r]*)?\">\n *<\"[ \\t.!]*([Ee]lse,? |[Ii]ntentional(ly)? )?fall(s |\n |-)?thr(ough|u)[ \\t.!]*(-[^\\n\\r]*)?\">\n\nwhich to my eyes is not exactly encouraging a project-standard style\nfor these, plus it seems like it might accept some things we'd rather\nit didn't. The only more-restrictive alternative, short of disabling\nthe comments altogether, is\n\n * -Wimplicit-fallthrough=4 case sensitively matches one of the\n following regular expressions:\n\n *<\"-fallthrough\">\n *<\"@fallthrough@\">\n *<\"lint -fallthrough[ \\t]*\">\n *<\"[ \\t]*FALLTHR(OUGH|U)[ \\t]*\">\n\nThoughts?\n\n\t\t\tregards, tom lane\n\n[1] https://www.postgresql.org/message-id/B9FB1155-B39D-43C9-A7E6-B67E1C59E4CE%40gmail.com\n[2] https://www.postgresql.org/message-id/flat/E1fDenm-0000C8-IJ%40gemulon.postgresql.org\n\n\n", "msg_date": "Sun, 12 Apr 2020 10:55:20 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Add \"-Wimplicit-fallthrough\" to default flags (was Re: pgsql:\n Support FETCH FIRST WITH TIES)" }, { "msg_contents": "\n\n> On Apr 12, 2020, at 7:55 AM, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> \n> Poking around in the archives, it seems like the only previous formal\n> proposal to add -Wimplicit-fallthrough was in the context of a much\n> more aggressive proposal to make a lot of non-Wall warnings into\n> errors [1], which people did not like.\n\nThat was from me.\n\n> The only more-restrictive alternative, short of disabling\n> the comments altogether, is\n> \n> * -Wimplicit-fallthrough=4 case sensitively matches one of the\n> following regular expressions:\n> \n> *<\"-fallthrough\">\n> *<\"@fallthrough@\">\n> *<\"lint -fallthrough[ \\t]*\">\n> *<\"[ \\t]*FALLTHR(OUGH|U)[ \\t]*\">\n> \n> Thoughts?\n\nNaturally, I'm +1 for this.\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe 
Enterprise PostgreSQL Company\n\n\n\n\n\n", "msg_date": "Sun, 12 Apr 2020 08:25:21 -0700", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: Add \"-Wimplicit-fallthrough\" to default flags (was Re: pgsql:\n Support FETCH FIRST WITH TIES)" }, { "msg_contents": "On Sun, Apr 12, 2020 at 5:25 PM Mark Dilger\n<mark.dilger@enterprisedb.com> wrote:\n>\n> > On Apr 12, 2020, at 7:55 AM, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >\n> > Poking around in the archives, it seems like the only previous formal\n> > proposal to add -Wimplicit-fallthrough was in the context of a much\n> > more aggressive proposal to make a lot of non-Wall warnings into\n> > errors [1], which people did not like.\n>\n> That was from me.\n>\n> > The only more-restrictive alternative, short of disabling\n> > the comments altogether, is\n> >\n> > * -Wimplicit-fallthrough=4 case sensitively matches one of the\n> > following regular expressions:\n> >\n> > *<\"-fallthrough\">\n> > *<\"@fallthrough@\">\n> > *<\"lint -fallthrough[ \\t]*\">\n> > *<\"[ \\t]*FALLTHR(OUGH|U)[ \\t]*\">\n> >\n> > Thoughts?\n>\n> Naturally, I'm +1 for this.\n\n+1 too, obviously.", "msg_date": "Sun, 12 Apr 2020 17:42:28 +0200", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Add \"-Wimplicit-fallthrough\" to default flags (was Re: pgsql:\n Support FETCH FIRST WITH TIES)" }, { "msg_contents": "Do we intend to see this done in the current cycle?\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Tue, 21 Apr 2020 19:31:57 -0400", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Add \"-Wimplicit-fallthrough\" to default flags (was Re: pgsql:\n Support FETCH FIRST WITH TIES)" }, { "msg_contents": "Alvaro Herrera <alvherre@2ndquadrant.com> writes:\n> Do we intend to see this done in the current cycle?\n\nI 
don't have an objection to doing it now. It's just a new compiler\nwarning option, it shouldn't be able to break any code. (Plus there's\nplenty of time to revert, if somehow it causes a problem.)\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 21 Apr 2020 22:46:05 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Add \"-Wimplicit-fallthrough\" to default flags (was Re: pgsql:\n Support FETCH FIRST WITH TIES)" }, { "msg_contents": "On 2020-Apr-12, Tom Lane wrote:\n\n> The only more-restrictive alternative, short of disabling\n> the comments altogether, is\n> \n> * -Wimplicit-fallthrough=4 case sensitively matches one of the\n> following regular expressions:\n> \n> *<\"-fallthrough\">\n> *<\"@fallthrough@\">\n> *<\"lint -fallthrough[ \\t]*\">\n> *<\"[ \\t]*FALLTHR(OUGH|U)[ \\t]*\">\n> \n> Thoughts?\n\nThis doesn't allow whitespace between \"fall\" and \"through\", which means\nwe generate 217 such warnings currently. Or we can just use\n-Wimplicit-fallthrough=3, which does allow whitespace (among other\ndetritus).\n\nFor my own reference, the manual is at\nhttps://gcc.gnu.org/onlinedocs/gcc-8.3.0/gcc/Warning-Options.html#index-Wimplicit-fallthrough\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Wed, 6 May 2020 19:39:03 -0400", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Add \"-Wimplicit-fallthrough\" to default flags (was Re: pgsql:\n Support FETCH FIRST WITH TIES)" }, { "msg_contents": "On 2020-May-06, Alvaro Herrera wrote:\n\n> This doesn't allow whitespace between \"fall\" and \"through\", which means\n> we generate 217 such warnings currently. Or we can just use\n> -Wimplicit-fallthrough=3, which does allow whitespace (among other\n> detritus).\n\nIf we're OK with patching all those places, I volunteer to do so. Any\nobjections? 
Or I can keep it at level 3, which can be done with minimal\npatching.\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Sun, 10 May 2020 23:59:25 -0400", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Add \"-Wimplicit-fallthrough\" to default flags (was Re: pgsql:\n Support FETCH FIRST WITH TIES)" }, { "msg_contents": "Alvaro Herrera <alvherre@2ndquadrant.com> writes:\n> On 2020-May-06, Alvaro Herrera wrote:\n>> This doesn't allow whitespace between \"fall\" and \"through\", which means\n>> we generate 217 such warnings currently. Or we can just use\n>> -Wimplicit-fallthrough=3, which does allow whitespace (among other\n>> detritus).\n\n> If we're OK with patching all those places, I volunteer to do so. Any\n> objections? Or I can keep it at level 3, which can be done with minimal\n> patching.\n\nIf we're gonna do it at all, I think we should go for the level 4\nwarnings. Level 3's idea of a fallthrough comment is too liberal\nfor my tastes...\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 11 May 2020 00:47:58 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Add \"-Wimplicit-fallthrough\" to default flags (was Re: pgsql:\n Support FETCH FIRST WITH TIES)" }, { "msg_contents": "On Mon, May 11, 2020 at 6:47 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Alvaro Herrera <alvherre@2ndquadrant.com> writes:\n> > On 2020-May-06, Alvaro Herrera wrote:\n> >> This doesn't allow whitespace between \"fall\" and \"through\", which means\n> >> we generate 217 such warnings currently. Or we can just use\n> >> -Wimplicit-fallthrough=3, which does allow whitespace (among other\n> >> detritus).\n>\n> > If we're OK with patching all those places, I volunteer to do so. Any\n> > objections? 
Or I can keep it at level 3, which can be done with minimal\n> > patching.\n>\n> If we're gonna do it at all, I think we should go for the level 4\n> warnings. Level 3's idea of a fallthrough comment is too liberal\n> for my tastes...\n\nHere's a patch that also takes care of cleaning all warning due to the\nlevel 4 setting (at least the one I got with my other config options).\nI went with \"FALLTHRU\" as this seemed more used.\n\nNote that timezone/zic.c would also have to be changed.", "msg_date": "Mon, 11 May 2020 13:29:02 +0200", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Add \"-Wimplicit-fallthrough\" to default flags (was Re: pgsql:\n Support FETCH FIRST WITH TIES)" }, { "msg_contents": "\nOn 5/11/20 7:29 AM, Julien Rouhaud wrote:\n> On Mon, May 11, 2020 at 6:47 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> Alvaro Herrera <alvherre@2ndquadrant.com> writes:\n>>> On 2020-May-06, Alvaro Herrera wrote:\n>>>> This doesn't allow whitespace between \"fall\" and \"through\", which means\n>>>> we generate 217 such warnings currently. Or we can just use\n>>>> -Wimplicit-fallthrough=3, which does allow whitespace (among other\n>>>> detritus).\n>>> If we're OK with patching all those places, I volunteer to do so. Any\n>>> objections? Or I can keep it at level 3, which can be done with minimal\n>>> patching.\n>> If we're gonna do it at all, I think we should go for the level 4\n>> warnings. Level 3's idea of a fallthrough comment is too liberal\n>> for my tastes...\n> Here's a patch that also takes care of cleaning all warning due to the\n> level 4 setting (at least the one I got with my other config options).\n> I went with \"FALLTHRU\" as this seemed more used.\n>\n> Note that timezone/zic.c would also have to be changed.\n\n\n\nSince this is external code maybe we should leave that at level 3? 
I\nthink that should be doable via a Makefile override.\n\n\ncheers\n\n\nandrew\n\n\n-- \nAndrew Dunstan https://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n\n", "msg_date": "Mon, 11 May 2020 08:07:27 -0400", "msg_from": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Add \"-Wimplicit-fallthrough\" to default flags (was Re: pgsql:\n Support FETCH FIRST WITH TIES)" }, { "msg_contents": "On Mon, May 11, 2020 at 2:07 PM Andrew Dunstan\n<andrew.dunstan@2ndquadrant.com> wrote:\n>\n>\n> On 5/11/20 7:29 AM, Julien Rouhaud wrote:\n> > On Mon, May 11, 2020 at 6:47 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >> Alvaro Herrera <alvherre@2ndquadrant.com> writes:\n> >>> On 2020-May-06, Alvaro Herrera wrote:\n> >>>> This doesn't allow whitespace between \"fall\" and \"through\", which means\n> >>>> we generate 217 such warnings currently. Or we can just use\n> >>>> -Wimplicit-fallthrough=3, which does allow whitespace (among other\n> >>>> detritus).\n> >>> If we're OK with patching all those places, I volunteer to do so. Any\n> >>> objections? Or I can keep it at level 3, which can be done with minimal\n> >>> patching.\n> >> If we're gonna do it at all, I think we should go for the level 4\n> >> warnings. Level 3's idea of a fallthrough comment is too liberal\n> >> for my tastes...\n> > Here's a patch that also takes care of cleaning all warning due to the\n> > level 4 setting (at least the one I got with my other config options).\n> > I went with \"FALLTHRU\" as this seemed more used.\n> >\n> > Note that timezone/zic.c would also have to be changed.\n>\n>\n>\n> Since this is external code maybe we should leave that at level 3? I\n> think that should be doable via a Makefile override.\n\nYes that was my concern. 
The best way I found to avoid changing zic.c\nis something like that in src/timezone/Makefile, before the zic\ntarget:\n\nifneq (,$(filter -Wimplicit-fallthrough=4,$(CFLAGS)))\nCFLAGS := $(filter-out -Wimplicit-fallthrough=4,$(CFLAGS))\nCFLAGS += '-Wimplicit-fallthrough=3'\nendif\n\n\n", "msg_date": "Mon, 11 May 2020 14:53:59 +0200", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Add \"-Wimplicit-fallthrough\" to default flags (was Re: pgsql:\n Support FETCH FIRST WITH TIES)" }, { "msg_contents": "Julien Rouhaud <rjuju123@gmail.com> writes:\n> Note that timezone/zic.c would also have to be changed.\n\nWhy? It uses \"fallthrough\" which is a legal spelling per level 4.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 11 May 2020 09:41:20 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Add \"-Wimplicit-fallthrough\" to default flags (was Re: pgsql:\n Support FETCH FIRST WITH TIES)" }, { "msg_contents": "On Mon, May 11, 2020 at 3:41 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Julien Rouhaud <rjuju123@gmail.com> writes:\n> > Note that timezone/zic.c would also have to be changed.\n>\n> Why? It uses \"fallthrough\" which is a legal spelling per level 4.\n\nGCC documentation mentions [ \\t]*FALLTHR(OUGH|U)[ \\t]* for level 4\n(out of the view other alternatives), which AFAICT is case sensitive\n(level 3 has fall(s | |-)?thr(ough|u)[ \\t.!]*(-[^\\n\\r]*)?).\n\n\n", "msg_date": "Mon, 11 May 2020 16:07:54 +0200", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Add \"-Wimplicit-fallthrough\" to default flags (was Re: pgsql:\n Support FETCH FIRST WITH TIES)" }, { "msg_contents": "Julien Rouhaud <rjuju123@gmail.com> writes:\n> On Mon, May 11, 2020 at 3:41 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> Why? 
It uses \"fallthrough\" which is a legal spelling per level 4.\n\n> GCC documentation mentions [ \\t]*FALLTHR(OUGH|U)[ \\t]* for level 4\n> (out of the view other alternatives), which AFAICT is case sensitive\n> (level 3 has fall(s | |-)?thr(ough|u)[ \\t.!]*(-[^\\n\\r]*)?).\n\nOh, I'd missed that that was case sensitive. Ugh --- that seems\nunreasonable. Maybe we'd better settle for level 3 after all;\nI don't think there's much room to doubt the intentions of a\ncomment spelled that way.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 11 May 2020 10:40:23 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Add \"-Wimplicit-fallthrough\" to default flags (was Re: pgsql:\n Support FETCH FIRST WITH TIES)" }, { "msg_contents": "On Mon, May 11, 2020 at 4:40 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Julien Rouhaud <rjuju123@gmail.com> writes:\n> > On Mon, May 11, 2020 at 3:41 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >> Why? It uses \"fallthrough\" which is a legal spelling per level 4.\n>\n> > GCC documentation mentions [ \\t]*FALLTHR(OUGH|U)[ \\t]* for level 4\n> > (out of the view other alternatives), which AFAICT is case sensitive\n> > (level 3 has fall(s | |-)?thr(ough|u)[ \\t.!]*(-[^\\n\\r]*)?).\n>\n> Oh, I'd missed that that was case sensitive. Ugh --- that seems\n> unreasonable. 
Maybe we'd better settle for level 3 after all;\n> I don't think there's much room to doubt the intentions of a\n> comment spelled that way.\n\nAgreed.\n\n\n", "msg_date": "Mon, 11 May 2020 21:46:32 +0200", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Add \"-Wimplicit-fallthrough\" to default flags (was Re: pgsql:\n Support FETCH FIRST WITH TIES)" }, { "msg_contents": "On 2020-May-11, Julien Rouhaud wrote:\n\n> On Mon, May 11, 2020 at 4:40 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >\n> > Julien Rouhaud <rjuju123@gmail.com> writes:\n> > > On Mon, May 11, 2020 at 3:41 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > >> Why? It uses \"fallthrough\" which is a legal spelling per level 4.\n> >\n> > > GCC documentation mentions [ \\t]*FALLTHR(OUGH|U)[ \\t]* for level 4\n> > > (out of the view other alternatives), which AFAICT is case sensitive\n> > > (level 3 has fall(s | |-)?thr(ough|u)[ \\t.!]*(-[^\\n\\r]*)?).\n> >\n> > Oh, I'd missed that that was case sensitive. Ugh --- that seems\n> > unreasonable. Maybe we'd better settle for level 3 after all;\n> > I don't think there's much room to doubt the intentions of a\n> > comment spelled that way.\n> \n> Agreed.\n\nPushed, thanks.\n\nI ended up using level 4 and dialling back to 3 for zic.c only\n(different make trickery though). I also settled on FALLTHROUGH rather\nthan FALLTHRU because the latter seems ugly as a spelling to me. I'm\nnot a fan of the uppercase, but the alternative would be to add a - or\n@s.\n\nI get no warnings with this (gcc 8), but ccache seems to save warnings\nin one run so that they can be thrown in a later one. 
I'm not sure what\nto make of that, but ccache -d proved that beyond reasonable doubt and\nccache -clear got rid of the lot.\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Tue, 12 May 2020 16:14:51 -0400", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Add \"-Wimplicit-fallthrough\" to default flags (was Re: pgsql:\n Support FETCH FIRST WITH TIES)" }, { "msg_contents": "On 2020-May-12, Alvaro Herrera wrote:\n\n> I get no warnings with this (gcc 8), but ccache seems to save warnings\n> in one run so that they can be thrown in a later one. I'm not sure what\n> to make of that, but ccache -d proved that beyond reasonable doubt and\n> ccache -clear got rid of the lot.\n\nFixed one straggler in contrib, and while testing it I realized why\nccache doesn't pay attention to the changes I was doing in the file:\nccache compares the *preprocessed* version of the file and only if that\ndiffers from the version that was cached last, ccache sends the new one\nto the compiler; and of course these comments are not present in the\npreprocessed version, so changing only the comment accomplishes nothing.\nYou have to touch one byte outside of any comments.\n\nI bet this is going to bite someone ... 
maybe we'd be better off going\nall the way to -Wimplicit-fallthrough=5 and use the\n__attribute__((fallthrough)) stuff instead.\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Tue, 12 May 2020 16:25:24 -0400", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Add \"-Wimplicit-fallthrough\" to default flags (was Re: pgsql:\n Support FETCH FIRST WITH TIES)" }, { "msg_contents": "Alvaro Herrera <alvherre@2ndquadrant.com> writes:\n> I ended up using level 4 and dialling back to 3 for zic.c only\n> (different make trickery though).\n\nMeh ... if we're going to use level 4, I'm inclined to just change zic.c\nto match. It's not like we're using the upstream code verbatim anyway.\nWe could easily add s/fallthrough/FALLTHROUGH/ to the conversion recipe.\n\n> I get no warnings with this (gcc 8), but ccache seems to save warnings\n> in one run so that they can be thrown in a later one. I'm not sure what\n> to make of that, but ccache -d proved that beyond reasonable doubt and\n> ccache -clear got rid of the lot.\n\nSounds like a ccache bug: maybe it's not accounting for different\nfallthrough warning levels. 
ccache knows a *ton* about gcc options,\nso I'd not be surprised if it's doing something special with this one.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 12 May 2020 16:59:15 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Add \"-Wimplicit-fallthrough\" to default flags (was Re: pgsql:\n Support FETCH FIRST WITH TIES)" }, { "msg_contents": "Alvaro Herrera <alvherre@2ndquadrant.com> writes:\n> Fixed one straggler in contrib, and while testing it I realized why\n> ccache doesn't pay attention to the changes I was doing in the file:\n> ccache compares the *preprocessed* version of the file and only if that\n> differs from the version that was cached last, ccache sends the new one\n> to the compiler; and of course these comments are not present in the\n> preprocessed version, so changing only the comment accomplishes nothing.\n> You have to touch one byte outside of any comments.\n\nUgh. So the only way ccache could avoid this is to drop the\npreprocessed-file comparison check if -Wimplicit-fallthrough is on.\nDoesn't really sound like something we'd want to ask them to do.\n\n> I bet this is going to bite someone ... maybe we'd be better off going\n> all the way to -Wimplicit-fallthrough=5 and use the\n> __attribute__((fallthrough)) stuff instead.\n\nI'm not really in favor of the __attribute__ solution --- seems too\ngcc-specific. FALLTHROUGH-type comments are understood by other\nsorts of tools besides gcc.\n\nIn practice, it doesn't seem like this'll be a huge problem once\nwe're past the initial fixup stage. 
We can revisit it later if\nthat prediction proves wrong, of course.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 12 May 2020 17:12:51 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Add \"-Wimplicit-fallthrough\" to default flags (was Re: pgsql:\n Support FETCH FIRST WITH TIES)" }, { "msg_contents": "At Tue, 12 May 2020 17:12:51 -0400, Tom Lane <tgl@sss.pgh.pa.us> wrote in \n> Alvaro Herrera <alvherre@2ndquadrant.com> writes:\n> > Fixed one straggler in contrib, and while testing it I realized why\n> > ccache doesn't pay attention to the changes I was doing in the file:\n> > ccache compares the *preprocessed* version of the file and only if that\n> > differs from the version that was cached last, ccache sends the new one\n> > to the compiler; and of course these comments are not present in the\n> > preprocessed version, so changing only the comment accomplishes nothing.\n> > You have to touch one byte outside of any comments.\n> \n> Ugh. So the only way ccache could avoid this is to drop the\n> preprocessed-file comparison check if -Wimplicit-fallthrough is on.\n> Doesn't really sound like something we'd want to ask them to do.\n> \n> > I bet this is going to bite someone ... maybe we'd be better off going\n> > all the way to -Wimplicit-fallthrough=5 and use the\n> > __attribute__((fallthrough)) stuff instead.\n> \n> I'm not really in favor of the __attribute__ solution --- seems too\n> gcc-specific. FALLTHROUGH-type comments are understood by other\n> sorts of tools besides gcc.\n> \n> In practice, it doesn't seem like this'll be a huge problem once\n> we're past the initial fixup stage. 
We can revisit it later if\n> that prediction proves wrong, of course.\n\nFWIW, I got a warning for jsonpath_gram.c.\n\n> jsonpath_gram.c:1026:16: warning: this statement may fall through [-Wimplicit-fallthrough=]\n> if (*++yyp != '\\\\')\n> ^\n> jsonpath_gram.c:1029:11: note: here\n> default:\n> ^~~~~~~\n\njsonpath_gram.c:1025\n> case '\\\\':\n> if (*++yyp != '\\\\')\n> goto do_not_strip_quotes;\n> /* Fall through. */\n> default:\n\nIt is generated code by bison. \n\n$ bison --version\nbison (GNU Bison) 3.0.4\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Wed, 13 May 2020 17:13:07 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Add \"-Wimplicit-fallthrough\" to default flags" }, { "msg_contents": "On Wed, May 13, 2020 at 4:13 PM Kyotaro Horiguchi <horikyota.ntt@gmail.com>\nwrote:\n\n> At Tue, 12 May 2020 17:12:51 -0400, Tom Lane <tgl@sss.pgh.pa.us> wrote in\n> > Alvaro Herrera <alvherre@2ndquadrant.com> writes:\n> > > Fixed one straggler in contrib, and while testing it I realized why\n> > > ccache doesn't pay attention to the changes I was doing in the file:\n> > > ccache compares the *preprocessed* version of the file and only if that\n> > > differs from the version that was cached last, ccache sends the new one\n> > > to the compiler; and of course these comments are not present in the\n> > > preprocessed version, so changing only the comment accomplishes\n> nothing.\n> > > You have to touch one byte outside of any comments.\n> >\n> > Ugh. So the only way ccache could avoid this is to drop the\n> > preprocessed-file comparison check if -Wimplicit-fallthrough is on.\n> > Doesn't really sound like something we'd want to ask them to do.\n> >\n> > > I bet this is going to bite someone ... 
maybe we'd be better off going\n> > > all the way to -Wimplicit-fallthrough=5 and use the\n> > > __attribute__((fallthrough)) stuff instead.\n> >\n> > I'm not really in favor of the __attribute__ solution --- seems too\n> > gcc-specific. FALLTHROUGH-type comments are understood by other\n> > sorts of tools besides gcc.\n> >\n> > In practice, it doesn't seem like this'll be a huge problem once\n> > we're past the initial fixup stage. We can revisit it later if\n> > that prediction proves wrong, of course.\n>\n> FWIW, I got a warning for jsonpath_gram.c.\n>\n> > jsonpath_gram.c:1026:16: warning: this statement may fall through\n> [-Wimplicit-fallthrough=]\n> > if (*++yyp != '\\\\')\n> > ^\n> > jsonpath_gram.c:1029:11: note: here\n> > default:\n> > ^~~~~~~\n>\n> jsonpath_gram.c:1025\n> > case '\\\\':\n> > if (*++yyp != '\\\\')\n> > goto do_not_strip_quotes;\n> > /* Fall through. */\n> > default:\n>\n> It is generated code by bison.\n>\n> $ bison --version\n> bison (GNU Bison) 3.0.4\n>\n>\nI just found this just serval minutes ago. Upgrading your bison to the\nlatest\nversion (3.6) is ok. I'd like we have a better way to share this knowledge\nthrough.\nI spend ~30 minutes to troubleshooting this issue.\n\nBest Regards\nAndy Fan\n\nOn Wed, May 13, 2020 at 4:13 PM Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote:At Tue, 12 May 2020 17:12:51 -0400, Tom Lane <tgl@sss.pgh.pa.us> wrote in \n> Alvaro Herrera <alvherre@2ndquadrant.com> writes:\n> > Fixed one straggler in contrib, and while testing it I realized why\n> > ccache doesn't pay attention to the changes I was doing in the file:\n> > ccache compares the *preprocessed* version of the file and only if that\n> > differs from the version that was cached last, ccache sends the new one\n> > to the compiler; and of course these comments are not present in the\n> > preprocessed version, so changing only the comment accomplishes nothing.\n> > You have to touch one byte outside of any comments.\n> \n> Ugh.  
So the only way ccache could avoid this is to drop the\n> preprocessed-file comparison check if -Wimplicit-fallthrough is on.\n> Doesn't really sound like something we'd want to ask them to do.\n> \n> > I bet this is going to bite someone ... maybe we'd be better off going\n> > all the way to -Wimplicit-fallthrough=5 and use the\n> > __attribute__((fallthrough)) stuff instead.\n> \n> I'm not really in favor of the __attribute__ solution --- seems too\n> gcc-specific.  FALLTHROUGH-type comments are understood by other\n> sorts of tools besides gcc.\n> \n> In practice, it doesn't seem like this'll be a huge problem once\n> we're past the initial fixup stage.  We can revisit it later if\n> that prediction proves wrong, of course.\n\nFWIW, I got a warning for jsonpath_gram.c.\n\n> jsonpath_gram.c:1026:16: warning: this statement may fall through [-Wimplicit-fallthrough=]\n>              if (*++yyp != '\\\\')\n>                 ^\n> jsonpath_gram.c:1029:11: note: here\n>            default:\n>            ^~~~~~~\n\njsonpath_gram.c:1025\n>           case '\\\\':\n>             if (*++yyp != '\\\\')\n>               goto do_not_strip_quotes;\n>             /* Fall through.  */\n>           default:\n\nIt is generated code by bison. \n\n$ bison --version\nbison (GNU Bison) 3.0.4I just found this just serval minutes ago.  Upgrading your bison to the latest version (3.6) is ok. I'd like we have a better way to share this knowledge through.I spend ~30 minutes to troubleshooting this issue. 
Best RegardsAndy Fan", "msg_date": "Wed, 13 May 2020 16:17:50 +0800", "msg_from": "Andy Fan <zhihui.fan1213@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Add \"-Wimplicit-fallthrough\" to default flags" }, { "msg_contents": "At Wed, 13 May 2020 16:17:50 +0800, Andy Fan <zhihui.fan1213@gmail.com> wrote in \n> On Wed, May 13, 2020 at 4:13 PM Kyotaro Horiguchi <horikyota.ntt@gmail.com>\n> wrote:\n> > > jsonpath_gram.c:1026:16: warning: this statement may fall through\n> > [-Wimplicit-fallthrough=]\n> > > if (*++yyp != '\\\\')\n> > > ^\n> > > jsonpath_gram.c:1029:11: note: here\n> > > default:\n...\n> > It is generated code by bison.\n> >\n> > $ bison --version\n> > bison (GNU Bison) 3.0.4\n> >\n> >\n> I just found this just serval minutes ago. Upgrading your bison to the\n> latest\n> version (3.6) is ok. I'd like we have a better way to share this knowledge\n> through.\n> I spend ~30 minutes to troubleshooting this issue.\n\nThanks. I'm happy to know that! But AFAICS 3.0.4 is the current\nversion of bison in AppStream and PowerTools of CentOS8...\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Wed, 13 May 2020 19:15:58 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Add \"-Wimplicit-fallthrough\" to default flags" }, { "msg_contents": "On Wed, May 13, 2020 at 12:16 PM Kyotaro Horiguchi\n<horikyota.ntt@gmail.com> wrote:\n>\n> At Wed, 13 May 2020 16:17:50 +0800, Andy Fan <zhihui.fan1213@gmail.com> wrote in\n> > On Wed, May 13, 2020 at 4:13 PM Kyotaro Horiguchi <horikyota.ntt@gmail.com>\n> > wrote:\n> > > > jsonpath_gram.c:1026:16: warning: this statement may fall through\n> > > [-Wimplicit-fallthrough=]\n> > > > if (*++yyp != '\\\\')\n> > > > ^\n> > > > jsonpath_gram.c:1029:11: note: here\n> > > > default:\n> ...\n> > > It is generated code by bison.\n> > >\n> > > $ bison --version\n> > > bison (GNU Bison) 3.0.4\n> > >\n> > >\n> > I just found this 
just serval minutes ago. Upgrading your bison to the\n> > latest\n> > version (3.6) is ok. I'd like we have a better way to share this knowledge\n> > through.\n> > I spend ~30 minutes to troubleshooting this issue.\n>\n> Thanks. I'm happy to know that! But AFAICS 3.0.4 is the current\n> version of bison in AppStream and PowerTools of CentOS8...\n\nFTR I'm using bison 3.5 and I didn't hit any issue. However that may\nbe because of ccache, as mentioned by Alvaro.\n\n\n", "msg_date": "Wed, 13 May 2020 13:35:18 +0200", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Add \"-Wimplicit-fallthrough\" to default flags" }, { "msg_contents": "Andy Fan <zhihui.fan1213@gmail.com> writes:\n>> FWIW, I got a warning for jsonpath_gram.c.\n\nUgh. Confirmed here on Fedora 30 (bison 3.0.5).\n\n> I just found this just serval minutes ago. Upgrading your bison to the\n> latest version (3.6) is ok. I'd like we have a better way to share this\n> knowledge through. I spend ~30 minutes to troubleshooting this issue.\n\nI fear that is going to mean that we revert this patch.\nWe are *NOT* moving the minimum bison requirement for this,\nespecially not to a bleeding-edge bison version.\n\nAlternatively, it might work to go back down to warning level 3;\nI see that the code in question has\n\n\t/* Fall through. */\n\nwhich I believe would work at the lower warning level. But that\nraises the question of how far back bison generates code that\nis clean --- and, again, I'm not willing to move the minimum\nbison requirement. 
(On the other hand, if you have an old bison,\nyou likely also have an old gcc that doesn't know this warning\nswitch, so maybe it'd be all right in practice?)\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 13 May 2020 10:02:34 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Add \"-Wimplicit-fallthrough\" to default flags" }, { "msg_contents": "On 2020-May-13, Tom Lane wrote:\n\n> Andy Fan <zhihui.fan1213@gmail.com> writes:\n> >> FWIW, I got a warning for jsonpath_gram.c.\n> \n> Ugh.  Confirmed here on Fedora 30 (bison 3.0.5).\n\nUgh.\n\n> > I just found this just serval minutes ago.  Upgrading your bison to the\n> > latest version (3.6) is ok. I'd like we have a better way to share this\n> > knowledge through.  I spend ~30 minutes to troubleshooting this issue.\n> \n> I fear that is going to mean that we revert this patch.\n\nOr we can filter-out the -Wimplicit-fallthrough, or change to level 3,\nfor bison-emitted files.\n\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Wed, 13 May 2020 11:23:57 -0400", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Add \"-Wimplicit-fallthrough\" to default flags" }, { "msg_contents": "Alvaro Herrera <alvherre@2ndquadrant.com> writes:\n>> I fear that is going to mean that we revert this patch.\n\n> Or we can filter-out the -Wimplicit-fallthrough, or change to level 3,\n> for bison-emitted files.\n\nLet's just go to level 3 overall (and revert the changes you made for\nlevel 4 compliance --- they're more likely to cause back-patching\npain than do anything useful).\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 13 May 2020 11:34:21 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Add \"-Wimplicit-fallthrough\" to default flags" }, { "msg_contents": "On 2020-May-13, Tom Lane wrote:\n\n> 
Alvaro Herrera <alvherre@2ndquadrant.com> writes:\n> >> I fear that is going to mean that we revert this patch.\n> \n> > Or we can filter-out the -Wimplicit-fallthrough, or change to level 3,\n> > for bison-emitted files.\n> \n> Let's just go to level 3 overall (and revert the changes you made for\n> level 4 compliance --- they're more likely to cause back-patching\n> pain than do anything useful).\n\nOk.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Wed, 13 May 2020 12:04:40 -0400", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Add \"-Wimplicit-fallthrough\" to default flags" }, { "msg_contents": "On 2020-May-13, Alvaro Herrera wrote:\n\n> On 2020-May-13, Tom Lane wrote:\n\n> > Let's just go to level 3 overall (and revert the changes you made for\n> > level 4 compliance --- they're more likely to cause back-patching\n> > pain than do anything useful).\n> \n> Ok.\n\nAnd done.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Wed, 13 May 2020 15:32:33 -0400", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Add \"-Wimplicit-fallthrough\" to default flags" }, { "msg_contents": "On Wed, May 13, 2020 at 10:02 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Andy Fan <zhihui.fan1213@gmail.com> writes:\n> >> FWIW, I got a warning for jsonpath_gram.c.\n>\n> Ugh.  Confirmed here on Fedora 30 (bison 3.0.5).\n>\n> > I just found this just serval minutes ago.  Upgrading your bison to the\n> > latest version (3.6) is ok. I'd like we have a better way to share this\n> > knowledge through. 
I spend ~30 minutes to troubleshooting this issue.\n>\n> I fear that is going to mean that we revert this patch.\n> We are *NOT* moving the minimum bison requirement for this,\n> especially not to a bleeding-edge bison version.\n\n\nYes,  I didn't mean revert the patch, but I was thinking moving the minimum\nbison.  But since down to the warning level 3 also resolved the issue,\nlooks it is a better way to do it.\n\n (On the other hand, if you have an old bison,\n>\nyou likely also have an old gcc that doesn't know this warning\n> switch, so maybe it'd be all right in practice?)\n>\n>\nI just use an old bision and a newer gcc:( and I used \"echo \"COPT=-Wall\n-Werror\"\n> src/Makefile.custom\" which is same as our cfbot system. Thank you all\nfor so quick\nfix!\n\nBest Regards\nAndy Fan", "msg_date": "Fri, 15 May 2020 08:24:23 +0800", "msg_from": "Andy Fan <zhihui.fan1213@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Add \"-Wimplicit-fallthrough\" to default flags" } ]
[ { "msg_contents": "Hi,\n\nNot sure what changed, but I'm seeing this failure:\n\nparse_coerce.c: In function ‘coerce_type’:\nparse_coerce.c:345:9: warning: implicit declaration of function ‘datumIsEqual’ [-Wimplicit-function-declaration]\n 345 | if (!datumIsEqual(newcon->constvalue, val2, false, newcon->constlen))\n | ^~~~~~~~~~~~\n\nNot sure if this because of compiler version (I'm on gcc 9.2.1) or\nsomething else - I don't see any obvious changes to relevant parts of\nthe code, but I haven't dug too much.\n\nSimply including 'utils/datum.h' resolves the issue.\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Tue, 7 Apr 2020 22:54:36 +0200", "msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "implicit declaration of datumIsEqual in parse_coerce.c" }, { "msg_contents": "Tomas Vondra <tomas.vondra@2ndquadrant.com> writes:\n> Not sure what changed, but I'm seeing this failure:\n\n> parse_coerce.c: In function ‘coerce_type’:\n> parse_coerce.c:345:9: warning: implicit declaration of function ‘datumIsEqual’ [-Wimplicit-function-declaration]\n> 345 | if (!datumIsEqual(newcon->constvalue, val2, false, newcon->constlen))\n> | ^~~~~~~~~~~~\n\nThat's inside \"#ifdef RANDOMIZE_ALLOCATED_MEMORY\", which probably\nexplains why most of us aren't seeing it. My guess is somebody\nremoved an #include without realizing that this chunk of code\nneeded it.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 07 Apr 2020 17:16:58 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: implicit declaration of datumIsEqual in parse_coerce.c" }, { "msg_contents": "On Tue, Apr 07, 2020 at 05:16:58PM -0400, Tom Lane wrote:\n> That's inside \"#ifdef RANDOMIZE_ALLOCATED_MEMORY\", which probably\n> explains why most of us aren't seeing it. 
My guess is somebody\n> removed an #include without realizing that this chunk of code\n> needed it.\n\n[cough]\n\ncommit: 4dbcb3f844eca4a401ce06aa2781bd9a9be433e9\nauthor: Tom Lane <tgl@sss.pgh.pa.us>\ndate: Sat, 14 Mar 2020 14:42:22 -0400\nRestructure polymorphic-type resolution in funcapi.c.\n[...]\n@@ -26,7 +25,6 @@\n #include \"parser/parse_relation.h\"\n #include \"parser/parse_type.h\"\n #include \"utils/builtins.h\"\n-#include \"utils/datum.h\"\n #include \"utils/lsyscache.h\"\n--\nMichael", "msg_date": "Wed, 8 Apr 2020 11:32:10 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: implicit declaration of datumIsEqual in parse_coerce.c" }, { "msg_contents": "Michael Paquier <michael@paquier.xyz> writes:\n> On Tue, Apr 07, 2020 at 05:16:58PM -0400, Tom Lane wrote:\n>> That's inside \"#ifdef RANDOMIZE_ALLOCATED_MEMORY\", which probably\n>> explains why most of us aren't seeing it. My guess is somebody\n>> removed an #include without realizing that this chunk of code\n>> needed it.\n\n> [cough]\n\nBleagh. Either of you want to put it back? (Maybe with a comment\nthis time, like \"needed for datumIsEqual()\".)\n\nCuriously, there are no buildfarm warnings about this, even though we have\nat least one member running with RANDOMIZE_ALLOCATED_MEMORY. Wonder why?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 07 Apr 2020 22:39:30 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: implicit declaration of datumIsEqual in parse_coerce.c" }, { "msg_contents": "On Tue, Apr 07, 2020 at 10:39:30PM -0400, Tom Lane wrote:\n> > On Tue, Apr 07, 2020 at 05:16:58PM -0400, Tom Lane wrote:\n> >> That's inside \"#ifdef RANDOMIZE_ALLOCATED_MEMORY\", which probably\n> >> explains why most of us aren't seeing it. 
My guess is somebody\n> >> removed an #include without realizing that this chunk of code\n> >> needed it.\n\n> Curiously, there are no buildfarm warnings about this, even though we have\n> at least one member running with RANDOMIZE_ALLOCATED_MEMORY. Wonder why?\n\nThe RANDOMIZE_ALLOCATED_MEMORY buildfarm members use xlc, which disables this\nwarning by default. (Given flag -qinfo=pro, it would warn.)\n\n\n", "msg_date": "Sat, 2 May 2020 23:48:36 -0700", "msg_from": "Noah Misch <noah@leadboat.com>", "msg_from_op": false, "msg_subject": "Re: implicit declaration of datumIsEqual in parse_coerce.c" } ]
[ { "msg_contents": "Allow users to limit storage reserved by replication slots\n\nReplication slots are useful to retain data that may be needed by a\nreplication system. But experience has shown that allowing them to\nretain excessive data can lead to the primary failing because of running\nout of space. This new feature allows the user to configure a maximum\namount of space to be reserved using the new option\nmax_slot_wal_keep_size. Slots that overrun that space are invalidated\nat checkpoint time, enabling the storage to be released.\n\nAuthor: Kyotaro HORIGUCHI <horiguchi.kyotaro@lab.ntt.co.jp>\nReviewed-by: Masahiko Sawada <sawada.mshk@gmail.com>\nReviewed-by: Jehan-Guillaume de Rorthais <jgdr@dalibo.com>\nReviewed-by: Álvaro Herrera <alvherre@alvh.no-ip.org>\nDiscussion: https://postgr.es/m/20170228.122736.123383594.horiguchi.kyotaro@lab.ntt.co.jp\n\nBranch\n------\nmaster\n\nDetails\n-------\nhttps://git.postgresql.org/pg/commitdiff/c6550776394e25c1620bc8258427c8f1d448080d\n\nModified Files\n--------------\ndoc/src/sgml/catalogs.sgml | 38 +++++\ndoc/src/sgml/config.sgml | 23 +++\ndoc/src/sgml/high-availability.sgml | 8 +-\nsrc/backend/access/transam/xlog.c | 145 ++++++++++++++---\nsrc/backend/catalog/system_views.sql | 4 +-\nsrc/backend/replication/logical/logicalfuncs.c | 2 +-\nsrc/backend/replication/slot.c | 100 +++++++++++-\nsrc/backend/replication/slotfuncs.c | 44 ++++-\nsrc/backend/replication/walsender.c | 4 +-\nsrc/backend/utils/misc/guc.c | 13 ++\nsrc/backend/utils/misc/postgresql.conf.sample | 1 +\nsrc/include/access/xlog.h | 14 ++\nsrc/include/catalog/catversion.h | 2 +-\nsrc/include/catalog/pg_proc.dat | 6 +-\nsrc/include/replication/slot.h | 11 +-\nsrc/test/recovery/t/019_replslot_limit.pl | 217 +++++++++++++++++++++++++\nsrc/test/regress/expected/rules.out | 6 +-\n17 files changed, 595 insertions(+), 43 deletions(-)", "msg_date": "Tue, 07 Apr 2020 22:39:19 +0000", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": true, 
"msg_subject": "pgsql: Allow users to limit storage reserved by replication slots" }, { "msg_contents": "On 2020-Apr-07, Alvaro Herrera wrote:\n\n> src/test/recovery/t/019_replslot_limit.pl | 217 +++++++++++++++++++++++++\n\nI fixed the perlcritic complaint from buildfarm member crake, but\nthere's a new one in francolin:\n\n# Failed test 'check that the slot state changes to \"reserved\"'\n# at t/019_replslot_limit.pl line 125.\n# got: '0/15000D8|reserved|216 bytes'\n# expected: '0/1500000|reserved|216 bytes'\n\n# Failed test 'check that the slot state changes to \"lost\"'\n# at t/019_replslot_limit.pl line 135.\n# got: '0/15000D8|lost|t'\n# expected: '0/1500000|lost|t'\n# Looks like you failed 2 tests of 13.\n[23:07:28] t/019_replslot_limit.pl .............. \n\nwhere the Perl code is:\n\n $start_lsn = $node_master->lsn('write');\n $node_master->wait_for_catchup($node_standby, 'replay', $start_lsn);\n $node_standby->stop;\n\n # Advance WAL again without checkpoint, reducing remain by 6 MB.\n advance_wal($node_master, 6);\n\n # Slot gets into 'reserved' state\n $result = $node_master->safe_psql('postgres', \"SELECT restart_lsn, wal_status, pg_size_pretty(restart_lsn - min_safe_lsn) as remain FROM pg_replication_slots WHERE slot_name = 'rep1'\");\n is($result, \"$start_lsn|reserved|216 bytes\", 'check that the slot state changes to \"reserved\"');\n\n0xD8 is 216, so this seems to be saying that the checkpoint record was\nskipped by the restart_lsn. I'm not clear exactly why that happened ...\nis this saying that a checkpoint occurred?\n\nOne easy fix would be to remove the \"restart_lsn\" output column from the\nquery, but do we lose test specificity? (I think the answer is no.)\n\nHowever, even with that change, we're still testing that a checkpoint is\n216 bytes ... in other words, whenever someone changes the definition of\nstruct CheckPoint, this test will fail. That seems unnecessary and\nunfriendly. 
I'm not sure how to improve that without also removing that\ncolumn.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Tue, 7 Apr 2020 19:26:02 -0400", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: pgsql: Allow users to limit storage reserved by replication slots" }, { "msg_contents": "Alvaro Herrera <alvherre@2ndquadrant.com> writes:\n> I fixed the perlcritic complaint from buildfarm member crake, but\n> there's a new one in francolin:\n\nOther buildfarm members are showing related-but-different failures.\nI think this test is just plain unstable.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 07 Apr 2020 21:13:51 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: pgsql: Allow users to limit storage reserved by replication slots" }, { "msg_contents": "Hi, \n\nOn April 7, 2020 6:13:51 PM PDT, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>Alvaro Herrera <alvherre@2ndquadrant.com> writes:\n>> I fixed the perlcritic complaint from buildfarm member crake, but\n>> there's a new one in francolin:\n>\n>Other buildfarm members are showing related-but-different failures.\n>I think this test is just plain unstable.\n\nI have not looked at the source, but the error messages show LSNs and bytes. I can't really imagine how that could be made stable.\n\nAndres\n-- \nSent from my Android device with K-9 Mail. Please excuse my brevity.\n\n\n", "msg_date": "Tue, 07 Apr 2020 19:10:07 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: pgsql: Allow users to limit storage reserved by replication slots" }, { "msg_contents": "On Tue, Apr 07, 2020 at 07:10:07PM -0700, Andres Freund wrote:\n> I have not looked at the source, but the error messages show LSNs\n> and bytes. 
I can't really imagine how that could be made stable.\n\nAnother bad news is that this is page-size dependent. What if you\nremoved pg_size_pretty() and replaced it with a condition that returns\na boolean status in the result itself?\n--\nMichael", "msg_date": "Wed, 8 Apr 2020 11:36:02 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: pgsql: Allow users to limit storage reserved by replication slots" }, { "msg_contents": "Alvaro Herrera <alvherre@2ndquadrant.com> writes:\n> However, even with that change, we're still testing that a checkpoint is\n> 216 bytes ... in other words, whenever someone changes the definition of\n> struct CheckPoint, this test will fail. That seems unnecessary and\n> unfriendly. I'm not sure how to improve that without also removing that\n> column.\n\nI read florican's results as showing that sizeof(CheckPoint) is already\ndifferent on 32-bit machines than 64-bit; it's repeatably getting this:\n\n# Failed test 'check that the slot state changes to \"reserved\"'\n# at t/019_replslot_limit.pl line 125.\n# got: '0/15000C0|reserved|192 bytes'\n# expected: '0/15000C0|reserved|216 bytes'\n\nThis test case was *not* well thought out.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 07 Apr 2020 22:58:55 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: pgsql: Allow users to limit storage reserved by replication slots" } ]
[ { "msg_contents": "Dear hackers!\nDue to changes in PG13 RUM extension had errors on compiling.\nI propose a short patch to correct this.\n\nBest regards,\n\nPavel Borisov", "msg_date": "Wed, 8 Apr 2020 13:13:45 +0400", "msg_from": "Pavel Borisov <pashkin.elfe@gmail.com>", "msg_from_op": true, "msg_subject": "[PATCH] RUM Postgres 13 patch" }, { "msg_contents": "Hi, Pavel!\n\nOn Wed, Apr 8, 2020 at 12:14 PM Pavel Borisov <pashkin.elfe@gmail.com> wrote:\n> Due to changes in PG13 RUM extension had errors on compiling.\n> I propose a short patch to correct this.\n\nRUM is an extension managed by Postgres Pro and not discussed in\npgsql-hackers mailing lists. Please, make a pull request on github.\nhttps://github.com/postgrespro/rum\n\n------\nAlexander Korotkov\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company\n\n\n", "msg_date": "Wed, 8 Apr 2020 12:28:32 +0300", "msg_from": "Alexander Korotkov <a.korotkov@postgrespro.ru>", "msg_from_op": false, "msg_subject": "Re: [PATCH] RUM Postgres 13 patch" } ]
[ { "msg_contents": "Hello\n\nThe following patch, which we added to build mingw-postgresql on Fedora, \nadds some missing libraries to Libs.private of libpq.pc, discovered when \nattempting to statically link with libpq:\n\n-lz: is required by -lcrypto\n-liconv: is required by -lintl (though possibly depends on whether \ngettext was compiled with iconv support)\n\nThanks\nSandro\n\n\ndiff -rupN postgresql-11.5/src/interfaces/libpq/Makefile \npostgresql-11.5-new/src/interfaces/libpq/Makefile\n--- postgresql-11.5/src/interfaces/libpq/Makefile    2019-08-05 \n23:14:59.000000000 +0200\n+++ postgresql-11.5-new/src/interfaces/libpq/Makefile 2020-04-07 \n13:49:00.801203610 +0200\n@@ -80,10 +80,10 @@ endif\n  ifneq ($(PORTNAME), win32)\n  SHLIB_LINK += $(filter -lcrypt -ldes -lcom_err -lcrypto -lk5crypto \n-lkrb5 -lgssapi_krb5 -lgss -lgssapi -lssl -lsocket -lnsl -lresolv \n-lintl, $(LIBS)) $(LDAP_LIBS_FE) $(PTHREAD_LIBS)\n  else\n-SHLIB_LINK += $(filter -lcrypt -ldes -lcom_err -lcrypto -lk5crypto \n-lkrb5 -lgssapi32 -lssl -lsocket -lnsl -lresolv -lintl $(PTHREAD_LIBS), \n$(LIBS)) $(LDAP_LIBS_FE)\n+SHLIB_LINK += $(filter -lcrypt -ldes -lcom_err -lcrypto -lz -lk5crypto \n-lkrb5 -lgssapi32 -lssl -lsocket -lnsl -lresolv -lintl $(PTHREAD_LIBS), \n$(LIBS)) $(LDAP_LIBS_FE)\n  endif\n  ifeq ($(PORTNAME), win32)\n-SHLIB_LINK += -lshell32 -lws2_32 -lsecur32 $(filter -leay32 -lssleay32 \n-lcomerr32 -lkrb5_32, $(LIBS))\n+SHLIB_LINK += -lshell32 -lws2_32 -lsecur32 -liconv $(filter -leay32 \n-lssleay32 -lcomerr32 -lkrb5_32, $(LIBS))\n  endif\n\n  SHLIB_EXPORTS = exports.txt\n\n\n\n", "msg_date": "Wed, 8 Apr 2020 11:38:50 +0200", "msg_from": "Sandro Mani <manisandro@gmail.com>", "msg_from_op": true, "msg_subject": "[Patch] Add missing libraries to Libs.private of libpq.pc" }, { "msg_contents": "Sandro Mani <manisandro@gmail.com> writes:\n> The following patch, which we added to build mingw-postgresql on Fedora, \n> adds some missing libraries to Libs.private of libpq.pc, discovered when 
\n> attempting to statically link with libpq:\n\nTBH, I think we should just reject this patch. We do not encourage or\nsupport statically linking libpq (and I thought that was against\ndistro-level policies in Fedora, as well --- such policies certainly\nexisted when I worked for Red Hat). Moreover, the proposed patch\nrequires us to absorb assumptions about the dependencies of external\nlibraries that we really shouldn't be making. I fear that it risks\ncausing new problems on other platforms, or at the very least\nunnecessarily bloating libpq's dependency footprint. In particular,\ncreating a hard dependency on -liconv regardless of build options\nseems right out.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 01 Jul 2020 16:54:29 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [Patch] Add missing libraries to Libs.private of libpq.pc" }, { "msg_contents": "On 2020-04-08 11:38, Sandro Mani wrote:\n> The following patch, which we added to build mingw-postgresql on Fedora,\n> adds some missing libraries to Libs.private of libpq.pc, discovered when\n> attempting to statically link with libpq:\n> \n> -lz: is required by -lcrypto\n\nI think the correct fix for that would be to add libssl to libpq's \nRequires.private.\n\n> -liconv: is required by -lintl (though possibly depends on whether\n> gettext was compiled with iconv support)\n\nYeah, in both of these cases it depends on what libssl or libintl \nvariant you actually got. It could be the OS one or a separately \ninstalled one, it could be one with or without pkg-config support. 
I'm \nnot sure what a robust solution would be.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Fri, 10 Jul 2020 21:47:54 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: [Patch] Add missing libraries to Libs.private of libpq.pc" }, { "msg_contents": "On 2020-07-10 21:47, Peter Eisentraut wrote:\n> On 2020-04-08 11:38, Sandro Mani wrote:\n>> The following patch, which we added to build mingw-postgresql on Fedora,\n>> adds some missing libraries to Libs.private of libpq.pc, discovered when\n>> attempting to statically link with libpq:\n>>\n>> -lz: is required by -lcrypto\n> \n> I think the correct fix for that would be to add libssl to libpq's\n> Requires.private.\n\nFor that, I propose the attached patch.\n\n>> -liconv: is required by -lintl (though possibly depends on whether\n>> gettext was compiled with iconv support)\n\nI think the solution here would be to have gettext provide a pkg-config \nfile.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Fri, 4 Sep 2020 22:07:00 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: [Patch] Add missing libraries to Libs.private of libpq.pc" }, { "msg_contents": "On 2020-09-04 22:07, Peter Eisentraut wrote:\n> On 2020-07-10 21:47, Peter Eisentraut wrote:\n>> On 2020-04-08 11:38, Sandro Mani wrote:\n>>> The following patch, which we added to build mingw-postgresql on Fedora,\n>>> adds some missing libraries to Libs.private of libpq.pc, discovered when\n>>> attempting to statically link with libpq:\n>>>\n>>> -lz: is required by -lcrypto\n>>\n>> I think the correct fix for that would be to add libssl to libpq's\n>> Requires.private.\n> \n> For that, I propose the attached patch.\n\ncommitted\n\n-- \nPeter 
Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Thu, 10 Sep 2020 15:56:31 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: [Patch] Add missing libraries to Libs.private of libpq.pc" } ]
[ { "msg_contents": "Hi\n\nThe following patch, which we added to build mingw-postgresql on Fedora, \nmakes the internal minimal pthreads reimplementation only used when \nbuilding with MSVC, as on MINGW it causes symbol collisions with the \nsymbols provided my winpthreads.\n\nThanks\nSandro\n\n\ndiff -rupN postgresql-11.5/src/interfaces/ecpg/ecpglib/misc.c \npostgresql-11.5-new/src/interfaces/ecpg/ecpglib/misc.c\n--- postgresql-11.5/src/interfaces/ecpg/ecpglib/misc.c 2019-08-05 \n23:14:59.000000000 +0200\n+++ postgresql-11.5-new/src/interfaces/ecpg/ecpglib/misc.c 2020-04-08 \n11:20:39.850738296 +0200\n@@ -449,7 +449,7 @@ ECPGis_noind_null(enum ECPGttype type, c\n      return false;\n  }\n\n-#ifdef WIN32\n+#ifdef _MSC_VER\n  #ifdef ENABLE_THREAD_SAFETY\n\n  void\ndiff -rupN \npostgresql-11.5/src/interfaces/ecpg/include/ecpg-pthread-win32.h \npostgresql-11.5-new/src/interfaces/ecpg/include/ecpg-pthread-win32.h\n--- postgresql-11.5/src/interfaces/ecpg/include/ecpg-pthread-win32.h \n2019-08-05 23:14:59.000000000 +0200\n+++ postgresql-11.5-new/src/interfaces/ecpg/include/ecpg-pthread-win32.h \n2020-04-08 11:20:39.851738296 +0200\n@@ -7,7 +7,7 @@\n\n  #ifdef ENABLE_THREAD_SAFETY\n\n-#ifndef WIN32\n+#ifndef _MSC_VER\n\n  #include <pthread.h>\n  #else\ndiff -rupN postgresql-11.5/src/interfaces/libpq/fe-connect.c \npostgresql-11.5-new/src/interfaces/libpq/fe-connect.c\n--- postgresql-11.5/src/interfaces/libpq/fe-connect.c 2019-08-05 \n23:14:59.000000000 +0200\n+++ postgresql-11.5-new/src/interfaces/libpq/fe-connect.c 2020-04-08 \n11:20:39.853738297 +0200\n@@ -50,7 +50,7 @@\n  #endif\n\n  #ifdef ENABLE_THREAD_SAFETY\n-#ifdef WIN32\n+#ifdef _MSC_VER\n  #include \"pthread-win32.h\"\n  #else\n  #include <pthread.h>\ndiff -rupN postgresql-11.5/src/interfaces/libpq/fe-secure.c \npostgresql-11.5-new/src/interfaces/libpq/fe-secure.c\n--- postgresql-11.5/src/interfaces/libpq/fe-secure.c    2019-08-05 \n23:14:59.000000000 +0200\n+++ 
postgresql-11.5-new/src/interfaces/libpq/fe-secure.c 2020-04-08 \n11:20:39.854738297 +0200\n@@ -48,7 +48,7 @@\n  #include <sys/stat.h>\n\n  #ifdef ENABLE_THREAD_SAFETY\n-#ifdef WIN32\n+#ifdef _MSC_VER\n  #include \"pthread-win32.h\"\n  #else\n  #include <pthread.h>\ndiff -rupN postgresql-11.5/src/interfaces/libpq/fe-secure-openssl.c \npostgresql-11.5-new/src/interfaces/libpq/fe-secure-openssl.c\n--- postgresql-11.5/src/interfaces/libpq/fe-secure-openssl.c 2019-08-05 \n23:14:59.000000000 +0200\n+++ postgresql-11.5-new/src/interfaces/libpq/fe-secure-openssl.c \n2020-04-08 11:20:39.855738298 +0200\n@@ -47,7 +47,7 @@\n  #include <sys/stat.h>\n\n  #ifdef ENABLE_THREAD_SAFETY\n-#ifdef WIN32\n+#ifdef _MSC_VER\n  #include \"pthread-win32.h\"\n  #else\n  #include <pthread.h>\ndiff -rupN postgresql-11.5/src/interfaces/libpq/libpq-int.h \npostgresql-11.5-new/src/interfaces/libpq/libpq-int.h\n--- postgresql-11.5/src/interfaces/libpq/libpq-int.h    2019-08-05 \n23:14:59.000000000 +0200\n+++ postgresql-11.5-new/src/interfaces/libpq/libpq-int.h 2020-04-08 \n11:20:39.855738298 +0200\n@@ -29,7 +29,7 @@\n  #endif\n\n  #ifdef ENABLE_THREAD_SAFETY\n-#ifdef WIN32\n+#ifdef _MSC_VER\n  #include \"pthread-win32.h\"\n  #else\n  #include <pthread.h>\ndiff -rupN postgresql-11.5/src/interfaces/libpq/pthread-win32.c \npostgresql-11.5-new/src/interfaces/libpq/pthread-win32.c\n--- postgresql-11.5/src/interfaces/libpq/pthread-win32.c 2019-08-05 \n23:14:59.000000000 +0200\n+++ postgresql-11.5-new/src/interfaces/libpq/pthread-win32.c 2020-04-08 \n11:21:51.674766968 +0200\n@@ -10,10 +10,13 @@\n  *-------------------------------------------------------------------------\n  */\n\n+#ifdef _MSC_VER\n+\n  #include \"postgres_fe.h\"\n\n  #include \"pthread-win32.h\"\n\n+\n  DWORD\n  pthread_self(void)\n  {\n@@ -58,3 +61,5 @@ pthread_mutex_unlock(pthread_mutex_t *mp\n      LeaveCriticalSection(*mp);\n      return 0;\n  }\n+\n+#endif // _MSC_VER\n\n\n\n", "msg_date": "Wed, 8 Apr 2020 11:38:52 +0200", 
"msg_from": "Sandro Mani <manisandro@gmail.com>", "msg_from_op": true, "msg_subject": "[Patch] Use internal pthreads reimplementation only when building\n with MSVC" }, { "msg_contents": "Hello,\n\nOn 2020-Apr-08, Sandro Mani wrote:\n\n> The following patch, which we added to build mingw-postgresql on Fedora,\n> makes the internal minimal pthreads reimplementation only used when building\n> with MSVC, as on MINGW it causes symbol collisions with the symbols provided\n> my winpthreads.\n\nAre there any build-system tweaks needed to enable use of winpthreads?\nIf none are needed, why are all our mingw buildfarm members building\ncorrectly? I suggest that if you want to maintain \"mingw-postgresql\nbuilt on Fedora\", it would be a good idea to have a buildfarm animal\nthat tests it on a recurring basis.\n\nPlease do submit patches as separate attachments rather than in the\nemail body.\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Thu, 9 Apr 2020 17:57:02 -0400", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: [Patch] Use internal pthreads reimplementation only when\n building with MSVC" }, { "msg_contents": "> On 9 Apr 2020, at 23:57, Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\n\n> Please do submit patches as separate attachments rather than in the\n> email body.\n\nSince the CF app is unable to see that there is a patch at all, I took the\nliberty to resubmit the posted patch rebased on top of HEAD and with the C++\nreplaced with a C /* */ comment.\n\nMarking this entry Waiting on Author based on Alvaros questions.\n\ncheers ./daniel", "msg_date": "Thu, 2 Jul 2020 16:35:12 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: [Patch] Use internal pthreads reimplementation only when building\n with MSVC" }, { "msg_contents": "> On 2 Jul 2020, at 16:35, Daniel 
Gustafsson <daniel@yesql.se> wrote:\n> \n>> On 9 Apr 2020, at 23:57, Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\n> \n>> Please do submit patches as separate attachments rather than in the\n>> email body.\n> \n> Since the CF app is unable to see that there is a patch at all, I took the\n> liberty to resubmit the posted patch rebased on top of HEAD and with the C++\n> replaced with a C /* */ comment.\n\nThis version now applies and builds but..\n\n> Marking this entry Waiting on Author based on Alvaros questions.\n\n..since the thread has stalled with no response to review questions I'm marking\nthis Returned with Feedback.\n\ncheers ./daniel\n\n", "msg_date": "Fri, 31 Jul 2020 21:53:15 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: [Patch] Use internal pthreads reimplementation only when building\n with MSVC" } ]
[ { "msg_contents": "Rationalize GetWalRcv{Write,Flush}RecPtr().\n\nGetWalRcvWriteRecPtr() previously reported the latest *flushed*\nlocation. Adopt the conventional terminology used elsewhere in the tree\nby renaming it to GetWalRcvFlushRecPtr(), and likewise for some related\nvariables that used the term \"received\".\n\nAdd a new definition of GetWalRcvWriteRecPtr(), which returns the latest\n*written* value. This will allow later patches to use the value for\nnon-data-integrity purposes, without having to wait for the flush\npointer to advance.\n\nReviewed-by: Alvaro Herrera <alvherre@2ndquadrant.com>\nReviewed-by: Andres Freund <andres@anarazel.de>\nDiscussion: https://postgr.es/m/CA%2BhUKGJ4VJN8ttxScUFM8dOKX0BrBiboo5uz1cq%3DAovOddfHpA%40mail.gmail.com\n\nBranch\n------\nmaster\n\nDetails\n-------\nhttps://git.postgresql.org/pg/commitdiff/d140f2f3e225ea53e2d92ab6833b8c186c90666c\n\nModified Files\n--------------\nsrc/backend/access/transam/xlog.c | 20 ++++++++++----------\nsrc/backend/access/transam/xlogfuncs.c | 2 +-\nsrc/backend/replication/README | 2 +-\nsrc/backend/replication/walreceiver.c | 15 ++++++++++-----\nsrc/backend/replication/walreceiverfuncs.c | 24 ++++++++++++++++------\nsrc/backend/replication/walsender.c | 2 +-\nsrc/include/replication/walreceiver.h | 18 ++++++++++++++----\n7 files changed, 55 insertions(+), 28 deletions(-)", "msg_date": "Wed, 08 Apr 2020 11:55:51 +0000", "msg_from": "Thomas Munro <tmunro@postgresql.org>", "msg_from_op": true, "msg_subject": "pgsql: Rationalize GetWalRcv{Write,Flush}RecPtr()." }, { "msg_contents": "On 2020-Apr-08, Thomas Munro wrote:\n\n> Rationalize GetWalRcv{Write,Flush}RecPtr().\n> \n> GetWalRcvWriteRecPtr() previously reported the latest *flushed*\n> location. Adopt the conventional terminology used elsewhere in the tree\n> by renaming it to GetWalRcvFlushRecPtr(), and likewise for some related\n> variables that used the term \"received\".\n> \n> Add a new definition of GetWalRcvWriteRecPtr(), which returns the latest\n> *written* value. This will allow later patches to use the value for\n> non-data-integrity purposes, without having to wait for the flush\n> pointer to advance.\n\nIt seems worth pointing out that the new GetWalRcvWriteRecPtr function\nhas a different signature from the original one -- so any third-party\ncode using the original function will now get a compile failure that\nshould alert them that they need to change their code to call\nGetWalRcvFlushRecPtr instead. Maybe we should add a line or two in the\ncomments GetWalRcvWriteRecPtr to make this explicit.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Thu, 9 Apr 2020 17:49:47 -0400", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: pgsql: Rationalize GetWalRcv{Write,Flush}RecPtr()." }, { "msg_contents": "On 2020-Apr-09, Alvaro Herrera wrote:\n\n> It seems worth pointing out that the new GetWalRcvWriteRecPtr function\n> has a different signature from the original one -- so any third-party\n> code using the original function will now get a compile failure that\n> should alert them that they need to change their code to call\n> GetWalRcvFlushRecPtr instead. Maybe we should add a line or two in the\n> comments GetWalRcvWriteRecPtr to make this explicit.\n\nAfter using codesearch.debian.net and finding no results, I decided that\nthis is not worth the effort.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Wed, 15 Apr 2020 17:24:14 -0400", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: pgsql: Rationalize GetWalRcv{Write,Flush}RecPtr()." }, { "msg_contents": "On Thu, Apr 16, 2020 at 9:24 AM Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\n> On 2020-Apr-09, Alvaro Herrera wrote:\n> > It seems worth pointing out that the new GetWalRcvWriteRecPtr function\n> > has a different signature from the original one -- so any third-party\n> > code using the original function will now get a compile failure that\n> > should alert them that they need to change their code to call\n> > GetWalRcvFlushRecPtr instead. Maybe we should add a line or two in the\n> > comments GetWalRcvWriteRecPtr to make this explicit.\n>\n> After using codesearch.debian.net and finding no results, I decided that\n> this is not worth the effort.\n\nThanks for checking. Yeah, it looks like you're right.\ncodesearch.debian.net is cool.\n\n\n", "msg_date": "Sat, 2 May 2020 15:53:20 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pgsql: Rationalize GetWalRcv{Write,Flush}RecPtr()." } ]
[ { "msg_contents": "Hi,\n\nI just came across this scenario  where - vaccum o/p with (full 1, \nparallel 0) option not working\n\n--working\n\npostgres=# vacuum (parallel 1, full 0 ) foo;\nVACUUM\npostgres=#\n\n--Not working\n\npostgres=# vacuum (full 1, parallel 0 ) foo;\nERROR:  cannot specify both FULL and PARALLEL options\n\nI think it should work.\n\n-- \nregards,tushar\nEnterpriseDB https://www.enterprisedb.com/\nThe Enterprise PostgreSQL Company\n\n\n\n", "msg_date": "Wed, 8 Apr 2020 17:52:12 +0530", "msg_from": "tushar <tushar.ahuja@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Vacuum o/p with (full 1, parallel 0) option throwing an error" }, { "msg_contents": "On Wed, Apr 8, 2020 at 8:22 AM tushar <tushar.ahuja@enterprisedb.com> wrote:\n> I just came across this scenario where - vaccum o/p with (full 1,\n> parallel 0) option not working\n>\n> --working\n>\n> postgres=# vacuum (parallel 1, full 0 ) foo;\n> VACUUM\n> postgres=#\n>\n> --Not working\n>\n> postgres=# vacuum (full 1, parallel 0 ) foo;\n> ERROR: cannot specify both FULL and PARALLEL options\n>\n> I think it should work.\n\nUh, why? 
There's a clear error message which matches what you tried to do.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Wed, 8 Apr 2020 08:29:07 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Vacuum o/p with (full 1, parallel 0) option throwing an error" }, { "msg_contents": "On Wed, 8 Apr 2020 at 17:59, Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> On Wed, Apr 8, 2020 at 8:22 AM tushar <tushar.ahuja@enterprisedb.com> wrote:\n> > I just came across this scenario where - vaccum o/p with (full 1,\n> > parallel 0) option not working\n> >\n> > --working\n> >\n> > postgres=# vacuum (parallel 1, full 0 ) foo;\n> > VACUUM\n> > postgres=#\n> >\n> > --Not working\n> >\n> > postgres=# vacuum (full 1, parallel 0 ) foo;\n> > ERROR: cannot specify both FULL and PARALLEL options\n> >\n> > I think it should work.\n>\n> Uh, why? There's a clear error message which matches what you tried to do.\n>\n\nI think, Tushar point is that either we should allow both\nvacuum(parallel 0, full 1) and vacuum(parallel 1, full 0) or in the\nboth cases, we should through error.\n\n-- \nThanks and Regards\nMahendra Singh Thalor\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 8 Apr 2020 19:54:49 +0530", "msg_from": "Mahendra Singh Thalor <mahi6run@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Vacuum o/p with (full 1, parallel 0) option throwing an error" }, { "msg_contents": "On Wed, Apr 8, 2020 at 10:25 AM Mahendra Singh Thalor\n<mahi6run@gmail.com> wrote:\n> I think, Tushar point is that either we should allow both\n> vacuum(parallel 0, full 1) and vacuum(parallel 1, full 0) or in the\n> both cases, we should through error.\n\nOh, yeah, good point. 
Somebody must not've been careful enough with\nthe options-checking code.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Wed, 8 Apr 2020 11:57:08 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Vacuum o/p with (full 1, parallel 0) option throwing an error" }, { "msg_contents": "On Wed, Apr 08, 2020 at 11:57:08AM -0400, Robert Haas wrote:\n> On Wed, Apr 8, 2020 at 10:25 AM Mahendra Singh Thalor\n> <mahi6run@gmail.com> wrote:\n> > I think, Tushar point is that either we should allow both\n> > vacuum(parallel 0, full 1) and vacuum(parallel 1, full 0) or in the\n> > both cases, we should through error.\n> \n> Oh, yeah, good point. Somebody must not've been careful enough with\n> the options-checking code.\n\nActually I think someone was too careful.\n\n From 9256cdb0a77fb33194727e265a346407921055ef Mon Sep 17 00:00:00 2001\nFrom: Justin Pryzby <pryzbyj@telsasoft.com>\nDate: Wed, 8 Apr 2020 11:38:36 -0500\nSubject: [PATCH v1] parallel vacuum: options check to use same test as in\n vacuumlazy.c\n\n---\n src/backend/commands/vacuum.c | 4 +---\n 1 file changed, 1 insertion(+), 3 deletions(-)\n\ndiff --git a/src/backend/commands/vacuum.c b/src/backend/commands/vacuum.c\nindex 351d5215a9..660c854d49 100644\n--- a/src/backend/commands/vacuum.c\n+++ b/src/backend/commands/vacuum.c\n@@ -104,7 +104,6 @@ ExecVacuum(ParseState *pstate, VacuumStmt *vacstmt, bool isTopLevel)\n \tbool\t\tfreeze = false;\n \tbool\t\tfull = false;\n \tbool\t\tdisable_page_skipping = false;\n-\tbool\t\tparallel_option = false;\n \tListCell *lc;\n \n \t/* Set default value */\n@@ -145,7 +144,6 @@ ExecVacuum(ParseState *pstate, VacuumStmt *vacstmt, bool isTopLevel)\n \t\t\tparams.truncate = get_vacopt_ternary_value(opt);\n \t\telse if (strcmp(opt->defname, \"parallel\") == 0)\n \t\t{\n-\t\t\tparallel_option = true;\n \t\t\tif (opt->arg == NULL)\n \t\t\t{\n \t\t\t\tereport(ERROR,\n@@ 
-199,7 +197,7 @@ ExecVacuum(ParseState *pstate, VacuumStmt *vacstmt, bool isTopLevel)\n \t\t !(params.options & (VACOPT_FULL | VACOPT_FREEZE)));\n \tAssert(!(params.options & VACOPT_SKIPTOAST));\n \n-\tif ((params.options & VACOPT_FULL) && parallel_option)\n+\tif ((params.options & VACOPT_FULL) && params.nworkers > 0)\n \t\tereport(ERROR,\n \t\t\t\t(errcode(ERRCODE_FEATURE_NOT_SUPPORTED),\n \t\t\t\t errmsg(\"cannot specify both FULL and PARALLEL options\")));\n-- \n2.17.0", "msg_date": "Wed, 8 Apr 2020 11:41:54 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: Vacuum o/p with (full 1, parallel 0) option throwing an error" }, { "msg_contents": "On Wed, 8 Apr 2020 at 22:11, Justin Pryzby <pryzby@telsasoft.com> wrote:\n>\n> On Wed, Apr 08, 2020 at 11:57:08AM -0400, Robert Haas wrote:\n> > On Wed, Apr 8, 2020 at 10:25 AM Mahendra Singh Thalor\n> > <mahi6run@gmail.com> wrote:\n> > > I think, Tushar point is that either we should allow both\n> > > vacuum(parallel 0, full 1) and vacuum(parallel 1, full 0) or in the\n> > > both cases, we should through error.\n> >\n> > Oh, yeah, good point. 
Somebody must not've been careful enough with\n> > the options-checking code.\n>\n> Actually I think someone was too careful.\n>\n> From 9256cdb0a77fb33194727e265a346407921055ef Mon Sep 17 00:00:00 2001\n> From: Justin Pryzby <pryzbyj@telsasoft.com>\n> Date: Wed, 8 Apr 2020 11:38:36 -0500\n> Subject: [PATCH v1] parallel vacuum: options check to use same test as in\n> vacuumlazy.c\n>\n> ---\n> src/backend/commands/vacuum.c | 4 +---\n> 1 file changed, 1 insertion(+), 3 deletions(-)\n>\n> diff --git a/src/backend/commands/vacuum.c b/src/backend/commands/vacuum.c\n> index 351d5215a9..660c854d49 100644\n> --- a/src/backend/commands/vacuum.c\n> +++ b/src/backend/commands/vacuum.c\n> @@ -104,7 +104,6 @@ ExecVacuum(ParseState *pstate, VacuumStmt *vacstmt, bool isTopLevel)\n> bool freeze = false;\n> bool full = false;\n> bool disable_page_skipping = false;\n> - bool parallel_option = false;\n> ListCell *lc;\n>\n> /* Set default value */\n> @@ -145,7 +144,6 @@ ExecVacuum(ParseState *pstate, VacuumStmt *vacstmt, bool isTopLevel)\n> params.truncate = get_vacopt_ternary_value(opt);\n> else if (strcmp(opt->defname, \"parallel\") == 0)\n> {\n> - parallel_option = true;\n> if (opt->arg == NULL)\n> {\n> ereport(ERROR,\n> @@ -199,7 +197,7 @@ ExecVacuum(ParseState *pstate, VacuumStmt *vacstmt, bool isTopLevel)\n> !(params.options & (VACOPT_FULL | VACOPT_FREEZE)));\n> Assert(!(params.options & VACOPT_SKIPTOAST));\n>\n> - if ((params.options & VACOPT_FULL) && parallel_option)\n> + if ((params.options & VACOPT_FULL) && params.nworkers > 0)\n> ereport(ERROR,\n> (errcode(ERRCODE_FEATURE_NOT_SUPPORTED),\n> errmsg(\"cannot specify both FULL and PARALLEL options\")));\n> --\n> 2.17.0\n>\n\nThanks Justin for the patch.\n\nPatch looks fine to me and it is fixing the issue. 
After applying this\npatch, vacuum will work as:\n\n1) vacuum (parallel 1, full 0);\n-- vacuuming will be done with 1 parallel worker.\n2) vacuum (parallel 0, full 1);\n-- full vacuuming will be done.\n3) vacuum (parallel 1, full 1);\n-- will give error :ERROR: cannot specify both FULL and PARALLEL options\n\n3rd example is telling that we can't give both FULL and PARALLEL\noptions but in 1st and 2nd, we are giving both FULL and PARALLEL\noptions and we are not giving any error. I think, irrespective of\nvalue of both FULL and PARALLEL options, we should give error in all\nthe above mentioned three cases.\n\nThoughts?\n\n-- \nThanks and Regards\nMahendra Singh Thalor\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 9 Apr 2020 00:06:04 +0530", "msg_from": "Mahendra Singh Thalor <mahi6run@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Vacuum o/p with (full 1, parallel 0) option throwing an error" }, { "msg_contents": "On Thu, Apr 09, 2020 at 12:06:04AM +0530, Mahendra Singh Thalor wrote:\n> On Wed, 8 Apr 2020 at 22:11, Justin Pryzby <pryzby@telsasoft.com> wrote:\n> >\n> > On Wed, Apr 08, 2020 at 11:57:08AM -0400, Robert Haas wrote:\n> > > On Wed, Apr 8, 2020 at 10:25 AM Mahendra Singh Thalor\n> > > <mahi6run@gmail.com> wrote:\n> > > > I think, Tushar point is that either we should allow both\n> > > > vacuum(parallel 0, full 1) and vacuum(parallel 1, full 0) or in the\n> > > > both cases, we should through error.\n> > >\n> > > Oh, yeah, good point. 
Somebody must not've been careful enough with\n> > > the options-checking code.\n> >\n> > Actually I think someone was too careful.\n> >\n> > From 9256cdb0a77fb33194727e265a346407921055ef Mon Sep 17 00:00:00 2001\n> > From: Justin Pryzby <pryzbyj@telsasoft.com>\n> > Date: Wed, 8 Apr 2020 11:38:36 -0500\n> > Subject: [PATCH v1] parallel vacuum: options check to use same test as in\n> > vacuumlazy.c\n> >\n> > ---\n> > src/backend/commands/vacuum.c | 4 +---\n> > 1 file changed, 1 insertion(+), 3 deletions(-)\n> >\n> > diff --git a/src/backend/commands/vacuum.c b/src/backend/commands/vacuum.c\n> > index 351d5215a9..660c854d49 100644\n> > --- a/src/backend/commands/vacuum.c\n> > +++ b/src/backend/commands/vacuum.c\n> > @@ -104,7 +104,6 @@ ExecVacuum(ParseState *pstate, VacuumStmt *vacstmt, bool isTopLevel)\n> > bool freeze = false;\n> > bool full = false;\n> > bool disable_page_skipping = false;\n> > - bool parallel_option = false;\n> > ListCell *lc;\n> >\n> > /* Set default value */\n> > @@ -145,7 +144,6 @@ ExecVacuum(ParseState *pstate, VacuumStmt *vacstmt, bool isTopLevel)\n> > params.truncate = get_vacopt_ternary_value(opt);\n> > else if (strcmp(opt->defname, \"parallel\") == 0)\n> > {\n> > - parallel_option = true;\n> > if (opt->arg == NULL)\n> > {\n> > ereport(ERROR,\n> > @@ -199,7 +197,7 @@ ExecVacuum(ParseState *pstate, VacuumStmt *vacstmt, bool isTopLevel)\n> > !(params.options & (VACOPT_FULL | VACOPT_FREEZE)));\n> > Assert(!(params.options & VACOPT_SKIPTOAST));\n> >\n> > - if ((params.options & VACOPT_FULL) && parallel_option)\n> > + if ((params.options & VACOPT_FULL) && params.nworkers > 0)\n> > ereport(ERROR,\n> > (errcode(ERRCODE_FEATURE_NOT_SUPPORTED),\n> > errmsg(\"cannot specify both FULL and PARALLEL options\")));\n> > --\n> > 2.17.0\n> >\n> \n> Thanks Justin for the patch.\n> \n> Patch looks fine to me and it is fixing the issue. 
After applying this\n> patch, vacuum will work as:\n> \n> 1) vacuum (parallel 1, full 0);\n> -- vacuuming will be done with 1 parallel worker.\n> 2) vacuum (parallel 0, full 1);\n> -- full vacuuming will be done.\n> 3) vacuum (parallel 1, full 1);\n> -- will give error :ERROR: cannot specify both FULL and PARALLEL options\n> \n> 3rd example is telling that we can't give both FULL and PARALLEL\n> options but in 1st and 2nd, we are giving both FULL and PARALLEL\n> options and we are not giving any error. I think, irrespective of\n> value of both FULL and PARALLEL options, we should give error in all\n> the above mentioned three cases.\n\nI think the behavior is correct, but the error message could be improved, like:\n|ERROR: cannot specify FULL with PARALLEL jobs\nor similar.\n\n-- \nJustin\n\n\n", "msg_date": "Wed, 8 Apr 2020 13:38:54 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: Vacuum o/p with (full 1, parallel 0) option throwing an error" }, { "msg_contents": "On Wed, Apr 08, 2020 at 01:38:54PM -0500, Justin Pryzby wrote:\n> I think the behavior is correct, but the error message could be improved, like:\n> |ERROR: cannot specify FULL with PARALLEL jobs\n> or similar.\n\nPerhaps \"cannot use FULL and PARALLEL options together\"? 
I think that\nthis patch needs tests in sql/vacuum.sql.\n--\nMichael", "msg_date": "Thu, 9 Apr 2020 10:36:54 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Vacuum o/p with (full 1, parallel 0) option throwing an error" }, { "msg_contents": "On Thu, Apr 9, 2020 at 7:07 AM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Wed, Apr 08, 2020 at 01:38:54PM -0500, Justin Pryzby wrote:\n> > I think the behavior is correct, but the error message could be improved, like:\n> > |ERROR: cannot specify FULL with PARALLEL jobs\n> > or similar.\n>\n> Perhaps \"cannot use FULL and PARALLEL options together\"?\n>\n\nWe already have a similar message \"cannot specify both PARSER and COPY\noptions\", so I think the current message is fine.\n\n> I think that\n> this patch needs tests in sql/vacuum.sql.\n>\n\nWe already have one test that is testing an invalid combination of\nPARALLEL and FULL option, not sure of adding more on similar lines is\na good idea, but we can do that if it makes sense. What more tests\nyou have in mind which make sense here?\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 9 Apr 2020 11:05:50 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Vacuum o/p with (full 1, parallel 0) option throwing an error" }, { "msg_contents": "On Thu, Apr 9, 2020 at 12:09 AM Justin Pryzby <pryzby@telsasoft.com> wrote:\n>\n> On Thu, Apr 09, 2020 at 12:06:04AM +0530, Mahendra Singh Thalor wrote:\n> > On Wed, 8 Apr 2020 at 22:11, Justin Pryzby <pryzby@telsasoft.com> wrote:\n> > >\n> >\n> > Thanks Justin for the patch.\n> >\n> > Patch looks fine to me and it is fixing the issue. 
After applying this\n> > patch, vacuum will work as:\n> >\n> > 1) vacuum (parallel 1, full 0);\n> > -- vacuuming will be done with 1 parallel worker.\n> > 2) vacuum (parallel 0, full 1);\n> > -- full vacuuming will be done.\n> > 3) vacuum (parallel 1, full 1);\n> > -- will give error :ERROR: cannot specify both FULL and PARALLEL options\n> >\n> > 3rd example is telling that we can't give both FULL and PARALLEL\n> > options but in 1st and 2nd, we are giving both FULL and PARALLEL\n> > options and we are not giving any error. I think, irrespective of\n> > value of both FULL and PARALLEL options, we should give error in all\n> > the above mentioned three cases.\n>\n> I think the behavior is correct, but the error message could be improved,\n>\n\nYeah, I also think that the behavior is fine. We can do what Mahendra\nis saying but that will unnecessarily block some syntax and we might\nneed to introduce an extra variable to detect that in code.\n\n> like:\n> |ERROR: cannot specify FULL with PARALLEL jobs\n> or similar.\n>\n\nI don't see much problem with the current error message as a similar\nmessage is used someplace else also as mentioned in my previous reply.\nHowever, we can change it if we feel the current message is not\nconveying the cause of the problem.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 9 Apr 2020 11:22:17 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Vacuum o/p with (full 1, parallel 0) option throwing an error" }, { "msg_contents": "On Thu, 9 Apr 2020 at 14:52, Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Thu, Apr 9, 2020 at 12:09 AM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> >\n> > On Thu, Apr 09, 2020 at 12:06:04AM +0530, Mahendra Singh Thalor wrote:\n> > > On Wed, 8 Apr 2020 at 22:11, Justin Pryzby <pryzby@telsasoft.com> wrote:\n> > > >\n> > >\n> > > Thanks Justin for the patch.\n> > >\n> > > Patch looks fine to me 
and it is fixing the issue. After applying this\n> > > patch, vacuum will work as:\n> > >\n> > > 1) vacuum (parallel 1, full 0);\n> > > -- vacuuming will be done with 1 parallel worker.\n> > > 2) vacuum (parallel 0, full 1);\n> > > -- full vacuuming will be done.\n> > > 3) vacuum (parallel 1, full 1);\n> > > -- will give error :ERROR: cannot specify both FULL and PARALLEL options\n> > >\n> > > 3rd example is telling that we can't give both FULL and PARALLEL\n> > > options but in 1st and 2nd, we are giving both FULL and PARALLEL\n> > > options and we are not giving any error. I think, irrespective of\n> > > value of both FULL and PARALLEL options, we should give error in all\n> > > the above mentioned three cases.\n> >\n> > I think the behavior is correct, but the error message could be improved,\n> >\n>\n> Yeah, I also think that the behavior is fine.\n\nMe too.\n\n> We can do what Mahendra\n> is saying but that will unnecessarily block some syntax and we might\n> need to introduce an extra variable to detect that in code.\n\nISTM we have been using the expression \"specifying the option\" in log\nmessages when a user wrote the option in the command. But now that\nVACUUM command options can have a true/false it doesn't make sense to\nsay like \"if the option is specified we cannot do that\". 
So maybe\n\"cannot turn on FULL and PARALLEL options\" or something would be\nbetter, I think.\n\nRegards,\n\n-- \nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Thu, 9 Apr 2020 15:23:48 +0900", "msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Vacuum o/p with (full 1, parallel 0) option throwing an error" }, { "msg_contents": "On Thu, Apr 09, 2020 at 11:05:50AM +0530, Amit Kapila wrote:\n> On Thu, Apr 9, 2020 at 7:07 AM Michael Paquier <michael@paquier.xyz> wrote:\n>> I think that\n>> this patch needs tests in sql/vacuum.sql.\n> \n> We already have one test that is testing an invalid combination of\n> PARALLEL and FULL option, not sure of adding more on similar lines is\n> a good idea, but we can do that if it makes sense. What more tests\n> you have in mind which make sense here?\n\nAs you say, vacuum.sql includes this test:\nVACUUM (PARALLEL 2, FULL TRUE) pvactst; -- error, cannot use both PARALLEL and FULL\nERROR: cannot specify both FULL and PARALLEL options\n\nBut based on the discussion of this thread, it seems to me that we had\nbetter stress more option combinations, particularly the two following\nones:\nvacuum (full 0, parallel 1) foo;\nvacuum (full 1, parallel 0) foo;\n\nWithout that, how do you make sure that the compatibility wanted does\nnot break again in the future? As of HEAD, the first one passes and\nthe second one fails. 
And as Tushar is telling us we want to \nhandle both cases in a consistent way.\n--\nMichael", "msg_date": "Thu, 9 Apr 2020 15:44:26 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Vacuum o/p with (full 1, parallel 0) option throwing an error" }, { "msg_contents": "On Thu, Apr 9, 2020 at 11:54 AM Masahiko Sawada\n<masahiko.sawada@2ndquadrant.com> wrote:\n>\n> On Thu, 9 Apr 2020 at 14:52, Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n>\n> > We can do what Mahendra\n> > is saying but that will unnecessarily block some syntax and we might\n> > need to introduce an extra variable to detect that in code.\n>\n> ISTM we have been using the expression \"specifying the option\" in log\n> messages when a user wrote the option in the command. But now that\n> VACUUM command options can have a true/false it doesn't make sense to\n> say like \"if the option is specified we cannot do that\". So maybe\n> \"cannot turn on FULL and PARALLEL options\" or something would be\n> better, I think.\n>\n\nSure, we can change that, but isn't the existing example of similar\nmessage \"cannot specify both PARSER and COPY options\" occurs when\nboth the options have valid values? 
If so, we can use a similar\nprinciple here, no?\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 9 Apr 2020 12:31:55 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Vacuum o/p with (full 1, parallel 0) option throwing an error" }, { "msg_contents": "On Thu, Apr 9, 2020 at 12:14 PM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Thu, Apr 09, 2020 at 11:05:50AM +0530, Amit Kapila wrote:\n> > On Thu, Apr 9, 2020 at 7:07 AM Michael Paquier <michael@paquier.xyz> wrote:\n> >> I think that\n> >> this patch needs tests in sql/vacuum.sql.\n> >\n> > We already have one test that is testing an invalid combination of\n> > PARALLEL and FULL option, not sure of adding more on similar lines is\n> > a good idea, but we can do that if it makes sense. What more tests\n> > you have in mind which make sense here?\n>\n> As you say, vacuum.sql includes this test:\n> VACUUM (PARALLEL 2, FULL TRUE) pvactst; -- error, cannot use both PARALLEL and FULL\n> ERROR: cannot specify both FULL and PARALLEL options\n>\n> But based on the discussion of this thread, it seems to me that we had\n> better stress more option combinations, particularly the two following\n> ones:\n> vacuum (full 0, parallel 1) foo;\n> vacuum (full 1, parallel 0) foo;\n>\n> Without that, how do you make sure that the compatibility wanted does\n> not break again in the future? As of HEAD, the first one passes and\n> the second one fails. And as Tushar is telling us we want to\n> handle both cases in a consistent way.\n>\n\nWe can add more tests to validate the syntax, but my worry was to not\nincrease test timing by doing (parallel) vacuum. 
So maybe we can do\nsuch syntax validation on empty tables or you have any better idea?\n\n\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 9 Apr 2020 12:33:57 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Vacuum o/p with (full 1, parallel 0) option throwing an error" }, { "msg_contents": "On Thu, Apr 09, 2020 at 12:33:57PM +0530, Amit Kapila wrote:\n> We can add more tests to validate the syntax, but my worry was to not\n> increase test timing by doing (parallel) vacuum. So maybe we can do\n> such syntax validation on empty tables or you have any better idea?\n\nUsing empty tables for positive tests is the least expensive option.\n--\nMichael", "msg_date": "Thu, 9 Apr 2020 16:20:13 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Vacuum o/p with (full 1, parallel 0) option throwing an error" }, { "msg_contents": "On Thu, Apr 09, 2020 at 12:31:55PM +0530, Amit Kapila wrote:\n> Sure, we can change that, but isn't the existing example of similar\n> message \"cannot specify both PARSER and COPY options\" occurs when\n> both the options have valid values? 
If so, we can use a similar\n> principle here, no?\n\nA better comparison is with this one:\n\nsrc/bin/pg_dump/pg_restore.c: pg_log_error(\"cannot specify both --single-transaction and multiple jobs\");\n\nbut it doesn't say just: \"..specify both --single and --jobs\", which would be\nwrong in the same way, and which we already dealt with some time ago:\n\ncommit 14a4f6f3748df4ff63bb2d2d01146b2b98df20ef\nAuthor: Alvaro Herrera <alvherre@alvh.no-ip.org>\nDate: Tue Apr 14 00:06:35 2009 +0000\n\n pg_restore -jN does not equate \"multiple jobs\", so partly revert the\n previous patch.\n \n Per note from Tom.\n\n-- \nJustin\n\n\n", "msg_date": "Thu, 9 Apr 2020 02:57:02 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: Vacuum o/p with (full 1, parallel 0) option throwing an error" }, { "msg_contents": "On Thu, 9 Apr 2020 at 16:02, Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Thu, Apr 9, 2020 at 11:54 AM Masahiko Sawada\n> <masahiko.sawada@2ndquadrant.com> wrote:\n> >\n> > On Thu, 9 Apr 2020 at 14:52, Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >\n> >\n> > > We can do what Mahendra\n> > > is saying but that will unnecessarily block some syntax and we might\n> > > need to introduce an extra variable to detect that in code.\n> >\n> > ISTM we have been using the expression \"specifying the option\" in log\n> > messages when a user wrote the option in the command. But now that\n> > VACUUM command options can have a true/false it doesn't make sense to\n> > say like \"if the option is specified we cannot do that\". So maybe\n> > \"cannot turn on FULL and PARALLEL options\" or something would be\n> > better, I think.\n> >\n>\n> Sure, we can change that, but isn't the existing example of similar\n> message \"cannot specify both PARSER and COPY options\" occurs when\n> both the options have valid values? 
If so, we can use a similar\n> principle here, no?\n\nYes but the difference is that we cannot disable PARSER or COPY by\nspecifying options whereas we can do something like \"VACUUM (FULL\nfalse) tbl\" to disable FULL option. I might be misunderstanding the\nmeaning of \"specify\" though.\n\nRegards,\n\n-- \nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Thu, 9 Apr 2020 17:07:48 +0900", "msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Vacuum o/p with (full 1, parallel 0) option throwing an error" }, { "msg_contents": "On Thu, Apr 09, 2020 at 05:07:48PM +0900, Masahiko Sawada wrote:\n> Yes but the difference is that we cannot disable PARSER or COPY by\n> specifying options whereas we can do something like \"VACUUM (FULL\n> false) tbl\" to disable FULL option. I might be misunderstanding the\n> meaning of \"specify\" though.\n\nYou have it right.\n\nWe should fix the behavior, but change the error message for consistency with\nthat change, like so.\n\n-- \nJustin", "msg_date": "Thu, 9 Apr 2020 03:33:19 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: Vacuum o/p with (full 1, parallel 0) option throwing an error" }, { "msg_contents": "On Thu, Apr 9, 2020 at 1:36 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> On Thu, Apr 9, 2020 at 7:07 AM Michael Paquier <michael@paquier.xyz> wrote:\n> > On Wed, Apr 08, 2020 at 01:38:54PM -0500, Justin Pryzby wrote:\n> > > I think the behavior is correct, but the error message could be improved, like:\n> > > |ERROR: cannot specify FULL with PARALLEL jobs\n> > > or similar.\n> >\n> > Perhaps \"cannot use FULL and PARALLEL options together\"?\n> >\n>\n> We already have a similar message \"cannot specify both PARSER and COPY\n> options\", so I think the current message is fine.\n\nSo, whether the error message is OK depends on the 
details. The\nsituation as I understand it is that a vacuum cannot be both parallel\nand full. If you disallow every command that includes both key words,\nthen the message seems fine. But suppose you allow\n\nVACUUM (PARALLEL 1, FULL 0) foo;\n\nThere's no technical problem here, because the vacuum is not both\nparallel and full. But if you allow that case, then there is an error\nmessage problem, perhaps, because your error message says that you\ncannot specify both options, but here you did specify both options,\nand yet it worked. So really if this case is allowed a more accurate\nerror message would be something like:\n\nERROR: VACUUM FULL cannot be performed in parallel\n\nBut if you used this latter error message yet disallowed VACUUM\n(PARALLEL 1, FULL 0) then it again wouldn't make sense, because the\nerror message would be now forbidding something that you never tried\nto do.\n\nThe point is that we need to decide whether we're going to complain\nwhenever both options are specified in the syntax, or whether we're\ngoing to complain when they're combined in a way that we don't\nsupport. The error message we choose should match whatever decision we\nmake there.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Thu, 9 Apr 2020 10:00:52 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Vacuum o/p with (full 1, parallel 0) option throwing an error" }, { "msg_contents": "On Thu, Apr 9, 2020 at 2:03 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n>\n> On Thu, Apr 09, 2020 at 05:07:48PM +0900, Masahiko Sawada wrote:\n> > Yes but the difference is that we cannot disable PARSER or COPY by\n> > specifying options whereas we can do something like \"VACUUM (FULL\n> > false) tbl\" to disable FULL option. 
I might be misunderstanding the\n> > meaning of \"specify\" though.\n>\n> You have it right.\n>\n> We should fix the behavior, but change the error message for consistency with\n> that change, like so.\n>\n\nOkay, but I think the error message suggested by Robert \"ERROR: VACUUM\nFULL cannot be performed in parallel\" sounds better than what you have\nproposed. What do you think?\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 10 Apr 2020 10:34:02 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Vacuum o/p with (full 1, parallel 0) option throwing an error" }, { "msg_contents": "On Thu, Apr 9, 2020 at 7:31 PM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> On Thu, Apr 9, 2020 at 1:36 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > On Thu, Apr 9, 2020 at 7:07 AM Michael Paquier <michael@paquier.xyz> wrote:\n> > > On Wed, Apr 08, 2020 at 01:38:54PM -0500, Justin Pryzby wrote:\n> > > > I think the behavior is correct, but the error message could be improved, like:\n> > > > |ERROR: cannot specify FULL with PARALLEL jobs\n> > > > or similar.\n> > >\n> > > Perhaps \"cannot use FULL and PARALLEL options together\"?\n> > >\n> >\n> > We already have a similar message \"cannot specify both PARSER and COPY\n> > options\", so I think the current message is fine.\n>\n> So, whether the error message is OK depends on the details. The\n> situation as I understand it is that a vacuum cannot be both parallel\n> and full. If you disallow every command that includes both key words,\n> then the message seems fine. But suppose you allow\n>\n> VACUUM (PARALLEL 1, FULL 0) foo;\n>\n> There's no technical problem here, because the vacuum is not both\n> parallel and full. 
But if you allow that case, then there is an error\n> message problem, perhaps, because your error message says that you\n> cannot specify both options, but here you did specify both options,\n> and yet it worked. So really if this case is allowed a more accurate\n> error message would be something like:\n>\n> ERROR: VACUUM FULL cannot be performed in parallel\n>\n> But if you used this latter error message yet disallowed VACUUM\n> (PARALLEL 1, FULL 0) then it again wouldn't make sense, because the\n> error message would be now forbidding something that you never tried\n> to do.\n>\n> The point is that we need to decide whether we're going to complain\n> whenever both options are specified in the syntax, or whether we're\n> going to complain when they're combined in a way that we don't\n> support.\n>\n\nI would prefer later as I don't find it a good idea to unnecessarily\nblock some syntax.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 10 Apr 2020 10:37:08 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Vacuum o/p with (full 1, parallel 0) option throwing an error" }, { "msg_contents": "On Fri, 10 Apr 2020 at 14:04, Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Thu, Apr 9, 2020 at 2:03 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> >\n> > On Thu, Apr 09, 2020 at 05:07:48PM +0900, Masahiko Sawada wrote:\n> > > Yes but the difference is that we cannot disable PARSER or COPY by\n> > > specifying options whereas we can do something like \"VACUUM (FULL\n> > > false) tbl\" to disable FULL option. 
I might be misunderstanding the\n> > > meaning of \"specify\" though.\n> >\n> > You have it right.\n> >\n> > We should fix the behavior, but change the error message for consistency with\n> > that change, like so.\n> >\n>\n> Okay, but I think the error message suggested by Robert \"ERROR: VACUUM\n> FULL cannot be performed in parallel\" sounds better than what you have\n> proposed. What do you think?\n\nI totally agree.\n\nRegards,\n\n-- \nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Fri, 10 Apr 2020 14:13:12 +0900", "msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Vacuum o/p with (full 1, parallel 0) option throwing an error" }, { "msg_contents": "On Fri, Apr 10, 2020 at 10:34:02AM +0530, Amit Kapila wrote:\n> On Thu, Apr 9, 2020 at 2:03 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> > On Thu, Apr 09, 2020 at 05:07:48PM +0900, Masahiko Sawada wrote:\n> > > Yes but the difference is that we cannot disable PARSER or COPY by\n> > > specifying options whereas we can do something like \"VACUUM (FULL\n> > > false) tbl\" to disable FULL option. I might be misunderstanding the\n> > > meaning of \"specify\" though.\n> >\n> > You have it right.\n> >\n> > We should fix the behavior, but change the error message for consistency with\n> > that change, like so.\n> >\n> \n> Okay, but I think the error message suggested by Robert \"ERROR: VACUUM\n> FULL cannot be performed in parallel\" sounds better than what you have\n> proposed. What do you think?\n\nNo problem. 
I think I was trying to make my text similar to that from\n14a4f6f37.\n\nI realized that I didn't sq!uash my last patch, so it didn't include the\nfunctional change (which is maybe what Robert was referring to).\n\n-- \nJustin", "msg_date": "Fri, 10 Apr 2020 08:35:32 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: Vacuum o/p with (full 1, parallel 0) option throwing an error" }, { "msg_contents": "On Fri, Apr 10, 2020 at 7:05 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n>\n>\n> No problem. I think I was trying to make my text similar to that from\n> 14a4f6f37.\n>\n> I realized that I didn't sq!uash my last patch, so it didn't include the\n> functional change (which is maybe what Robert was referring to).\n>\n\nI think it is better to add a new test for temporary table which has\nless data. We don't want to increase test timings to test the\ncombination of options. I changed that in the attached patch. I will\ncommit this tomorrow unless you or anyone else has any more comments.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Mon, 13 Apr 2020 14:54:53 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Vacuum o/p with (full 1, parallel 0) option throwing an error" }, { "msg_contents": "On Mon, 13 Apr 2020 at 18:25, Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Fri, Apr 10, 2020 at 7:05 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> >\n> >\n> > No problem. I think I was trying to make my text similar to that from\n> > 14a4f6f37.\n> >\n> > I realized that I didn't sq!uash my last patch, so it didn't include the\n> > functional change (which is maybe what Robert was referring to).\n> >\n>\n> I think it is better to add a new test for temporary table which has\n> less data. We don't want to increase test timings to test the\n> combination of options. I changed that in the attached patch. 
I will\n> commit this tomorrow unless you or anyone else has any more comments.\n>\n\nThank you for updating the patch!\n\nI think we can update the documentation as well. Currently, the\ndocumentation says \"This option can't be used with the FULL option.\"\nbut we can say instead, for example, \"VACUUM FULL can't use parallel\nvacuum.\".\n\nAlso, I'm concerned that the documentation says that VACUUM FULL\ncannot use parallel vacuum and we compute the parallel degree when\nPARALLEL option is omitted, but the following command is accepted:\n\npostgres(1:55514)=# vacuum (full on) test;\nVACUUM\n\nInstead, we can say:\n\nIn plain VACUUM (without FULL), if the PARALLEL option is omitted,\nthen VACUUM decides the number of workers based on the number of\nindexes that support parallel vacuum operation on the relation which\nis further limited by max_parallel_maintenance_workers.\n\n(it just adds \"In plain VACUUM (without FULL)\" to the beginning of the\noriginal sentence.)\n\nWhat do you think?\n\n\nRegards,\n\n--\nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Mon, 13 Apr 2020 19:52:37 +0900", "msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Vacuum o/p with (full 1, parallel 0) option throwing an error" }, { "msg_contents": "On Mon, Apr 13, 2020 at 4:23 PM Masahiko Sawada\n<masahiko.sawada@2ndquadrant.com> wrote:\n>\n> On Mon, 13 Apr 2020 at 18:25, Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Fri, Apr 10, 2020 at 7:05 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> > >\n> > >\n> > > No problem. 
I think I was trying to make my text similar to that from\n> > > 14a4f6f37.\n> > >\n> > > I realized that I didn't sq!uash my last patch, so it didn't include the\n> > > functional change (which is maybe what Robert was referring to).\n> > >\n> >\n> > I think it is better to add a new test for temporary table which has\n> > less data. We don't want to increase test timings to test the\n> > combination of options. I changed that in the attached patch. I will\n> > commit this tomorrow unless you or anyone else has any more comments.\n> >\n>\n> Thank you for updating the patch!\n>\n> I think we can update the documentation as well. Currently, the\n> documentation says \"This option can't be used with the FULL option.\"\n> but we can say instead, for example, \"VACUUM FULL can't use parallel\n> vacuum.\".\n>\n\nI am not very sure about this. I don't think the current text is wrong\nespecially when you see the value we can specify there is described\nas: \"Specifies a non-negative integer value passed to the selected\noption.\". However, we can consider changing it if others also think\nthe proposed text or something like that is better than current text.\n\n> Also, I'm concerned that the documentation says that VACUUM FULL\n> cannot use parallel vacuum and we compute the parallel degree when\n> PARALLEL option is omitted, but the following command is accepted:\n>\n> postgres(1:55514)=# vacuum (full on) test;\n> VACUUM\n>\n> Instead, we can say:\n>\n> In plain VACUUM (without FULL), if the PARALLEL option is omitted,\n> then VACUUM decides the number of workers based on the number of\n> indexes that support parallel vacuum operation on the relation which\n> is further limited by max_parallel_maintenance_workers.\n>\n> (it just adds \"In plain VACUUM (without FULL)\" to the beginning of the\n> original sentence.)\n>\n\nYeah, something on these lines would be a good idea. 
Note that, we are\nalready planning to slightly change this particular sentence in\nanother patch [1].\n\n[1] - https://www.postgresql.org/message-id/20200322021801.GB2563%40telsasoft.com\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 13 Apr 2020 17:55:43 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Vacuum o/p with (full 1, parallel 0) option throwing an error" }, { "msg_contents": "On Mon, Apr 13, 2020 at 05:55:43PM +0530, Amit Kapila wrote:\n> On Mon, Apr 13, 2020 at 4:23 PM Masahiko Sawada\n> <masahiko.sawada@2ndquadrant.com> wrote:\n> I am not very sure about this. I don't think the current text is wrong\n> especially when you see the value we can specify there is described\n> as: \"Specifies a non-negative integer value passed to the selected\n> option.\". However, we can consider changing it if others also think\n> the proposed text or something like that is better than current text.\n\nFWIW, the current formulation in the docs looked fine to me.\n\n> Yeah, something on these lines would be a good idea. Note that, we are\n> already planning to slightly change this particular sentence in\n> another patch [1].\n> \n> [1] - https://www.postgresql.org/message-id/20200322021801.GB2563%40telsasoft.com\n\nMakes sense. I have two comments.\n\n ereport(ERROR,\n (errcode(ERRCODE_FEATURE_NOT_SUPPORTED),\n- errmsg(\"cannot specify both FULL and PARALLEL options\")));\n+ errmsg(\"VACUUM FULL cannot be performed in parallel\")));\nBetter to avoid a full sentence here [1]? This should be a \"cannot do\nfoo\" errror. 
\n\n-VACUUM (PARALLEL 1) tmp; -- disables parallel vacuum option\n+VACUUM (PARALLEL 1) tmp; -- parallel vacuum disabled for temp tables\n WARNING: disabling parallel option of vacuum on \"tmp\" --- cannot vacuum temporary tables in parallel\n+VACUUM (PARALLEL 0, FULL TRUE) tmp; -- can specify parallel disabled (even though that's implied by FULL)\n\nTo fully close the gap in the tests, I would also add a test for\n(PARALLEL 1, FULL false) where FULL directly specified, even if that\nsounds like a nit. That's fine to test even on a temporary table.\n\n[1]: https://www.postgresql.org/docs/devel/error-style-guide.html\n--\nMichael", "msg_date": "Tue, 14 Apr 2020 11:22:42 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Vacuum o/p with (full 1, parallel 0) option throwing an error" }, { "msg_contents": "On Tue, Apr 14, 2020 at 7:52 AM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> Makes sense. I have two comments.\n>\n> ereport(ERROR,\n> (errcode(ERRCODE_FEATURE_NOT_SUPPORTED),\n> - errmsg(\"cannot specify both FULL and PARALLEL options\")));\n> + errmsg(\"VACUUM FULL cannot be performed in parallel\")));\n> Better to avoid a full sentence here [1]? This should be a \"cannot do\n> foo\" errror.\n>\n\nI could see similar error messages in other places like below:\nCONCURRENTLY cannot be used when the materialized view is not populated\nCONCURRENTLY and WITH NO DATA options cannot be used together\nCOPY delimiter cannot be newline or carriage return\n\nAlso, I am not sure if it violates the style we have used in code. 
It\nseems the error message is short, succinct and conveys the required\ninformation.\n\n> -VACUUM (PARALLEL 1) tmp; -- disables parallel vacuum option\n> +VACUUM (PARALLEL 1) tmp; -- parallel vacuum disabled for temp tables\n> WARNING: disabling parallel option of vacuum on \"tmp\" --- cannot vacuum temporary tables in parallel\n> +VACUUM (PARALLEL 0, FULL TRUE) tmp; -- can specify parallel disabled (even though that's implied by FULL)\n>\n> To fully close the gap in the tests, I would also add a test for\n> (PARALLEL 1, FULL false) where FULL directly specified, even if that\n> sounds like a nit. That's fine to test even on a temporary table.\n>\n\nOkay, I will do this once we agree on the error message stuff.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 14 Apr 2020 08:55:15 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Vacuum o/p with (full 1, parallel 0) option throwing an error" }, { "msg_contents": "On Tue, Apr 14, 2020 at 8:55 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Tue, Apr 14, 2020 at 7:52 AM Michael Paquier <michael@paquier.xyz> wrote:\n> >\n>\n> > -VACUUM (PARALLEL 1) tmp; -- disables parallel vacuum option\n> > +VACUUM (PARALLEL 1) tmp; -- parallel vacuum disabled for temp tables\n> > WARNING: disabling parallel option of vacuum on \"tmp\" --- cannot vacuum temporary tables in parallel\n> > +VACUUM (PARALLEL 0, FULL TRUE) tmp; -- can specify parallel disabled (even though that's implied by FULL)\n> >\n> > To fully close the gap in the tests, I would also add a test for\n> > (PARALLEL 1, FULL false) where FULL directly specified, even if that\n> > sounds like a nit. That's fine to test even on a temporary table.\n> >\n>\n> Okay, I will do this once we agree on the error message stuff.\n>\n\nI have changed one of the existing tests to test the option suggested\nby you. 
Additionally, I have changed the docs as per suggestion from\nSawada-san. I haven't changed the error message. Let me know if you\nhave any more comments?\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Wed, 15 Apr 2020 08:54:17 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Vacuum o/p with (full 1, parallel 0) option throwing an error" }, { "msg_contents": "On Wed, Apr 15, 2020 at 08:54:17AM +0530, Amit Kapila wrote:\n> On Tue, Apr 14, 2020 at 8:55 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Tue, Apr 14, 2020 at 7:52 AM Michael Paquier <michael@paquier.xyz> wrote:\n> > >\n> >\n> > > -VACUUM (PARALLEL 1) tmp; -- disables parallel vacuum option\n> > > +VACUUM (PARALLEL 1) tmp; -- parallel vacuum disabled for temp tables\n> > > WARNING: disabling parallel option of vacuum on \"tmp\" --- cannot vacuum temporary tables in parallel\n> > > +VACUUM (PARALLEL 0, FULL TRUE) tmp; -- can specify parallel disabled (even though that's implied by FULL)\n> > >\n> > > To fully close the gap in the tests, I would also add a test for\n> > > (PARALLEL 1, FULL false) where FULL directly specified, even if that\n> > > sounds like a nit. That's fine to test even on a temporary table.\n> > >\n> >\n> > Okay, I will do this once we agree on the error message stuff.\n> >\n> \n> I have changed one of the existing tests to test the option suggested\n> by you. Additionally, I have changed the docs as per suggestion from\n> Sawada-san. I haven't changed the error message. 
Let me know if you\n> have any more comments?\n\nYou did:\n|...then the number of workers is determined based on the number of\n|indexes that support parallel vacuum operation on the [-relation,-]{+relation+} and is further\n|limited by <xref linkend=\"guc-max-parallel-workers-maintenance\"/>.\n\nI'd suggest to say instead:\n|...then the number of workers is determined based on the number of\n|indexes ON THE RELATION that support parallel vacuum operation, and is further\n|limited by <xref linkend=\"guc-max-parallel-workers-maintenance\"/>.\n\n-- \nJustin\n\n\n", "msg_date": "Tue, 14 Apr 2020 22:33:01 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: Vacuum o/p with (full 1, parallel 0) option throwing an error" }, { "msg_contents": "On Wed, Apr 15, 2020 at 9:03 AM Justin Pryzby <pryzby@telsasoft.com> wrote:\n>\n> On Wed, Apr 15, 2020 at 08:54:17AM +0530, Amit Kapila wrote:\n> > On Tue, Apr 14, 2020 at 8:55 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >\n> > > On Tue, Apr 14, 2020 at 7:52 AM Michael Paquier <michael@paquier.xyz> wrote:\n> > > >\n> > >\n> > > > -VACUUM (PARALLEL 1) tmp; -- disables parallel vacuum option\n> > > > +VACUUM (PARALLEL 1) tmp; -- parallel vacuum disabled for temp tables\n> > > > WARNING: disabling parallel option of vacuum on \"tmp\" --- cannot vacuum temporary tables in parallel\n> > > > +VACUUM (PARALLEL 0, FULL TRUE) tmp; -- can specify parallel disabled (even though that's implied by FULL)\n> > > >\n> > > > To fully close the gap in the tests, I would also add a test for\n> > > > (PARALLEL 1, FULL false) where FULL directly specified, even if that\n> > > > sounds like a nit. That's fine to test even on a temporary table.\n> > > >\n> > >\n> > > Okay, I will do this once we agree on the error message stuff.\n> > >\n> >\n> > I have changed one of the existing tests to test the option suggested\n> > by you. 
Additionally, I have changed the docs as per suggestion from\n> > Sawada-san. I haven't changed the error message. Let me know if you\n> > have any more comments?\n>\n> You did:\n> |...then the number of workers is determined based on the number of\n> |indexes that support parallel vacuum operation on the [-relation,-]{+relation+} and is further\n> |limited by <xref linkend=\"guc-max-parallel-workers-maintenance\"/>.\n>\n> I'd suggest to say instead:\n> |...then the number of workers is determined based on the number of\n> |indexes ON THE RELATION that support parallel vacuum operation, and is further\n> |limited by <xref linkend=\"guc-max-parallel-workers-maintenance\"/>.\n>\n\nI have not changed this now but I find your suggestion better than\nexisting wording. I'll change this before committing the patch unless\nthere are more comments.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 15 Apr 2020 09:12:37 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Vacuum o/p with (full 1, parallel 0) option throwing an error" }, { "msg_contents": "On Wed, Apr 15, 2020 at 9:12 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Wed, Apr 15, 2020 at 9:03 AM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> >\n> > On Wed, Apr 15, 2020 at 08:54:17AM +0530, Amit Kapila wrote:\n> > > On Tue, Apr 14, 2020 at 8:55 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > > >\n> > > > On Tue, Apr 14, 2020 at 7:52 AM Michael Paquier <michael@paquier.xyz> wrote:\n> > > > >\n> > > >\n> > > > > -VACUUM (PARALLEL 1) tmp; -- disables parallel vacuum option\n> > > > > +VACUUM (PARALLEL 1) tmp; -- parallel vacuum disabled for temp tables\n> > > > > WARNING: disabling parallel option of vacuum on \"tmp\" --- cannot vacuum temporary tables in parallel\n> > > > > +VACUUM (PARALLEL 0, FULL TRUE) tmp; -- can specify parallel disabled (even though that's implied by FULL)\n> > > > >\n> > > > 
> To fully close the gap in the tests, I would also add a test for\n> > > > > (PARALLEL 1, FULL false) where FULL directly specified, even if that\n> > > > > sounds like a nit. That's fine to test even on a temporary table.\n> > > > >\n> > > >\n> > > > Okay, I will do this once we agree on the error message stuff.\n> > > >\n> > >\n> > > I have changed one of the existing tests to test the option suggested\n> > > by you. Additionally, I have changed the docs as per suggestion from\n> > > Sawada-san. I haven't changed the error message. Let me know if you\n> > > have any more comments?\n> >\n> > You did:\n> > |...then the number of workers is determined based on the number of\n> > |indexes that support parallel vacuum operation on the [-relation,-]{+relation+} and is further\n> > |limited by <xref linkend=\"guc-max-parallel-workers-maintenance\"/>.\n> >\n> > I'd suggest to say instead:\n> > |...then the number of workers is determined based on the number of\n> > |indexes ON THE RELATION that support parallel vacuum operation, and is further\n> > |limited by <xref linkend=\"guc-max-parallel-workers-maintenance\"/>.\n> >\n>\n> I have not changed this now but I find your suggestion better than\n> existing wording. 
I'll change this before committing the patch unless\n> there are more comments.\n>\n\nPushed.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 16 Apr 2020 11:31:59 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Vacuum o/p with (full 1, parallel 0) option throwing an error" }, { "msg_contents": "On Thu, 16 Apr 2020 at 15:02, Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Wed, Apr 15, 2020 at 9:12 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Wed, Apr 15, 2020 at 9:03 AM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> > >\n> > > On Wed, Apr 15, 2020 at 08:54:17AM +0530, Amit Kapila wrote:\n> > > > On Tue, Apr 14, 2020 at 8:55 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > > > >\n> > > > > On Tue, Apr 14, 2020 at 7:52 AM Michael Paquier <michael@paquier.xyz> wrote:\n> > > > > >\n> > > > >\n> > > > > > -VACUUM (PARALLEL 1) tmp; -- disables parallel vacuum option\n> > > > > > +VACUUM (PARALLEL 1) tmp; -- parallel vacuum disabled for temp tables\n> > > > > > WARNING: disabling parallel option of vacuum on \"tmp\" --- cannot vacuum temporary tables in parallel\n> > > > > > +VACUUM (PARALLEL 0, FULL TRUE) tmp; -- can specify parallel disabled (even though that's implied by FULL)\n> > > > > >\n> > > > > > To fully close the gap in the tests, I would also add a test for\n> > > > > > (PARALLEL 1, FULL false) where FULL directly specified, even if that\n> > > > > > sounds like a nit. That's fine to test even on a temporary table.\n> > > > > >\n> > > > >\n> > > > > Okay, I will do this once we agree on the error message stuff.\n> > > > >\n> > > >\n> > > > I have changed one of the existing tests to test the option suggested\n> > > > by you. Additionally, I have changed the docs as per suggestion from\n> > > > Sawada-san. I haven't changed the error message. 
Let me know if you\n> > > > have any more comments?\n> > >\n> > > You did:\n> > > |...then the number of workers is determined based on the number of\n> > > |indexes that support parallel vacuum operation on the [-relation,-]{+relation+} and is further\n> > > |limited by <xref linkend=\"guc-max-parallel-workers-maintenance\"/>.\n> > >\n> > > I'd suggest to say instead:\n> > > |...then the number of workers is determined based on the number of\n> > > |indexes ON THE RELATION that support parallel vacuum operation, and is further\n> > > |limited by <xref linkend=\"guc-max-parallel-workers-maintenance\"/>.\n> > >\n> >\n> > I have not changed this now but I find your suggestion better than\n> > existing wording. I'll change this before committing the patch unless\n> > there are more comments.\n> >\n>\n> Pushed.\n\nThanks! I've updated the Open Items page.\n\nRegards,\n\n-- \nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Thu, 16 Apr 2020 16:39:00 +0900", "msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Vacuum o/p with (full 1, parallel 0) option throwing an error" } ]
[ { "msg_contents": "This seems to be a bug in master, v12, and (probably) v11, where \"FOR EACH FOR\"\nwas first allowed on partition tables (86f575948).\n\nI thought this would work like partitioned indexes (8b08f7d48), where detaching\na partition makes its index non-inherited, and attaching a partition marks a\npre-existing, matching partition as inherited rather than creating a new one.\n\nDROP TABLE t, t1;\nCREATE TABLE t(i int)PARTITION BY RANGE(i);\nCREATE TABLE t1 PARTITION OF t FOR VALUES FROM(1)TO(2);\nCREATE OR REPLACE FUNCTION trigf() RETURNS trigger LANGUAGE plpgsql AS $$ BEGIN END $$;\nCREATE TRIGGER trig AFTER INSERT ON t FOR EACH ROW EXECUTE FUNCTION trigf();\nSELECT tgrelid::regclass, * FROM pg_trigger WHERE tgrelid='t1'::regclass;\nALTER TABLE t DETACH PARTITION t1;\nALTER TABLE t ATTACH PARTITION t1 FOR VALUES FROM (1)TO(2);\nERROR: trigger \"trig\" for relation \"t1\" already exists\n\nDROP TRIGGER trig ON t1;\nERROR: cannot drop trigger trig on table t1 because trigger trig on table t requires it\nHINT: You can drop trigger trig on table t instead.\n\nI remember these, but they don't seem to be relevant to this issue, which seems\nto be independant.\n\n1fa846f1c9 Fix cloning of row triggers to sub-partitions\nb9b408c487 Record parents of triggers\n\nThe commit for partitioned indexes talks about using an pre-existing index on\nthe child as a \"convenience gadget\", puts indexes into pg_inherit, and\nintroduces \"ALTER INDEX..ATTACH PARTITION\" and \"CREATE INDEX..ON ONLY\".\n\nIt's probably rare for a duplicate index to be useful (unless rebuilding to be\nmore optimal, which is probably not reasonably interspersed with altering\ninheritence). But I don't know if that's equally true for triggers. 
So I'm\nnot sure what the intended behavior is, so I've stopped after implementing\na partial fix.", "msg_date": "Wed, 8 Apr 2020 10:24:12 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": true, "msg_subject": "DETACH PARTITION and FOR EACH ROW triggers on partitioned tables" }, { "msg_contents": "On 2020-Apr-08, Justin Pryzby wrote:\n\n> This seems to be a bug in master, v12, and (probably) v11, where \"FOR EACH FOR\"\n> was first allowed on partition tables (86f575948).\n> \n> I thought this would work like partitioned indexes (8b08f7d48), where detaching\n> a partition makes its index non-inherited, and attaching a partition marks a\n> pre-existing, matching partition as inherited rather than creating a new one.\n\nHmm. Let's agree to what behavior we want, and then we implement that.\nIt seems to me there are two choices:\n\n1. on detach, keep the trigger but make it independent of the trigger on\nparent. (This requires that the trigger is made dependent on the\ntrigger on parent, if the table is attached as partition again;\notherwise you'd end up with multiple copies of the trigger if you\ndetach/attach multiple times).\n\n2. 
on detach, remove the trigger from the partition.\n\nI think (2) is easier to implement, but (1) is the more convenient\nbehavior.\n\n(The current behavior is obviously a bug.)\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Wed, 8 Apr 2020 12:02:39 -0400", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: DETACH PARTITION and FOR EACH ROW triggers on partitioned tables" }, { "msg_contents": "On Wed, Apr 08, 2020 at 12:02:39PM -0400, Alvaro Herrera wrote:\n> On 2020-Apr-08, Justin Pryzby wrote:\n> \n> > This seems to be a bug in master, v12, and (probably) v11, where \"FOR EACH FOR\"\n> > was first allowed on partition tables (86f575948).\n> > \n> > I thought this would work like partitioned indexes (8b08f7d48), where detaching\n> > a partition makes its index non-inherited, and attaching a partition marks a\n> > pre-existing, matching partition as inherited rather than creating a new one.\n> \n> Hmm. Let's agree to what behavior we want, and then we implement that.\n> It seems to me there are two choices:\n> \n> 1. on detach, keep the trigger but make it independent of the trigger on\n> parent. (This requires that the trigger is made dependent on the\n> trigger on parent, if the table is attached as partition again;\n> otherwise you'd end up with multiple copies of the trigger if you\n> detach/attach multiple times).\n> \n> 2. on detach, remove the trigger from the partition.\n> \n> I think (2) is easier to implement, but (1) is the more convenient\n> behavior.\n\nAt telsasoft, we don't care (we uninherit tables before ALTERing parents to\navoid disruptive locking and to avoid worst-case disk use).\n\n(1) is consistent with the behavior for indexes, which is a slight advantage\nfor users' ability to understand and keep track of the behavior. 
But adding\ntriggers is pretty different so I'm not sure it's a totally compelling\nparallel.\n\n-- \nJustin\n\n\n", "msg_date": "Wed, 8 Apr 2020 11:44:33 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": true, "msg_subject": "Re: DETACH PARTITION and FOR EACH ROW triggers on partitioned tables" }, { "msg_contents": "Alvaro Herrera <alvherre@2ndquadrant.com> writes:\n> Hmm. Let's agree to what behavior we want, and then we implement that.\n> It seems to me there are two choices:\n\n> 1. on detach, keep the trigger but make it independent of the trigger on\n> parent. (This requires that the trigger is made dependent on the\n> trigger on parent, if the table is attached as partition again;\n> otherwise you'd end up with multiple copies of the trigger if you\n> detach/attach multiple times).\n\n> 2. on detach, remove the trigger from the partition.\n\n> I think (2) is easier to implement, but (1) is the more convenient\n> behavior.\n\nI think that #1 would soon lead to needing all the same infrastructure\nas we have for inherited columns and constraints, ie triggers would need\nequivalents of attislocal and attinhcount. I don't really want to go\nthere, so I'd vote for #2.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 08 Apr 2020 12:50:45 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: DETACH PARTITION and FOR EACH ROW triggers on partitioned tables" }, { "msg_contents": "On 2020-Apr-08, Tom Lane wrote:\n\n> Alvaro Herrera <alvherre@2ndquadrant.com> writes:\n> > Hmm. Let's agree to what behavior we want, and then we implement that.\n> > It seems to me there are two choices:\n> \n> > 1. on detach, keep the trigger but make it independent of the trigger on\n> > parent. 
(This requires that the trigger is made dependent on the\n> > trigger on parent, if the table is attached as partition again;\n> > otherwise you'd end up with multiple copies of the trigger if you\n> > detach/attach multiple times).\n> \n> > 2. on detach, remove the trigger from the partition.\n> \n> > I think (2) is easier to implement, but (1) is the more convenient\n> > behavior.\n> \n> I think that #1 would soon lead to needing all the same infrastructure\n> as we have for inherited columns and constraints, ie triggers would need\n> equivalents of attislocal and attinhcount. I don't really want to go\n> there, so I'd vote for #2.\n\nHmm. Those things are used for the legacy inheritance case supporting\nmultiple inheritance, where we need to figure out which parent the table\nis being detached (disinherited) from. But for partitioning we know\nwhich parent it is, since there can only be one. So I don't think that\nargument applies.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Wed, 8 Apr 2020 14:01:10 -0400", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: DETACH PARTITION and FOR EACH ROW triggers on partitioned tables" }, { "msg_contents": "Alvaro Herrera <alvherre@2ndquadrant.com> writes:\n> On 2020-Apr-08, Tom Lane wrote:\n>> I think that #1 would soon lead to needing all the same infrastructure\n>> as we have for inherited columns and constraints, ie triggers would need\n>> equivalents of attislocal and attinhcount. I don't really want to go\n>> there, so I'd vote for #2.\n\n> Hmm. Those things are used for the legacy inheritance case supporting\n> multiple inheritance, where we need to figure out which parent the table\n> is being detached (disinherited) from. But for partitioning we know\n> which parent it is, since there can only be one. 
So I don't think that\n> argument applies.\n\nMy point is that so long as you only allow the case of exactly one parent,\nyou can just delete the child trigger, because it must belong to that\nparent. As soon as there's any flexibility, you are going to end up\nreinventing all the stuff we had to invent to manage\nmaybe-or-maybe-not-inherited columns. So I think the \"detach\" idea\nis the first step on that road, and I counsel not taking that step.\n\n(This implies that when creating a child trigger, we should error out,\n*not* allow the case, if there's already a trigger by that name. Not\nsure if that's what happens today, but again I'd say that's what we\nshould do to avoid complicated cases.)\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 08 Apr 2020 14:09:18 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: DETACH PARTITION and FOR EACH ROW triggers on partitioned tables" }, { "msg_contents": "On Thu, Apr 9, 2020 at 3:09 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Alvaro Herrera <alvherre@2ndquadrant.com> writes:\n> > On 2020-Apr-08, Tom Lane wrote:\n> >> I think that #1 would soon lead to needing all the same infrastructure\n> >> as we have for inherited columns and constraints, ie triggers would need\n> >> equivalents of attislocal and attinhcount. I don't really want to go\n> >> there, so I'd vote for #2.\n>\n> > Hmm. Those things are used for the legacy inheritance case supporting\n> > multiple inheritance, where we need to figure out which parent the table\n> > is being detached (disinherited) from. But for partitioning we know\n> > which parent it is, since there can only be one. So I don't think that\n> > argument applies.\n>\n> My point is that so long as you only allow the case of exactly one parent,\n> you can just delete the child trigger, because it must belong to that\n> parent. 
As soon as there's any flexibility, you are going to end up\n> reinventing all the stuff we had to invent to manage\n> maybe-or-maybe-not-inherited columns. So I think the \"detach\" idea\n> is the first step on that road, and I counsel not taking that step.\n\nAs mentioned upthread, we have behavior #1 for indexes (attach\nexisting / detach & keep), without any of the *islocal, *inhcount\ninfrastructure. It is a bit complex, because we need logic to check\nthe equivalence of an existing index on the partition being attached,\nso implementing the same behavior for trigger is going to have to be\nalmost as complex. Considering that #2 will be much simpler to\nimplement, but would be asymmetric with everything else.\n\n-- \n\nAmit Langote\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 9 Apr 2020 22:04:02 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": false, "msg_subject": "Re: DETACH PARTITION and FOR EACH ROW triggers on partitioned tables" }, { "msg_contents": "Amit Langote <amitlangote09@gmail.com> writes:\n> On Thu, Apr 9, 2020 at 3:09 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> My point is that so long as you only allow the case of exactly one parent,\n>> you can just delete the child trigger, because it must belong to that\n>> parent. As soon as there's any flexibility, you are going to end up\n>> reinventing all the stuff we had to invent to manage\n>> maybe-or-maybe-not-inherited columns. So I think the \"detach\" idea\n>> is the first step on that road, and I counsel not taking that step.\n\n> As mentioned upthread, we have behavior #1 for indexes (attach\n> existing / detach & keep), without any of the *islocal, *inhcount\n> infrastructure. It is a bit complex, because we need logic to check\n> the equivalence of an existing index on the partition being attached,\n> so implementing the same behavior for trigger is going to have to be\n> almost as complex. 
Considering that #2 will be much simpler to\n> implement, but would be asymmetric with everything else.\n\nI think there is justification for jumping through some hoops for\nindexes, because they can be extremely expensive to recreate.\nThe same argument doesn't hold even a little bit for child\ntriggers, though.\n\nAlso it can be expected that an index will still behave sensibly after\nits table is standalone, whereas that's far from obvious for a trigger\nthat was meant to work on partition member tables.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 09 Apr 2020 09:46:38 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: DETACH PARTITION and FOR EACH ROW triggers on partitioned tables" }, { "msg_contents": "On Thu, Apr 09, 2020 at 09:46:38AM -0400, Tom Lane wrote:\n> Amit Langote <amitlangote09@gmail.com> writes:\n> > On Thu, Apr 9, 2020 at 3:09 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >> My point is that so long as you only allow the case of exactly one parent,\n> >> you can just delete the child trigger, because it must belong to that\n> >> parent. As soon as there's any flexibility, you are going to end up\n> >> reinventing all the stuff we had to invent to manage\n> >> maybe-or-maybe-not-inherited columns. So I think the \"detach\" idea\n> >> is the first step on that road, and I counsel not taking that step.\n> \n> > As mentioned upthread, we have behavior #1 for indexes (attach\n> > existing / detach & keep), without any of the *islocal, *inhcount\n> > infrastructure. It is a bit complex, because we need logic to check\n> > the equivalence of an existing index on the partition being attached,\n> > so implementing the same behavior for trigger is going to have to be\n> > almost as complex. 
Considering that #2 will be much simpler to\n> > implement, but would be asymmetric with everything else.\n> \n> I think there is justification for jumping through some hoops for\n> indexes, because they can be extremely expensive to recreate.\n> The same argument doesn't hold even a little bit for child\n> triggers, though.\n> \n> Also it can be expected that an index will still behave sensibly after\n> its table is standalone, whereas that's far from obvious for a trigger\n> that was meant to work on partition member tables.\n\nI haven't heard a compelling argument for or against either way.\n\nMaybe the worst behavior might be if, during ATTACH, we searched for a matching\ntrigger, and \"merged\" it (marked it inherited) if it matched. That could be\nbad if someone *wanted* two triggers, which seems unlikely, but to each their\nown.\n\nI implemented the simple way (and, as an experiment, 75% of the hard way).\n\nIt occured to me that we don't currently distinguish between a trigger on a\nchild table, and a trigger on a parent table which was recursively created on a\nchild. 
That makes sense for indexes and constraints, since the objects persist\nif the table is detached, so it doesn't matter how it was defined.\n\nBut if we remove trigger during DETACH, then it's *not* the same as a trigger\nthat was defined on the child, and I suggest that in v13 we should make that\nvisible.\n\n-- \nJustin", "msg_date": "Sat, 18 Apr 2020 19:22:06 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": true, "msg_subject": "Re: DETACH PARTITION and FOR EACH ROW triggers on partitioned tables" }, { "msg_contents": "v3 fixes a brown-paper-bag logic error.\n\n-- \nJustin", "msg_date": "Sat, 18 Apr 2020 20:28:11 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": true, "msg_subject": "Re: DETACH PARTITION and FOR EACH ROW triggers on partitioned tables" }, { "msg_contents": "On 2020-Apr-18, Justin Pryzby wrote:\n\n> I haven't heard a compelling argument for or against either way.\n> \n> Maybe the worst behavior might be if, during ATTACH, we searched for a matching\n> trigger, and \"merged\" it (marked it inherited) if it matched. That could be\n> bad if someone *wanted* two triggers, which seems unlikely, but to each their\n> own.\n\nI think the simplicity argument trumps the other ones, so I agree to go\nwith your v3 patch proposed downthread.\n\nWhat happens if you detach the parent? I mean, should the trigger\nremoval recurse to children?\n\n> It occured to me that we don't currently distinguish between a trigger on a\n> child table, and a trigger on a parent table which was recursively created on a\n> child. 
That makes sense for indexes and constraints, since the objects persist\n> if the table is detached, so it doesn't matter how it was defined.\n> \n> But if we remove trigger during DETACH, then it's *not* the same as a trigger\n> that was defined on the child, and I suggest that in v13 we should make that\n> visible.\n\nHmm, interesting point -- whether the trigger is partition or not is\nimportant because it affects what happens on detach. I agree that we\nshould make it visible. Is the proposed single bit \"PARTITION\" good\nenough, or should we indicate what's the ancestor table that defines the\npartition?\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Sun, 19 Apr 2020 15:13:29 -0400", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: DETACH PARTITION and FOR EACH ROW triggers on partitioned tables" }, { "msg_contents": "On Wed, Apr 08, 2020 at 11:44:33AM -0500, Justin Pryzby wrote:\n> On Wed, Apr 08, 2020 at 12:02:39PM -0400, Alvaro Herrera wrote:\n> > On 2020-Apr-08, Justin Pryzby wrote:\n> > \n> > > This seems to be a bug in master, v12, and (probably) v11, where \"FOR EACH FOR\"\n> > > was first allowed on partition tables (86f575948).\n> > > \n> > > I thought this would work like partitioned indexes (8b08f7d48), where detaching\n> > > a partition makes its index non-inherited, and attaching a partition marks a\n> > > pre-existing, matching partition as inherited rather than creating a new one.\n> > \n> > Hmm. Let's agree to what behavior we want, and then we implement that.\n> > It seems to me there are two choices:\n> > \n> > 1. on detach, keep the trigger but make it independent of the trigger on\n> > parent. 
(This requires that the trigger is made dependent on the\n> > trigger on parent, if the table is attached as partition again;\n> > otherwise you'd end up with multiple copies of the trigger if you\n> > detach/attach multiple times).\n> > \n> > 2. on detach, remove the trigger from the partition.\n> > \n> > I think (2) is easier to implement, but (1) is the more convenient\n> > behavior.\n> \n> At telsasoft, we don't care (we uninherit tables before ALTERing parents to\n> avoid disruptive locking and to avoid worst-case disk use).\n\nI realized that I was wrong about what would be most desirable for us, for an\nuncommon case:\n\nOur loader INSERTs into the child table, not the parent (I think I did that to\ntry to implement UPSERT before partitioned indexes in v11).\n\nAll but the newest partitions are DETACHed when we need to promote a column.\n\nIt's probably rare that we'd be inserting into a table old enough to be\ndetached, and normally that would be ok, but if a trigger were missing, it\nwould misbehave. In our use-case, we're creating trigger on the parent as a\nconvenient way to maintain them on the partitions, which doesn't work if a\ntable exists but detached..\n\nSo we'd actually prefer the behavior of indexes/constraints, where the trigger\nis preserved if the child is detached. I'm not requesting to do that just for\nour use case, which may be atypical or not a good model, but adding our one\ndata point.\n\n-- \nJustin\n\n\n", "msg_date": "Sun, 19 Apr 2020 14:18:54 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": true, "msg_subject": "Re: DETACH PARTITION and FOR EACH ROW triggers on partitioned tables" }, { "msg_contents": "On 2020-Apr-19, Justin Pryzby wrote:\n\n> It's probably rare that we'd be inserting into a table old enough to be\n> detached, and normally that would be ok, but if a trigger were missing, it\n> would misbehave. 
In our use-case, we're creating trigger on the parent as a\n> convenient way to maintain them on the partitions, which doesn't work if a\n> table exists but detached..\n> \n> So we'd actually prefer the behavior of indexes/constraints, where the trigger\n> is preserved if the child is detached. I'm not requesting to do that just for\n> our use case, which may be atypical or not a good model, but adding our one\n> data point.\n\nI think the easiest way to implement this is to have two triggers -- the\none that's direct in the partition checks whether the table is a\npartition and does nothing in that case.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Sun, 19 Apr 2020 16:38:15 -0400", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: DETACH PARTITION and FOR EACH ROW triggers on partitioned tables" }, { "msg_contents": "On Sun, Apr 19, 2020 at 03:13:29PM -0400, Alvaro Herrera wrote:\n> On 2020-Apr-18, Justin Pryzby wrote:\n> > I haven't heard a compelling argument for or against either way.\n> > \n> > Maybe the worst behavior might be if, during ATTACH, we searched for a matching\n> > trigger, and \"merged\" it (marked it inherited) if it matched. That could be\n> > bad if someone *wanted* two triggers, which seems unlikely, but to each their\n> > own.\n> \n> I think the simplicity argument trumps the other ones, so I agree to go\n> with your v3 patch proposed downthread.\n> \n> What happens if you detach the parent? I mean, should the trigger\n> removal recurse to children?\n\nIt think it should probably exactly undo what CloneRowTriggersToPartition does.\n..and I guess you're trying to politely say that it didn't. 
I tried to fix in\nv4 - please check if that's right.\n\n> > It occured to me that we don't currently distinguish between a trigger on a\n> > child table, and a trigger on a parent table which was recursively created on a\n> > child. That makes sense for indexes and constraints, since the objects persist\n> > if the table is detached, so it doesn't matter how it was defined.\n> > \n> > But if we remove trigger during DETACH, then it's *not* the same as a trigger\n> > that was defined on the child, and I suggest that in v13 we should make that\n> > visible.\n> \n> Hmm, interesting point -- whether the trigger is partition or not is\n> important because it affects what happens on detach. I agree that we\n> should make it visible. Is the proposed single bit \"PARTITION\" good\n> enough, or should we indicate what's the ancestor table that defines the\n> partition?\n\nYea, it's an obvious thing to do.\n\nOne issue is that tgparentid is new, so showing the partition status of the\ntrigger when connected to an pre-13 server would be nontrivial: b9b408c48.\n\n-- \nJustin", "msg_date": "Sun, 19 Apr 2020 15:49:12 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": true, "msg_subject": "Re: DETACH PARTITION and FOR EACH ROW triggers on partitioned tables" }, { "msg_contents": "On Mon, Apr 20, 2020 at 5:49 AM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> On Sun, Apr 19, 2020 at 03:13:29PM -0400, Alvaro Herrera wrote:\n> > On 2020-Apr-18, Justin Pryzby wrote:\n> > > I haven't heard a compelling argument for or against either way.\n> > >\n> > > Maybe the worst behavior might be if, during ATTACH, we searched for a matching\n> > > trigger, and \"merged\" it (marked it inherited) if it matched. 
That could be\n> > > bad if someone *wanted* two triggers, which seems unlikely, but to each their\n> > > own.\n> >\n> > I think the simplicity argument trumps the other ones, so I agree to go\n> > with your v3 patch proposed downthread.\n> >\n> > What happens if you detach the parent? I mean, should the trigger\n> > removal recurse to children?\n>\n> It think it should probably exactly undo what CloneRowTriggersToPartition does.\n> ..and I guess you're trying to politely say that it didn't. I tried to fix in\n> v4 - please check if that's right.\n\nLooks correct to me. Maybe have a test cover that?\n\n> > > It occured to me that we don't currently distinguish between a trigger on a\n> > > child table, and a trigger on a parent table which was recursively created on a\n> > > child. That makes sense for indexes and constraints, since the objects persist\n> > > if the table is detached, so it doesn't matter how it was defined.\n> > >\n> > > But if we remove trigger during DETACH, then it's *not* the same as a trigger\n> > > that was defined on the child, and I suggest that in v13 we should make that\n> > > visible.\n> >\n> > Hmm, interesting point -- whether the trigger is partition or not is\n> > important because it affects what happens on detach. I agree that we\n> > should make it visible. 
Is the proposed single bit \"PARTITION\" good\n> > enough, or should we indicate what's the ancestor table that defines the\n> > partition?\n>\n> Yea, it's an obvious thing to do.\n\nThis:\n\n+ \"false AS tgisinternal\"),\n+ (pset.sversion >= 13000 ?\n+ \"pg_partition_root(t.tgrelid) AS parent\" :\n+ \"'' AS parent\"),\n+ oid);\n\n\nlooks wrong, because the actual partition root may not also be the\ntrigger parent root, for example:\n\ncreate table f (a int references p) partition by list (a);\ncreate table f1 partition of f for values in (1) partition by list (a);\ncreate table f11 partition of f for values in (1);\ncreate function trigfunc() returns trigger language plpgsql as $$\nbegin raise notice '%', new; return new; end; $$;\ncreate trigger trig before insert on f1 for each row execute function\ntrigfunc();\n\\d f11\n Table \"public.f11\"\n Column | Type | Collation | Nullable | Default\n--------+---------+-----------+----------+---------\n a | integer | | |\nPartition of: f1 FOR VALUES IN (1)\nTriggers:\n trig BEFORE INSERT ON f11 FOR EACH ROW EXECUTE FUNCTION\ntrigfunc(), ON TABLE f\n\nHere, ON TABLE should say \"f1\".\n\nThe following gets the correct parent for me:\n\n- (pset.sversion >= 13000 ?\n- \"pg_partition_root(t.tgrelid) AS parent\" :\n- \"'' AS parent\"),\n+ (pset.sversion >= 130000 ?\n+ \"(SELECT relid\"\n+ \" FROM pg_trigger, pg_partition_ancestors(t.tgrelid)\"\n+ \" WHERE tgname = t.tgname AND tgrelid = relid\"\n+ \" AND tgparentid = 0) AS parent\" :\n+ \" null AS parent\"),\n\nThe server version number being compared against was missing a zero in\nyour patch.\n\nAlso, how about, for consistency, making the parent table labeling of\nthe trigger look similar to that for the foreign constraint, so\ninstead of:\n\nTriggers:\n trig BEFORE INSERT ON f11 FOR EACH ROW EXECUTE FUNCTION\ntrigfunc(), ON TABLE f1\n\nhow about:\n\nTriggers:\n TABLE \"f1\" TRIGGER \"trig\" BEFORE INSERT ON f11 FOR EACH ROW\nEXECUTE FUNCTION trigfunc()\n\n--\nAmit 
Langote\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 20 Apr 2020 18:35:44 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": false, "msg_subject": "Re: DETACH PARTITION and FOR EACH ROW triggers on partitioned tables" }, { "msg_contents": "On Mon, Apr 20, 2020 at 06:35:44PM +0900, Amit Langote wrote:\n> On Mon, Apr 20, 2020 at 5:49 AM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> > On Sun, Apr 19, 2020 at 03:13:29PM -0400, Alvaro Herrera wrote:\n> > > What happens if you detach the parent? I mean, should the trigger\n> > > removal recurse to children?\n> >\n> > It think it should probably exactly undo what CloneRowTriggersToPartition does.\n> > ..and I guess you're trying to politely say that it didn't. I tried to fix in\n> > v4 - please check if that's right.\n> \n> Looks correct to me. Maybe have a test cover that?\n\nI included such a test with the v4 patch.\n\n> > > > But if we remove trigger during DETACH, then it's *not* the same as a trigger\n> > > > that was defined on the child, and I suggest that in v13 we should make that\n> > > > visible.\n> > >\n> > > Hmm, interesting point -- whether the trigger is partition or not is\n> > > important because it affects what happens on detach. I agree that we\n> > > should make it visible. 
Is the proposed single bit \"PARTITION\" good\n> > > enough, or should we indicate what's the ancestor table that defines the\n> > > partition?\n> >\n> > Yea, it's an obvious thing to do.\n> \n> This:\n> \n> + \"false AS tgisinternal\"),\n> + (pset.sversion >= 13000 ?\n> + \"pg_partition_root(t.tgrelid) AS parent\" :\n> + \"'' AS parent\"),\n> + oid);\n> \n> \n> looks wrong, because the actual partition root may not also be the\n> trigger parent root, for example:\n\nSigh, right.\n\n> The following gets the correct parent for me:\n> \n> - (pset.sversion >= 13000 ?\n> - \"pg_partition_root(t.tgrelid) AS parent\" :\n> - \"'' AS parent\"),\n> + (pset.sversion >= 130000 ?\n> + \"(SELECT relid\"\n> + \" FROM pg_trigger, pg_partition_ancestors(t.tgrelid)\"\n> + \" WHERE tgname = t.tgname AND tgrelid = relid\"\n> + \" AND tgparentid = 0) AS parent\" :\n> + \" null AS parent\"),\n\nI'm happy to see that this doesn't require a recursive cte, at least.\nI was trying to think how to break it by returning multiple results or results\nout of order, but I think that can't happen.\n\n> Also, how about, for consistency, making the parent table labeling of\n> the trigger look similar to that for the foreign constraint, so\n> Triggers:\n> TABLE \"f1\" TRIGGER \"trig\" BEFORE INSERT ON f11 FOR EACH ROW EXECUTE FUNCTION trigfunc()\n\nI'll leave that for committer to decide.\n\nI split into separate patches since only 0001 should be backpatched (with\ns/OidIsValid(tgparentid)/isPartitionTrigger/).\n\n-- \nJustin", "msg_date": "Mon, 20 Apr 2020 14:57:40 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": true, "msg_subject": "Re: DETACH PARTITION and FOR EACH ROW triggers on partitioned tables" }, { "msg_contents": "> +\t\t\tdeleteDependencyRecordsFor(TriggerRelationId,\n> +\t\t\t\t\tpg_trigger->oid,\n> +\t\t\t\t\tfalse);\n> +\t\t\tdeleteDependencyRecordsFor(RelationRelationId,\n> +\t\t\t\t\tpg_trigger->oid,\n> +\t\t\t\t\tfalse);\n> +\n> 
+\t\t\tCommandCounterIncrement();\n> +\t\t\tObjectAddressSet(object, TriggerRelationId, pg_trigger->oid);\n> +\t\t\tperformDeletion(&object, DROP_RESTRICT, PERFORM_DELETION_INTERNAL);\n> +\t\t}\n> +\n> +\t\tsystable_endscan(scan);\n> +\t\ttable_close(tgrel, RowExclusiveLock);\n> +\t}\n\nTwo small issues here. First, your second call to\ndeleteDependencyRecordsFor did nothing, because your first call deleted\nall the dependency records. I changed that to two\ndeleteDependencyRecordsForClass() calls, that actually do what you\nintended.\n\nThe other is that instead of deleting each trigger, we can accumulate\nthem to delete with a single performMultipleDeletions call; this also\nmeans we get to do CommandCounterIncrement just once.\n\nv6 fixes those things and AFAICS is ready to push.\n\nI haven't reviewed your 0002 carefully, but (as inventor of the \"TABLE\nt\" marker for FK constraints) I agree with Amit that we should imitate\nthat instead of coming up with a new way to show it.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Mon, 20 Apr 2020 19:04:06 -0400", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: DETACH PARTITION and FOR EACH ROW triggers on partitioned tables" }, { "msg_contents": "On 2020-Apr-20, Alvaro Herrera wrote:\n\n> +\twhile (HeapTupleIsValid(trigtup = systable_getnext(scan)))\n> +\t{\n> +\t\tForm_pg_trigger pg_trigger = (Form_pg_trigger) GETSTRUCT(trigtup);\n> +\t\tObjectAddress trig;\n> +\n> +\t\t/* Ignore triggers that weren't cloned */\n> +\t\tif (!OidIsValid(pg_trigger->tgparentid) ||\n> +\t\t\t!pg_trigger->tgisinternal ||\n> +\t\t\t!TRIGGER_FOR_ROW(pg_trigger->tgtype))\n> +\t\t\tcontinue;\n\nActually, shouldn't we be checking just \"!OidIsValid(pg_trigger->tgparentid)\"\nhere? Surely the other two conditions should already not matter either\nway if tgparentid is set. 
I can't see us starting to clone\nfor-statement triggers, but I'm not sure I trust the internal marking to\nremain one way or the other.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Tue, 21 Apr 2020 11:45:53 -0400", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: DETACH PARTITION and FOR EACH ROW triggers on partitioned tables" }, { "msg_contents": "I think I also owe the attached doc updates.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Tue, 21 Apr 2020 12:20:38 -0400", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: DETACH PARTITION and FOR EACH ROW triggers on partitioned tables" }, { "msg_contents": "On Tue, Apr 21, 2020 at 12:20:38PM -0400, Alvaro Herrera wrote:\n> diff --git a/doc/src/sgml/ref/alter_table.sgml b/doc/src/sgml/ref/alter_table.sgml\n> index 7595e609b5..233905552c 100644\n> --- a/doc/src/sgml/ref/alter_table.sgml\n> +++ b/doc/src/sgml/ref/alter_table.sgml\n> @@ -941,13 +943,14 @@ WITH ( MODULUS <replaceable class=\"parameter\">numeric_literal</replaceable>, REM\n> <term><literal>DETACH PARTITION</literal> <replaceable class=\"parameter\">partition_name</replaceable></term>\n> <listitem>\n> <para>\n> This form detaches specified partition of the target table. The detached\n> partition continues to exist as a standalone table, but no longer has any\n> ties to the table from which it was detached. Any indexes that were\n> - attached to the target table's indexes are detached.\n> + attached to the target table's indexes are detached. Any triggers that\n> + were created to mirror those in the target table are removed.\n\nCan you say \"cloned\" here instead of mirror ?\n> + attached to the target table's indexes are detached. 
Any triggers that\n> + were created as clones of triggers in the target table are removed.\n\nAlso, I see in the surrounding context a missing word? \nThis form detaches THE specified partition of the target table.\n\n-- \nJustin\n\n\n", "msg_date": "Tue, 21 Apr 2020 11:45:10 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": true, "msg_subject": "Re: DETACH PARTITION and FOR EACH ROW triggers on partitioned tables" }, { "msg_contents": "On 2020-Apr-20, Justin Pryzby wrote:\n\n> On Mon, Apr 20, 2020 at 06:35:44PM +0900, Amit Langote wrote:\n\n> > Also, how about, for consistency, making the parent table labeling of\n> > the trigger look similar to that for the foreign constraint, so\n> > Triggers:\n> > TABLE \"f1\" TRIGGER \"trig\" BEFORE INSERT ON f11 FOR EACH ROW EXECUTE FUNCTION trigfunc()\n> \n> I'll leave that for committer to decide.\n\nPushed. Many thanks for this!\n\nChanges: I thought that printing the \"ON TABLE\" bit when it's defined in\nthe same table is pointless and ugly, so I added a NULLIF to prevent it\nin that case (it's not every day that you can put NULLIF to work). I\nalso changed the empty string to NULL for the case with older servers,\nso that it doesn't print a lame \"ON TABLE \" clause for them. 
Lastly,\nadded pg_catalog qualifications everywhere needed.\n\nContrary to what I had said, I decided to leave the output as submitted;\nthe constraint lines are not really precedent against it:\n\n55432 13devel 24286=# \\d lev3\n Partitioned table \"public.lev3\"\n Column │ Type │ Collation │ Nullable │ Default \n────────┼─────────┼───────────┼──────────┼─────────\n a │ integer │ │ not null │ \nPartition of: lev2 FOR VALUES IN (3)\nPartition key: LIST (a)\nIndexes:\n \"lev3_pkey\" PRIMARY KEY, btree (a)\nForeign-key constraints:\n TABLE \"lev1\" CONSTRAINT \"lev1_a_fkey\" FOREIGN KEY (a) REFERENCES lev1(a)\nReferenced by:\n TABLE \"lev1\" CONSTRAINT \"lev1_a_fkey\" FOREIGN KEY (a) REFERENCES lev1(a)\nTriggers:\n tt AFTER UPDATE ON lev3 FOR EACH ROW EXECUTE FUNCTION trigger_nothing(), ON TABLE lev2\nNumber of partitions: 1 (Use \\d+ to list them.)\n\nIn the \"FK constraints\" and \"referenced by\" entries, it looks natural\nsince the constraint refers to a table. Not so in the trigger case.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Tue, 21 Apr 2020 19:03:30 -0400", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: DETACH PARTITION and FOR EACH ROW triggers on partitioned tables" }, { "msg_contents": "On Tue, Apr 21, 2020 at 07:03:30PM -0400, Alvaro Herrera wrote:\n> On 2020-Apr-20, Justin Pryzby wrote:\n> \n> > On Mon, Apr 20, 2020 at 06:35:44PM +0900, Amit Langote wrote:\n> \n> > > Also, how about, for consistency, making the parent table labeling of\n> > > the trigger look similar to that for the foreign constraint, so\n> > > Triggers:\n> > > TABLE \"f1\" TRIGGER \"trig\" BEFORE INSERT ON f11 FOR EACH ROW EXECUTE FUNCTION trigfunc()\n> > \n> > I'll leave that for committer to decide.\n> \n> Pushed. 
Many thanks for this!\n\nThanks for polishing it.\n\nI was just about to convince myself of the merits of doing it Amit's way :)\n\nI noticed a few issues:\n\n - should put \\n's around Amit's subquery to make psql -E look pretty;\n - maybe should quote the TABLE, like \\\"%s\\\" ?\n\n#3 is that *if* we did it Amit's way, I *think* maybe we should show the\nparent's triggerdef, not the childs.\nIt seems strange to me to say \"TABLE trigpart .* INSERT ON trigpart3\"\n\n- TABLE \"trigpart\" TRIGGER trg1 AFTER INSERT ON trigpart3 FOR EACH ROW EXECUTE FUNCTION trigger_nothing()\n+ TABLE \"trigpart\" TRIGGER trg1 AFTER INSERT ON trigpart FOR EACH ROW EXECUTE FUNCTION trigger_nothing()\n\n\n-- \nJustin", "msg_date": "Tue, 21 Apr 2020 20:06:48 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": true, "msg_subject": "Re: DETACH PARTITION and FOR EACH ROW triggers on partitioned tables" } ]
[ { "msg_contents": "Hi,\n\nCompiling master on debian stretch, gcc 9.3.0 complains:\n\npartbounds.c: In function ‘partition_bounds_merge’:\npartbounds.c:1024:21: warning: unused variable ‘inner_binfo’ \n[-Wunused-variable]\n 1024 | PartitionBoundInfo inner_binfo = inner_rel->boundinfo;\n | ^~~~~~~~~~~\n\nMaybe it can be fixed.\n\nthanks,\n\nErik Rijkers\n\n\n", "msg_date": "Wed, 08 Apr 2020 18:07:28 +0200", "msg_from": "Erik Rijkers <er@xs4all.nl>", "msg_from_op": true, "msg_subject": "warning in partioning code" }, { "msg_contents": "Hi Erik,\n\nOn Thu, Apr 9, 2020 at 1:07 AM Erik Rijkers <er@xs4all.nl> wrote:\n> Compiling master on debian stretch, gcc 9.3.0 complains:\n>\n> partbounds.c: In function ‘partition_bounds_merge’:\n> partbounds.c:1024:21: warning: unused variable ‘inner_binfo’\n> [-Wunused-variable]\n> 1024 | PartitionBoundInfo inner_binfo = inner_rel->boundinfo;\n> | ^~~~~~~~~~~\n>\n> Maybe it can be fixed.\n\nYeah, that is a known issue [1]. I'll work on that tomorrow. (I'm\ntoo tired today.) Thanks for the report!\n\nBest regards,\nEtsuro Fujita\n\n[1] https://www.postgresql.org/message-id/CAGz5QCJ1gBhg6upU0%2BpkYmHZsj%2BOaMgXCAf2GBVEm_k6%2BUr0zQ%40mail.gmail.com\n\n\n", "msg_date": "Thu, 9 Apr 2020 01:36:14 +0900", "msg_from": "Etsuro Fujita <etsuro.fujita@gmail.com>", "msg_from_op": false, "msg_subject": "Re: warning in partioning code" } ]
[ { "msg_contents": "After having reviewed [1] more than a year ago (the problem I found was that\nthe transient table is not available for deferred constraints), I've tried to\nimplement the same in an alternative way. The RI triggers still work as row\nlevel triggers, but if multiple events of the same kind appear in the queue,\nthey are all passed to the trigger function at once. Thus the check query does\nnot have to be executed that frequently.\n\nSome performance comparisons are below. (Besides the execution time, please\nnote the difference in the number of trigger function executions.) In general,\nthe checks are significantly faster if there are many rows to process, and a\nbit slower when we only need to check a single row. However I'm not sure about\nthe accuracy if only a single row is measured (if a single row check is\nperformed several times, the execution time appears to fluctuate).\n\nComments are welcome.\n\nSetup\n=====\n\nCREATE TABLE p(i int primary key);\nINSERT INTO p SELECT x FROM generate_series(1, 16384) g(x);\nCREATE TABLE f(i int REFERENCES p);\n\n\nInsert many rows into the FK table\n==================================\n\nmaster:\n\nEXPLAIN ANALYZE INSERT INTO f SELECT i FROM generate_series(1, 16384) g(i);\n QUERY PLAN\n--------------------------------------------------------------------------------------------------------------------------------\n Insert on f (cost=0.00..163.84 rows=16384 width=4) (actual time=32.741..32.741 rows=0 loops=1)\n -> Function Scan on generate_series g (cost=0.00..163.84 rows=16384 width=4) (actual time=2.403..4.802 rows=16384 loops=1)\n Planning Time: 0.050 ms\n Trigger for constraint f_i_fkey: time=448.986 calls=16384\n Execution Time: 485.444 ms\n(5 rows)\n\npatched:\n\nEXPLAIN ANALYZE INSERT INTO f SELECT i FROM generate_series(1, 16384) g(i);\n QUERY PLAN\n--------------------------------------------------------------------------------------------------------------------------------\n Insert on f 
(cost=0.00..163.84 rows=16384 width=4) (actual time=34.053..34.053 rows=0 loops=1)\n -> Function Scan on generate_series g (cost=0.00..163.84 rows=16384 width=4) (actual time=2.223..4.448 rows=16384 loops=1)\n Planning Time: 0.047 ms\n Trigger for constraint f_i_fkey: time=105.164 calls=8\n Execution Time: 141.201 ms\n\n\nInsert a single row into the FK table\n=====================================\n\nmaster:\n\nEXPLAIN ANALYZE INSERT INTO f VALUES (1);\n QUERY PLAN\n------------------------------------------------------------------------------------------\n Insert on f (cost=0.00..0.01 rows=1 width=4) (actual time=0.060..0.060 rows=0 loops=1)\n -> Result (cost=0.00..0.01 rows=1 width=4) (actual time=0.002..0.002 rows=1 loops=1)\n Planning Time: 0.026 ms\n Trigger for constraint f_i_fkey: time=0.435 calls=1\n Execution Time: 0.517 ms\n(5 rows)\n\npatched:\n\nEXPLAIN ANALYZE INSERT INTO f VALUES (1);\n QUERY PLAN\n------------------------------------------------------------------------------------------\n Insert on f (cost=0.00..0.01 rows=1 width=4) (actual time=0.066..0.066 rows=0 loops=1)\n -> Result (cost=0.00..0.01 rows=1 width=4) (actual time=0.002..0.002 rows=1 loops=1)\n Planning Time: 0.025 ms\n Trigger for constraint f_i_fkey: time=0.578 calls=1\n Execution Time: 0.670 ms\n\n\nCheck if FK row exists during deletion from the PK\n==================================================\n\nmaster:\n\nDELETE FROM p WHERE i=16384;\nERROR: update or delete on table \"p\" violates foreign key constraint \"f_i_fkey\" on table \"f\"\nDETAIL: Key (i)=(16384) is still referenced from table \"f\".\nTime: 3.381 ms\n\npatched:\n\nDELETE FROM p WHERE i=16384;\nERROR: update or delete on table \"p\" violates foreign key constraint \"f_i_fkey\" on table \"f\"\nDETAIL: Key (i)=(16384) is still referenced from table \"f\".\nTime: 5.561 ms\n\n\nCascaded DELETE --- many PK rows\n================================\n\nDROP TABLE f;\nCREATE TABLE f(i int REFERENCES p ON UPDATE CASCADE ON 
DELETE CASCADE);\nINSERT INTO f SELECT i FROM generate_series(1, 16384) g(i);\n\nmaster:\n\nEXPLAIN ANALYZE DELETE FROM p;\n QUERY PLAN\n-----------------------------------------------------------------------------------------------------------\n Delete on p (cost=0.00..236.84 rows=16384 width=6) (actual time=38.334..38.334 rows=0 loops=1)\n -> Seq Scan on p (cost=0.00..236.84 rows=16384 width=6) (actual time=0.019..3.925 rows=16384 loops=1)\n Planning Time: 0.049 ms\n Trigger for constraint f_i_fkey: time=31348.756 calls=16384\n Execution Time: 31390.784 ms\n\npatched:\n\nEXPLAIN ANALYZE DELETE FROM p;\n QUERY PLAN\n-----------------------------------------------------------------------------------------------------------\n Delete on p (cost=0.00..236.84 rows=16384 width=6) (actual time=33.360..33.360 rows=0 loops=1)\n -> Seq Scan on p (cost=0.00..236.84 rows=16384 width=6) (actual time=0.012..3.183 rows=16384 loops=1)\n Planning Time: 0.094 ms\n Trigger for constraint f_i_fkey: time=9.580 calls=8\n Execution Time: 43.941 ms\n\n\nCascaded DELETE --- a single PK row\n===================================\n\nINSERT INTO p SELECT x FROM generate_series(1, 16384) g(x);\nINSERT INTO f SELECT i FROM generate_series(1, 16384) g(i);\n\nmaster:\n\nDELETE FROM p WHERE i=16384;\nDELETE 1\nTime: 5.754 ms\n\npatched:\n\nDELETE FROM p WHERE i=16384;\nDELETE 1\nTime: 8.098 ms\n\n\nCascaded UPDATE - many rows\n===========================\n\nmaster:\n\nEXPLAIN ANALYZE UPDATE p SET i = i + 16384;\n QUERY PLAN\n------------------------------------------------------------------------------------------------------------\n Update on p (cost=0.00..277.80 rows=16384 width=10) (actual time=166.954..166.954 rows=0 loops=1)\n -> Seq Scan on p (cost=0.00..277.80 rows=16384 width=10) (actual time=0.013..7.780 rows=16384 loops=1)\n Planning Time: 0.177 ms\n Trigger for constraint f_i_fkey on p: time=60405.362 calls=16384\n Trigger for constraint f_i_fkey on f: time=455.874 calls=16384\n 
Execution Time: 61036.996 ms\n\npatched:\n\nEXPLAIN ANALYZE UPDATE p SET i = i + 16384;\n QUERY PLAN\n------------------------------------------------------------------------------------------------------------\n Update on p (cost=0.00..277.77 rows=16382 width=10) (actual time=159.512..159.512 rows=0 loops=1)\n -> Seq Scan on p (cost=0.00..277.77 rows=16382 width=10) (actual time=0.014..7.783 rows=16382 loops=1)\n Planning Time: 0.146 ms\n Trigger for constraint f_i_fkey on p: time=169.628 calls=9\n Trigger for constraint f_i_fkey on f: time=124.079 calls=2\n Execution Time: 456.072 ms\n\n\nCascaded UPDATE - a single row\n==============================\n\nmaster:\n\nUPDATE p SET i = i - 16384 WHERE i=32767;\nUPDATE 1\nTime: 4.858 ms\n\npatched:\n\nUPDATE p SET i = i - 16384 WHERE i=32767;\nUPDATE 1\nTime: 11.955 ms\n\n\n[1] https://commitfest.postgresql.org/22/1975/\n\n-- \nAntonin Houska\nWeb: https://www.cybertec-postgresql.com", "msg_date": "Wed, 08 Apr 2020 18:38:01 +0200", "msg_from": "Antonin Houska <ah@cybertec.at>", "msg_from_op": true, "msg_subject": "More efficient RI checks - take 2" }, { "msg_contents": "st 8. 4. 2020 v 18:36 odesílatel Antonin Houska <ah@cybertec.at> napsal:\n\n> After having reviewed [1] more than a year ago (the problem I found was\n> that\n> the transient table is not available for deferred constraints), I've tried\n> to\n> implement the same in an alternative way. The RI triggers still work as row\n> level triggers, but if multiple events of the same kind appear in the\n> queue,\n> they are all passed to the trigger function at once. Thus the check query\n> does\n> not have to be executed that frequently.\n>\n> Some performance comparisons are below. (Besides the execution time, please\n> note the difference in the number of trigger function executions.) In\n> general,\n> the checks are significantly faster if there are many rows to process, and\n> a\n> bit slower when we only need to check a single row. 
However I'm not sure\n> about\n> the accuracy if only a single row is measured (if a single row check is\n> performed several times, the execution time appears to fluctuate).\n>\n\nIt is hard task to choose good strategy for immediate constraints, but for\ndeferred constraints you know how much rows should be checked, and then you\ncan choose better strategy.\n\nIs possible to use estimation for choosing method of RI checks?\n\n\n\n> Comments are welcome.\n>\n> Setup\n> =====\n>\n> CREATE TABLE p(i int primary key);\n> INSERT INTO p SELECT x FROM generate_series(1, 16384) g(x);\n> CREATE TABLE f(i int REFERENCES p);\n>\n>\n> Insert many rows into the FK table\n> ==================================\n>\n> master:\n>\n> EXPLAIN ANALYZE INSERT INTO f SELECT i FROM generate_series(1, 16384) g(i);\n> QUERY PLAN\n>\n> --------------------------------------------------------------------------------------------------------------------------------\n> Insert on f (cost=0.00..163.84 rows=16384 width=4) (actual\n> time=32.741..32.741 rows=0 loops=1)\n> -> Function Scan on generate_series g (cost=0.00..163.84 rows=16384\n> width=4) (actual time=2.403..4.802 rows=16384 loops=1)\n> Planning Time: 0.050 ms\n> Trigger for constraint f_i_fkey: time=448.986 calls=16384\n> Execution Time: 485.444 ms\n> (5 rows)\n>\n> patched:\n>\n> EXPLAIN ANALYZE INSERT INTO f SELECT i FROM generate_series(1, 16384) g(i);\n> QUERY PLAN\n>\n> --------------------------------------------------------------------------------------------------------------------------------\n> Insert on f (cost=0.00..163.84 rows=16384 width=4) (actual\n> time=34.053..34.053 rows=0 loops=1)\n> -> Function Scan on generate_series g (cost=0.00..163.84 rows=16384\n> width=4) (actual time=2.223..4.448 rows=16384 loops=1)\n> Planning Time: 0.047 ms\n> Trigger for constraint f_i_fkey: time=105.164 calls=8\n> Execution Time: 141.201 ms\n>\n>\n> Insert a single row into the FK table\n> =====================================\n>\n> 
master:\n>\n> EXPLAIN ANALYZE INSERT INTO f VALUES (1);\n> QUERY PLAN\n>\n> ------------------------------------------------------------------------------------------\n> Insert on f (cost=0.00..0.01 rows=1 width=4) (actual time=0.060..0.060\n> rows=0 loops=1)\n> -> Result (cost=0.00..0.01 rows=1 width=4) (actual time=0.002..0.002\n> rows=1 loops=1)\n> Planning Time: 0.026 ms\n> Trigger for constraint f_i_fkey: time=0.435 calls=1\n> Execution Time: 0.517 ms\n> (5 rows)\n>\n> patched:\n>\n> EXPLAIN ANALYZE INSERT INTO f VALUES (1);\n> QUERY PLAN\n>\n> ------------------------------------------------------------------------------------------\n> Insert on f (cost=0.00..0.01 rows=1 width=4) (actual time=0.066..0.066\n> rows=0 loops=1)\n> -> Result (cost=0.00..0.01 rows=1 width=4) (actual time=0.002..0.002\n> rows=1 loops=1)\n> Planning Time: 0.025 ms\n> Trigger for constraint f_i_fkey: time=0.578 calls=1\n> Execution Time: 0.670 ms\n>\n>\n> Check if FK row exists during deletion from the PK\n> ==================================================\n>\n> master:\n>\n> DELETE FROM p WHERE i=16384;\n> ERROR: update or delete on table \"p\" violates foreign key constraint\n> \"f_i_fkey\" on table \"f\"\n> DETAIL: Key (i)=(16384) is still referenced from table \"f\".\n> Time: 3.381 ms\n>\n> patched:\n>\n> DELETE FROM p WHERE i=16384;\n> ERROR: update or delete on table \"p\" violates foreign key constraint\n> \"f_i_fkey\" on table \"f\"\n> DETAIL: Key (i)=(16384) is still referenced from table \"f\".\n> Time: 5.561 ms\n>\n>\n> Cascaded DELETE --- many PK rows\n> ================================\n>\n> DROP TABLE f;\n> CREATE TABLE f(i int REFERENCES p ON UPDATE CASCADE ON DELETE CASCADE);\n> INSERT INTO f SELECT i FROM generate_series(1, 16384) g(i);\n>\n> master:\n>\n> EXPLAIN ANALYZE DELETE FROM p;\n> QUERY PLAN\n>\n> -----------------------------------------------------------------------------------------------------------\n> Delete on p (cost=0.00..236.84 rows=16384 width=6) 
(actual\n> time=38.334..38.334 rows=0 loops=1)\n> -> Seq Scan on p (cost=0.00..236.84 rows=16384 width=6) (actual\n> time=0.019..3.925 rows=16384 loops=1)\n> Planning Time: 0.049 ms\n> Trigger for constraint f_i_fkey: time=31348.756 calls=16384\n> Execution Time: 31390.784 ms\n>\n> patched:\n>\n> EXPLAIN ANALYZE DELETE FROM p;\n> QUERY PLAN\n>\n> -----------------------------------------------------------------------------------------------------------\n> Delete on p (cost=0.00..236.84 rows=16384 width=6) (actual\n> time=33.360..33.360 rows=0 loops=1)\n> -> Seq Scan on p (cost=0.00..236.84 rows=16384 width=6) (actual\n> time=0.012..3.183 rows=16384 loops=1)\n> Planning Time: 0.094 ms\n> Trigger for constraint f_i_fkey: time=9.580 calls=8\n> Execution Time: 43.941 ms\n>\n>\n> Cascaded DELETE --- a single PK row\n> ===================================\n>\n> INSERT INTO p SELECT x FROM generate_series(1, 16384) g(x);\n> INSERT INTO f SELECT i FROM generate_series(1, 16384) g(i);\n>\n> master:\n>\n> DELETE FROM p WHERE i=16384;\n> DELETE 1\n> Time: 5.754 ms\n>\n> patched:\n>\n> DELETE FROM p WHERE i=16384;\n> DELETE 1\n> Time: 8.098 ms\n>\n>\n> Cascaded UPDATE - many rows\n> ===========================\n>\n> master:\n>\n> EXPLAIN ANALYZE UPDATE p SET i = i + 16384;\n> QUERY PLAN\n>\n> ------------------------------------------------------------------------------------------------------------\n> Update on p (cost=0.00..277.80 rows=16384 width=10) (actual\n> time=166.954..166.954 rows=0 loops=1)\n> -> Seq Scan on p (cost=0.00..277.80 rows=16384 width=10) (actual\n> time=0.013..7.780 rows=16384 loops=1)\n> Planning Time: 0.177 ms\n> Trigger for constraint f_i_fkey on p: time=60405.362 calls=16384\n> Trigger for constraint f_i_fkey on f: time=455.874 calls=16384\n> Execution Time: 61036.996 ms\n>\n> patched:\n>\n> EXPLAIN ANALYZE UPDATE p SET i = i + 16384;\n> QUERY PLAN\n>\n> 
------------------------------------------------------------------------------------------------------------\n> Update on p (cost=0.00..277.77 rows=16382 width=10) (actual\n> time=159.512..159.512 rows=0 loops=1)\n> -> Seq Scan on p (cost=0.00..277.77 rows=16382 width=10) (actual\n> time=0.014..7.783 rows=16382 loops=1)\n> Planning Time: 0.146 ms\n> Trigger for constraint f_i_fkey on p: time=169.628 calls=9\n> Trigger for constraint f_i_fkey on f: time=124.079 calls=2\n> Execution Time: 456.072 ms\n>\n>\n> Cascaded UPDATE - a single row\n> ==============================\n>\n> master:\n>\n> UPDATE p SET i = i - 16384 WHERE i=32767;\n> UPDATE 1\n> Time: 4.858 ms\n>\n> patched:\n>\n> UPDATE p SET i = i - 16384 WHERE i=32767;\n> UPDATE 1\n> Time: 11.955 ms\n>\n>\n> [1] https://commitfest.postgresql.org/22/1975/\n>\n> --\n> Antonin Houska\n> Web: https://www.cybertec-postgresql.com\n>\n>\n", "msg_date": "Wed, 8 Apr 2020 19:05:45 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": false, "msg_subject": "Re: More efficient RI checks - take 2" }, { "msg_contents": "On Wed, Apr 8, 2020 at 1:06 PM Pavel Stehule <pavel.stehule@gmail.com>\nwrote:\n\n>\n>\n> st 8. 4. 2020 v 18:36 odesílatel Antonin Houska <ah@cybertec.at> napsal:\n>\n>> After having reviewed [1] more than a year ago (the problem I found was\n>> that\n>> the transient table is not available for deferred constraints), I've\n>> tried to\n>> implement the same in an alternative way. The RI triggers still work as\n>> row\n>> level triggers, but if multiple events of the same kind appear in the\n>> queue,\n>> they are all passed to the trigger function at once. Thus the check query\n>> does\n>> not have to be executed that frequently.\n>>\n>\nI'm excited that you picked this up!\n\n\n>\n>> Some performance comparisons are below. 
(Besides the execution time,\n>> please\n>> note the difference in the number of trigger function executions.) In\n>> general,\n>> the checks are significantly faster if there are many rows to process,\n>> and a\n>> bit slower when we only need to check a single row. However I'm not sure\n>> about\n>> the accuracy if only a single row is measured (if a single row check is\n>> performed several times, the execution time appears to fluctuate).\n>>\n>\nThese numbers are very promising, and much more in line with my initial\nexpectations. Obviously the impact on single-row DML is of major concern,\nthough.\n\nIt is hard task to choose good strategy for immediate constraints, but for\n> deferred constraints you know how much rows should be checked, and then you\n> can choose better strategy.\n>\n\n> Is possible to use estimation for choosing method of RI checks?\n>\n\nIn doing my initial attempt, the feedback I was getting was that the people\nwho truly understood the RI checks fell into the following groups:\n1. people who wanted to remove the SPI calls from the triggers\n2. people who wanted to completely refactor RI to not use triggers\n3. people who wanted to completely refactor triggers\n\nWhile #3 is clearly beyond the scope for an endeavor like this, #1 seems\nlike it would nearly eliminate the 1-row penalty (we'd still have the\nTupleStore initi penalty, but it would just be a handy queue structure, and\nmaybe that cost would be offset by removing the SPI overhead), and once\nthat is done, we could see about step #2.\n", "msg_date": "Wed, 8 Apr 2020 13:55:55 -0400", "msg_from": "Corey Huinker <corey.huinker@gmail.com>", "msg_from_op": false, "msg_subject": "Re: More efficient RI checks - take 2" }, { "msg_contents": "Pavel Stehule <pavel.stehule@gmail.com> wrote:\n\n>> st 8. 4. 2020 v 18:36 odesílatel Antonin Houska <ah@cybertec.at> napsal:\n> \n>> Some performance comparisons are below. (Besides the execution time, please\n>> note the difference in the number of trigger function executions.) In general,\n>> the checks are significantly faster if there are many rows to process, and a\n>> bit slower when we only need to check a single row. However I'm not sure about\n>> the accuracy if only a single row is measured (if a single row check is\n>> performed several times, the execution time appears to fluctuate).\n> \n> It is hard task to choose good strategy for immediate constraints, but for\n> deferred constraints you know how much rows should be checked, and then you\n> can choose better strategy.\n> \n> Is possible to use estimation for choosing method of RI checks?\n\nThe exact number of rows (\"batch size\") is always known before the query is\nexecuted. So one problem to solve is that, when only one row is affected, we\nneed to convince the planner that the \"transient table\" really contains a\nsingle row. 
Otherwise it can, for example, produce a hash join where the hash\neventually contains a single row.\n\n-- \nAntonin Houska\nWeb: https://www.cybertec-postgresql.com\n\n\n", "msg_date": "Mon, 20 Apr 2020 15:56:35 +0200", "msg_from": "Antonin Houska <ah@cybertec.at>", "msg_from_op": true, "msg_subject": "Re: More efficient RI checks - take 2" }, { "msg_contents": "Corey Huinker <corey.huinker@gmail.com> wrote:\n\n> These numbers are very promising, and much more in line with my initial\n> expectations. Obviously the impact on single-row DML is of major concern,\n> though.\n\nYes, I agree.\n\n> In doing my initial attempt, the feedback I was getting was that the people\n> who truly understood the RI checks fell into the following groups:\n\n> 1. people who wanted to remove the SPI calls from the triggers\n> 2. people who wanted to completely refactor RI to not use triggers\n> 3. people who wanted to completely refactor triggers\n> \n> While #3 is clearly beyond the scope for an endeavor like this, #1 seems\n> like it would nearly eliminate the 1-row penalty (we'd still have the\n> TupleStore initi penalty, but it would just be a handy queue structure, and\n> maybe that cost would be offset by removing the SPI overhead),\n\nI can imagine removal of the SPI from the current implementation (and\nconstructing the plans \"manually\"), but note that the queries I use in my\npatch are no longer that trivial. So the SPI makes sense to me because it\nensures regular query planning.\n\nAs for the tuplestore, I'm not sure the startup cost is a problem: if you're\nconcerned about the 1-row case, the row should usually be stored in memory.\n\n> and once that is done, we could see about step #2.\n\nAs I said during my review of your patch last year, I think the RI semantics\nhas too much in common with that of triggers. 
I'd need more info to imagine\nsuch a change.\n\n-- \nAntonin Houska\nWeb: https://www.cybertec-postgresql.com\n\n\n", "msg_date": "Mon, 20 Apr 2020 16:31:13 +0200", "msg_from": "Antonin Houska <ah@cybertec.at>", "msg_from_op": true, "msg_subject": "Re: More efficient RI checks - take 2" }, { "msg_contents": ">\n> I can imagine removal of the SPI from the current implementation (and\n> constructing the plans \"manually\"), but note that the queries I use in my\n> patch are no longer that trivial. So the SPI makes sense to me because it\n> ensures regular query planning.\n>\n\nAs an intermediate step, in the case where we have one row, it should be\nsimple enough to extract that row manually, and do an SPI call with fixed\nvalues rather than the join to the ephemeral table, yes?\n\n\n> As for the tuplestore, I'm not sure the startup cost is a problem: if\n> you're\n> concerned about the 1-row case, the row should usually be stored in memory.\n>\n\n\n\n> > and once that is done, we could see about step #2.\n>\n> As I said during my review of your patch last year, I think the RI\n> semantics\n> has too much in common with that of triggers. I'd need more info to imagine\n> such a change.\n>\n\nAs a general outline, I think that DML would iterate over the 2 sets of\npotentially relevant RI definitions rather than iterating over the\ntriggers.\n\nThe similarities between RI and general triggers are obvious, which\nexplains why they went that route initially, but they're also a crutch, but\nsince all RI operations boil down to either an iteration over a tuplestore\nto do lookups in an index (when checking for referenced rows), or a hash\njoin of the transient data against the un-indexed table when checking for\nreferencing rows, and people who know this stuff far better than me seem to\nthink that SPI overhead is best avoided when possible. 
I'm looking forward to having more time to spend on this.", "msg_date": "Mon, 20 Apr 2020 11:45:39 -0400", "msg_from": "Corey Huinker <corey.huinker@gmail.com>", "msg_from_op": false, "msg_subject": "Re: More efficient RI checks - take 2" }, { "msg_contents": "On 2020-Apr-20, Corey Huinker wrote:\n\n> > I can imagine removal of the SPI from the current implementation (and\n> > constructing the plans \"manually\"), but note that the queries I use in my\n> > patch are no longer that trivial. So the SPI makes sense to me because it\n> > ensures regular query planning.\n> \n> As an intermediate step, in the case where we have one row, it should be\n> simple enough to extract that row manually, and do an SPI call with fixed\n> values rather than the join to the ephemeral table, yes?\n\nI do wonder if the RI stuff would actually end up being faster without\nSPI. If not, we'd only end up writing more code to do the same thing.\nNow that tables can be partitioned, it is much more of a pain than when\nonly regular tables could be supported. Obviously without SPI you\nwouldn't *have* to go through the planner, which might be a win in\nitself if the execution tree to use were always perfectly clear ... but\nnow that the queries get more complex per partitioning and this\noptimization, is it?\n\nYou could remove the crosscheck_snapshot feature from SPI, I suppose,\nbut that's not that much code.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Tue, 21 Apr 2020 11:34:54 -0400", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: More efficient RI checks - take 2" }, { "msg_contents": "Alvaro Herrera <alvherre@2ndquadrant.com> writes:\n> I do wonder if the RI stuff would actually end up being faster without\n> SPI. 
If not, we'd only end up writing more code to do the same thing.\n> Now that tables can be partitioned, it is much more of a pain than when\n> only regular tables could be supported. Obviously without SPI you\n> wouldn't *have* to go through the planner, which might be a win in\n> itself if the execution tree to use were always perfectly clear ... but\n> now that the queries get more complex per partitioning and this\n> optimization, is it?\n\nAFAIK, we do not have any code besides the planner that is capable of\nbuilding a plan tree at all, and I'd be pretty hesitant to try to create\nsuch; those things are complicated.\n\nIt'd likely only make sense to bypass the planner if the required work\nis predictable enough that you don't need a plan tree at all, but can\njust hard-wire what has to be done. That seems a bit unlikely in the\npresence of partitioning.\n\nInstead of a plan tree, you could build a parse tree to pass through the\nplanner, rather than building a SQL statement string that has to be\nparsed. The code jumps through enough hoops to make sure the string will\nbe parsed \"just so\" that this might net out to about an equal amount of\ncode in ri_triggers.c, and it'd save a nontrivial amount of parsing work.\nBut you'd have to abandon SPI, probably, or at least it's not clear how\nmuch that'd be doing for you anymore.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 21 Apr 2020 16:14:53 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: More efficient RI checks - take 2" }, { "msg_contents": "Hi,\n\nOn 2020-04-21 11:34:54 -0400, Alvaro Herrera wrote:\n> On 2020-Apr-20, Corey Huinker wrote:\n> \n> > > I can imagine removal of the SPI from the current implementation (and\n> > > constructing the plans \"manually\"), but note that the queries I use in my\n> > > patch are no longer that trivial. 
So the SPI makes sense to me because it\n> > > ensures regular query planning.\n> > \n> > As an intermediate step, in the case where we have one row, it should be\n> > simple enough to extract that row manually, and do an SPI call with fixed\n> > values rather than the join to the ephemeral table, yes?\n> \n> I do wonder if the RI stuff would actually end up being faster without\n> SPI.\n\nI would suspect so. How much is another question.\n\nI assume that with constructing plans \"manually\" you don't mean to\ncreate a plan tree, but to invoke parser/planner directly? I think\nthat'd likely be better than going through SPI, and there's precedent\ntoo.\n\n\nBut honestly, my gut feeling is that for a lot of cases it'd be best\njust bypass parser, planner *and* executor. And just do manual\nsystable_beginscan() style checks. For most cases we exactly know what\nplan shape we expect, and going through the overhead of creating a query\nstring, parsing, planning, caching the previous steps, and creating an\nexecutor tree for every check is a lot. Even just the amount of memory\nfor caching the plans can be substantial.\n\nSide note: I for one would appreciate a setting that just made all RI\nactions requiring a seqscan error out...\n\n\n> If not, we'd only end up writing more code to do the same thing. Now\n> that tables can be partitioned, it is much more of a pain than when\n> only regular tables could be supported. Obviously without SPI you\n> wouldn't *have* to go through the planner, which might be a win in\n> itself if the execution tree to use were always perfectly clear\n> ... but now that the queries get more complex per partitioning and\n> this optimization, is it?\n\nI think it's actually a good case where we will commonly be able to do\n*better* than generic planning. 
The infrastructure for efficient\npartition pruning exists (for COPY etc) - but isn't easily applicable to\ngeneric plans.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 22 Apr 2020 08:42:31 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: More efficient RI checks - take 2" }, { "msg_contents": "Hi,\n\nOn 2020-04-21 16:14:53 -0400, Tom Lane wrote:\n> AFAIK, we do not have any code besides the planner that is capable of\n> building a plan tree at all, and I'd be pretty hesitant to try to create\n> such; those things are complicated.\n\nI suspect what was meant was not to create the plan tree directly, but\nto bypass SPI when creating the plan / executing the query.\n\n\nIMO SPI for most uses in core PG really adds more complication and\noverhead than warranted. The whole concept of having a global tuptable,\na stack and xact.c integration to repair that design defficiency... The\nhiding of what happens behind a pretty random set of different\nabstractions. That all makes it appear as if SPI did something super\ncomplicated, but it really doesn't. It just is a bad and\nover-complicated abstraction layer.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 22 Apr 2020 08:55:56 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: More efficient RI checks - take 2" }, { "msg_contents": "On 2020-Apr-22, Andres Freund wrote:\n\n> I assume that with constructing plans \"manually\" you don't mean to\n> create a plan tree, but to invoke parser/planner directly? I think\n> that'd likely be better than going through SPI, and there's precedent\n> too.\n\nWell, I was actually thinking in building ready-made execution trees,\nbypassing the planner altogether. 
But apparently no one thinks that\nthis is a good idea, and we don't have any code that does that already,\nso maybe it's not a great idea.\n\nHowever:\n\n> But honestly, my gut feeling is that for a lot of cases it'd be best\n> just bypass parser, planner *and* executor. And just do manual\n> systable_beginscan() style checks. For most cases we exactly know what\n> plan shape we expect, and going through the overhead of creating a query\n> string, parsing, planning, caching the previous steps, and creating an\n> executor tree for every check is a lot. Even just the amount of memory\n> for caching the plans can be substantial.\n\nAvoiding the executor altogether scares me, but I can't say exactly why.\nFor example, you couldn't use foreign tables at either side of the FK --\nbut we don't allow FKs on those tables and we'd have to use some\nspecific executor node for such a thing anyway. So this is not a real\nargument against going that route.\n\n> Side note: I for one would appreciate a setting that just made all RI\n> actions requiring a seqscan error out...\n\nHmm, interesting thought. I guess there are actual cases where it's\nnot strictly necessary, for example where the referencing table is\nreally tiny -- not the *referenced* table, note, since you need the\nUNIQUE index on that side in any case. I suppose that's not a really\ninteresting case. I don't think this is implementable when going\nthrough SPI.\n\n> I think it's actually a good case where we will commonly be able to do\n> *better* than generic planning. 
The infrastructure for efficient\n> partition pruning exists (for COPY etc) - but isn't easily applicable to\n> generic plans.\n\nTrue.\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Wed, 22 Apr 2020 13:18:06 -0400", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: More efficient RI checks - take 2" }, { "msg_contents": "On Wed, Apr 22, 2020 at 1:18 PM Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\n> Well, I was actually thinking in building ready-made execution trees,\n> bypassing the planner altogether. But apparently no one thinks that\n> this is a good idea, and we don't have any code that does that already,\n> so maybe it's not a great idea.\n\nIf it's any consolation, I had the same idea very recently while\nchatting with Amit Langote. Maybe it's a bad idea, but you're not the\nonly one who had it. :-)\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Wed, 22 Apr 2020 13:46:22 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: More efficient RI checks - take 2" }, { "msg_contents": "Hi,\n\nOn 2020-04-22 13:18:06 -0400, Alvaro Herrera wrote:\n> > But honestly, my gut feeling is that for a lot of cases it'd be best\n> > just bypass parser, planner *and* executor. And just do manual\n> > systable_beginscan() style checks. For most cases we exactly know what\n> > plan shape we expect, and going through the overhead of creating a query\n> > string, parsing, planning, caching the previous steps, and creating an\n> > executor tree for every check is a lot. 
Even just the amount of memory\n> > for caching the plans can be substantial.\n> \n> Avoiding the executor altogether scares me, but I can't say exactly why.\n> For example, you couldn't use foreign tables at either side of the FK --\n> but we don't allow FKs on those tables and we'd have to use some\n> specific executor node for such a thing anyway. So this is not a real\n> argument against going that route.\n\nI think it'd also not be that hard to call a specific routine for doing\nfkey checks on the remote side. Probably easier to handle things that\nway than through \"generic\" FDW code.\n\n\n> > Side note: I for one would appreciate a setting that just made all RI\n> > actions requiring a seqscan error out...\n> \n> Hmm, interesting thought. I guess there are actual cases where it's\n> not strictly necessary, for example where the referencing table is\n> really tiny -- not the *referenced* table, note, since you need the\n> UNIQUE index on that side in any case. I suppose that's not a really\n> interesting case.\n\nYea, the index is pretty much free there. Except I guess for the case of\na tiny table that's super heavily updated.\n\n\n> I don't think this is implementable when going through SPI.\n\nIt'd probably be not too hard to approximate by just erroring out when\nthere's no index on the relevant column, before even doing the planning.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 22 Apr 2020 11:11:08 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: More efficient RI checks - take 2" }, { "msg_contents": "Hi,\n\nOn 2020-04-22 13:46:22 -0400, Robert Haas wrote:\n> On Wed, Apr 22, 2020 at 1:18 PM Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\n> > Well, I was actually thinking in building ready-made execution trees,\n> > bypassing the planner altogether. 
But apparently no one thinks that\n> > this is a good idea, and we don't have any code that does that already,\n> > so maybe it's not a great idea.\n\nI was commenting on what I understood Corey to say, but was fairly\nunclear about it. But I'm also far from sure that I understood Corey\ncorrectly...\n\n\n> If it's any consolation, I had the same idea very recently while\n> chatting with Amit Langote. Maybe it's a bad idea, but you're not the\n> only one who had it. :-)\n\nThat seems extremely hard, given our current infrastructure. I think\nthere's actually a good case to be made for the idea in the abstract,\nbut ... The amount of logic the ExecInit* routines have is substantial,\nthe state they set up is complicated. A lot of nodes have state that is\nprivate to their .c files. All executor nodes reference the\ncorresponding Plan nodes, so you also need to mock up those.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 22 Apr 2020 11:36:00 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: More efficient RI checks - take 2" }, { "msg_contents": "Hi,\n\nOn 2020-04-08 13:55:55 -0400, Corey Huinker wrote:\n> In doing my initial attempt, the feedback I was getting was that the people\n> who truly understood the RI checks fell into the following groups:\n> 1. people who wanted to remove the SPI calls from the triggers\n> 2. people who wanted to completely refactor RI to not use triggers\n> 3. people who wanted to completely refactor triggers\n\nFWIW, for me these three are largely independent avenues:\n\nWRT 1: There's a lot of benefit in reducing the per-call overhead of\nRI. Not going through SPI is one way to do that. 
Even if RI were not to\nuse triggers, we'd still want to reduce the per-statement costs.\n\nWRT 2: Not using the generic per-row trigger framework for RI has significant\nbenefits too - checking multiple rows at once, deduplicating repeated\nchecks, reducing the per-row storage overhead ...\n\nWRT 3: Fairly obviously improving the generic trigger code (more\nefficient fetching of tuple versions, spilling etc) would have benefits\nentirely independent of other RI improvements.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 22 Apr 2020 11:42:17 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: More efficient RI checks - take 2" }, { "msg_contents": "On Wed, Apr 22, 2020 at 2:36 PM Andres Freund <andres@anarazel.de> wrote:\n\n> Hi,\n>\n> On 2020-04-22 13:46:22 -0400, Robert Haas wrote:\n> > On Wed, Apr 22, 2020 at 1:18 PM Alvaro Herrera <alvherre@2ndquadrant.com>\n> wrote:\n> > > Well, I was actually thinking in building ready-made execution trees,\n> > > bypassing the planner altogether. But apparently no one thinks that\n> > > this is a good idea, and we don't have any code that does that already,\n> > > so maybe it's not a great idea.\n>\n> I was commenting on what I understood Corey to say, but was fairly\n> unclear about it. But I'm also far from sure that I understood Corey\n> correctly...\n>\n\nI was unclear because, even after my failed foray into statement level\ntriggers for RI checks, I'm still pretty inexperienced in this area.\n\nI'm just happy that it's being discussed.\n\nOn Wed, Apr 22, 2020 at 2:36 PM Andres Freund <andres@anarazel.de> wrote:Hi,\n\nOn 2020-04-22 13:46:22 -0400, Robert Haas wrote:\n> On Wed, Apr 22, 2020 at 1:18 PM Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\n> > Well, I was actually thinking in building ready-made execution trees,\n> > bypassing the planner altogether.  
But apparently no one thinks that\n> > this is a good idea, and we don't have any code that does that already,\n> > so maybe it's not a great idea.\n\nI was commenting on what I understood Corey to say, but was fairly\nunclear about it. But I'm also far from sure that I understood Corey\ncorrectly...I was unclear because, even after my failed foray into statement level triggers for RI checks, I'm still pretty inexperienced in this area.I'm just happy that it's being discussed.", "msg_date": "Wed, 22 Apr 2020 15:13:19 -0400", "msg_from": "Corey Huinker <corey.huinker@gmail.com>", "msg_from_op": false, "msg_subject": "Re: More efficient RI checks - take 2" }, { "msg_contents": "On Wed, Apr 22, 2020 at 2:36 PM Andres Freund <andres@anarazel.de> wrote:\n> > If it's any consolation, I had the same idea very recently while\n> > chatting with Amit Langote. Maybe it's a bad idea, but you're not the\n> > only one who had it. :-)\n>\n> That seems extremely hard, given our current infrastructure. I think\n> there's actually a good case to be made for the idea in the abstract,\n> but ... The amount of logic the ExecInit* routines have is substantial,\n> the state they set up ss complicates. A lot of nodes have state that is\n> private to their .c files. All executor nodes reference the\n> corresponding Plan nodes, so you also need to mock up those.\n\nRight -- the idea I was talking about was to create a Plan tree\nwithout using the main planner. So it wouldn't bother costing an index\nscan on each index, and a sequential scan, on the target table - it\nwould just make an index scan plan, or maybe an index path that it\nwould then convert to an index plan. 
Or something like that.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Wed, 22 Apr 2020 17:43:40 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: More efficient RI checks - take 2" }, { "msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> Right -- the idea I was talking about was to create a Plan tree\n> without using the main planner. So it wouldn't bother costing an index\n> scan on each index, and a sequential scan, on the target table - it\n> would just make an index scan plan, or maybe an index path that it\n> would then convert to an index plan. Or something like that.\n\nConsing up a Path tree and then letting create_plan() make it into\nan executable plan might not be a terrible idea. There's a whole\nboatload of finicky details that you could avoid that way, like\neverything in setrefs.c.\n\nBut it's not entirely clear to me that we know the best plan for a\nstatement-level RI action with sufficient certainty to go that way.\nIs it really the case that the plan would not vary based on how\nmany tuples there are to check, for example? If we *do* know\nexactly what should happen, I'd tend to lean towards Andres'\nidea that we shouldn't be using the executor at all, but just\nhard-wiring stuff at the level of \"do these table scans\".\n\nAlso, it's definitely not the case that create_plan() has an API\nthat's so clean that you would be able to use it without major\nhassles. 
You'd still have to generate a pile of lookalike planner\ndata structures, and make sure that expression subtrees have been\nfed through eval_const_expressions(), etc etc.\n\nOn the whole I still think that generating a Query tree and then\nletting the planner do its thing might be the best approach.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 22 Apr 2020 18:40:07 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: More efficient RI checks - take 2" }, { "msg_contents": "On Wed, Apr 22, 2020 at 6:40 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> But it's not entirely clear to me that we know the best plan for a\n> statement-level RI action with sufficient certainty to go that way.\n> Is it really the case that the plan would not vary based on how\n> many tuples there are to check, for example? If we *do* know\n> exactly what should happen, I'd tend to lean towards Andres'\n> idea that we shouldn't be using the executor at all, but just\n> hard-wiring stuff at the level of \"do these table scans\".\n\nWell, I guess I'd naively think we want an index scan on a plain\ntable. It is barely possible that in some corner case a sequential\nscan would be faster, but could it be enough faster to save the cost\nof planning? I doubt it, but I just work here.\n\nOn a partitioning hierarchy we want to figure out which partition is\nrelevant for the value we're trying to find, and then scan that one.\n\nI'm not sure there are any other cases. We have to have a UNIQUE\nconstraint or we can't be referencing this target table. So it can't\nbe a plain inheritance hierarchy, nor (I think) a foreign table.\n\n> Also, it's definitely not the case that create_plan() has an API\n> that's so clean that you would be able to use it without major\n> hassles. 
You'd still have to generate a pile of lookalike planner\n> data structures, and make sure that expression subtrees have been\n> fed through eval_const_expressions(), etc etc.\n\nYeah, that's annoying.\n\n> On the whole I still think that generating a Query tree and then\n> letting the planner do its thing might be the best approach.\n\nMaybe, but it seems awfully heavy-weight. Once you go into the planner\nit's pretty hard to avoid considering indexes we don't care about,\nbitmap scans we don't care about, a sequential scan we don't care\nabout, etc. You'd certainly save something just from avoiding\nparsing, but planning's pretty expensive too.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Wed, 22 Apr 2020 22:11:03 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: More efficient RI checks - take 2" }, { "msg_contents": "Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Robert Haas <robertmhaas@gmail.com> writes:\n> > Right -- the idea I was talking about was to create a Plan tree\n> > without using the main planner. So it wouldn't bother costing an index\n> > scan on each index, and a sequential scan, on the target table - it\n> > would just make an index scan plan, or maybe an index path that it\n> > would then convert to an index plan. Or something like that.\n> \n> Consing up a Path tree and then letting create_plan() make it into\n> an executable plan might not be a terrible idea. There's a whole\n> boatload of finicky details that you could avoid that way, like\n> everything in setrefs.c.\n> \n> But it's not entirely clear to me that we know the best plan for a\n> statement-level RI action with sufficient certainty to go that way.\n> Is it really the case that the plan would not vary based on how\n> many tuples there are to check, for example?\n\nI'm concerned about that too. 
With my patch the checks become a bit slower if\nonly a single row is processed. The problem seems to be that the planner is\nnot entirely convinced about that the number of input rows, so it can still\nbuild a plan that expects many rows. For example (as I mentioned elsewhere in\nthe thread), a hash join where the hash table only contains one tuple. Or\nsimilarly a sort node for a single input tuple.\n\n-- \nAntonin Houska\nWeb: https://www.cybertec-postgresql.com\n\n\n", "msg_date": "Thu, 23 Apr 2020 07:08:02 +0200", "msg_from": "Antonin Houska <ah@cybertec.at>", "msg_from_op": true, "msg_subject": "Re: More efficient RI checks - take 2" }, { "msg_contents": "čt 23. 4. 2020 v 7:06 odesílatel Antonin Houska <ah@cybertec.at> napsal:\n\n> Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> > Robert Haas <robertmhaas@gmail.com> writes:\n> > > Right -- the idea I was talking about was to create a Plan tree\n> > > without using the main planner. So it wouldn't bother costing an index\n> > > scan on each index, and a sequential scan, on the target table - it\n> > > would just make an index scan plan, or maybe an index path that it\n> > > would then convert to an index plan. Or something like that.\n> >\n> > Consing up a Path tree and then letting create_plan() make it into\n> > an executable plan might not be a terrible idea. There's a whole\n> > boatload of finicky details that you could avoid that way, like\n> > everything in setrefs.c.\n> >\n> > But it's not entirely clear to me that we know the best plan for a\n> > statement-level RI action with sufficient certainty to go that way.\n> > Is it really the case that the plan would not vary based on how\n> > many tuples there are to check, for example?\n>\n> I'm concerned about that too. With my patch the checks become a bit slower\n> if\n> only a single row is processed. 
The problem seems to be that the planner is\n> not entirely convinced about that the number of input rows, so it can still\n> build a plan that expects many rows. For example (as I mentioned elsewhere\n> in\n> the thread), a hash join where the hash table only contains one tuple. Or\n> similarly a sort node for a single input tuple.\n>\n\nwithout statistics the planner expect about 2000 rows table , no?\n\nPavel\n\n\n> --\n> Antonin Houska\n> Web: https://www.cybertec-postgresql.com\n>\n\nčt 23. 4. 2020 v 7:06 odesílatel Antonin Houska <ah@cybertec.at> napsal:Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Robert Haas <robertmhaas@gmail.com> writes:\n> > Right -- the idea I was talking about was to create a Plan tree\n> > without using the main planner. So it wouldn't bother costing an index\n> > scan on each index, and a sequential scan, on the target table - it\n> > would just make an index scan plan, or maybe an index path that it\n> > would then convert to an index plan. Or something like that.\n> \n> Consing up a Path tree and then letting create_plan() make it into\n> an executable plan might not be a terrible idea.  There's a whole\n> boatload of finicky details that you could avoid that way, like\n> everything in setrefs.c.\n> \n> But it's not entirely clear to me that we know the best plan for a\n> statement-level RI action with sufficient certainty to go that way.\n> Is it really the case that the plan would not vary based on how\n> many tuples there are to check, for example?\n\nI'm concerned about that too. With my patch the checks become a bit slower if\nonly a single row is processed. The problem seems to be that the planner is\nnot entirely convinced about that the number of input rows, so it can still\nbuild a plan that expects many rows. For example (as I mentioned elsewhere in\nthe thread), a hash join where the hash table only contains one tuple. 
Or\nsimilarly a sort node for a single input tuple.without statistics the planner expect about 2000 rows table , no?Pavel\n\n-- \nAntonin Houska\nWeb: https://www.cybertec-postgresql.com", "msg_date": "Thu, 23 Apr 2020 07:13:40 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": false, "msg_subject": "Re: More efficient RI checks - take 2" }, { "msg_contents": "Pavel Stehule <pavel.stehule@gmail.com> wrote:\n\n> čt 23. 4. 2020 v 7:06 odesílatel Antonin Houska <ah@cybertec.at> napsal:\n> \n> Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> \n> > But it's not entirely clear to me that we know the best plan for a\n> > statement-level RI action with sufficient certainty to go that way.\n> > Is it really the case that the plan would not vary based on how\n> > many tuples there are to check, for example?\n> \n> I'm concerned about that too. With my patch the checks become a bit slower if\n> only a single row is processed. The problem seems to be that the planner is\n> not entirely convinced about that the number of input rows, so it can still\n> build a plan that expects many rows. For example (as I mentioned elsewhere in\n> the thread), a hash join where the hash table only contains one tuple. Or\n> similarly a sort node for a single input tuple.\n> \n> without statistics the planner expect about 2000 rows table , no?\n\nI think that at some point it estimates the number of rows from the number of\ntable pages, but I don't remember details.\n\nI wanted to say that if we constructed the plan \"manually\", we'd need at least\ntwo substantially different variants: one to check many rows and the other to\ncheck a single row.\n\n-- \nAntonin Houska\nWeb: https://www.cybertec-postgresql.com\n\n\n", "msg_date": "Thu, 23 Apr 2020 08:29:33 +0200", "msg_from": "Antonin Houska <ah@cybertec.at>", "msg_from_op": true, "msg_subject": "Re: More efficient RI checks - take 2" }, { "msg_contents": "čt 23. 4. 
2020 v 8:28 odesílatel Antonin Houska <ah@cybertec.at> napsal:\n\n> Pavel Stehule <pavel.stehule@gmail.com> wrote:\n>\n> > čt 23. 4. 2020 v 7:06 odesílatel Antonin Houska <ah@cybertec.at> napsal:\n> >\n> >  Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >\n> >  > But it's not entirely clear to me that we know the best plan for a\n> >  > statement-level RI action with sufficient certainty to go that way.\n> >  > Is it really the case that the plan would not vary based on how\n> >  > many tuples there are to check, for example?\n> >\n> >  I'm concerned about that too. With my patch the checks become a bit slower if\n> >  only a single row is processed. The problem seems to be that the planner is\n> >  not entirely convinced about that the number of input rows, so it can still\n> >  build a plan that expects many rows. For example (as I mentioned elsewhere in\n> >  the thread), a hash join where the hash table only contains one tuple. Or\n> >  similarly a sort node for a single input tuple.\n> >\n> > without statistics the planner expect about 2000 rows table , no?\n>\n> I think that at some point it estimates the number of rows from the number of\n> table pages, but I don't remember details.\n>\n> I wanted to say that if we constructed the plan \"manually\", we'd need at least\n> two substantially different variants: one to check many rows and the other to\n> check a single row.\n>\n\nThere can be more variants - a hash join may not be good enough for\nbigger data.\n\nThe overhead of RI is too big, so I think any solution that will be faster\nthan the current one and can be in Postgres 14 would be perfect.\n\nBut when you know the input is only one row, you can build a query without\na join\n\n> --\n> Antonin Houska\n> Web: https://www.cybertec-postgresql.com\n>\n", "msg_date": "Thu, 23 Apr 2020 08:36:42 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": false, "msg_subject": "Re: More efficient RI checks - take 2" }, { "msg_contents": "Greetings,\n\n* Robert Haas (robertmhaas@gmail.com) wrote:\n> On Wed, Apr 22, 2020 at 6:40 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > But it's not entirely clear to me that we know the best plan for a\n> > statement-level RI action with sufficient certainty to go that way.\n> > Is it really the case that the plan would not vary based on how\n> > many tuples there are to check, for example? If we *do* know\n> > exactly what should happen, I'd tend to lean towards Andres'\n> > idea that we shouldn't be using the executor at all, but just\n> > hard-wiring stuff at the level of \"do these table scans\".\n> \n> Well, I guess I'd naively think we want an index scan on a plain\n> table. It is barely possible that in some corner case a sequential\n> scan would be faster, but could it be enough faster to save the cost\n> of planning? I doubt it, but I just work here.\n> \n> On a partitioning hierarchy we want to figure out which partition is\n> relevant for the value we're trying to find, and then scan that one.\n> \n> I'm not sure there are any other cases. We have to have a UNIQUE\n> constraint or we can't be referencing this target table. 
So it can't\n> be a plain inheritance hierarchy, nor (I think) a foreign table.\n\nIn the cases where we have a UNIQUE constraint, and therefore a clear\nindex to use, I tend to agree that we should just be getting to it and\navoiding the planner/executor, as Andres suggest.\n\nI'm not super thrilled about the idea of throwing an ERROR when we\nhaven't got an index to use though, and we don't require an index on the\nreferring side, meaning that, with such a change, a DELETE or UPDATE on\nthe referred table with an ON CASCADE FK will just start throwing\nerrors. That's not terribly friendly, even if it's not really best\npractice to not have an index to help with those cases.\n\nI'd hope that we would at least teach pg_upgrade to look for such cases\nand throw errors (though maybe that could be downgraded to a WARNING\nwith a flag..?) if it finds any when upgrading, so that users don't\nupgrade and then suddenly start getting errors for simple statements\nthat used to work just fine.\n\n> > On the whole I still think that generating a Query tree and then\n> > letting the planner do its thing might be the best approach.\n> \n> Maybe, but it seems awfully heavy-weight. Once you go into the planner\n> it's pretty hard to avoid considering indexes we don't care about,\n> bitmap scans we don't care about, a sequential scan we don't care\n> about, etc. 
You'd certainly save something just from avoiding\n> parsing, but planning's pretty expensive too.\n\nAgreed.\n\nThanks,\n\nStephen", "msg_date": "Thu, 23 Apr 2020 08:40:47 -0400", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: More efficient RI checks - take 2" }, { "msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> On Wed, Apr 22, 2020 at 6:40 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> But it's not entirely clear to me that we know the best plan for a\n>> statement-level RI action with sufficient certainty to go that way.\n\n> Well, I guess I'd naively think we want an index scan on a plain\n> table. It is barely possible that in some corner case a sequential\n> scan would be faster, but could it be enough faster to save the cost\n> of planning? I doubt it, but I just work here.\n\nI think we're failing to communicate here. I agree that if the goal\nis simply to re-implement what the RI triggers currently do --- that\nis, retail one-row-at-a-time checks --- then we could probably dispense\nwith all the parser/planner/executor overhead and directly implement\nan indexscan using an API at about the level genam.c provides.\n(The issue of whether it's okay to require an index to be available is\nannoying, but we could always fall back to the old ways if one is not.)\n\nHowever, what I thought this thread was about was switching to\nstatement-level RI checking. At that point, what we're talking\nabout is performing a join involving a not-known-in-advance number\nof tuples on each side. If you think you can hard-wire the choice\nof join technology and have it work well all the time, I'm going to\nsay with complete confidence that you are wrong. The planner spends\nhuge amounts of effort on that and still doesn't always get it right\n... 
but it does better than a hard-wired choice would do.\n\nMaybe there's room to pursue both things --- you could imagine,\nperhaps, looking at the planner's estimate of number of affected\nrows at executor startup and deciding from that whether to fire\nper-row or per-statement RI triggers. But we're really going to\nwant different implementations within those two types of triggers.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 23 Apr 2020 10:35:32 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: More efficient RI checks - take 2" }, { "msg_contents": "On Thu, Apr 23, 2020 at 2:18 AM Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\n> On 2020-Apr-22, Andres Freund wrote:\n> > I assume that with constructing plans \"manually\" you don't mean to\n> > create a plan tree, but to invoke parser/planner directly? I think\n> > that'd likely be better than going through SPI, and there's precedent\n> > too.\n>\n> Well, I was actually thinking in building ready-made execution trees,\n> bypassing the planner altogether. 
But apparently no one thinks that\n> this is a good idea, and we don't have any code that does that already,\n> so maybe it's not a great idea.\n\nWe do have an instance in validateForeignKeyConstraint() of \"manually\"\nenforcing RI:\n\nIf RI_Initial_Check() (a relatively complex query) cannot be\nperformed, the referencing table is scanned manually and each tuple\nthus found is looked up in the referenced table by using\nRI_FKey_check_ins(), a simpler query.\n\nIronically though, RI_Initial_Check() is to short-circuit the manual algorithm.\n\n-- \nAmit Langote\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 24 Apr 2020 00:39:11 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": false, "msg_subject": "Re: More efficient RI checks - take 2" }, { "msg_contents": "On Thu, Apr 23, 2020 at 10:35 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> I think we're failing to communicate here. I agree that if the goal\n> is simply to re-implement what the RI triggers currently do --- that\n> is, retail one-row-at-a-time checks --- then we could probably dispense\n> with all the parser/planner/executor overhead and directly implement\n> an indexscan using an API at about the level genam.c provides.\n> (The issue of whether it's okay to require an index to be available is\n> annoying, but we could always fall back to the old ways if one is not.)\n>\n> However, what I thought this thread was about was switching to\n> statement-level RI checking. At that point, what we're talking\n> about is performing a join involving a not-known-in-advance number\n> of tuples on each side. If you think you can hard-wire the choice\n> of join technology and have it work well all the time, I'm going to\n> say with complete confidence that you are wrong. The planner spends\n> huge amounts of effort on that and still doesn't always get it right\n> ... but it does better than a hard-wired choice would do.\n\nOh, yeah. 
If we're talking about that, then getting by without using\nthe planner doesn't seem feasible. Sorry, I guess I didn't read the\nthread carefully enough.\n\nAs you say, perhaps there's room for both things, but also as you say,\nit's not obvious how to decide intelligently between them.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Tue, 28 Apr 2020 08:18:59 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: More efficient RI checks - take 2" }, { "msg_contents": "Greetings,\n\n* Robert Haas (robertmhaas@gmail.com) wrote:\n> On Thu, Apr 23, 2020 at 10:35 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > I think we're failing to communicate here. I agree that if the goal\n> > is simply to re-implement what the RI triggers currently do --- that\n> > is, retail one-row-at-a-time checks --- then we could probably dispense\n> > with all the parser/planner/executor overhead and directly implement\n> > an indexscan using an API at about the level genam.c provides.\n> > (The issue of whether it's okay to require an index to be available is\n> > annoying, but we could always fall back to the old ways if one is not.)\n> >\n> > However, what I thought this thread was about was switching to\n> > statement-level RI checking. At that point, what we're talking\n> > about is performing a join involving a not-known-in-advance number\n> > of tuples on each side. If you think you can hard-wire the choice\n> > of join technology and have it work well all the time, I'm going to\n> > say with complete confidence that you are wrong. The planner spends\n> > huge amounts of effort on that and still doesn't always get it right\n> > ... but it does better than a hard-wired choice would do.\n> \n> Oh, yeah. If we're talking about that, then getting by without using\n> the planner doesn't seem feasible. 
Sorry, I guess I didn't read the\n> thread carefully enough.\n\nYeah, I had been thinking about what we might do with the existing\nrow-level RI checks too. If we're able to get statement-level without\nmuch impact on the single-row-statement case then that's certainly\ninteresting, although it sure feels like we're ending up with a lot left\non the table.\n\n> As you say, perhaps there's room for both things, but also as you say,\n> it's not obvious how to decide intelligently between them.\n\nThe single-row case seems pretty clear and also seems common enough that\nit'd be worth paying the cost to figure out if it's a single-row\nstatement or not.\n\nPerhaps we start with row-level for the first row, implemented directly\nusing an index lookup, and when we hit some threshold (maybe even just\n\"more than one\") switch to using the transient table and queue'ing\nthe rest to check at the end.\n\nWhat bothers me the most about this approach (though, to be clear, I\nthink we should still pursue it) is the risk that we might end up\npicking a spectacularly bad plan that ends up taking a great deal more\ntime than the index-probe based approach we almost always have today.\nIf we limit that impact to only cases where >1 row is involved, then\nthat's certainly better (though maybe we'll need a GUC for this\nanyway..? 
If we had the single-row approach + the statement-level one,\npresumably the GUC would just make us always take the single-row method,\nso it hopefully wouldn't be too grotty to have).\n\nThanks,\n\nStephen", "msg_date": "Tue, 28 Apr 2020 10:31:54 -0400", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: More efficient RI checks - take 2" }, { "msg_contents": "Stephen Frost <sfrost@snowman.net> writes:\n> * Robert Haas (robertmhaas@gmail.com) wrote:\n>> As you say, perhaps there's room for both things, but also as you say,\n>> it's not obvious how to decide intelligently between them.\n\n> The single-row case seems pretty clear and also seems common enough that\n> it'd be worth paying the cost to figure out if it's a single-row\n> statement or not.\n\nThat seems hard to do in advance ... but it would be easy to code\na statement-level AFTER trigger along the lines of\n\n\tif (transition table contains one row)\n\t // fast special case here\n\telse\n\t // slow general case here.\n\nI think the question really comes down to this: is the per-row overhead of\nthe transition-table mechanism comparable to that of the AFTER trigger\nqueue? 
Or if not, can we make it so?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 28 Apr 2020 10:44:58 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: More efficient RI checks - take 2" }, { "msg_contents": "Hi,\n\nOn 2020-04-28 10:44:58 -0400, Tom Lane wrote:\n> Stephen Frost <sfrost@snowman.net> writes:\n> > * Robert Haas (robertmhaas@gmail.com) wrote:\n> >> As you say, perhaps there's room for both things, but also as you say,\n> >> it's not obvious how to decide intelligently between them.\n>\n> > The single-row case seems pretty clear and also seems common enough that\n> > it'd be worth paying the cost to figure out if it's a single-row\n> > statement or not.\n\nIt's not that obvious to me that it's going to be beneficial to go down\na planned path in all that many cases. If all that the RI check does is\nan index_rescan() && index_getnext_slot(), there's not that many\nrealistic types of plans that are going to be better.\n\nIIUC a query to check a transition table would, simplified, boil down to\neither:\n\nSELECT * FROM transition_table tt\nWHERE\n -- for an insertion/update into the referencing table and\n (\n NOT EXISTS (SELECT * FROM referenced_table rt WHERE rt.referenced_column = tt.referencing_column)\n [AND ... , ]\n )\n -- for update / delete of referenced table\n OR EXISTS (SELECT * FROM referencing_table rt1 WHERE rt1.referencing_column = tt.referenced_column1)\n [OR ... , ]\nLIMIT 1;\n\nWhere a returned row would signal an error. But it would need to handle\nrow locking, CASCADE/SET NULL/SET DEFAULT etc. 
More on that below.\n\nWhile it's tempting to want to write the latter check as\n\n-- for update / delete of referenced table\nSELECT * FROM referencing_table rt\nWHERE referencing_column IN (SELECT referenced_column FROM transition_table tt)\nLIMIT 1;\nthat'd make it harder to know the violating row.\n\n\nAs the transition table isn't ordered it seems like there's not that\nmany realistic ways to execute this:\n\n1) A nestloop semi/anti-join with an inner index scan\n2) Sort transition table, do a merge semi/anti-join between sort and an\n   ordered index scan on the referenced / referencing table(s).\n3) Hash semi/anti-join, requiring a full table scan of the tables\n\n\nI think 1) would be worse than just doing the indexscan manually. 2)\nwould probably be beneficial if there's a lot of rows on the inner side,\ndue to the ordered access and deduplication. 3) would sometimes be\nbeneficial because it'd avoid an index scan for each tuple in the\ntransition table.\n\nThe cases in which it is clear to me that a bulk check could\ntheoretically be significantly better than a fast per-row check are:\n\n1) There's a *lot* of rows in the transition table relative to comparatively small\n   referenced / referencing tables. As those tables can cheaply be\n   hashed, a hashjoin will be more efficient than doing an index lookup\n   for each transition table row.\n2) There's a lot of duplicate content in the transition\n   table. E.g. because there's a lot of references to the same row.\n\nDid I forget important ones?\n\n\nWith regard to the row locking etc that I elided above: I think that\nactually will prevent most if not all interesting optimizations: Because\nof the FOR KEY SHARE that's required, the planner plan will pretty much\nalways degrade to a per row subplan check anyway. Is there any\nformulation of the necessary queries that don't suffer from this\nproblem?\n\n\n> That seems hard to do in advance ... 
but it would be easy to code\n> a statement-level AFTER trigger along the lines of\n>\n> \tif (transition table contains one row)\n> \t // fast special case here\n> \telse\n> \t // slow general case here.\n\nI suspect we'd need something more complicated than this for it to be\nbeneficial. My gut feeling would be that the case where a transition\ntable style check would be most commonly beneficial is when you have a\nvery small referenced table, and a *lot* of rows get inserted. But\nclearly we wouldn't want to have bulk insertion suddenly also store all\nrows in a transition table.\n\nNor would we want to have a bulk UPDATE cause all the updated rows to be\nstored in the transition table, even though none of the relevant columns\nchanged (i.e. the RI_FKey_[f]pk_upd_check_required logic in\nAfterTriggerSaveEvent()).\n\n\nI still don't quite see how shunting RI checks through triggers saves us\nmore than it costs:\n\nEspecially for the stuff we do as AFTER: Most of the time we could do\nthe work we defer till query end immediately, rather than building up an\nin-memory queue. Besides saving memory, in a lot of cases that'd also\nmake it unnecessary to refetch the row at a later time, possibly needing\nto chase updated row versions.\n\nBut even for the BEFORE checks, largely going through generic trigger\ncode means it's much harder to batch checks without suddenly requiring\nmemory proportional to the number of inserted rows.\n\n\nThere obviously are cases where it's not possible to check just after\neach row. Deferrable constraints, as well as CASCADING / SET NOT NULL /\nSET DEFAULT on tables with user defined triggers, for example. But it'd\nlikely be sensible to handle that in the way we already handle deferred\nuniqueness checks, i.e. we only queue something if there's a potential\nfor a problem.\n\n\n> I think the question really comes down to this: is the per-row overhead of\n> the transition-table mechanism comparable to that of the AFTER trigger\n> queue? 
Or if not, can we make it so?\n\nIt's probably more expensive, in some ways, at the moment. The biggest\ndifference is that the transition table stores complete rows, valid as\nof the moment they've been inserted/updated/deleted, whereas the trigger\nqueue only stores enough information to fetch the tuple again during\ntrigger execution. Several RI checks however re-check visibility before\nexecuting, so that's another fetch, that'd likely not be elided by a\nsimple change to using transition tables.\n\nBoth have significant downsides, obviously. Storing complete rows can\ntake a lot more memory, and refetching rows is expensive, especially if\nit happens much later (with the row pushed out of shared_buffers\npotentially).\n\n\nI think it was a mistake to have these two different systems in\ntrigger.c. When transition tables were added we shouldn't have kept\nper-tuple state both in the queue and in the transition\ntuplestore. Instead we should have only used the tuplestore, and\noptimized what information we store inside depending on the need of the\nvarious after triggers.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Tue, 28 Apr 2020 15:21:42 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: More efficient RI checks - take 2" }, { "msg_contents": "Antonin Houska <ah@cybertec.at> wrote:\n\n> In general, the checks are significantly faster if there are many rows to\n> process, and a bit slower when we only need to check a single row.\n\nAttached is a new version that uses the existing simple queries if there's\nonly one row to check. SPI is used for both single-row and bulk checks - as\ndiscussed in this thread, it can perhaps be replaced with a different approach\nif it appears to be beneficial, at least for the single-row checks.\n\nI think using a separate query for the single-row check is more practicable\nthan convincing the planner that the bulk-check query should only check a\nsingle row. 
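To make the tradeoff concrete, the two query shapes under discussion — a per-row lookup versus one bulk anti-join over the whole batch — can be sketched in a self-contained way. This is only an illustration: the table and column names are invented, SQLite stands in for PostgreSQL, and the row locking (FOR KEY SHARE) that the real checks require is omitted:

```python
import sqlite3

# Invented schema: rows in fk_batch reference pk(id); fk_batch stands in
# for the set of rows captured from one statement.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE pk (id INTEGER PRIMARY KEY);
    CREATE TABLE fk_batch (ref_id INTEGER);
    INSERT INTO pk (id) VALUES (1), (2), (3);
    INSERT INTO fk_batch (ref_id) VALUES (1), (2), (99);  -- 99 has no parent
""")

# Per-row shape: one lookup per affected row.
def check_row(ref_id):
    return conn.execute("SELECT 1 FROM pk WHERE id = ?",
                        (ref_id,)).fetchone() is not None

# Bulk shape: one anti-join over the whole batch; LIMIT 1 stops at the
# first violation, which is all an RI check needs to raise an error.
def first_violation():
    return conn.execute("""
        SELECT tt.ref_id
        FROM fk_batch tt
        WHERE NOT EXISTS (SELECT 1 FROM pk p WHERE p.id = tt.ref_id)
        LIMIT 1
    """).fetchone()

violations = [r for (r,) in conn.execute("SELECT ref_id FROM fk_batch")
              if not check_row(r)]
print(violations)         # [99]
print(first_violation())  # (99,)
```

The bulk shape tends to win when the batch is large or repeats the same keys, while the per-row shape avoids per-statement setup when only one row was touched — which is the case the patch keeps the existing simple queries for.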
So this patch version tries to show what it'd look like.\n\n-- \nAntonin Houska\nWeb: https://www.cybertec-postgresql.com", "msg_date": "Fri, 05 Jun 2020 17:16:43 +0200", "msg_from": "Antonin Houska <ah@cybertec.at>", "msg_from_op": true, "msg_subject": "Re: More efficient RI checks - take 2" }, { "msg_contents": "Hi,\n\nI was looking at this patch with Corey during a patch-review session. So\nthese are basically our \"combined\" comments.\n\n\nOn 2020-06-05 17:16:43 +0200, Antonin Houska wrote:\n> From 6c1cb8ae7fbf0a8122d8c6637c61b9915bc57223 Mon Sep 17 00:00:00 2001\n> From: Antonin Houska <ah@cybertec.at>\n> Date: Fri, 5 Jun 2020 16:42:34 +0200\n> Subject: [PATCH 1/5] Check for RI violation outside ri_PerformCheck().\n\nProbably good to add a short comment to the commit explaining why you're\ndoing this.\n\nThe change makes sense to me. Unless somebody protests I think we should\njust apply it regardless of the rest of the series - the code seems\nclearer afterwards.\n\n\n> From 6b09e5598553c8e57b4ef9342912f51adb48f8af Mon Sep 17 00:00:00 2001\n> From: Antonin Houska <ah@cybertec.at>\n> Date: Fri, 5 Jun 2020 16:42:34 +0200\n> Subject: [PATCH 2/5] Changed ri_GenerateQual() so it generates the whole\n> qualifier.\n> \n> This way we can use the function to reduce the amount of copy&pasted code a\n> bit.\n\n\n> /*\n> - * ri_GenerateQual --- generate a WHERE clause equating two variables\n> + * ri_GenerateQual --- generate WHERE/ON clause.\n> + *\n> + * Note: to avoid unnecessary explicit casts, make sure that the left and\n> + * right operands match eq_oprs expect (ie don't swap the left and right\n> + * operands accidentally).\n> + */\n> +static void\n> +ri_GenerateQual(StringInfo buf, char *sep, int nkeys,\n> +\t\t\t\tconst char *ltabname, Relation lrel,\n> +\t\t\t\tconst int16 *lattnums,\n> +\t\t\t\tconst char *rtabname, Relation rrel,\n> +\t\t\t\tconst int16 *rattnums,\n> +\t\t\t\tconst Oid *eq_oprs,\n> +\t\t\t\tGenQualParams params,\n> +\t\t\t\tOid 
*paramtypes)\n> +{\n> +\tfor (int i = 0; i < nkeys; i++)\n> +\t{\n> +\t\tOid\t\t\tltype = RIAttType(lrel, lattnums[i]);\n> +\t\tOid\t\t\trtype = RIAttType(rrel, rattnums[i]);\n> +\t\tOid\t\t\tlcoll = RIAttCollation(lrel, lattnums[i]);\n> +\t\tOid\t\t\trcoll = RIAttCollation(rrel, rattnums[i]);\n> +\t\tchar\t\tparamname[16];\n> +\t\tchar\t *latt,\n> +\t\t\t\t *ratt;\n> +\t\tchar\t *sep_current = i > 0 ? sep : NULL;\n> +\n> +\t\tif (params != GQ_PARAMS_NONE)\n> +\t\t\tsprintf(paramname, \"$%d\", i + 1);\n> +\n> +\t\tif (params == GQ_PARAMS_LEFT)\n> +\t\t{\n> +\t\t\tlatt = paramname;\n> +\t\t\tparamtypes[i] = ltype;\n> +\t\t}\n> +\t\telse\n> +\t\t\tlatt = ri_ColNameQuoted(ltabname, RIAttName(lrel, lattnums[i]));\n> +\n> +\t\tif (params == GQ_PARAMS_RIGHT)\n> +\t\t{\n> +\t\t\tratt = paramname;\n> +\t\t\tparamtypes[i] = rtype;\n> +\t\t}\n> +\t\telse\n> +\t\t\tratt = ri_ColNameQuoted(rtabname, RIAttName(rrel, rattnums[i]));\n\n\nWhy do we need support for having params on left or right side, instead\nof just having them on one side?\n\n\n> +\t\tri_GenerateQualComponent(buf, sep_current, latt, ltype, eq_oprs[i],\n> +\t\t\t\t\t\t\t\t ratt, rtype);\n> +\n> +\t\tif (lcoll != rcoll)\n> +\t\t\tri_GenerateQualCollation(buf, lcoll);\n> +\t}\n> +}\n> +\n> +/*\n> + * ri_GenerateQual --- generate a component of WHERE/ON clause equating two\n> + * variables, to be AND-ed to the other components.\n> *\n> * This basically appends \" sep leftop op rightop\" to buf, adding casts\n> * and schema qualification as needed to ensure that the parser will select\n> @@ -1828,17 +1802,86 @@ quoteRelationName(char *buffer, Relation rel)\n> * if they aren't variables or parameters.\n> */\n> static void\n> -ri_GenerateQual(StringInfo buf,\n> -\t\t\t\tconst char *sep,\n> -\t\t\t\tconst char *leftop, Oid leftoptype,\n> -\t\t\t\tOid opoid,\n> -\t\t\t\tconst char *rightop, Oid rightoptype)\n> +ri_GenerateQualComponent(StringInfo buf,\n> +\t\t\t\t\t\t const char *sep,\n> +\t\t\t\t\t\t const char 
*leftop, Oid leftoptype,\n> +\t\t\t\t\t\t Oid opoid,\n> +\t\t\t\t\t\t const char *rightop, Oid rightoptype)\n> {\n> -\tappendStringInfo(buf, \" %s \", sep);\n> +\tif (sep)\n> +\t\tappendStringInfo(buf, \" %s \", sep);\n> \tgenerate_operator_clause(buf, leftop, leftoptype, opoid,\n> \t\t\t\t\t\t\t rightop, rightoptype);\n> }\n\nWhy is this handled inside ri_GenerateQualComponent() instead of\nri_GenerateQual()? Especially because the latter now has code to pass in\na different sep into ri_GenerateQualComponent().\n\n\n> +/*\n> + * ri_ColNameQuoted() --- return column name, with both table and column name\n> + * quoted.\n> + */\n> +static char *\n> +ri_ColNameQuoted(const char *tabname, const char *attname)\n> +{\n> +\tchar\t\tquoted[MAX_QUOTED_NAME_LEN];\n> +\tStringInfo\tresult = makeStringInfo();\n> +\n> +\tif (tabname && strlen(tabname) > 0)\n> +\t{\n> +\t\tquoteOneName(quoted, tabname);\n> +\t\tappendStringInfo(result, \"%s.\", quoted);\n> +\t}\n> +\n> +\tquoteOneName(quoted, attname);\n> +\tappendStringInfoString(result, quoted);\n> +\n> +\treturn result->data;\n> +}\n\nWhy does this new function accept a NULL / zero length string? 
I guess\nthat's because we currently don't qualify in all places?\n\n\n> +/*\n> + * Check that RI trigger function was called in expected context\n> + */\n> +static void\n> +ri_CheckTrigger(FunctionCallInfo fcinfo, const char *funcname, int tgkind)\n> +{\n> +\tTriggerData *trigdata = (TriggerData *) fcinfo->context;\n> +\n> +\tif (!CALLED_AS_TRIGGER(fcinfo))\n> +\t\tereport(ERROR,\n> +\t\t\t\t(errcode(ERRCODE_E_R_I_E_TRIGGER_PROTOCOL_VIOLATED),\n> +\t\t\t\t errmsg(\"function \\\"%s\\\" was not called by trigger manager\", funcname)));\n> +\n> +\t/*\n> +\t * Check proper event\n> +\t */\n> +\tif (!TRIGGER_FIRED_AFTER(trigdata->tg_event) ||\n> +\t\t!TRIGGER_FIRED_FOR_ROW(trigdata->tg_event))\n> +\t\tereport(ERROR,\n> +\t\t\t\t(errcode(ERRCODE_E_R_I_E_TRIGGER_PROTOCOL_VIOLATED),\n> +\t\t\t\t errmsg(\"function \\\"%s\\\" must be fired AFTER ROW\", funcname)));\n> +\n> +\tswitch (tgkind)\n> +\t{\n> +\t\tcase RI_TRIGTYPE_INSERT:\n> +\t\t\tif (!TRIGGER_FIRED_BY_INSERT(trigdata->tg_event))\n> +\t\t\t\tereport(ERROR,\n> +\t\t\t\t\t\t(errcode(ERRCODE_E_R_I_E_TRIGGER_PROTOCOL_VIOLATED),\n> +\t\t\t\t\t\t errmsg(\"function \\\"%s\\\" must be fired for INSERT\", funcname)));\n> +\t\t\tbreak;\n> +\t\tcase RI_TRIGTYPE_UPDATE:\n> +\t\t\tif (!TRIGGER_FIRED_BY_UPDATE(trigdata->tg_event))\n> +\t\t\t\tereport(ERROR,\n> +\t\t\t\t\t\t(errcode(ERRCODE_E_R_I_E_TRIGGER_PROTOCOL_VIOLATED),\n> +\t\t\t\t\t\t errmsg(\"function \\\"%s\\\" must be fired for UPDATE\", funcname)));\n> +\t\t\tbreak;\n> +\n> +\t\tcase RI_TRIGTYPE_DELETE:\n> +\t\t\tif (!TRIGGER_FIRED_BY_DELETE(trigdata->tg_event))\n> +\t\t\t\tereport(ERROR,\n> +\t\t\t\t\t\t(errcode(ERRCODE_E_R_I_E_TRIGGER_PROTOCOL_VIOLATED),\n> +\t\t\t\t\t\t errmsg(\"function \\\"%s\\\" must be fired for DELETE\", funcname)));\n> +\t\t\tbreak;\n> +\t}\n> +}\n> +\n\nWhy did you move this around, as part of this commit?\n\n\n\n\n> From 208c733d759592402901599446b4f7e7197c1777 Mon Sep 17 00:00:00 2001\n> From: Antonin Houska <ah@cybertec.at>\n> Date: 
Fri, 5 Jun 2020 16:42:34 +0200\n> Subject: [PATCH 4/5] Introduce infrastructure for batch processing RI events.\n> \n> Separate storage is used for the RI trigger events because the \"transient\n> table\" that we provide to statement triggers would not be available for\n> deferred constraints. Also, the regular statement level trigger is not ideal\n> for the RI checks because it requires the query execution to complete before\n> the RI checks even start. On the other hand, if we use batches of row trigger\n> events, we only need to tune the batch size so that user gets RI violation\n> error rather soon.\n> \n> This patch only introduces the infrastructure, however the trigger function is\n> still called per event. This is just to reduce the size of the diffs.\n> ---\n> src/backend/commands/tablecmds.c | 68 +-\n> src/backend/commands/trigger.c | 406 ++++++--\n> src/backend/executor/spi.c | 16 +-\n> src/backend/utils/adt/ri_triggers.c | 1385 +++++++++++++++++++--------\n> src/include/commands/trigger.h | 25 +\n> 5 files changed, 1381 insertions(+), 519 deletions(-)\n\nMy first comment here is that this is too large a change and should be\nbroken up.\n\nI think there's also not enough explanation in here what the new design\nis. I can infer some of that from the code, but that's imo shifting work\nto the reviewer / reader unnecessarily.\n\n\n\n> +static void AfterTriggerExecuteRI(EState *estate,\n> +\t\t\t\t\t\t\t\t ResultRelInfo *relInfo,\n> +\t\t\t\t\t\t\t\t FmgrInfo *finfo,\n> +\t\t\t\t\t\t\t\t Instrumentation *instr,\n> +\t\t\t\t\t\t\t\t TriggerData *trig_last,\n> +\t\t\t\t\t\t\t\t MemoryContext batch_context);\n> static AfterTriggersTableData *GetAfterTriggersTableData(Oid relid,\n> \t\t\t\t\t\t\t\t\t\t\t\t\t\t CmdType cmdType);\n> static void AfterTriggerFreeQuery(AfterTriggersQueryData *qs);\n> @@ -3807,13 +3821,16 @@ afterTriggerDeleteHeadEventChunk(AfterTriggersQueryData *qs)\n> *\tfmgr lookup cache space at the caller level. 
(For triggers fired at\n> *\tthe end of a query, we can even piggyback on the executor's state.)\n> *\n> - *\tevent: event currently being fired.\n> + *\tevent: event currently being fired. Pass NULL if the current batch of RI\n> + *\t\ttrigger events should be processed.\n> *\trel: open relation for event.\n> *\ttrigdesc: working copy of rel's trigger info.\n> *\tfinfo: array of fmgr lookup cache entries (one per trigger in trigdesc).\n> *\tinstr: array of EXPLAIN ANALYZE instrumentation nodes (one per trigger),\n> *\t\tor NULL if no instrumentation is wanted.\n> + *\ttrig_last: trigger info used for the last trigger execution.\n> *\tper_tuple_context: memory context to call trigger function in.\n> + *\tbatch_context: memory context to store tuples for RI triggers.\n> *\ttrig_tuple_slot1: scratch slot for tg_trigtuple (foreign tables only)\n> *\ttrig_tuple_slot2: scratch slot for tg_newtuple (foreign tables only)\n> * ----------\n> @@ -3824,39 +3841,55 @@ AfterTriggerExecute(EState *estate,\n> \t\t\t\t\tResultRelInfo *relInfo,\n> \t\t\t\t\tTriggerDesc *trigdesc,\n> \t\t\t\t\tFmgrInfo *finfo, Instrumentation *instr,\n> +\t\t\t\t\tTriggerData *trig_last,\n> \t\t\t\t\tMemoryContext per_tuple_context,\n> +\t\t\t\t\tMemoryContext batch_context,\n> \t\t\t\t\tTupleTableSlot *trig_tuple_slot1,\n> \t\t\t\t\tTupleTableSlot *trig_tuple_slot2)\n> {\n> \tRelation\trel = relInfo->ri_RelationDesc;\n> \tAfterTriggerShared evtshared = GetTriggerSharedData(event);\n> \tOid\t\t\ttgoid = evtshared->ats_tgoid;\n> -\tTriggerData LocTriggerData = {0};\n> \tHeapTuple\trettuple;\n> -\tint\t\t\ttgindx;\n> \tbool\t\tshould_free_trig = false;\n> \tbool\t\tshould_free_new = false;\n> +\tbool\t\tis_new = false;\n> \n> -\t/*\n> -\t * Locate trigger in trigdesc.\n> -\t */\n> -\tfor (tgindx = 0; tgindx < trigdesc->numtriggers; tgindx++)\n> +\tif (trig_last->tg_trigger == NULL)\n> \t{\n> -\t\tif (trigdesc->triggers[tgindx].tgoid == tgoid)\n> +\t\tint\t\t\ttgindx;\n> +\n> +\t\t/*\n> +\t\t * Locate 
trigger in trigdesc.\n> +\t\t */\n> +\t\tfor (tgindx = 0; tgindx < trigdesc->numtriggers; tgindx++)\n> \t\t{\n> -\t\t\tLocTriggerData.tg_trigger = &(trigdesc->triggers[tgindx]);\n> -\t\t\tbreak;\n> +\t\t\tif (trigdesc->triggers[tgindx].tgoid == tgoid)\n> +\t\t\t{\n> +\t\t\t\ttrig_last->tg_trigger = &(trigdesc->triggers[tgindx]);\n> +\t\t\t\ttrig_last->tgindx = tgindx;\n> +\t\t\t\tbreak;\n> +\t\t\t}\n> \t\t}\n> +\t\tif (trig_last->tg_trigger == NULL)\n> +\t\t\telog(ERROR, \"could not find trigger %u\", tgoid);\n> +\n> +\t\tif (RI_FKey_trigger_type(trig_last->tg_trigger->tgfoid) !=\n> +\t\t\tRI_TRIGGER_NONE)\n> +\t\t\ttrig_last->is_ri_trigger = true;\n> +\n> +\t\tis_new = true;\n> \t}\n> -\tif (LocTriggerData.tg_trigger == NULL)\n> -\t\telog(ERROR, \"could not find trigger %u\", tgoid);\n> +\n> +\t/* trig_last for non-RI trigger should always be initialized again. */\n> +\tAssert(trig_last->is_ri_trigger || is_new);\n> \n> \t/*\n> \t * If doing EXPLAIN ANALYZE, start charging time to this trigger. We want\n> \t * to include time spent re-fetching tuples in the trigger cost.\n> \t */\n> -\tif (instr)\n> -\t\tInstrStartNode(instr + tgindx);\n> +\tif (instr && !trig_last->is_ri_trigger)\n> +\t\tInstrStartNode(instr + trig_last->tgindx);\n\nI'm pretty unhappy about the amount of new infrastructure this adds to\ntrigger.c. We're now going to have a third copy of the tuples (for a\ntime). trigger.c is already a pretty complicated / archaic piece of\ninfrastructure, and this patchset seems to make that even worse. 
We'll\ngrow yet another separate representation of tuples, there's a lot new\nbranches (less concerned about the runtime costs, more about the code\ncomplexity) etc.\n\n\n\n> +/* ----------\n> + * Construct the query to check inserted/updated rows of the FK table.\n> + *\n> + * If \"insert\" is true, the rows are inserted, otherwise they are updated.\n> + *\n> + * The query string built is\n> + *\tSELECT t.fkatt1 [, ...]\n> + *\t\tFROM <tgtable> t LEFT JOIN LATERAL\n> + *\t\t (SELECT t.fkatt1 [, ...]\n> + * FROM [ONLY] <pktable> p\n> + *\t\t WHERE t.fkatt1 = p.pkatt1 [AND ...]\n> + *\t\t FOR KEY SHARE OF p) AS m\n> + *\t\t ON t.fkatt1 = m.fkatt1 [AND ...]\n> + *\t\tWHERE m.fkatt1 ISNULL\n> + *\t LIMIT 1\n> + *\n\nWhy do we need the lateral query here?\n\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 29 Jun 2020 18:17:29 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: More efficient RI checks - take 2" }, { "msg_contents": "On Fri, Jun 05, 2020 at 05:16:43PM +0200, Antonin Houska wrote:\n> Antonin Houska <ah@cybertec.at> wrote:\n> \n> > In general, the checks are significantly faster if there are many rows to\n> > process, and a bit slower when we only need to check a single row.\n> \n> Attached is a new version that uses the existing simple queries if there's\n> only one row to check. SPI is used for both single-row and bulk checks - as\n> discussed in this thread, it can perhaps be replaced with a different approach\n> if appears to be beneficial, at least for the single-row checks.\n> \n> I think using a separate query for the single-row check is more practicable\n> than convincing the planner that the bulk-check query should only check a\n> single row. So this patch version tries to show what it'd look like.\n\nI'm interested in testing this patch, however there's a lot of internals to\ndigest.\n\nAre there any documentation updates or regression tests to add ? 
If FKs\nsupport \"bulk\" validation, users should know when that applies, and be able to\ncheck that it's working as intended. Even if the test cases are overly verbose\nor not stable, and not intended for commit, that would be a useful temporary\naddition.\n\nI think that calls=4 indicates this is using bulk validation.\n\npostgres=# begin; explain(analyze, timing off, costs off, summary off, verbose) DELETE FROM t WHERE i<999; rollback;\nBEGIN\n QUERY PLAN \n-----------------------------------------------------------------------\n Delete on public.t (actual rows=0 loops=1)\n -> Index Scan using t_pkey on public.t (actual rows=998 loops=1)\n Output: ctid\n Index Cond: (t.i < 999)\n Trigger RI_ConstraintTrigger_a_16399 for constraint t_i_fkey: calls=4\n\nI started thinking about this 1+ years ago wondering if a BRIN index could be\nused for (bulk) FK validation.\n\nSo I would like to be able to see the *plan* for the query.\n\nI was able to show the plan and see that BRIN can be used like so:\n|SET auto_explain.log_nested_statements=on; SET client_min_messages=debug; SET auto_explain.log_min_duration=0;\nShould the plan be visible in explain (not auto-explain) ?\n\nBTW did you see this older thread ?\nhttps://www.postgresql.org/message-id/flat/CA%2BU5nMLM1DaHBC6JXtUMfcG6f7FgV5mPSpufO7GRnbFKkF2f7g%40mail.gmail.com\n\n-- \nJustin\n\n\n", "msg_date": "Sat, 26 Sep 2020 21:59:17 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: More efficient RI checks - take 2" }, { "msg_contents": "On Sat, Sep 26, 2020 at 09:59:17PM -0500, Justin Pryzby wrote:\n> I'm interested in testing this patch, however there's a lot of internals to\n> digest.\n\nEven with that, the thread has been waiting on author for a couple of\nweeks now, so I have marked the entry as RwF.\n--\nMichael", "msg_date": "Wed, 30 Sep 2020 15:57:46 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: More 
efficient RI checks - take 2" }, { "msg_contents": "Justin Pryzby <pryzby@telsasoft.com> wrote:\n\n> I'm interested in testing this patch, however there's a lot of internals to\n> digest.\n> \n> Are there any documentation updates or regression tests to add ?\n\nI'm not sure if user documentation should be changed unless a new GUC or\nstatistics information is added. As for regression tests, perhaps in the next\nversion of the patch. But right now I don't know how to implement the feature\nin a less invasive way (see the complaint by Andres in [1]), nor do I have\nenough time to work on the patch.\n\n> If FKs support \"bulk\" validation, users should know when that applies, and\n> be able to check that it's working as intended. Even if the test cases are\n> overly verbose or not stable, and not intended for commit, that would be a\n> useful temporary addition.\n> \n> I think that calls=4 indicates this is using bulk validation.\n> \n> postgres=# begin; explain(analyze, timing off, costs off, summary off, verbose) DELETE FROM t WHERE i<999; rollback;\n> BEGIN\n> QUERY PLAN \n> -----------------------------------------------------------------------\n> Delete on public.t (actual rows=0 loops=1)\n> -> Index Scan using t_pkey on public.t (actual rows=998 loops=1)\n> Output: ctid\n> Index Cond: (t.i < 999)\n> Trigger RI_ConstraintTrigger_a_16399 for constraint t_i_fkey: calls=4\n\n> I started thinking about this 1+ years ago wondering if a BRIN index could be\n> used for (bulk) FK validation.\n> \n> So I would like to be able to see the *plan* for the query.\n\n> I was able to show the plan and see that BRIN can be used like so:\n> |SET auto_explain.log_nested_statements=on; SET client_min_messages=debug; SET auto_explain.log_min_duration=0;\n> Should the plan be visible in explain (not auto-explain) ?\n\nFor development purposes, I think I could get the plan this way:\n\nSET debug_print_plan TO on;\nSET client_min_messages TO debug;\n\n(The plan is cached, so I think the 
query will only be displayed during the\nfirst execution in the session).\n\nDo you think that the documentation should advise the user to create BRIN\nindex on the FK table?\n\n> BTW did you see this older thread ?\n> https://www.postgresql.org/message-id/flat/CA%2BU5nMLM1DaHBC6JXtUMfcG6f7FgV5mPSpufO7GRnbFKkF2f7g%40mail.gmail.com\n\nNot yet. Thanks.\n\n[1] https://www.postgresql.org/message-id/20200630011729.mr25bmmbvsattxe2%40alap3.anarazel.de\n\n-- \nAntonin Houska\nWeb: https://www.cybertec-postgresql.com\n\n\n", "msg_date": "Wed, 14 Oct 2020 07:22:33 +0200", "msg_from": "Antonin Houska <ah@cybertec.at>", "msg_from_op": true, "msg_subject": "Re: More efficient RI checks - take 2" } ]
[ { "msg_contents": "I reviewed docs for v13, like:\ngit log --cherry-pick origin/master...origin/REL_12_STABLE -p doc\n\nI did something similar for v12 [0]. I've included portions of that here which\nstill seem lacking 12 months later (but I'm not intending to continue defending\neach individual patch hunk).\n\nI previously mailed separately about a few individual patches, some of which\nhave separate, ongoing discussion and aren't included here (incr sort, parallel\nvacuum).\n\nJustin\n\n[0] https://www.postgresql.org/message-id/flat/20190709161256.GH22387%40telsasoft.com#56889b868e5886e36b90e9f5a1165186", "msg_date": "Wed, 8 Apr 2020 11:56:53 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": true, "msg_subject": "doc review for v13" }, { "msg_contents": "On Wed, Apr 08, 2020 at 11:56:53AM -0500, Justin Pryzby wrote:\n> I previously mailed separately about a few individual patches, some of which\n> have separate, ongoing discussion and aren't included here (incr sort, parallel\n> vacuum).\n\nI have gone through your changes, and committed what looked like the\nmost obvious mistakes in my view. Please see below for more\ncomments.\n\n required pages to remove both downlink and rightlink are already locked. 
That\n-evades potential right to left page locking order, which could deadlock with\n+avoid potential right to left page locking order, which could deadlock with\nNot sure which one is better, but the new change is grammatically\nincorrect.\n\n <varname>auto_explain.log_settings</varname> controls whether information\n- about modified configuration options are printed when execution plan is logged.\n- Only options affecting query planning with value different from the built-in\n+ about modified configuration options is printed when an execution plan is logged.\n+ Only those options which affect query planning and whose value differs from its built-in\nDepends on how you read the sentence, but here is seemt to me that \n\"statistics\" is the correct subject, no?\n\n- replication is disabled. Abrupt streaming client disconnection might\n+ replication is disabled. Abrupt disconnection of a streaming client might\nOriginal looks correct to me here.\n\n <literal>_tz</literal> suffix. These functions have been implemented to\n- support comparison of date/time values that involves implicit\n+ support comparison of date/time values that involve implicit\nThe subject is \"comparison\" here, no?\n\n may be included. 
It also stores the size, last modification time, and\n- an optional checksum for each file.\n+ optionally a checksum for each file.\nThe original sounds fine to me as well.\n\nAnything related to imath.c needs to be reported upstream, though I\nrecall reporting these two already.\n--\nMichael", "msg_date": "Fri, 10 Apr 2020 11:27:46 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: doc review for v13" }, { "msg_contents": "On Fri, Apr 10, 2020 at 11:27:46AM +0900, Michael Paquier wrote:\n> On Wed, Apr 08, 2020 at 11:56:53AM -0500, Justin Pryzby wrote:\n> > I previously mailed separately about a few individual patches, some of which\n> > have separate, ongoing discussion and aren't included here (incr sort, parallel\n> > vacuum).\n> \n> I have gone through your changes, and committed what looked like the\n> most obvious mistakes in my view. Please see below for more\n> comments.\n\nThanks - rebased for cfbot and continued review.\n\n> required pages to remove both downlink and rightlink are already locked. That\n> -evades potential right to left page locking order, which could deadlock with\n> +avoid potential right to left page locking order, which could deadlock with\n> Not sure which one is better, but the new change is grammatically\n> incorrect.\n\nThanks for noticing\n\n\"Evades\" usually means to act to avoid detection by the government. 
Like tax\nevasion.\n\n> <varname>auto_explain.log_settings</varname> controls whether information\n> - about modified configuration options are printed when execution plan is logged.\n> - Only options affecting query planning with value different from the built-in\n> + about modified configuration options is printed when an execution plan is logged.\n> + Only those options which affect query planning and whose value differs from its built-in\n> Depends on how you read the sentence, but here is seemt to me that \n> \"statistics\" is the correct subject, no?\n\nStatistics ?\n\n> <literal>_tz</literal> suffix. These functions have been implemented to\n> - support comparison of date/time values that involves implicit\n> + support comparison of date/time values that involve implicit\n> The subject is \"comparison\" here, no?\n\nYou're right.\n\n-- \nJustin", "msg_date": "Thu, 9 Apr 2020 22:01:51 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": true, "msg_subject": "Re: doc review for v13" }, { "msg_contents": "On Thu, Apr 09, 2020 at 10:01:51PM -0500, Justin Pryzby wrote:\n> On Fri, Apr 10, 2020 at 11:27:46AM +0900, Michael Paquier wrote:\n>> required pages to remove both downlink and rightlink are already locked. That\n>> -evades potential right to left page locking order, which could deadlock with\n>> +avoid potential right to left page locking order, which could deadlock with\n>> Not sure which one is better, but the new change is grammatically\n>> incorrect.\n> \n> \"Evades\" usually means to act to avoid detection by the government. 
Like tax\n> evasion.\n\nThis change is from Alexander Korotkov as of 32ca32d, so I am adding\nhim in CC to get his opinion.\n\n>> <varname>auto_explain.log_settings</varname> controls whether information\n>> - about modified configuration options are printed when execution plan is logged.\n>> - Only options affecting query planning with value different from the built-in\n>> + about modified configuration options is printed when an execution plan is logged.\n>> + Only those options which affect query planning and whose value differs from its built-in\n>> Depends on how you read the sentence, but here is seemt to me that \n>> \"statistics\" is the correct subject, no?\n> \n> Statistics ?\n\nOops. I may have messed up with a different part of the patch set.\nYour suggestion is right as the subject is \"information\" here.\n--\nMichael", "msg_date": "Fri, 10 Apr 2020 15:29:02 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: doc review for v13" }, { "msg_contents": "Added a few more.\nAnd rebased on top of dbc60c5593f26dc777a3be032bff4fb4eab1ddd1\n\n-- \nJustin", "msg_date": "Sun, 12 Apr 2020 16:35:45 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": true, "msg_subject": "Re: doc review for v13" }, { "msg_contents": "On Sun, Apr 12, 2020 at 04:35:45PM -0500, Justin Pryzby wrote:\n> Added a few more.\n> And rebased on top of dbc60c5593f26dc777a3be032bff4fb4eab1ddd1\n\nThanks for the patch set, I have applied the most obvious parts (more\nor less 1/3) to reduce the load. 
Here is a review of the rest.\n\n> @@ -2829,7 +2829,6 @@ show_incremental_sort_group_info(IncrementalSortGroupInfo *groupInfo,\n> \n> \t\tExplainPropertyList(\"Sort Methods Used\", methodNames, es);\n> \n> -\t\tif (groupInfo->maxMemorySpaceUsed > 0)\n> \t\t{\n> \t\t\tlong\t\tavgSpace = groupInfo->totalMemorySpaceUsed / groupInfo->groupCount;\n> \t\t\tconst char *spaceTypeName;\n> @@ -2846,7 +2845,7 @@ show_incremental_sort_group_info(IncrementalSortGroupInfo *groupInfo,\n> \n> \t\t\tExplainCloseGroup(\"Sort Spaces\", memoryName.data, true, es);\n> \t\t}\n> -\t\tif (groupInfo->maxDiskSpaceUsed > 0)\n> +\n> \t\t{\n> \t\t\tlong\t\tavgSpace = groupInfo->totalDiskSpaceUsed / groupInfo->groupCount;\n> \t\t\tconst char *spaceTypeName;\n\nIf this can be reworked, it seems to me that more cleanup could be\ndone.\n\n> @@ -987,7 +987,7 @@ ExecInitIncrementalSort(IncrementalSort *node, EState *estate, int eflags)\n> \n> \t/*\n> \t * Incremental sort can't be used with either EXEC_FLAG_REWIND,\n> -\t * EXEC_FLAG_BACKWARD or EXEC_FLAG_MARK, because we only one of many sort\n> +\t * EXEC_FLAG_BACKWARD or EXEC_FLAG_MARK, because we only ???? one of many sort\n> \t * batches in the current sort state.\n> \t */\n> \tAssert((eflags & (EXEC_FLAG_BACKWARD |\n\nThe following is inconsistent with this comment block, and I guess\nthat \"????\" should be \"have\":\n Assert((eflags & (EXEC_FLAG_BACKWARD |\n EXEC_FLAG_MARK)) == 0);\nThis is only a doc-related change though, so I'll start a different\nthread about that after looking more at it.\n\n> @@ -1153,7 +1153,7 @@ ExecReScanIncrementalSort(IncrementalSortState *node)\n> \t/*\n> \t * If we've set up either of the sort states yet, we need to reset them.\n> \t * We could end them and null out the pointers, but there's no reason to\n> -\t * repay the setup cost, and because guard setting up pivot comparator\n> +\t * repay the setup cost, and because ???? 
guard setting up pivot comparator\n> \t * state similarly, doing so might actually cause a leak.\n> \t */\n> \tif (node->fullsort_state != NULL)\n\nI don't quite understand this comment either, but it seems to me that\nthe last part should be a fully-separate sentence, aka \"This guards\nagainst..\".\n\n> @@ -631,7 +631,7 @@ logicalrep_partition_open(LogicalRepRelMapEntry *root,\n> \t/*\n> \t * If the partition's attributes don't match the root relation's, we'll\n> \t * need to make a new attrmap which maps partition attribute numbers to\n> -\t * remoterel's, instead the original which maps root relation's attribute\n> +\t * remoterel's, instead of the original which maps root relation's attribute\n> \t * numbers to remoterel's.\n> \t *\n> \t * Note that 'map' which comes from the tuple routing data structure\n\nOkay, this is not really clear to start with. I think that I would\nrewrite that completely as follows:\n\"If the partition's attributes do not match the root relation's\nattributes, we cannot use the original attribute map which maps the\nroot relation's attributes with remoterel's attributes. Instead,\nbuild a new attribute map which maps the partition's attributes with\nremoterel's attributes.\"\n\n> +++ b/src/backend/storage/lmgr/proc.c\n> @@ -1373,7 +1373,7 @@ ProcSleep(LOCALLOCK *locallock, LockMethod lockMethodTable)\n> \t\t\telse\n> \t\t\t\tLWLockRelease(ProcArrayLock);\n> \n> -\t\t\t/* prevent signal from being resent more than once */\n> +\t\t\t/* prevent signal from being re-sent more than once */\n> \t\t\tallow_autovacuum_cancel = false;\n> \t\t}\n\nShouldn't that just be \"sent more than two times\"?\n\n\n> @@ -1428,11 +1428,11 @@ tuplesort_updatemax(Tuplesortstate *state)\n> \t}\n> \n> \t/*\n> -\t * Sort evicts data to the disk when it didn't manage to fit those data to\n> -\t * the main memory. 
This is why we assume space used on the disk to be\n> +\t * Sort evicts data to the disk when it didn't manage to fit the data in\n> +\t * main memory. This is why we assume space used on the disk to be\n\nBoth the original and the suggestion are wrong? It seems to me that\nit should be \"this data\" instead because it refers to the data evicted\nin the first part of the sentence. \n\n> \t * more important for tracking resource usage than space used in memory.\n> -\t * Note that amount of space occupied by some tuple set on the disk might\n> -\t * be less than amount of space occupied by the same tuple set in the\n> +\t * Note that amount of space occupied by some tupleset on the disk might\n> +\t * be less than amount of space occupied by the same tupleset in the\n> \t * memory due to more compact representation.\n> \t */\n> \tif ((isSpaceDisk && !state->isMaxSpaceDisk) ||\n\nYep, right.\n\n> +++ b/doc/src/sgml/logicaldecoding.sgml\n> @@ -223,7 +223,7 @@ $ pg_recvlogical -d postgres --slot=test --drop-slot\n> A logical slot will emit each change just once in normal operation.\n> The current position of each slot is persisted only at checkpoint, so in\n> the case of a crash the slot may return to an earlier LSN, which will\n> - then cause recent changes to be resent when the server restarts.\n> + then cause recent changes to be re-sent when the server restarts.\n> Logical decoding clients are responsible for avoiding ill effects from\n> handling the same message more than once. 
Clients may wish to record\n> the last LSN they saw when decoding and skip over any repeated data or\n\n\"sent again\" instead of \"resent\" or \"re-sent\"?\n\n> +++ b/doc/src/sgml/ref/alter_table.sgml\n> @@ -889,7 +889,7 @@ WITH ( MODULUS <replaceable class=\"parameter\">numeric_literal</replaceable>, REM\n> from the parent table will be created in the partition, if they don't\n> already exist.\n> If any of the <literal>CHECK</literal> constraints of the table being\n> - attached is marked <literal>NO INHERIT</literal>, the command will fail;\n> + attached are marked <literal>NO INHERIT</literal>, the command will fail;\n> such constraints must be recreated without the\n> <literal>NO INHERIT</literal> clause.\n> </para>\n\nIt seems to me that both are actually correct here.\n\n> +++ b/doc/src/sgml/ref/pg_basebackup.sgml\n> @@ -604,7 +604,7 @@ PostgreSQL documentation\n> not contain any checksums. Otherwise, it will contain a checksum\n> of each file in the backup using the specified algorithm. In addition,\n> the manifest will always contain a <literal>SHA256</literal>\n> - checksum of its own contents. The <literal>SHA</literal> algorithms\n> + checksum of its own content. 
The <literal>SHA</literal> algorithms\n> are significantly more CPU-intensive than <literal>CRC32C</literal>,\n> so selecting one of them may increase the time required to complete\n> the backup.\n\nAnd the original is correct here IMO.\n\n> +++ b/doc/src/sgml/ref/psql-ref.sgml\n> @@ -1244,7 +1244,7 @@ testdb=&gt;\n> (see <xref linkend=\"catalog-pg-opclass\"/>).\n> If <replaceable class=\"parameter\">access-method-patttern</replaceable>\n> is specified, only operator classes associated with access methods whose\n> - names match pattern are listed.\n> + names match the pattern are listed.\n> If <replaceable class=\"parameter\">input-type-pattern</replaceable>\n> is specified, only operator classes associated with input types whose\n> names match the pattern are listed.\n\nAnother error I see here is that pattern has three 't', while the\noriginal parameter name is correct.\n\n> +++ b/doc/src/sgml/runtime.sgml\n> @@ -2643,7 +2643,7 @@ openssl x509 -req -in server.csr -text -days 365 \\\n> <para>\n> The <productname>PostgreSQL</productname> server will listen for both\n> normal and <acronym>GSSAPI</acronym>-encrypted connections on the same TCP\n> - port, and will negotiate with any connecting client on whether to\n> + port, and will negotiate with any connecting client whether to\n> use <acronym>GSSAPI</acronym> for encryption (and for authentication). 
By\n> default, this decision is up to the client (which means it can be\n> downgraded by an attacker); see <xref linkend=\"auth-pg-hba-conf\"/> about\n\nActually correct?\n--\nMichael", "msg_date": "Tue, 14 Apr 2020 14:47:54 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: doc review for v13" }, { "msg_contents": "On Tue, Apr 14, 2020 at 02:47:54PM +0900, Michael Paquier wrote:\n> On Sun, Apr 12, 2020 at 04:35:45PM -0500, Justin Pryzby wrote:\n> > Added a few more.\n> > And rebased on top of dbc60c5593f26dc777a3be032bff4fb4eab1ddd1\n> \n> Thanks for the patch set, I have applied the most obvious parts (more\n> or less 1/3) to reduce the load. Here is a review of the rest.\n\nThanks - attached are the remaining undisputed portions..\n\n> > +++ b/doc/src/sgml/ref/alter_table.sgml\n> > @@ -889,7 +889,7 @@ WITH ( MODULUS <replaceable class=\"parameter\">numeric_literal</replaceable>, REM\n> > from the parent table will be created in the partition, if they don't\n> > already exist.\n> > If any of the <literal>CHECK</literal> constraints of the table being\n> > - attached is marked <literal>NO INHERIT</literal>, the command will fail;\n> > + attached are marked <literal>NO INHERIT</literal>, the command will fail;\n> > such constraints must be recreated without the\n> > <literal>NO INHERIT</literal> clause.\n> > </para>\n>\n> It seems to me that both are actually correct here.\n\nI think my text is correct. This would *also* be correct:\n\n| If any <literal>CHECK</literal> constraint on the table being\n| attached is marked <literal>NO INHERIT</literal>, the command will fail;\n\nBut not the hybrid: \"If any OF THE .. 
is ..\"\n\n-- \nJustin", "msg_date": "Sun, 26 Apr 2020 11:13:24 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": true, "msg_subject": "Re: doc review for v13" }, { "msg_contents": "On Sun, Apr 26, 2020 at 12:13 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n>\n> On Tue, Apr 14, 2020 at 02:47:54PM +0900, Michael Paquier wrote:\n> > On Sun, Apr 12, 2020 at 04:35:45PM -0500, Justin Pryzby wrote:\n> > > Added a few more.\n> > > And rebased on top of dbc60c5593f26dc777a3be032bff4fb4eab1ddd1\n> >\n> > Thanks for the patch set, I have applied the most obvious parts (more\n> > or less 1/3) to reduce the load. Here is a review of the rest.\n>\n> Thanks - attached are the remaining undisputed portions..\n>\n> > > +++ b/doc/src/sgml/ref/alter_table.sgml\n> > > @@ -889,7 +889,7 @@ WITH ( MODULUS <replaceable class=\"parameter\">numeric_literal</replaceable>, REM\n> > > from the parent table will be created in the partition, if they don't\n> > > already exist.\n> > > If any of the <literal>CHECK</literal> constraints of the table being\n> > > - attached is marked <literal>NO INHERIT</literal>, the command will fail;\n> > > + attached are marked <literal>NO INHERIT</literal>, the command will fail;\n> > > such constraints must be recreated without the\n> > > <literal>NO INHERIT</literal> clause.\n> > > </para>\n> >\n> > It seems to me that both are actually correct here.\n>\n> I think my text is correct. This would *also* be correct:\n>\n> | If any <literal>CHECK</literal> constraint on the table being\n> | attached is marked <literal>NO INHERIT</literal>, the command will fail;\n>\n> But not the hybrid: \"If any OF THE .. 
is ..\"\n\n\"any of the...are\" sounds more natural to my ears, and some searching\nyielded some grammar sites that agree (specifically that \"any of\" is\nonly used with singular verbs if the construction is uncountable or\nnegative).\n\nHowever there are also multiple claims by grammarians that either\nsingular or plural verbs are acceptable with the \"any of\"\nconstruction. So that's not all that helpful.\n\nJames\n\n\n", "msg_date": "Sun, 26 Apr 2020 20:03:06 -0400", "msg_from": "James Coleman <jtc331@gmail.com>", "msg_from_op": false, "msg_subject": "Re: doc review for v13" }, { "msg_contents": "James Coleman <jtc331@gmail.com> writes:\n> On Sun, Apr 26, 2020 at 12:13 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n>> I think my text is correct. This would *also* be correct:\n>> | If any <literal>CHECK</literal> constraint on the table being\n>> | attached is marked <literal>NO INHERIT</literal>, the command will fail;\n>> But not the hybrid: \"If any OF THE .. is ..\"\n\n> \"any of the...are\" sounds more natural to my ears,\n\nYeah, I think the same. If you want to argue grammar, I'd point\nout that the \"any\" could refer to several of the constraints,\nmaking it correct to use the plural verb. The alternative that\nJustin mentions could be written as \"If any one constraint is ...\",\nwhich is correct in that form; but the plural way seems less stilted.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sun, 26 Apr 2020 20:59:05 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: doc review for v13" }, { "msg_contents": "On Sun, Apr 26, 2020 at 08:59:05PM -0400, Tom Lane wrote:\n> James Coleman <jtc331@gmail.com> writes:\n>> On Sun, Apr 26, 2020 at 12:13 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n>>> I think my text is correct. 
This would *also* be correct:\n>>> | If any <literal>CHECK</literal> constraint on the table being\n>>> | attached is marked <literal>NO INHERIT</literal>, the command will fail;\n>>> But not the hybrid: \"If any OF THE .. is ..\"\n> \n>> \"any of the...are\" sounds more natural to my ears,\n> \n> Yeah, I think the same. If you want to argue grammar, I'd point\n> out that the \"any\" could refer to several of the constraints,\n> making it correct to use the plural verb. The alternative that\n> Justin mentions could be written as \"If any one constraint is ...\",\n> which is correct in that form; but the plural way seems less stilted.\n\nHm, okay. There are still pieces in those patches about which I am\nnot sure, so I have let that aside for the time being.\n\nAnyway, I have applied patch 12, and reported the typos from imath.c\ndirectly to upstream:\nhttps://github.com/creachadair/imath/issues/45\nhttps://github.com/creachadair/imath/issues/46\n--\nMichael", "msg_date": "Mon, 27 Apr 2020 15:03:05 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: doc review for v13" }, { "msg_contents": "On Mon, Apr 27, 2020 at 03:03:05PM +0900, Michael Paquier wrote:\n> Hm, okay. 
There are still pieces in those patches about which I am\n> not sure, so I have let that aside for the time being.\n> \n> Anyway, I have applied patch 12, and reported the typos from imath.c\n\nThank you.\n\nI will leave this here in case someone else wants to make a pass or vet them.\n\ndiff --git a/doc/src/sgml/auto-explain.sgml b/doc/src/sgml/auto-explain.sgml\nindex 192d6574c3..de2be61bff 100644\n--- a/doc/src/sgml/auto-explain.sgml\n+++ b/doc/src/sgml/auto-explain.sgml\n@@ -200,9 +200,9 @@ LOAD 'auto_explain';\n <listitem>\n <para>\n <varname>auto_explain.log_settings</varname> controls whether information\n- about modified configuration options is printed when execution plan is logged.\n- Only options affecting query planning with value different from the built-in\n- default value are included in the output. This parameter is off by default.\n+ about modified configuration options is printed when an execution plan is logged.\n+ Only those options which affect query planning and whose value differs from their\n+ built-in default are included in the output. This parameter is off by default.\n Only superusers can change this setting.\n </para>\n </listitem>\ndiff --git a/doc/src/sgml/btree.sgml b/doc/src/sgml/btree.sgml\nindex e9cab4a55d..ff1e49e509 100644\n--- a/doc/src/sgml/btree.sgml\n+++ b/doc/src/sgml/btree.sgml\n@@ -609,7 +609,7 @@ equalimage(<replaceable>opcintype</replaceable> <type>oid</type>) returns bool\n </para>\n <para>\n Deduplication works by periodically merging groups of duplicate\n- tuples together, forming a single posting list tuple for each\n+ tuples together, forming a single <firstterm>posting list</firstterm> tuple for each\n group. The column key value(s) only appear once in this\n representation. This is followed by a sorted array of\n <acronym>TID</acronym>s that point to rows in the table. 
This\ndiff --git a/doc/src/sgml/config.sgml b/doc/src/sgml/config.sgml\nindex a14df06292..87e0183a89 100644\n--- a/doc/src/sgml/config.sgml\n+++ b/doc/src/sgml/config.sgml\n@@ -3667,7 +3667,7 @@ restore_command = 'copy \"C:\\\\server\\\\archivedir\\\\%f\" \"%p\"' # Windows\n servers or streaming base backup clients (i.e., the maximum number of\n simultaneously running WAL sender processes). The default is\n <literal>10</literal>. The value <literal>0</literal> means\n- replication is disabled. Abrupt streaming client disconnection might\n+ replication is disabled. Abrupt disconnection of a streaming client might\n leave an orphaned connection slot behind until a timeout is reached,\n so this parameter should be set slightly higher than the maximum\n number of expected clients so disconnected clients can immediately\n@@ -3790,9 +3790,9 @@ restore_command = 'copy \"C:\\\\server\\\\archivedir\\\\%f\" \"%p\"' # Windows\n slots</link> are allowed to retain in the <filename>pg_wal</filename>\n directory at checkpoint time.\n If <varname>max_slot_wal_keep_size</varname> is -1 (the default),\n- replication slots retain unlimited amount of WAL files. If\n- restart_lsn of a replication slot gets behind more than that megabytes\n- from the current LSN, the standby using the slot may no longer be able\n+ replication slots retain unlimited amount of WAL files. Otherwise, if\n+ restart_lsn of a replication slot falls behind the current LSN by more\n+ than the specified size, the standby using the slot may no longer be able\n to continue replication due to removal of required WAL files. 
You\n can see the WAL availability of replication slots\n in <link linkend=\"view-pg-replication-slots\">pg_replication_slots</link>.\ndiff --git a/doc/src/sgml/ddl.sgml b/doc/src/sgml/ddl.sgml\nindex 2a478c4f73..9ec0d4c783 100644\n--- a/doc/src/sgml/ddl.sgml\n+++ b/doc/src/sgml/ddl.sgml\n@@ -3959,7 +3959,7 @@ ALTER TABLE measurement ATTACH PARTITION measurement_y2008m02\n Before running the <command>ATTACH PARTITION</command> command, it is\n recommended to create a <literal>CHECK</literal> constraint on the table to\n be attached matching the desired partition constraint. That way,\n- the system will be able to skip the scan to validate the implicit\n+ the system will be able to skip the scan which is otherwise needed to validate the implicit\n partition constraint. Without the <literal>CHECK</literal> constraint,\n the table will be scanned to validate the partition constraint while\n holding an <literal>ACCESS EXCLUSIVE</literal> lock on that partition\ndiff --git a/doc/src/sgml/libpq.sgml b/doc/src/sgml/libpq.sgml\nindex 75d2224a61..b9c789eb6f 100644\n--- a/doc/src/sgml/libpq.sgml\n+++ b/doc/src/sgml/libpq.sgml\n@@ -925,11 +925,11 @@ postgresql:///mydb?host=localhost&amp;port=5433\n </para>\n \n <para>\n- Connection <acronym>URI</acronym> needs to be encoded with \n+ A connection <acronym>URI</acronym> needs to be encoded with \n <ulink url=\"https://tools.ietf.org/html/rfc3986#section-2.1\">Percent-encoding</ulink> \n- if it includes symbols with special meaning in any of its parts. \n- Here is an example where equal sign (<literal>=</literal>) is replaced\n- with <literal>%3D</literal> and whitespace character with\n+ if it includes symbols with special meanings in any of its parts. 
\n+ Here is an example where an equal sign (<literal>=</literal>) is replaced\n+ with <literal>%3D</literal> and a whitespace character with\n <literal>%20</literal>:\n <programlisting>\n postgresql://user@localhost:5433/mydb?options=-c%20synchronous_commit%3Doff\n@@ -1223,7 +1223,7 @@ postgresql://%2Fvar%2Flib%2Fpostgresql/dbname\n <term><literal>connect_timeout</literal></term>\n <listitem>\n <para>\n- Maximum wait for connection, in seconds (write as a decimal integer,\n+ Maximum time to wait while connecting, in seconds (write as a decimal integer,\n e.g. <literal>10</literal>). Zero, negative, or not specified means\n wait indefinitely. The minimum allowed timeout is 2 seconds, therefore\n a value of <literal>1</literal> is interpreted as <literal>2</literal>.\ndiff --git a/doc/src/sgml/logical-replication.sgml b/doc/src/sgml/logical-replication.sgml\nindex eba331a72b..faf6d56bed 100644\n--- a/doc/src/sgml/logical-replication.sgml\n+++ b/doc/src/sgml/logical-replication.sgml\n@@ -404,7 +404,7 @@\n <para>\n Replication is only supported by tables, including partitioned tables.\n Attempts to replicate other types of relations such as views, materialized\n- views, or foreign tables, will result in an error.\n+ views, or foreign tables will result in an error.\n </para>\n </listitem>\n \ndiff --git a/doc/src/sgml/logicaldecoding.sgml b/doc/src/sgml/logicaldecoding.sgml\nindex bad3bfe620..e08ae9e2af 100644\n--- a/doc/src/sgml/logicaldecoding.sgml\n+++ b/doc/src/sgml/logicaldecoding.sgml\n@@ -223,7 +223,7 @@ $ pg_recvlogical -d postgres --slot=test --drop-slot\n A logical slot will emit each change just once in normal operation.\n The current position of each slot is persisted only at checkpoint, so in\n the case of a crash the slot may return to an earlier LSN, which will\n- then cause recent changes to be resent when the server restarts.\n+ then cause recent changes to be re-sent when the server restarts.\n Logical decoding clients are responsible for avoiding ill 
effects from\n handling the same message more than once. Clients may wish to record\n the last LSN they saw when decoding and skip over any repeated data or\ndiff --git a/doc/src/sgml/monitoring.sgml b/doc/src/sgml/monitoring.sgml\nindex 6562cc400b..3cabc24721 100644\n--- a/doc/src/sgml/monitoring.sgml\n+++ b/doc/src/sgml/monitoring.sgml\n@@ -1484,11 +1484,11 @@ postgres 27093 0.0 0.0 30096 2752 ? Ss 11:34 0:00 postgres: ser\n </row>\n <row>\n <entry><literal>RecoveryConflictSnapshot</literal></entry>\n- <entry>Waiting for recovery conflict resolution on a vacuum cleanup.</entry>\n+ <entry>Waiting for recovery conflict resolution during vacuum cleanup.</entry>\n </row>\n <row>\n <entry><literal>RecoveryConflictTablespace</literal></entry>\n- <entry>Waiting for recovery conflict resolution on dropping tablespace.</entry>\n+ <entry>Waiting for recovery conflict resolution while dropping tablespace.</entry>\n </row>\n <row>\n <entry><literal>RecoveryPause</literal></entry>\n@@ -1526,9 +1526,9 @@ postgres 27093 0.0 0.0 30096 2752 ? 
Ss 11:34 0:00 postgres: ser\n <row>\n <entry><literal>RecoveryRetrieveRetryInterval</literal></entry>\n <entry>\n- Waiting when WAL data is not available from any kind of sources\n- (<filename>pg_wal</filename>, archive or stream) before trying\n- again to retrieve WAL data, at recovery.\n+ Waiting in recovery when WAL data is not available from any source\n+ (<filename>pg_wal</filename>, archive or stream) before re-trying\n+ to retrieve WAL data.\n </entry>\n </row>\n <row>\n@@ -4577,8 +4577,8 @@ SELECT pg_stat_get_backend_pid(s.backendid) AS pid,\n <entry><literal>waiting for checkpoint to finish</literal></entry>\n <entry>\n The WAL sender process is currently performing\n- <function>pg_start_backup</function> to set up for\n- taking a base backup, and waiting for backup start\n+ <function>pg_start_backup</function> to prepare to\n+ take a base backup, and waiting for the start-of-backup\n checkpoint to finish.\n </entry>\n </row>\ndiff --git a/doc/src/sgml/protocol.sgml b/doc/src/sgml/protocol.sgml\nindex 20d1fe0ad8..06919bd87c 100644\n--- a/doc/src/sgml/protocol.sgml\n+++ b/doc/src/sgml/protocol.sgml\n@@ -2586,7 +2586,7 @@ The commands accepted in replication mode are:\n and sent along with the backup. The manifest is a list of every\n file present in the backup with the exception of any WAL files that\n may be included. 
It also stores the size, last modification time, and\n- an optional checksum for each file.\n+ optionally a checksum for each file.\n A value of <literal>force-encode</literal> forces all filenames\n to be hex-encoded; otherwise, this type of encoding is performed only\n for files whose names are non-UTF8 octet sequences.\n@@ -2602,7 +2602,7 @@ The commands accepted in replication mode are:\n <term><literal>MANIFEST_CHECKSUMS</literal> <replaceable>checksum_algorithm</replaceable></term>\n <listitem>\n <para>\n- Specifies the algorithm that should be applied to each file included\n+ Specifies the checksum algorithm that should be applied to each file included\n in the backup manifest. Currently, the available\n algorithms are <literal>NONE</literal>, <literal>CRC32C</literal>,\n <literal>SHA224</literal>, <literal>SHA256</literal>,\ndiff --git a/doc/src/sgml/ref/alter_table.sgml b/doc/src/sgml/ref/alter_table.sgml\nindex 6563bd5ab2..39e9f9a7c7 100644\n--- a/doc/src/sgml/ref/alter_table.sgml\n+++ b/doc/src/sgml/ref/alter_table.sgml\n@@ -671,7 +671,7 @@ WITH ( MODULUS <replaceable class=\"parameter\">numeric_literal</replaceable>, REM\n When applied to a partitioned table, nothing is moved, but any\n partitions created afterwards with\n <command>CREATE TABLE PARTITION OF</command> will use that tablespace,\n- unless the <literal>TABLESPACE</literal> clause is used to override it.\n+ unless overridden by its <literal>TABLESPACE</literal> clause.\n </para>\n \n <para>\n@@ -891,7 +891,7 @@ WITH ( MODULUS <replaceable class=\"parameter\">numeric_literal</replaceable>, REM\n from the parent table will be created in the partition, if they don't\n already exist.\n If any of the <literal>CHECK</literal> constraints of the table being\n- attached is marked <literal>NO INHERIT</literal>, the command will fail;\n+ attached are marked <literal>NO INHERIT</literal>, the command will fail;\n such constraints must be recreated without the\n <literal>NO INHERIT</literal> clause.\n 
</para>\ndiff --git a/doc/src/sgml/ref/create_subscription.sgml b/doc/src/sgml/ref/create_subscription.sgml\nindex 1a90c244fb..c18618d5c8 100644\n--- a/doc/src/sgml/ref/create_subscription.sgml\n+++ b/doc/src/sgml/ref/create_subscription.sgml\n@@ -159,7 +159,7 @@ CREATE SUBSCRIPTION <replaceable class=\"parameter\">subscription_name</replaceabl\n <para>\n It is safe to use <literal>off</literal> for logical replication:\n If the subscriber loses transactions because of missing\n- synchronization, the data will be resent from the publisher.\n+ synchronization, the data will be re-sent from the publisher.\n </para>\n \n <para>\ndiff --git a/doc/src/sgml/ref/pg_basebackup.sgml b/doc/src/sgml/ref/pg_basebackup.sgml\nindex 01ce44ee22..b742470f13 100644\n--- a/doc/src/sgml/ref/pg_basebackup.sgml\n+++ b/doc/src/sgml/ref/pg_basebackup.sgml\n@@ -604,7 +604,7 @@ PostgreSQL documentation\n not contain any checksums. Otherwise, it will contain a checksum\n of each file in the backup using the specified algorithm. In addition,\n the manifest will always contain a <literal>SHA256</literal>\n- checksum of its own contents. The <literal>SHA</literal> algorithms\n+ checksum of its own content. The <literal>SHA</literal> algorithms\n are significantly more CPU-intensive than <literal>CRC32C</literal>,\n so selecting one of them may increase the time required to complete\n the backup.\n@@ -614,7 +614,7 @@ PostgreSQL documentation\n of each file for users who wish to verify that the backup has not been\n tampered with, while the CRC32C algorithm provides a checksum which is\n much faster to calculate and good at catching errors due to accidental\n- changes but is not resistant to targeted modifications. Note that, to\n+ changes but is not resistant to malicious modifications. 
Note that, to\n be useful against an adversary who has access to the backup, the backup\n manifest would need to be stored securely elsewhere or otherwise\n verified not to have been modified since the backup was taken.\n@@ -808,7 +808,7 @@ PostgreSQL documentation\n </para>\n \n <para>\n- Tablespaces will in plain format by default be backed up to the same path\n+ In plain format, tablespaces will by default be backed up to the same path\n they have on the server, unless the\n option <literal>--tablespace-mapping</literal> is used. Without\n this option, running a plain format base backup on the same host as the\ndiff --git a/doc/src/sgml/ref/pg_dump.sgml b/doc/src/sgml/ref/pg_dump.sgml\nindex a9bc397165..d58cd05f46 100644\n--- a/doc/src/sgml/ref/pg_dump.sgml\n+++ b/doc/src/sgml/ref/pg_dump.sgml\n@@ -323,7 +323,7 @@ PostgreSQL documentation\n <listitem>\n <para>\n Run the dump in parallel by dumping <replaceable class=\"parameter\">njobs</replaceable>\n- tables simultaneously. This option reduces the time of the dump but it also\n+ tables simultaneously. This option reduces the duration of the dump but it also\n increases the load on the database server. You can only use this option with the\n directory output format because this is the only output format where multiple processes\n can write their data at the same time.\ndiff --git a/doc/src/sgml/ref/pg_rewind.sgml b/doc/src/sgml/ref/pg_rewind.sgml\nindex 07c49e4719..acdefe58b8 100644\n--- a/doc/src/sgml/ref/pg_rewind.sgml\n+++ b/doc/src/sgml/ref/pg_rewind.sgml\n@@ -215,7 +215,7 @@ PostgreSQL documentation\n <command>pg_rewind</command> to return without waiting, which is\n faster, but means that a subsequent operating system crash can leave\n the synchronized data directory corrupt. 
Generally, this option is\n- useful for testing but should not be used when creating a production\n+ useful for testing but should not be used on a production\n installation.\n </para>\n </listitem>\n@@ -309,7 +309,7 @@ GRANT EXECUTE ON function pg_catalog.pg_read_binary_file(text, bigint, bigint, b\n <para>\n When executing <application>pg_rewind</application> using an online\n cluster as source which has been recently promoted, it is necessary\n- to execute a <command>CHECKPOINT</command> after promotion so as its\n+ to execute a <command>CHECKPOINT</command> after promotion such that its\n control file reflects up-to-date timeline information, which is used by\n <application>pg_rewind</application> to check if the target cluster\n can be rewound using the designated source cluster.\ndiff --git a/doc/src/sgml/ref/pg_verifybackup.sgml b/doc/src/sgml/ref/pg_verifybackup.sgml\nindex 4f9759414f..9618275364 100644\n--- a/doc/src/sgml/ref/pg_verifybackup.sgml\n+++ b/doc/src/sgml/ref/pg_verifybackup.sgml\n@@ -46,7 +46,7 @@ PostgreSQL documentation\n every check which will be performed by a running server when attempting\n to make use of the backup. Even if you use this tool, you should still\n perform test restores and verify that the resulting databases work as\n- expected and that they appear to contain the correct data. However,\n+ expected and that they contain the correct data. However,\n <application>pg_verifybackup</application> can detect many problems\n that commonly occur due to storage problems or user error.\n </para>\n@@ -84,7 +84,7 @@ PostgreSQL documentation\n for any files for which the computed checksum does not match the\n checksum stored in the manifest. This step is not performed for any files\n which produced errors in the previous step, since they are already known\n- to have problems. Also, files which were ignored in the previous step are\n+ to have problems. 
Files which were ignored in the previous step are\n also ignored in this step.\n </para>\n \n@@ -123,7 +123,7 @@ PostgreSQL documentation\n <title>Options</title>\n \n <para>\n- The following command-line options control the behavior.\n+ The following command-line options control the behavior of this program.\n \n <variablelist>\n <varlistentry>\ndiff --git a/doc/src/sgml/ref/reindex.sgml b/doc/src/sgml/ref/reindex.sgml\nindex c54a7c420d..bde5eca164 100644\n--- a/doc/src/sgml/ref/reindex.sgml\n+++ b/doc/src/sgml/ref/reindex.sgml\n@@ -249,7 +249,7 @@ REINDEX [ ( <replaceable class=\"parameter\">option</replaceable> [, ...] ) ] { IN\n <para>\n Reindexing a single index or table requires being the owner of that\n index or table. Reindexing a schema or database requires being the\n- owner of that schema or database. Note that is therefore sometimes\n+ owner of that schema or database. Note specifically that it's\n possible for non-superusers to rebuild indexes of tables owned by\n other users. However, as a special exception, when\n <command>REINDEX DATABASE</command>, <command>REINDEX SCHEMA</command>\ndiff --git a/doc/src/sgml/ref/reindexdb.sgml b/doc/src/sgml/ref/reindexdb.sgml\nindex f6c3d9538b..4388d1329c 100644\n--- a/doc/src/sgml/ref/reindexdb.sgml\n+++ b/doc/src/sgml/ref/reindexdb.sgml\n@@ -173,8 +173,8 @@ PostgreSQL documentation\n <para>\n Execute the reindex commands in parallel by running\n <replaceable class=\"parameter\">njobs</replaceable>\n- commands simultaneously. This option reduces the time of the\n- processing but it also increases the load on the database server.\n+ commands simultaneously. 
This option reduces the processing time\n+ but it also increases the load on the database server.\n </para>\n <para>\n <application>reindexdb</application> will open\ndiff --git a/doc/src/sgml/ref/vacuumdb.sgml b/doc/src/sgml/ref/vacuumdb.sgml\nindex fd1dc140ab..93a3eed813 100644\n--- a/doc/src/sgml/ref/vacuumdb.sgml\n+++ b/doc/src/sgml/ref/vacuumdb.sgml\n@@ -155,8 +155,8 @@ PostgreSQL documentation\n <para>\n Execute the vacuum or analyze commands in parallel by running\n <replaceable class=\"parameter\">njobs</replaceable>\n- commands simultaneously. This option reduces the time of the\n- processing but it also increases the load on the database server.\n+ commands simultaneously. This option reduces the processing\n+ duration but it also increases the load on the database server.\n </para>\n <para>\n <application>vacuumdb</application> will open\ndiff --git a/doc/src/sgml/runtime.sgml b/doc/src/sgml/runtime.sgml\nindex a34d31d297..3f90c15f3e 100644\n--- a/doc/src/sgml/runtime.sgml\n+++ b/doc/src/sgml/runtime.sgml\n@@ -2643,7 +2643,7 @@ openssl x509 -req -in server.csr -text -days 365 \\\n <para>\n The <productname>PostgreSQL</productname> server will listen for both\n normal and <acronym>GSSAPI</acronym>-encrypted connections on the same TCP\n- port, and will negotiate with any connecting client on whether to\n+ port, and will negotiate with any connecting client whether to\n use <acronym>GSSAPI</acronym> for encryption (and for authentication). By\n default, this decision is up to the client (which means it can be\n downgraded by an attacker); see <xref linkend=\"auth-pg-hba-conf\"/> about\ndiff --git a/doc/src/sgml/sources.sgml b/doc/src/sgml/sources.sgml\nindex 283c3e0357..5a8dbcb4d3 100644\n--- a/doc/src/sgml/sources.sgml\n+++ b/doc/src/sgml/sources.sgml\n@@ -373,7 +373,7 @@ ereport(ERROR,\n specify suppression of the <literal>CONTEXT:</literal> portion of a message in\n the postmaster log. 
This should only be used for verbose debugging\n messages where the repeated inclusion of context would bloat the log\n- volume too much.\n+ too much.\n </para>\n </listitem>\n </itemizedlist>\n@@ -466,8 +466,8 @@ Hint: the addendum\n enough for error messages. Detail and hint messages can be relegated to a\n verbose mode, or perhaps a pop-up error-details window. Also, details and\n hints would normally be suppressed from the server log to save\n- space. Reference to implementation details is best avoided since users\n- aren't expected to know the details.\n+ space. References to implementation details are best avoided since users\n+ aren't expected to know them.\n </para>\n \n </simplesect>\n@@ -518,7 +518,7 @@ Hint: the addendum\n <title>Use of Quotes</title>\n \n <para>\n- Use quotes always to delimit file names, user-supplied identifiers, and\n+ Always use quotes to delimit file names, user-supplied identifiers, and\n other variables that might contain words. Do not use them to mark up\n variables that will not contain words (for example, operator names).\n </para>\ndiff --git a/src/backend/access/gin/README b/src/backend/access/gin/README\nindex 125a82219b..41d4e1e8a0 100644\n--- a/src/backend/access/gin/README\n+++ b/src/backend/access/gin/README\n@@ -413,7 +413,7 @@ leftmost leaf of the tree.\n Deletion algorithm keeps exclusive locks on left siblings of pages comprising\n currently investigated path. Thus, if current page is to be removed, all\n required pages to remove both downlink and rightlink are already locked. 
That\n-evades potential right to left page locking order, which could deadlock with\n+avoids potential right to left page locking order, which could deadlock with\n concurrent stepping right.\n \n A search concurrent to page deletion might already have read a pointer to the\ndiff --git a/src/backend/commands/explain.c b/src/backend/commands/explain.c\nindex 7ae6131676..7cdf4b84e2 100644\n--- a/src/backend/commands/explain.c\n+++ b/src/backend/commands/explain.c\n@@ -2869,7 +2869,7 @@ show_incremental_sort_group_info(IncrementalSortGroupInfo *groupInfo,\n }\n \n /*\n- * If it's EXPLAIN ANALYZE, show tuplesort stats for a incremental sort node\n+ * If it's EXPLAIN ANALYZE, show tuplesort stats for an incremental sort node\n */\n static void\n show_incremental_sort_info(IncrementalSortState *incrsortstate,\n@@ -2917,7 +2917,7 @@ show_incremental_sort_info(IncrementalSortState *incrsortstate,\n \t\t\t&incrsortstate->shared_info->sinfo[n];\n \n \t\t\t/*\n-\t\t\t * If a worker hasn't process any sort groups at all, then exclude\n+\t\t\t * If a worker hasn't processed any sort groups at all, then exclude\n \t\t\t * it from output since it either didn't launch or didn't\n \t\t\t * contribute anything meaningful.\n \t\t\t */\ndiff --git a/src/backend/executor/nodeIncrementalSort.c b/src/backend/executor/nodeIncrementalSort.c\nindex 39ba11cdf7..da99453c91 100644\n--- a/src/backend/executor/nodeIncrementalSort.c\n+++ b/src/backend/executor/nodeIncrementalSort.c\n@@ -987,7 +987,7 @@ ExecInitIncrementalSort(IncrementalSort *node, EState *estate, int eflags)\n \n \t/*\n \t * Incremental sort can't be used with either EXEC_FLAG_REWIND,\n-\t * EXEC_FLAG_BACKWARD or EXEC_FLAG_MARK, because we only one of many sort\n+\t * EXEC_FLAG_BACKWARD or EXEC_FLAG_MARK, because we only ???? 
one of many sort\n \t * batches in the current sort state.\n \t */\n \tAssert((eflags & (EXEC_FLAG_BACKWARD |\n@@ -1153,8 +1153,10 @@ ExecReScanIncrementalSort(IncrementalSortState *node)\n \t/*\n \t * If we've set up either of the sort states yet, we need to reset them.\n \t * We could end them and null out the pointers, but there's no reason to\n-\t * repay the setup cost, and because guard setting up pivot comparator\n-\t * state similarly, doing so might actually cause a leak.\n+\t * repay the setup cost, and because ExecIncrementalSort guards\n+\t * presorted column functions by checking to see if the full sort state\n+\t * has been initialized yet, setting the sort states to null here might\n+\t * actually cause a leak.\n \t */\n \tif (node->fullsort_state != NULL)\n \t{\ndiff --git a/src/backend/replication/logical/relation.c b/src/backend/replication/logical/relation.c\nindex fec39354c0..351b0950c0 100644\n--- a/src/backend/replication/logical/relation.c\n+++ b/src/backend/replication/logical/relation.c\n@@ -631,7 +631,7 @@ logicalrep_partition_open(LogicalRepRelMapEntry *root,\n \t/*\n \t * If the partition's attributes don't match the root relation's, we'll\n \t * need to make a new attrmap which maps partition attribute numbers to\n-\t * remoterel's, instead the original which maps root relation's attribute\n+\t * remoterel's, instead of the original which maps root relation's attribute\n \t * numbers to remoterel's.\n \t *\n \t * Note that 'map' which comes from the tuple routing data structure\ndiff --git a/src/backend/storage/lmgr/proc.c b/src/backend/storage/lmgr/proc.c\nindex 5aa19d3f78..22fe566f48 100644\n--- a/src/backend/storage/lmgr/proc.c\n+++ b/src/backend/storage/lmgr/proc.c\n@@ -1373,7 +1373,7 @@ ProcSleep(LOCALLOCK *locallock, LockMethod lockMethodTable)\n \t\t\telse\n \t\t\t\tLWLockRelease(ProcArrayLock);\n \n-\t\t\t/* prevent signal from being resent more than once */\n+\t\t\t/* prevent signal from being re-sent more than once */\n 
\t\t\tallow_autovacuum_cancel = false;\n \t\t}\n \ndiff --git a/src/backend/utils/adt/jsonpath_exec.c b/src/backend/utils/adt/jsonpath_exec.c\nindex bc063061cf..0faa66551f 100644\n--- a/src/backend/utils/adt/jsonpath_exec.c\n+++ b/src/backend/utils/adt/jsonpath_exec.c\n@@ -35,7 +35,7 @@\n * executeItemOptUnwrapTarget() function have 'unwrap' argument, which indicates\n * whether unwrapping of array is needed. When unwrap == true, each of array\n * members is passed to executeItemOptUnwrapTarget() again but with unwrap == false\n- * in order to evade subsequent array unwrapping.\n+ * in order to avoid subsequent array unwrapping.\n *\n * All boolean expressions (predicates) are evaluated by executeBoolItem()\n * function, which returns tri-state JsonPathBool. When error is occurred\ndiff --git a/src/backend/utils/sort/tuplesort.c b/src/backend/utils/sort/tuplesort.c\nindex de38c6c7e0..c25a22f79b 100644\n--- a/src/backend/utils/sort/tuplesort.c\n+++ b/src/backend/utils/sort/tuplesort.c\n@@ -1428,11 +1428,11 @@ tuplesort_updatemax(Tuplesortstate *state)\n \t}\n \n \t/*\n-\t * Sort evicts data to the disk when it didn't manage to fit those data to\n-\t * the main memory. This is why we assume space used on the disk to be\n+\t * Sort evicts data to the disk when it didn't fit data in\n+\t * main memory. 
This is why we assume space used on the disk to be\n \t * more important for tracking resource usage than space used in memory.\n-\t * Note that amount of space occupied by some tuple set on the disk might\n-\t * be less than amount of space occupied by the same tuple set in the\n+\t * Note that the amount of space occupied by some tupleset on the disk might\n+\t * be less than amount of space occupied by the same tupleset in\n \t * memory due to more compact representation.\n \t */\n \tif ((isSpaceDisk && !state->isMaxSpaceDisk) ||\ndiff --git a/src/include/lib/simplehash.h b/src/include/lib/simplehash.h\nindex f7af921f5a..88f4c9a53f 100644\n--- a/src/include/lib/simplehash.h\n+++ b/src/include/lib/simplehash.h\n@@ -560,7 +560,7 @@ restart:\n \t\tuint32\t\tcuroptimal;\n \t\tSH_ELEMENT_TYPE *entry = &data[curelem];\n \n-\t\t/* any empty bucket can directly be used */\n+\t\t/* any empty bucket can be used directly */\n \t\tif (entry->status == SH_STATUS_EMPTY)\n \t\t{\n \t\t\ttb->members++;\n\n-- \nJustin\n\n\n", "msg_date": "Mon, 27 Apr 2020 10:02:08 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": true, "msg_subject": "Re: doc review for v13" }, { "msg_contents": "Some new bits,\nAnd some old ones.\n\n-- \nJustin", "msg_date": "Thu, 11 Jun 2020 21:37:09 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": true, "msg_subject": "Re: doc review for v13" }, { "msg_contents": "On Thu, Jun 11, 2020 at 09:37:09PM -0500, Justin Pryzby wrote:\n> Some new bits,\n> And some old ones.\n\nI was looking at this patch set, and 0005 has attracted my attention\nhere:\n\n> --- a/src/backend/utils/cache/relcache.c\n> +++ b/src/backend/utils/cache/relcache.c\n> @@ -4240,7 +4240,6 @@ AttrDefaultFetch(Relation relation)\n> \tHeapTuple\thtup;\n> \tDatum\t\tval;\n> \tbool\t\tisnull;\n> -\tint\t\t\tfound;\n> \tint\t\t\ti;\n\nSince 16828d5, this variable is indeed unused. 
Now, the same commit\nhas removed the following code:\n- if (found != ndef)\n- elog(WARNING, \"%d attrdef record(s) missing for rel %s\",\n- ndef - found, RelationGetRelationName(relation));\n\nShould we actually keep this variable and have this sanity check in\nplace? It seems to me that it would be good to have that, so as we\ncan make sure that the number of default attributes cached matches\nwith the number of defaults actually found when scanning each\nattribute. Adding in CC Andrew as the author of 16828d5 for more\ninput.\n--\nMichael", "msg_date": "Fri, 12 Jun 2020 16:48:32 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: doc review for v13" }, { "msg_contents": "On Thu, Jun 11, 2020 at 09:37:09PM -0500, Justin Pryzby wrote:\n> Some new bits,\n> And some old ones.\n\nI have merged 0003 and 0004 together and applied them. 0005 seems to\nhave a separate issue as mentioned upthread, and I have not really\nlooked at 0001 and 0002. Thanks.\n--\nMichael", "msg_date": "Fri, 12 Jun 2020 21:13:02 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: doc review for v13" }, { "msg_contents": "On Fri, Jun 12, 2020 at 09:13:02PM +0900, Michael Paquier wrote:\n> I have merged 0003 and 0004 together and applied them. 0005 seems to\n> have a separate issue as mentioned upthread, and I have not really\n> looked at 0001 and 0002. 
Thanks.\n\nAnd committed 0001 and 0002 after some tiny adjustments as of\n7a3543c.\n--\nMichael", "msg_date": "Mon, 15 Jun 2020 21:20:39 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: doc review for v13" }, { "msg_contents": "I stand by these changes which I proposed handful of times since April, but not\nyet included by Michael's previous commits.\n\n-- \nJustin", "msg_date": "Tue, 18 Aug 2020 12:17:03 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": true, "msg_subject": "Re: doc review for v13" }, { "msg_contents": "On Tue, Aug 18, 2020 at 12:17:03PM -0500, Justin Pryzby wrote:\n> The WAL sender process is currently performing\n> - <function>pg_start_backup</function> to set up for\n> - taking a base backup, and waiting for backup start\n> + <function>pg_start_backup</function> to prepare to\n> + take a base backup, and waiting for the start-of-backup\n> checkpoint to finish.\n\nWouldn't it be more simple to use \"to prepare for a base backup\" here?\n\n> Run the dump in parallel by dumping <replaceable class=\"parameter\">njobs</replaceable>\n> - tables simultaneously. This option reduces the time of the dump but it also\n> + tables simultaneously. This option may reduces the time needed to perform the dump but it also\n> increases the load on the database server. You can only use this option with the\n> [...]\n> Execute the reindex commands in parallel by running\n> <replaceable class=\"parameter\">njobs</replaceable>\n> - commands simultaneously. This option reduces the time of the\n> - processing but it also increases the load on the database server.\n> + commands simultaneously. This option may reduce the processing time\n> + but it also increases the load on the database server.\n> [...]\n> Execute the vacuum or analyze commands in parallel by running\n> <replaceable class=\"parameter\">njobs</replaceable>\n> - commands simultaneously. 
This option reduces the time of the\n> - processing but it also increases the load on the database server.\n> + commands simultaneously. This option may reduce the processing time\n> + but it also increases the load on the database server.\n> </para>\n> <para>\n> <application>vacuumdb</application> will open\n\nThe original versions are fine IMO.\n\n> Replication is only supported by tables, including partitioned tables.\n> Attempts to replicate other types of relations such as views, materialized\n> - views, or foreign tables, will result in an error.\n> + views, or foreign tables will result in an error.\n> </para>\n\nI think that the original is fine.\n\n> \t * If the partition's attributes don't match the root relation's, we'll\n> \t * need to make a new attrmap which maps partition attribute numbers to\n> -\t * remoterel's, instead the original which maps root relation's attribute\n> +\t * remoterel's, instead of the original which maps root relation's attribute\n> \t * numbers to remoterel's.\n\nIndeed.\n\n> from the parent table will be created in the partition, if they don't\n> already exist.\n> If any of the <literal>CHECK</literal> constraints of the table being\n> - attached is marked <literal>NO INHERIT</literal>, the command will fail;\n> + attached are marked <literal>NO INHERIT</literal>, the command will fail;\n> such constraints must be recreated without the\n> <literal>NO INHERIT</literal> clause.\n\nSingular or plural depends on the context when it comes to any with a\ncountable word, and plural looks more natural to me here. So, right.\n\n> enough for error messages. Detail and hint messages can be relegated to a\n> verbose mode, or perhaps a pop-up error-details window. Also, details and\n> hints would normally be suppressed from the server log to save\n> - space. 
References to implementation details are best avoided since users\n> + aren't expected to know them.\n\nOriginal is fine IMO (see 6335c80).\n\n> not contain any checksums. Otherwise, it will contain a checksum\n> of each file in the backup using the specified algorithm. In addition,\n> the manifest will always contain a <literal>SHA256</literal>\n> - checksum of its own contents. The <literal>SHA</literal> algorithms\n> + checksum of its own content. The <literal>SHA</literal> algorithms\n> are significantly more CPU-intensive than <literal>CRC32C</literal>,\n> so selecting one of them may increase the time required to complete\n> the backup.\n> [...]\n> every check which will be performed by a running server when attempting\n> to make use of the backup. Even if you use this tool, you should still\n> perform test restores and verify that the resulting databases work as\n> - expected and that they appear to contain the correct data. However,\n> + expected and that they contain the correct data. However,\n> <application>pg_verifybackup</application> can detect many problems\n> that commonly occur due to storage problems or user error.\n> [...]\n> @@ -82,7 +82,7 @@ PostgreSQL documentation\n> for any files for which the computed checksum does not match the\n> checksum stored in the manifest. This step is not performed for any files\n> which produced errors in the previous step, since they are already known\n> - to have problems. Also, files which were ignored in the previous step are\n> + to have problems. 
Files which were ignored in the previous step are\n> also ignored in this step.\n\nNo sure this needs to change\n\n> </para>\n> \n> @@ -121,7 +121,7 @@ PostgreSQL documentation\n> <title>Options</title>\n> \n> <para>\n> - The following command-line options control the behavior.\n> + The following command-line options control the behavior of this program.\n\n\"pg_verifybackup accepts the following command-line arguments:\" is\nmore consistent with the style of all the other tools. This needs to\nbe fixed.\n\n> The <productname>PostgreSQL</productname> server will listen for both\n> normal and <acronym>GSSAPI</acronym>-encrypted connections on the same TCP\n> - port, and will negotiate with any connecting client on whether to\n> + port, and will negotiate with any connecting client whether to\n> use <acronym>GSSAPI</acronym> for encryption (and for authentication). By\n\nRight.\n\n> specify suppression of the <literal>CONTEXT:</literal> portion of a message in\n> the postmaster log. This should only be used for verbose debugging\n> messages where the repeated inclusion of context would bloat the log\n> - volume too much.\n> + too much.\n\nOkay here.\n\n> A logical slot will emit each change just once in normal operation.\n> The current position of each slot is persisted only at checkpoint, so in\n> the case of a crash the slot may return to an earlier LSN, which will\n> - then cause recent changes to be resent when the server restarts.\n> + then cause recent changes to be re-sent when the server restarts.\n> Logical decoding clients are responsible for avoiding ill effects from\n> handling the same message more than once. 
Clients may wish to record\n> the last LSN they saw when decoding and skip over any repeated data or\n> [...]\n> It is safe to use <literal>off</literal> for logical replication:\n> If the subscriber loses transactions because of missing\n> - synchronization, the data will be resent from the publisher.\n> + synchronization, the data will be re-sent from the publisher.\n> </para>\n> [...]\n> -\t\t\t/* prevent signal from being resent more than once */\n> +\t\t\t/* prevent signal from being re-sent more than once */\n> \t\t\tallow_autovacuum_cancel = false;\n\n\"resent\" is wrong, but \"re-sent\" does not sound like the best choice\nto me. Shouldn't we just say \"sent again\" for all three places?\n--\nMichael", "msg_date": "Mon, 31 Aug 2020 16:28:20 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: doc review for v13" }, { "msg_contents": "On Mon, Aug 31, 2020 at 04:28:20PM +0900, Michael Paquier wrote:\n> On Tue, Aug 18, 2020 at 12:17:03PM -0500, Justin Pryzby wrote:\n> > The WAL sender process is currently performing\n> > - <function>pg_start_backup</function> to set up for\n> > - taking a base backup, and waiting for backup start\n> > + <function>pg_start_backup</function> to prepare to\n> > + take a base backup, and waiting for the start-of-backup\n> > checkpoint to finish.\n> \n> Wouldn't it be more simple to use \"to prepare for a base backup\" here?\n\nI think it's useful to say \"prepare to take\" since it's more specific.. 
It's\nnot \"preparing to receive\" or \"preparing to scan\" or \"preparing to parse\".\n\n> > Replication is only supported by tables, including partitioned tables.\n> > Attempts to replicate other types of relations such as views, materialized\n> > - views, or foreign tables, will result in an error.\n> > + views, or foreign tables will result in an error.\n> > </para>\n> \n> I think that the original is fine.\n\nI think this is indisputably wrong, but I realized that it's actually better\nwith an *additional* comma:\n\n| Attempts to replicate other types of relations COMMA such as views, materialized\n| views, or foreign tables, will result in an error.\n\n> > </para>\n> > \n> > @@ -121,7 +121,7 @@ PostgreSQL documentation\n> > <title>Options</title>\n> > \n> > <para>\n> > - The following command-line options control the behavior.\n> > + The following command-line options control the behavior of this program.\n> \n> \"pg_verifybackup accepts the following command-line arguments:\" is\n> more consistent with the style of all the other tools. This needs to\n> be fixed.\n\n> > - to have problems. Also, files which were ignored in the previous step are\n> > + to have problems. Files which were ignored in the previous step are\n> > also ignored in this step.\n> \n> No sure this needs to change\n\nTwo \"also\"s seems poor, and the first one detracts from the 2nd.\n\n> > the case of a crash the slot may return to an earlier LSN, which will\n> > - then cause recent changes to be resent when the server restarts.\n> > + then cause recent changes to be re-sent when the server restarts.\n> > Logical decoding clients are responsible for avoiding ill effects from\n> > handling the same message more than once. 
Clients may wish to record\n> > the last LSN they saw when decoding and skip over any repeated data or\n> > [...]\n> > It is safe to use <literal>off</literal> for logical replication:\n> > If the subscriber loses transactions because of missing\n> > - synchronization, the data will be resent from the publisher.\n> > + synchronization, the data will be re-sent from the publisher.\n> > </para>\n> > [...]\n> > -\t\t\t/* prevent signal from being resent more than once */\n> > +\t\t\t/* prevent signal from being re-sent more than once */\n> > \t\t\tallow_autovacuum_cancel = false;\n> \n> \"resent\" is wrong, but \"re-sent\" does not sound like the best choice\n> to me. Shouldn't we just say \"sent again\" for all three places?\n\nI don't think so.\n\n-- \nJustin", "msg_date": "Mon, 31 Aug 2020 08:42:08 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": true, "msg_subject": "Re: doc review for v13" }, { "msg_contents": "On Mon, Aug 31, 2020 at 08:42:08AM -0500, Justin Pryzby wrote:\n> On Mon, Aug 31, 2020 at 04:28:20PM +0900, Michael Paquier wrote:\n>> Wouldn't it be more simple to use \"to prepare for a base backup\" here?\n> \n> I think it's useful to say \"prepare to take\" since it's more specific.. It's\n> not \"preparing to receive\" or \"preparing to scan\" or \"preparing to parse\".\n\nNot sure I see the point in complicating the sentence here more than\nnecessary.\n\n>>> - to have problems. Also, files which were ignored in the previous step are\n>>> + to have problems. Files which were ignored in the previous step are\n>>> also ignored in this step.\n>> \n>> No sure this needs to change\n> \n> Two \"also\"s seems poor, and the first one detracts from the 2nd.\n\nAh, OK. Indeed.\n\n>> \"resent\" is wrong, but \"re-sent\" does not sound like the best choice\n>> to me. 
Shouldn't we just say \"sent again\" for all three places?\n> \n> I don't think so.\n\nWell, using \"sent again\" has the advantage to about any ambiguity in\nthe way it gets read. So I'd still prefer that when using the past\ntense of \"send\" in those sentences. Any opinions from others?\n--\nMichael", "msg_date": "Tue, 1 Sep 2020 12:12:54 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: doc review for v13" }, { "msg_contents": "I've added a few more.\n\n-- \nJustin", "msg_date": "Wed, 9 Sep 2020 09:37:42 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": true, "msg_subject": "Re: doc review for v13" }, { "msg_contents": "On Wed, Sep 09, 2020 at 09:37:42AM -0500, Justin Pryzby wrote:\n> I've added a few more.\n\nI have done an extra round of review on this patch series, and applied\nwhat looked obvious to me (basically the points already discussed\nupthread). Some parts applied down to 9.6 for the docs.\n--\nMichael", "msg_date": "Thu, 10 Sep 2020 15:58:31 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: doc review for v13" }, { "msg_contents": "On Thu, Sep 10, 2020 at 03:58:31PM +0900, Michael Paquier wrote:\n> On Wed, Sep 09, 2020 at 09:37:42AM -0500, Justin Pryzby wrote:\n> > I've added a few more.\n> \n> I have done an extra round of review on this patch series, and applied\n> what looked obvious to me (basically the points already discussed\n> upthread). Some parts applied down to 9.6 for the docs.\n\nThanks. Here's the remainder, with some new ones.\n\n-- \nJustin", "msg_date": "Sat, 19 Sep 2020 12:58:04 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": true, "msg_subject": "Re: doc review for v13" }, { "msg_contents": "Justin Pryzby <pryzby@telsasoft.com> writes:\n> Thanks. Here's the remainder, with some new ones.\n\nLGTM. 
I tweaked one or two places a bit more, and pushed it.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 21 Sep 2020 12:46:30 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: doc review for v13" } ]
[ { "msg_contents": "Hackers,\n\nRecently, as part of testing something else, I had need of a tool to create\nsurgically precise corruption within heap pages. I wanted to make the\ncorruption from within TAP tests, so I wrote the tool as a set of perl modules.\n\nThe modules allow you to \"tie\" a perl array to a heap file, in essence thinking\nof the file as an array of heap pages. Each page within the file manifests as\na tied perl hash, where each of the page header fields are an element in the\nhash, and the tuples in the page are an array of tied hashes, with each field\nin the tuple header as a field in that tied hash.\n\nThis is all done in pure perl. There is no eXtended Subroutine component of\nthis.\n\nThe body of each tuple (stuff beyond the tuple header) is thought of merely as\nbinary data. I haven't done any work to decode it into perl datastructures\nequivalent to integer, text, timestamp, etc., nor have I needed that\nfunctionality as yet. That seems doable as an extension of this work, at least\nif the caller passes tuple descriptor type information into the `tie @file`\ncommand.\n\nStuff like the following example works in the implementation already completed.\nNote in particular that the file is bound in O_RDWR mode. That means it all\ngets written back to the underlying file and truly updates (corrupts) your\ndata. It all also works in O_RDONLY mode, in which case the updates are made\nto a copy of the data in perl's memory, but none of it goes back to disk. Of course,\nnothing forces you to update anything. 
You could use this to read the fields from\nthe file/page/tuple without making modifications.\n\n\t#!/usr/bin/perl\n\n\tuse HeapTuple;\n\tuse HeapPage;\n\tuse HeapFile;\n\tuse Fcntl;\n\n\tmy @file;\n\ttie @file, 'HeapFile', path => 'base/12925/3599', pagesize => 8192, mode => O_RDWR;\n\tfor my $page (@file)\n\t{\n\t\t$page->{pd_lsn_xrecoff}++;\n\t\tprint $page->{pd_checksum}, \"\\n\";\n\t\tfor (@{$page->{'tuples'}})\n\t\t{\n\t\t\t$_->{HEAP_COMBOCID} = 1 if ($_->{HEAP_HASNULL});\n\t\t\t$_->{t_xmin} = $_->{t_xmax} if $_->{HEAP_XMAX_COMMITTED}; \n\t\t}\n\t}\n\tuntie @file;\n\nIn my TAP test usage of these modules, I tend to fall into the pattern of:\n\n\tmy $node = get_new_node('master');\n\t$node->init;\n\tmy $pgdata = $node->data_dir;\n\t$node->safe_psql('postgres', 'create table public.test (bar text)');\n\tmy $path = join('/', $pgdata, $node->safe_psql(\n\t\t'postgres', \"SELECT pg_relation_filepath('public.test')\"));\n\t$node->stop;\n\n\tmy @file;\n\ttie @file, 'HeapFile', path => $path, pagesize => 8192, mode => O_RDWR;\n\t# do some corruption\n\n\t$node->start;\n\t# do some queries against the corrupt table, see what happens\n\nFor kicks, I just ran this one-liner and got many screenfuls of data. I'll just include\nthe tail end:\n\n\tperl -e 'use HeapFile; tie @file, \"HeapFile\", path => \"pgdata/base/12925/1255\"; print(scalar(%$_)) for(@file);'\n\nBODY AS HEX ===> PRINTABLE ASCII\nff 0f 06 00 00 00 00 00 ===> . . . . . . . .\n47 20 00 00 46 06 46 43 ===> q 2 . . p l p g\n49 47 06 05 3f 3d 06 06 ===> s q l _ c a l l\n05 44 3d 06 40 06 41 48 ===> _ h a n d l e r\n00 00 00 00 00 00 00 00 ===> . . . . . . . .\n00 00 00 00 00 00 00 00 ===> . . . . . . . .\n00 00 00 00 00 00 00 00 ===> . . . . . . . .\n00 00 00 00 00 00 00 00 ===> . . . . . . . .\n00 00 00 00 00 00 00 00 ===> . . . . . . . .\n00 00 00 00 00 00 00 00 ===> . . . . . . . .\n00 00 00 00 00 00 00 00 ===> . . . . . . . .\n00 00 50 03 00 00 00 00 ===> . . . ? . . . .\n00 00 00 00 00 00 00 00 ===> . . . 
. . . . .\n42 00 00 00 00 4c 4b 00 ===> f . . . . v u .\n00 00 00 00 00 08 00 00 ===> . . . . . . . .\n3c 00 00 00 01 00 00 00 ===> ` . . . . . . .\n00 00 00 00 01 00 00 00 ===> . . . . . . . .\n00 00 00 00 00 00 00 00 ===> . . . . . . . .\n02 46 06 46 43 49 47 06 ===> + p l p g s q l\n05 3f 3d 06 06 05 44 3d ===> _ c a l l _ h a\n06 40 06 41 48 15 18 06 ===> n d l e r ! $ l\n45 3e 40 45 48 02 46 06 ===> i b d i r / p l\n46 43 49 47 06 ===> p g s q l\nb6 01 00 00 t_xmin: 438\n00 00 00 00 t_xmax: 0\n02 00 00 00 t_field3: 2\n00 00 bi_hi: 0\n50 00 bi_lo: 80\n06 00 ip_posid: 6\n1d 00 t_infomask2: 29\n Natts: 29\n HEAP_KEYS_UPDATED: 0\n HEAP_HOT_UPDATED: 0\n HEAP_ONLY_TUPLE: 0\n03 0b t_infomask: 2819\n HEAP_HASNULL: 1\n HEAP_HASVARWIDTH: 1\n HEAP_HASEXTERNAL: 0\n HEAP_HASOID_OLD: 0\n HEAP_XMAX_KEYSHR_LOCK: 0\n HEAP_COMBOCID: 0\n HEAP_XMAX_EXCL_LOCK: 0\n HEAP_XMAX_LOCK_ONLY: 0\n HEAP_XMIN_COMMITTED: 1\n HEAP_XMIN_INVALID: 1\n HEAP_XMAX_COMMITTED: 0\n HEAP_XMAX_INVALID: 1\n HEAP_XMAX_IS_MULTI: 0\n HEAP_UPDATED: 0\n HEAP_MOVED_OFF: 0\n HEAP_MOVED_IN: 0\n20 t_hoff: 32\nffff0f06 NULL_BITFIELD: 11111111111111111111000001100\n OID_OLD: \n\nBODY AS HEX ===> PRINTABLE ASCII\nff 0f 06 00 00 00 00 00 ===> . . . . . . . .\n48 20 00 00 46 06 46 43 ===> r 2 . . p l p g\n49 47 06 05 45 06 06 45 ===> s q l _ i n l i\n06 41 05 44 3d 06 40 06 ===> n e _ h a n d l\n41 48 00 00 00 00 00 00 ===> e r . . . . . .\n00 00 00 00 00 00 00 00 ===> . . . . . . . .\n00 00 00 00 00 00 00 00 ===> . . . . . . . .\n00 00 00 00 00 00 00 00 ===> . . . . . . . .\n00 00 00 00 00 00 00 00 ===> . . . . . . . .\n00 00 00 00 00 00 00 00 ===> . . . . . . . .\n00 00 00 00 00 00 00 00 ===> . . . . . . . .\n00 00 50 03 00 00 00 00 ===> . . . ? . . . .\n00 00 00 00 00 00 00 00 ===> . . . . . . . .\n42 00 00 01 00 4c 4b 00 ===> f . . . . v u .\n01 00 00 00 00 08 00 00 ===> . . . . . . . .\n46 00 00 00 01 00 00 00 ===> p . . . . . . .\n00 00 00 00 01 00 00 00 ===> . . . . . . . .\n01 00 00 00 00 00 00 00 ===> . . . . 
. . . .\n00 08 00 00 02 46 06 46 ===> . . . . / p l p\n43 49 47 06 05 45 06 06 ===> g s q l _ i n l\n45 06 41 05 44 3d 06 40 ===> i n e _ h a n d\n06 41 48 15 18 06 45 3e ===> l e r ! $ l i b\n40 45 48 02 46 06 46 43 ===> d i r / p l p g\n49 47 06 ===> s q l\nb6 01 00 00 t_xmin: 438\n00 00 00 00 t_xmax: 0\n03 00 00 00 t_field3: 3\n00 00 bi_hi: 0\n50 00 bi_lo: 80\n07 00 ip_posid: 7\n1d 00 t_infomask2: 29\n Natts: 29\n HEAP_KEYS_UPDATED: 0\n HEAP_HOT_UPDATED: 0\n HEAP_ONLY_TUPLE: 0\n03 0b t_infomask: 2819\n HEAP_HASNULL: 1\n HEAP_HASVARWIDTH: 1\n HEAP_HASEXTERNAL: 0\n HEAP_HASOID_OLD: 0\n HEAP_XMAX_KEYSHR_LOCK: 0\n HEAP_COMBOCID: 0\n HEAP_XMAX_EXCL_LOCK: 0\n HEAP_XMAX_LOCK_ONLY: 0\n HEAP_XMIN_COMMITTED: 1\n HEAP_XMIN_INVALID: 1\n HEAP_XMAX_COMMITTED: 0\n HEAP_XMAX_INVALID: 1\n HEAP_XMAX_IS_MULTI: 0\n HEAP_UPDATED: 0\n HEAP_MOVED_OFF: 0\n HEAP_MOVED_IN: 0\n20 t_hoff: 32\nffff0f06 NULL_BITFIELD: 11111111111111111111000001100\n OID_OLD: \n\nBODY AS HEX ===> PRINTABLE ASCII\nff 0f 06 00 00 00 00 00 ===> . . . . . . . .\n49 20 00 00 46 06 46 43 ===> s 2 . . p l p g\n49 47 06 05 4c 3d 06 45 ===> s q l _ v a l i\n40 3d 4a 06 48 00 00 00 ===> d a t o r . . .\n00 00 00 00 00 00 00 00 ===> . . . . . . . .\n00 00 00 00 00 00 00 00 ===> . . . . . . . .\n00 00 00 00 00 00 00 00 ===> . . . . . . . .\n00 00 00 00 00 00 00 00 ===> . . . . . . . .\n00 00 00 00 00 00 00 00 ===> . . . . . . . .\n00 00 00 00 00 00 00 00 ===> . . . . . . . .\n00 00 00 00 00 00 00 00 ===> . . . . . . . .\n00 00 50 03 00 00 00 00 ===> . . . ? . . . .\n00 00 00 00 00 00 00 00 ===> . . . . . . . .\n42 00 00 01 00 4c 4b 00 ===> f . . . . v u .\n01 00 00 00 00 08 00 00 ===> . . . . . . . .\n46 00 00 00 01 00 00 00 ===> p . . . . . . .\n00 00 00 00 01 00 00 00 ===> . . . . . . . .\n01 00 00 00 00 00 00 00 ===> . . . . . . . .\n01 00 00 00 19 46 06 46 ===> . . . . % p l p\n43 49 47 06 05 4c 3d 06 ===> g s q l _ v a l\n45 40 3d 4a 06 48 15 18 ===> i d a t o r ! 
$\n06 45 3e 40 45 48 02 46 ===> l i b d i r / p\n06 46 43 49 47 06 ===> l p g s q l\n\n\n\nIs there any interest in this stuff, and if so, where should it live? I'm happy to\nreorganize this a bit if there is general interest in such a submission.\n\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n", "msg_date": "Wed, 8 Apr 2020 15:51:11 -0700", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Perl modules for testing/viewing/corrupting/repairing your heap files" }, { "msg_contents": "Not having received any feedback on this, I've dusted the modules off for submission as-is.\n\n\n\n\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company", "msg_date": "Tue, 14 Apr 2020 17:55:11 -0700", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: Perl modules for testing/viewing/corrupting/repairing your heap\n files" }, { "msg_contents": "On Wed, Apr 8, 2020 at 3:51 PM Mark Dilger <mark.dilger@enterprisedb.com> wrote:\n> Recently, as part of testing something else, I had need of a tool to create\n> surgically precise corruption within heap pages. I wanted to make the\n> corruption from within TAP tests, so I wrote the tool as a set of perl modules.\n\nThere is also pg_hexedit:\n\nhttps://github.com/petergeoghegan/pg_hexedit\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Tue, 14 Apr 2020 18:17:37 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: Perl modules for testing/viewing/corrupting/repairing your heap\n files" }, { "msg_contents": "\n\n> On Apr 14, 2020, at 6:17 PM, Peter Geoghegan <pg@bowt.ie> wrote:\n> \n> On Wed, Apr 8, 2020 at 3:51 PM Mark Dilger <mark.dilger@enterprisedb.com> wrote:\n>> Recently, as part of testing something else, I had need of a tool to create\n>> surgically precise corruption within heap pages. 
I wanted to make the\n>> corruption from within TAP tests, so I wrote the tool as a set of perl modules.\n> \n> There is also pg_hexedit:\n> \n> https://github.com/petergeoghegan/pg_hexedit\n\nI steered away from software released under the GPL, such as pg_hexedit, owing to difficulties in getting anything I develop accepted. (That's a hard enough problem without licensing issues.). I'm not taking a political stand for or against the GPL here, just a pragmatic position that I wouldn't be able to integrate pg_hexedit into a postgres submission.\n\n(Thanks for writing pg_hexedit, BTW. I'm not criticizing it.)\n\nThe purpose of these perl modules is not the viewing of files, but the intentional and targeted corruption of files from within TAP tests. There are limited examples of tests in the postgres source tree that intentionally corrupt files, and as I read them, they employ a blunt force trauma approach:\n\nIn src/bin/pg_basebackup/t/010_pg_basebackup.pl:\n\n> # induce corruption\n> system_or_bail 'pg_ctl', '-D', $pgdata, 'stop';\n> open $file, '+<', \"$pgdata/$file_corrupt1\";\n> seek($file, $pageheader_size, 0);\n> syswrite($file, \"\\0\\0\\0\\0\\0\\0\\0\\0\\0\");\n> close $file;\n> system_or_bail 'pg_ctl', '-D', $pgdata, 'start';\n\nIn src/bin/pg_checksums/t/002_actions.pl:\n> # Time to create some corruption\n> open my $file, '+<', \"$pgdata/$file_corrupted\";\n> seek($file, $pageheader_size, 0);\n> syswrite($file, \"\\0\\0\\0\\0\\0\\0\\0\\0\\0\");\n> close $file;\n\nThese blunt force trauma tests are fine, as far as they go. But I wanted to be able to do things like\n\n # Corrupt the tuple to look like it has lots of attributes, some of\n # them null. 
This falsely creates the impression that the t_bits\n # array is longer than just one byte, but t_hoff still says otherwise.\n $tup->{HEAP_HASNULL} = 1;\n $tup->{HEAP_NATTS_MASK} = 0x3FF;\n $tup->{t_bits} = 0xAA;\n\nor\n\n\t# Same as above, but this time t_hoff plays along\n $tup->{HEAP_HASNULL} = 1;\n $tup->{HEAP_NATTS_MASK} = 0x3FF;\n $tup->{t_bits} = 0xAA;\n $tup->{t_hoff} = 32;\n\nThat's hard to do from a TAP test without modules like this, as you have to calculate by hand the offsets where you're going to write the corruption, and the bit pattern you are going to write to that location. Even if you do all that, nobody else is likely going to be able to read and maintain your tests.\n\nI'd like an easy way from within TAP tests to selectively corrupt files, to test whether various parts of the system fail gracefully in the presence of corruption. What happens when a child partition is corrupted? Does that impact queries that only access other partitions? What kinds of corruption cause pg_upgrade to fail? ...to expand the scope of the corruption? What happens to logical replication when there is corruption on the primary? ...on the standby? What kinds of corruption cause a query to return data from neighboring tuples that the querying role has not permission to view? What happens when a NAS is only intermittently corrupt?\n\nThe modules I've submitted thus far are incomplete for this purpose. They don't yet handle toast tables, btree, hash, gist, gin, fsm, or vm, and I might be forgetting a few other things in the list. 
Before I go and implement all of that, I thought perhaps others would express preferences about how this should all work, even stuff like, \"Don't bother implementing that in perl, as I'm reimplementing the entire testing structure in COBOL\", or similarly unexpected feedback.\n\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n", "msg_date": "Wed, 15 Apr 2020 07:22:48 -0700", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: Perl modules for testing/viewing/corrupting/repairing your heap\n files" }, { "msg_contents": "On Wed, Apr 15, 2020 at 7:22 AM Mark Dilger\n<mark.dilger@enterprisedb.com> wrote:\n> I steered away from software released under the GPL, such as pg_hexedit, owing to difficulties in getting anything I develop accepted. (That's a hard enough problem without licensing issues.). I'm not taking a political stand for or against the GPL here, just a pragmatic position that I wouldn't be able to integrate pg_hexedit into a postgres submission.\n>\n> (Thanks for writing pg_hexedit, BTW. I'm not criticizing it.)\n\nThe only reason that pg_hexedit is under the GPL is that it's derived\nfrom pg_filedump, which was and is also GPL 2. Note that pg_filedump\nis hosted on community resources, and is something that index access\nmethods know about and try not to break (grep for pg_filedump in the\nPostgres source code). pg_hexedit supports all index access methods\nwith the core distribution, including even the unpopular ones, like\nSP-GiST.\n\n> That's hard to do from a TAP test without modules like this, as you have to calculate by hand the offsets where you're going to write the corruption, and the bit pattern you are going to write to that location. Even if you do all that, nobody else is likely going to be able to read and maintain your tests.\n\nLogical corruption is almost inherently a once-off thing. 
I think that\na tool like pg_hexedit is useful for seeing how the system behaves\nwith certain novel kinds of logical corruption, which it will tolerate\nto varying degrees and with diverse symptoms. Pretty much for\ninvestigating on a once-off basis.\n\nI have occasionally wished for an SQL-like interface to bufpage.c\nroutines like PageIndexTupleDelete(), PageRepairFragmentation(), etc.\nThat would probably be a great deal more maintainable than what you\npropose to do. It's not really equivalent, of course, but it would\ngive tests a way to dynamically manipulate/damage pages at the\n\"logical level\". That seems like the thing that's hard to simulate\nright now.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Mon, 20 Apr 2020 11:48:50 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: Perl modules for testing/viewing/corrupting/repairing your heap\n files" } ]
[ { "msg_contents": "Hi,\n\nI'm playing with partitioned tables and found a minor thing with the\nerror reporting of bounds checking when create partitions.\n\nIn function check_new_partition_bound(), there are three places where\nwe call ereport() with a parser_errposition(pstate, spec->location)\nargument. However, that pstate is a dummy ParseState made from NULL,\nso the error message never reports the position of the error in the\nsource query line.\n\n\nI have attached a patch to pass in a ParseState to\ncheck_new_partition_bound() to enable the reporting of the error\nposition. Below is what the error message looks like before and after\napplying the patch.\n\n-- Create parent table\ncreate table foo (a int, b date) partition by range (b);\n\n-- Before:\ncreate table foo_part_1 partition of foo for values from (date\n'2007-01-01') to (date '2006-01-01');\nERROR: empty range bound specified for partition \"foo_part_1\"\nDETAIL: Specified lower bound ('2007-01-01') is greater than or equal to\nupper bound ('2006-01-01').\n\n-- After:\ncreate table foo_part_1 partition of foo for values from (date\n'2007-01-01') to (date '2006-01-01');\nERROR: empty range bound specified for partition \"foo_part_1\"\nLINE 1: ...eate table foo_part_1 partition of foo for values from (date...\n ^\nDETAIL: Specified lower bound ('2007-01-01') is greater than or equal to\nupper bound ('2006-01-01').\n\nAnother option is to not pass the parser_errposition() argument at all\nto ereport() in this function, since the query is relatively short and\nthe error message is already descriptive enough.\n\nAlex and Ashwin", "msg_date": "Wed, 8 Apr 2020 17:15:57 -0700", "msg_from": "Alexandra Wang <lewang@pivotal.io>", "msg_from_op": true, "msg_subject": "Report error position in partition bound check" }, { "msg_contents": "Forgot to run make installcheck. 
Here's the new version of the patch that\nupdated the test answer file.", "msg_date": "Wed, 8 Apr 2020 18:05:58 -0700", "msg_from": "Alexandra Wang <lewang@pivotal.io>", "msg_from_op": true, "msg_subject": "Re: Report error position in partition bound check" }, { "msg_contents": "On Wed, Apr 08, 2020 at 05:15:57PM -0700, Alexandra Wang wrote:\n> I have attached a patch to pass in a ParseState to\n> check_new_partition_bound() to enable the reporting of the error\n> position. Below is what the error message looks like before and after\n> applying the patch.\n> \n> Another option is to not pass the parser_errposition() argument at all\n> to ereport() in this function, since the query is relatively short and\n> the error message is already descriptive enough.\n\nIt depends on the complexity of the relation definition, so adding a\nposition looks like a good idea to me. Anyway, even if this looks\nlike an oversight to me, we are post feature freeze for 13 and that's\nan improvement, so this looks like material for PG14 to me. Are there\nmore opinions on the matter?\n\nPlease note that you forgot to update the regression test output.\n--\nMichael", "msg_date": "Thu, 9 Apr 2020 10:11:03 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Report error position in partition bound check" }, { "msg_contents": "On Wed, Apr 8, 2020 at 6:11 PM Michael Paquier <michael@paquier.xyz> wrote:\n> Please note that you forgot to update the regression test output.\n\nYep thanks! Please see my previous email for the updated patch.", "msg_date": "Wed, 8 Apr 2020 20:17:55 -0700", "msg_from": "Alexandra Wang <lewang@pivotal.io>", "msg_from_op": true,
"msg_subject": "Re: Report error position in partition bound check" }, { "msg_contents": "On Wed, Apr 08, 2020 at 08:17:55PM -0700, Alexandra Wang wrote:\n> On Wed, Apr 8, 2020 at 6:11 PM Michael Paquier <michael@paquier.xyz> wrote:\n> > Please note that you forgot to update the regression test output.\n> \n> Yep thanks! Please see my previous email for the updated patch.\n\nThanks, I saw the update. It looks like my email was a couple of\nminutes too late :)\n\nCould you add this patch to the next commit fest [1]?\n\n[1]: https://commitfest.postgresql.org/28/\n--\nMichael", "msg_date": "Thu, 9 Apr 2020 12:40:35 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Report error position in partition bound check" }, { "msg_contents": "Hi Alexandra,\nAs Michael said it will be considered for the next commitfest. But\nfrom a quick glance, a suggestion.\nInstead of passing NULL parsestate from ATExecAttachPartition, pass\nmake_parsestate(NULL). parse_errorposition() takes care of NULL parse\nstate input, but it might be safer this way. 
Here's the new version of the patch that updated the test answer file.\n>\n\n\n-- \nBest Wishes,\nAshutosh Bapat\n\n\n", "msg_date": "Thu, 9 Apr 2020 19:21:15 +0530", "msg_from": "Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Report error position in partition bound check" }, { "msg_contents": "While I'm quite on board with providing useful error cursors,\nthe example cases in this patch don't seem all that useful:\n\n -- trying to create range partition with empty range\n CREATE TABLE fail_part PARTITION OF range_parted2 FOR VALUES FROM (1) TO (0);\n ERROR: empty range bound specified for partition \"fail_part\"\n+LINE 1: ...E fail_part PARTITION OF range_parted2 FOR VALUES FROM (1) T...\n+ ^\n DETAIL: Specified lower bound (1) is greater than or equal to upper bound (0).\n\nAs best I can tell from these examples, the cursor will always\npoint at the FROM keyword, making it pretty unhelpful. It seems\nlike in addition to getting the query string passed down, you\nneed to do some work on the code that's actually reporting the\nerror position. I'd expect at a minimum that the pointer allows\nidentifying which column of a multi-column partition key is\ngiving trouble. The phrasing of this particular message, for\nexample, suggests that it ought to point at the \"1\" expression.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 09 Apr 2020 10:03:55 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Report error position in partition bound check" }, { "msg_contents": "On Thu, Apr 9, 2020 at 10:51 PM Ashutosh Bapat\n<ashutosh.bapat.oss@gmail.com> wrote:\n>\n> Hi Alexandra,\n> As Michael said it will be considered for the next commitfest. But\n> from a quick glance, a suggestion.\n> Instead of passing NULL parsestate from ATExecAttachPartition, pass\n> make_parsestate(NULL). parse_errorposition() takes care of NULL parse\n> state input, but it might be safer this way. 
Better if we could cook\n> up a parse state with the query text available in\n> AlterTableUtilityContext available in ATExecCmd().\n\n+1. Maybe pass the *context* down to ATExecAttachPartition() from\nATExecCmd() rather than a ParseState.\n\n-- \n\nAmit Langote\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 9 Apr 2020 23:08:36 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Report error position in partition bound check" }, { "msg_contents": "On Thu, Apr 9, 2020 at 11:04 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> While I'm quite on board with providing useful error cursors,\n> the example cases in this patch don't seem all that useful:\n>\n> -- trying to create range partition with empty range\n> CREATE TABLE fail_part PARTITION OF range_parted2 FOR VALUES FROM (1) TO (0);\n> ERROR: empty range bound specified for partition \"fail_part\"\n> +LINE 1: ...E fail_part PARTITION OF range_parted2 FOR VALUES FROM (1) T...\n> + ^\n> DETAIL: Specified lower bound (1) is greater than or equal to upper bound (0).\n>\n> As best I can tell from these examples, the cursor will always\n> point at the FROM keyword, making it pretty unhelpful. It seems\n> like in addition to getting the query string passed down, you\n> need to do some work on the code that's actually reporting the\n> error position. I'd expect at a minimum that the pointer allows\n> identifying which column of a multi-column partition key is\n> giving trouble. The phrasing of this particular message, for\n> example, suggests that it ought to point at the \"1\" expression.\n\nI agree with that. Tried that in the attached 0002, although trying\nto get the cursor to point to exactly the offending column seems a bit\ntough for partition overlap errors. 
The patch does allow to single\nout which one of the lower and upper bounds is causing the overlap\nwith an existing partition, which is better than now and seems helpful\nenough.\n\nAlso, updated Alexandra's patch to incorporate Ashutosh's comment such\nthat we get the same output with ATTACH PARTITION commands too.\n\n-- \n\nAmit Langote\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Fri, 10 Apr 2020 18:01:37 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Report error position in partition bound check" }, { "msg_contents": "On Fri, 10 Apr 2020 at 14:31, Amit Langote <amitlangote09@gmail.com> wrote:\n\n> On Thu, Apr 9, 2020 at 11:04 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > While I'm quite on board with providing useful error cursors,\n> > the example cases in this patch don't seem all that useful:\n> >\n> > -- trying to create range partition with empty range\n> > CREATE TABLE fail_part PARTITION OF range_parted2 FOR VALUES FROM (1)\n> TO (0);\n> > ERROR: empty range bound specified for partition \"fail_part\"\n> > +LINE 1: ...E fail_part PARTITION OF range_parted2 FOR VALUES FROM (1)\n> T...\n> > + ^\n> > DETAIL: Specified lower bound (1) is greater than or equal to upper\n> bound (0).\n> >\n> > As best I can tell from these examples, the cursor will always\n> > point at the FROM keyword, making it pretty unhelpful. It seems\n> > like in addition to getting the query string passed down, you\n> > need to do some work on the code that's actually reporting the\n> > error position. I'd expect at a minimum that the pointer allows\n> > identifying which column of a multi-column partition key is\n> > giving trouble. The phrasing of this particular message, for\n> > example, suggests that it ought to point at the \"1\" expression.\n>\n> I agree with that. 
Tried that in the attached 0002, although trying\n> to get the cursor to point to exactly the offending column seems a bit\n> tough for partition overlap errors.  The patch does allow to single\n> out which one of the lower and upper bounds is causing the overlap\n> with an existing partition, which is better than now and seems helpful\n> enough.\n>\n> Also, updated Alexandra's patch to incorporate Ashutosh's comment such\n> that we get the same output with ATTACH PARTITION commands too.\n>\n\nI looked at this briefly. It looks good, but I will review more in the next\nCF. Do we have entry there yet? To nit-pick: for a multi-key value the ^\npoints to the first column and the reader may think that that's the\nproblematci column. Should it instead point to ( ?\n\n-- \nBest Wishes,\nAshutosh", "msg_date": "Fri, 10 Apr 2020 21:07:25 +0530", "msg_from": "Ashutosh Bapat <ashutosh.bapat@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Report error position in partition bound check" }, { "msg_contents": "On Fri, 10 Apr 2020 at 14:31, Amit Langote <amitlangote09@gmail.com> wrote:> I agree with that. 
Tried that in the attached 0002, although trying\n> to get the cursor to point to exactly the offending column seems a bit\n> tough for partition overlap errors.  The patch does allow to single\n> out which one of the lower and upper bounds is causing the overlap\n> with an existing partition, which is better than now and seems helpful\n> enough.\n>\n> Also, updated Alexandra's patch to incorporate Ashutosh's comment such\n> that we get the same output with ATTACH PARTITION commands too.\n\nThank you Amit for updating the patches, the cursor looks much helpful now.\nI\ncreated the commitfest entry https://commitfest.postgresql.org/28/2533/", "msg_date": "Fri, 10 Apr 2020 09:59:49 -0700", "msg_from": "Alexandra Wang <lewang@pivotal.io>", "msg_from_op": true, "msg_subject": "Re: Report error position in partition bound check" }, { "msg_contents": "On Fri, Apr 10, 2020 at 8:37 AM Ashutosh Bapat <\nashutosh.bapat@2ndquadrant.com> wrote:\n> for a multi-key value the ^\n> points to the first column and the reader may think that that's the\n> problematci column. Should it instead point to ( ?\n\nI attached a v2 of Amit's 0002 patch to also report the exact column\nfor the partition overlap errors.", "msg_date": "Fri, 10 Apr 2020 14:50:17 -0700", "msg_from": "Alexandra Wang <lewang@pivotal.io>", "msg_from_op": true, "msg_subject": "Re: Report error position in partition bound check" }, { "msg_contents": "> On 10 Apr 2020, at 23:50, Alexandra Wang <lewang@pivotal.io> wrote:\n\n> On Fri, Apr 10, 2020 at 8:37 AM Ashutosh Bapat <ashutosh.bapat@2ndquadrant.com <mailto:ashutosh.bapat@2ndquadrant.com>> wrote:\n> > for a multi-key value the ^\n> > points to the first column and the reader may think that that's the\n> > problematci column. Should it instead point to ( ?\n> \n> I attached a v2 of Amit's 0002 patch to also report the exact column\n> for the partition overlap errors.\n\nThis patch fails to apply to HEAD due to conflicts in the create_table expected\noutput. 
Can you please submit a rebased version? I'm marking the CF entry\nWaiting on Author in the meantime.\n\ncheers ./daniel\n\n", "msg_date": "Thu, 2 Jul 2020 15:39:09 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: Report error position in partition bound check" }, { "msg_contents": "> On 2 July 2020, at 06:39, Daniel Gustafsson <daniel@yesql.se> wrote:\n> > On 10 Apr 2020, at 23:50, Alexandra Wang <lewang@pivotal.io> wrote:\n>\n> > On Fri, Apr 10, 2020 at 8:37 AM Ashutosh Bapat <ashutosh.bapat@2ndquadrant.com <mailto:ashutosh.bapat@2ndquadrant.com>> wrote:\n> > > for a multi-key value the ^\n> > > points to the first column and the reader may think that that's the\n> > > problematci column. Should it instead point to ( ?\n> >\n> > I attached a v2 of Amit's 0002 patch to also report the exact column\n> > for the partition overlap errors.\n>\n> This patch fails to apply to HEAD due to conflicts in the create_table expected\n> output. Can you please submit a rebased version? I'm marking the CF entry\n> Waiting on Author in the meantime.\n\nThank you Daniel. Here's the rebased patch. I also squashed the two\npatches into one so it's easier to review.\n\n--\nAlex", "msg_date": "Mon, 13 Jul 2020 17:53:46 +0000", "msg_from": "Alexandra Wang <walexandra@vmware.com>", "msg_from_op": false, "msg_subject": "Re: Report error position in partition bound check" } ]
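The thread above centers on where the error cursor should point inside the offending statement. As a rough standalone illustration of the mechanics (the function name and rendering below are mine, not code from the patch or from PostgreSQL itself), here is how a 1-based cursor offset into a statement maps to the psql-style LINE/caret display quoted in the messages:

```python
# Illustration only: render a psql-style error cursor for a 1-based
# character offset into a single-line statement.

def render_error_cursor(query: str, cursorpos: int) -> str:
    """Return a two-line 'LINE 1: ...' display with a caret under the
    character at 1-based offset ``cursorpos`` in ``query``."""
    prefix = "LINE 1: "
    line = prefix + query
    # The caret column accounts for the "LINE 1: " prefix.
    caret = " " * (len(prefix) + cursorpos - 1) + "^"
    return line + "\n" + caret

stmt = ("CREATE TABLE fail_part PARTITION OF range_parted2 "
        "FOR VALUES FROM (1) TO (0)")
# Point at the lower-bound expression "1" rather than the FROM keyword,
# which is the behavior the patch discussion is aiming for.
pos = stmt.index("(1)") + 2  # 1-based offset of the "1"
print(render_error_cursor(stmt, pos))
```

In the discussion, the goal is for the server to report an offset that lands on the offending bound expression (the `1` here) rather than on the `FROM` keyword; this sketch only shows how such an offset turns into the caret display.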
[ { "msg_contents": "Hello PostgreSQL 14 hackers,\n\nFreeBSD is much faster than Linux (and probably Windows) at parallel\nhash joins on the same hardware, primarily because its DSM segments\nrun in huge pages out of the box. There are various ways to convince\nrecent-ish Linux to put our DSMs on huge pages (see below for one),\nbut that's not the only problem I wanted to attack.\n\nThe attached highly experimental patch adds a new GUC\ndynamic_shared_memory_main_size. If you set it > 0, it creates a\nfixed sized shared memory region that supplies memory for \"fast\" DSM\nsegments. When there isn't enough free space, dsm_create() falls back\nto the traditional approach using eg shm_open(). This allows parallel\nqueries to run faster, because:\n\n* no more expensive system calls\n* no repeated VM allocation (whether explicit posix_fallocate() or first-touch)\n* can be in huge pages on Linux and Windows\n\nThis makes lots of parallel queries measurably faster, especially\nparallel hash join. To demonstrate with a very simple query:\n\n create table t (i int);\n insert into t select generate_series(1, 10000000);\n select pg_prewarm('t');\n set work_mem = '1GB';\n\n select count(*) from t t1 join t t2 using (i);\n\nHere are some quick and dirty results from a Linux 4.19 laptop. The\nfirst column is the new GUC, and the last column is from \"perf stat -e\ndTLB-load-misses -p <backend>\".\n\n size huge_pages time speedup TLB misses\n 0 off 2.595s 9,131,285\n 0 on 2.571s 1% 8,951,595\n 1GB off 2.398s 8% 9,082,803\n 1GB on 1.898s 37% 169,867\n\nYou can get some of this speedup unpatched on a Linux 4.7+ system by\nputting \"huge=always\" in your /etc/fstab options for /dev/shm (= where\nshm_open() lives). 
For comparison, that gives me:\n\n size huge_pages time speedup TLB misses\n 0 on 2.007s 29% 221,910\n\nThat still leave the other 8% on the table, and in fact that 8%\nexplodes to a much larger number as you throw more cores at the\nproblem (here I was using defaults, 2 workers). Unfortunately, dsa.c\n-- used by parallel hash join to allocate vast amounts of memory\nreally fast during the build phase -- holds a lock while creating new\nsegments, as you'll soon discover if you test very large hash join\nbuilds on a 72-way box. I considered allowing concurrent segment\ncreation, but as far as I could see that would lead to terrible\nfragmentation problems, especially in combination with our geometric\ngrowth policy for segment sizes due to limited slots. I think this is\nthe main factor that causes parallel hash join scalability to fall off\naround 8 cores. The present patch should really help with that (more\ndigging in that area needed; there are other ways to improve that\nsituation, possibly including something smarter than a stream of of\ndsa_allocate(32kB) calls).\n\nA competing idea would have freelists of lingering DSM segments for\nreuse. Among other problems, you'd probably have fragmentation\nproblems due to their differing sizes. Perhaps there could be a\nhybrid of these two ideas, putting a region for \"fast\" DSM segments\ninside many OS-supplied segments, though it's obviously much more\ncomplicated.\n\nAs for what a reasonable setting would be for this patch, well, erm,\nit depends. Obviously that's RAM that the system can't use for other\npurposes while you're not running parallel queries, and if it's huge\npages, it can't be swapped out; if it's not huge pages, then it can be\nswapped out, and that'd be terrible for performance next time you need\nit. So you wouldn't want to set it too large. 
If you set it too\nsmall, it falls back to the traditional behaviour.\n\nOne argument I've heard in favour of creating fresh segments every\ntime is that NUMA systems configured to prefer local memory allocation\n(as opposed to interleaved allocation) probably avoid cross node\ntraffic. I haven't looked into that topic yet; I suppose one way to\ndeal with it in this scheme would be to have one such region per node,\nand prefer to allocate from the local one.", "msg_date": "Thu, 9 Apr 2020 17:45:25 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": true, "msg_subject": "Fast DSM segments" }, { "msg_contents": "On Thu, Apr 9, 2020 at 1:46 AM Thomas Munro <thomas.munro@gmail.com> wrote:\n> The attached highly experimental patch adds a new GUC\n> dynamic_shared_memory_main_size. If you set it > 0, it creates a\n> fixed sized shared memory region that supplies memory for \"fast\" DSM\n> segments. When there isn't enough free space, dsm_create() falls back\n> to the traditional approach using eg shm_open().\n\nI think this is a reasonable option to have available for people who\nwant to use it. I didn't want to have parallel query be limited to a\nfixed-size amount of shared memory because I think there are some\ncases where efficient performance really requires a large chunk of\nmemory, and it seemed impractical to keep the largest amount of memory\nthat any query might need to use permanently allocated, let alone that\namount multiplied by the maximum possible number of parallel queries\nthat could be running at the same time. But none of that is any\nargument against giving people the option to preallocate some memory\nfor parallel query.\n\nMy guess is that on smaller boxes this won't find a lot of use, but on\nbigger ones it will be handy. 
It's hard to imagine setting aside 1GB\nof memory for this if you only have 8GB total, but if you have 512GB\ntotal, it's pretty easy to imagine.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Fri, 10 Apr 2020 09:55:07 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Fast DSM segments" }, { "msg_contents": "On Sat, Apr 11, 2020 at 1:55 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> On Thu, Apr 9, 2020 at 1:46 AM Thomas Munro <thomas.munro@gmail.com> wrote:\n> > The attached highly experimental patch adds a new GUC\n> > dynamic_shared_memory_main_size. If you set it > 0, it creates a\n> > fixed sized shared memory region that supplies memory for \"fast\" DSM\n> > segments. When there isn't enough free space, dsm_create() falls back\n> > to the traditional approach using eg shm_open().\n>\n> I think this is a reasonable option to have available for people who\n> want to use it. I didn't want to have parallel query be limited to a\n> fixed-size amount of shared memory because I think there are some\n> cases where efficient performance really requires a large chunk of\n> memory, and it seemed impractical to keep the largest amount of memory\n> that any query might need to use permanently allocated, let alone that\n> amount multiplied by the maximum possible number of parallel queries\n> that could be running at the same time. But none of that is any\n> argument against giving people the option to preallocate some memory\n> for parallel query.\n\nThat all makes sense. Now I'm wondering if I should use exactly that\nword in the GUC... 
dynamic_shared_memory_preallocate?\n\n\n", "msg_date": "Wed, 10 Jun 2020 10:02:25 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Fast DSM segments" }, { "msg_contents": "On Tue, Jun 9, 2020 at 6:03 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> That all makes sense. Now I'm wondering if I should use exactly that\n> word in the GUC... dynamic_shared_memory_preallocate?\n\nI tend to prefer verb-object rather than object-verb word ordering,\nbecause that's how English normally works, but I realize this is not a\nunanimous view.\n\nIt's a little strange because the fact of preallocating it makes it\nnot dynamic any more. I don't know what to do about that.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Wed, 10 Jun 2020 13:37:07 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Fast DSM segments" }, { "msg_contents": "On Thu, Jun 11, 2020 at 5:37 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> On Tue, Jun 9, 2020 at 6:03 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> > That all makes sense. Now I'm wondering if I should use exactly that\n> > word in the GUC... dynamic_shared_memory_preallocate?\n>\n> I tend to prefer verb-object rather than object-verb word ordering,\n> because that's how English normally works, but I realize this is not a\n> unanimous view.\n\nIt's pretty much just me and Yoda against all the rest of you, so\nlet's try preallocate_dynamic_shared_memory. I guess it could also be\nmin_dynamic_shared_memory to drop the verb. Other ideas welcome.\n\n> It's a little strange because the fact of preallocating it makes it\n> not dynamic any more. 
I don't know what to do about that.\n\nWell, it's not dynamic at the operating system level, but it's still\ndynamic in the sense that PostgreSQL code can get some and give it\nback, and there's no change from the point of view of any DSM client\ncode.\n\nAdmittedly, the shared memory architecture is a bit confusing. We\nhave main shared memory, DSM memory, DSA memory that is inside main\nshared memory with extra DSMs as required, DSA memory that is inside a\nDSM and creates extra DSMs as required, and with this patch also DSMs\nthat are inside main shared memory. Not to mention palloc and\nMemoryContexts and all that. As you probably remember I once managed\nto give an internal presentation at EDB for one hour of solid talking\nabout all the different kinds of allocators and what they're good for.\nIt was like a Möbius slide deck already.\n\nHere's a version that adds some documentation.", "msg_date": "Thu, 18 Jun 2020 18:05:50 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Fast DSM segments" }, { "msg_contents": "On Thu, Jun 18, 2020 at 6:05 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> Here's a version that adds some documentation.\n\nI jumped on a dual socket machine with 36 cores/72 threads and 144GB\nof RAM (Azure F72s_v2) running Linux, configured with 50GB of huge\npages available, and I ran a very simple test: select count(*) from t\nt1 join t t2 using (i), where the table was created with create table\nt as select generate_series(1, 400000000)::int i, and then prewarmed\ninto 20GB of shared_buffers. 
I compared the default behaviour to\npreallocate_dynamic_shared_memory=20GB, with work_mem set sky high so\nthat there would be no batching (you get a hash table of around 16GB),\nand I set things up so that I could test with a range of worker\nprocesses, and computed the speedup compared to a serial hash join.\n\nHere's what I got:\n\nProcesses Default Preallocated\n1 627.6s\n9 101.3s = 6.1x 68.1s = 9.2x\n18 56.1s = 11.1x 34.9s = 17.9x\n27 42.5s = 14.7x 23.5s = 26.7x\n36 36.0s = 17.4x 18.2s = 34.4x\n45 33.5s = 18.7x 15.5s = 40.5x\n54 35.6s = 17.6x 13.6s = 46.1x\n63 35.4s = 17.7x 12.2s = 51.4x\n72 33.8s = 18.5x 11.3s = 55.5x\n\nIt scaled nearly perfectly up to somewhere just under 36 threads, and\nthen the slope tapered off a bit so that each extra process was\nsupplying somewhere a bit over half of its potential. I can improve\nthe slope after the halfway point a bit by cranking HASH_CHUNK_SIZE up\nto 128KB (and it doesn't get much better after that):\n\nProcesses Default Preallocated\n1 627.6s\n9 102.7s = 6.1x 67.7s = 9.2x\n18 56.8s = 11.1x 34.8s = 18.0x\n27 41.0s = 15.3x 23.4s = 26.8x\n36 33.9s = 18.5x 18.2s = 34.4x\n45 30.1s = 20.8x 15.4s = 40.7x\n54 27.2s = 23.0x 13.3s = 47.1x\n63 25.1s = 25.0x 11.9s = 52.7x\n72 23.8s = 26.3x 10.8s = 58.1x\n\nI don't claim that this is representative of any particular workload\nor server configuration, but it's a good way to show that bottleneck,\nand it's pretty cool to be able to run a query that previously took\nover 10 minutes in 10 seconds. 
(I can shave a further 10% off these\ntimes with my experimental hash join prefetching patch, but I'll\nprobably write about that separately when I've figured out why it's\nnot doing better than that...).", "msg_date": "Fri, 19 Jun 2020 17:42:41 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Fast DSM segments" }, { "msg_contents": "Hi,\n\nOn 2020-06-19 17:42:41 +1200, Thomas Munro wrote:\n> On Thu, Jun 18, 2020 at 6:05 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> > Here's a version that adds some documentation.\n> \n> I jumped on a dual socket machine with 36 cores/72 threads and 144GB\n> of RAM (Azure F72s_v2) running Linux, configured with 50GB of huge\n> pages available, and I ran a very simple test: select count(*) from t\n> t1 join t t2 using (i), where the table was created with create table\n> t as select generate_series(1, 400000000)::int i, and then prewarmed\n> into 20GB of shared_buffers.\n\nI assume all the data fits into 20GB?\n\nWhich kernel version is this?\n\nHow much of the benefit comes from huge pages being used, how much from\navoiding the dsm overhead, and how much from the page table being shared\nfor that mapping? 
Do you have a rough idea?\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Fri, 19 Jun 2020 12:17:53 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Fast DSM segments" }, { "msg_contents": "On Sat, Jun 20, 2020 at 7:17 AM Andres Freund <andres@anarazel.de> wrote:\n> On 2020-06-19 17:42:41 +1200, Thomas Munro wrote:\n> > On Thu, Jun 18, 2020 at 6:05 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> > > Here's a version that adds some documentation.\n> >\n> > I jumped on a dual socket machine with 36 cores/72 threads and 144GB\n> > of RAM (Azure F72s_v2) running Linux, configured with 50GB of huge\n> > pages available, and I ran a very simple test: select count(*) from t\n> > t1 join t t2 using (i), where the table was created with create table\n> > t as select generate_series(1, 400000000)::int i, and then prewarmed\n> > into 20GB of shared_buffers.\n>\n> I assume all the data fits into 20GB?\n\nYep.\n\n> Which kernel version is this?\n\nTested on 4.19 (Debian stable/10).\n\n> How much of the benefit comes from huge pages being used, how much from\n> avoiding the dsm overhead, and how much from the page table being shared\n> for that mapping? Do you have a rough idea?\n\nWithout huge pages, the 36 process version of the test mentioned above\nshows around a 1.1x speedup, which is in line with the numbers from my\nfirst message (which was from a much smaller computer). The rest of\nthe speedup (2x) is due to huge pages.\n\nFurther speedups are available by increasing the hash chunk size, and\nprobably doing NUMA-aware allocation, in later work.\n\nHere's a new version, using the name min_dynamic_shared_memory, which\nsounds better to me. Any objections? 
I also fixed the GUC's maximum\nsetting so that it's sure to fit in size_t.", "msg_date": "Mon, 27 Jul 2020 14:45:47 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Fast DSM segments" }, { "msg_contents": "On Mon, Jul 27, 2020 at 2:45 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> Here's a new version, using the name min_dynamic_shared_memory, which\n> sounds better to me. Any objections? I also fixed the GUC's maximum\n> setting so that it's sure to fit in size_t.\n\nI pushed it like that. Happy to rename the GUC if someone has a better idea.\n\nI don't really love the way dsm_create()'s code flows, but I didn't\nsee another way to do this within the existing constraints. I think\nit'd be nice to rewrite this thing to get rid of the random\nnumber-based handles that are directly convertible to key_t/pathname,\nand instead use something holding {slot number, generation number}.\nThen you could improve that code flow and get rid of several cases of\nlinear array scans under an exclusive lock. The underlying\nkey_t/pathname would live in the slot. You'd need a new way to find\nthe control segment itself after a restart, where\ndsm_cleanup_using_control_segment() cleans up after the previous\nincarnation, but I think that just requires putting the key_t/pathname\ndirectly in PGShmemHeader, instead of a new {slot number, generation\nnumber} style handle. Or maybe a separate mapped file opened by well\nknown pathname, or something like that.\n\n\n", "msg_date": "Fri, 31 Jul 2020 17:55:12 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Fast DSM segments" } ]
[ { "msg_contents": "Hi all,\n\nDuring investigating the issue our customer had, I realized that\n_bt_killitems() can record the same FPI pages multiple times\nsimultaneously. This can happen when several concurrent index scans\nare processing pages that contain killable tuples. Because killing\nindex items could be performed while holding a buffer lock in shared\nmode concurrent processes record multiple FPI_FOR_HINT for the same\nblock.\n\nHere is the reproducer:\n\ncat <<EOF | psql -d postgres\ndrop table if exists tbl;\ncreate table tbl (c int primary key) with (autovacuum_enabled = off);\ninsert into tbl select generate_series(1,300);\nupdate tbl set c = c * -1 where c = 100;\ncheckpoint;\nEOF\n\nfor n in `seq 1 4`\ndo\n psql -d postgres -c \"select from tbl where c = 100\" &\ndone\n\nThe server needs to enable wal_log_hints and this might need to run\nseveral times. After running the script we can see this issue by\npg_waldump:\n\nrmgr: XLOG len (rec/tot): 49/ 8209, tx: 0, top: 0, lsn: 1/8FD1C3D8,\nprev 1/8FD1C368, desc: FPI_FOR_HINT , blkref #0: rel 1663/12643/16767\nblk 0 FPW\nrmgr: XLOG len (rec/tot): 49/ 8209, tx: 0, top: 0, lsn: 1/8FD1E408,\nprev 1/8FD1C3D8, desc: FPI_FOR_HINT , blkref #0: rel 1663/12643/16767\nblk 0 FPW\n\nThis is an excerpt from _bt_killitems() of version 12.2. By recent\nchanges the code of HEAD looks different much but the part in question\nis essentially not changed much. 
That is, it's reproducible even with\nHEAD.\n\n for (i = 0; i < numKilled; i++)\n {\n int itemIndex = so->killedItems[i];\n BTScanPosItem *kitem = &so->currPos.items[itemIndex];\n OffsetNumber offnum = kitem->indexOffset;\n\n Assert(itemIndex >= so->currPos.firstItem &&\n itemIndex <= so->currPos.lastItem);\n if (offnum < minoff)\n continue; /* pure paranoia */\n while (offnum <= maxoff)\n {\n ItemId iid = PageGetItemId(page, offnum);\n IndexTuple ituple = (IndexTuple) PageGetItem(page, iid);\n\n if (ItemPointerEquals(&ituple->t_tid, &kitem->heapTid))\n {\n /* found the item */\n ItemIdMarkDead(iid);\n killedsomething = true;\n break; /* out of inner search loop */\n }\n offnum = OffsetNumberNext(offnum);\n }\n }\n\n /*\n * Since this can be redone later if needed, mark as dirty hint.\n *\n * Whenever we mark anything LP_DEAD, we also set the page's\n * BTP_HAS_GARBAGE flag, which is likewise just a hint.\n */\n if (killedsomething)\n {\n opaque->btpo_flags |= BTP_HAS_GARBAGE;\n MarkBufferDirtyHint(so->currPos.buf, true);\n }\n\nThe inner test in the comment \"found the item\" never tests the item\nfor being dead. So maybe we can add !ItemIdIsDead(iid) to that\ncondition. But there still is a race condition of recording multiple\nFPIs can happen. 
Maybe a better solution is to change the lock to\nexclusive, at least when wal_log_hints = on, so that only one process\ncan run this code -- the reduction in concurrency might be won back by\nthe fact that we don't wal-log the page multiple times.\n\nI understand that we can call MarkBufferDirtyHint while holding a\nbuffer lock in share mode as the comment of MarkBufferDirtyHint()\nsays, but I'd like to improve this behavior so that we can avoid\nmultiple FPI_FOR_HINT for the same block.\n\nRegards,\n\n--\nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Thu, 9 Apr 2020 14:55:52 +0900", "msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Multiple FPI_FOR_HINT for the same block during killing btree index\n items" }, { "msg_contents": "On 2020-Apr-09, Masahiko Sawada wrote:\n\n> The inner test in the comment \"found the item\" never tests the item\n> for being dead. So maybe we can add !ItemIdIsDead(iid) to that\n> condition. But there still is a race condition of recording multiple\n> FPIs can happen. 
Maybe a better solution is to change the lock to\n> exclusive, at least when wal_log_hints = on, so that only one process\n> can run this code -- the reduction in concurrency might be won back by\n> the fact that we don't wal-log the page multiple times.\n\nI agree.\n\nIt seems worth pointing out that when this code was written, these hint\nbit changes were not logged, so this consideration did not apply then.\nBut we added data checksums and wal_log_hints, which changed the\nequation.\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Thu, 9 Apr 2020 14:05:33 -0400", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Multiple FPI_FOR_HINT for the same block during killing btree\n index items" }, { "msg_contents": "On Wed, Apr 8, 2020 at 10:56 PM Masahiko Sawada\n<masahiko.sawada@2ndquadrant.com> wrote:\n> Here is the reproducer:\n\nWhat version of Postgres did you notice the actual customer issue on?\nI ask because I wonder if the work on B-Tree indexes in Postgres 12\naffects the precise behavior you get here with real world workloads.\nIt probably makes _bt_killitems() more effective with some workloads,\nwhich naturally increases the likelihood of having multiple FPI issued\nin the manner that you describe. OTOH, it might make it less likely\nwith low cardinality indexes, since large groups of garbage duplicate\ntuples tend to get concentrated on just a few leaf pages.\n\n> The inner test in the comment \"found the item\" never tests the item\n> for being dead. So maybe we can add !ItemIdIsDead(iid) to that\n> condition. But there still is a race condition of recording multiple\n> FPIs can happen. 
Maybe a better solution is to change the lock to\n> exclusive, at least when wal_log_hints = on, so that only one process\n> can run this code -- the reduction in concurrency might be won back by\n> the fact that we don't wal-log the page multiple times.\n\nI like the idea of checking !ItemIdIsDead(iid) as a further condition\nof killing the item -- there is clearly no point in doing work to kill\nan item that is already dead. I don't like the idea of using an\nexclusive buffer lock (even if it's just with wal_log_hints = on),\nthough.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Thu, 9 Apr 2020 12:05:32 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: Multiple FPI_FOR_HINT for the same block during killing btree\n index items" }, { "msg_contents": "On Thu, Apr 9, 2020 at 3:05 PM Peter Geoghegan <pg@bowt.ie> wrote:\n>\n> On Wed, Apr 8, 2020 at 10:56 PM Masahiko Sawada\n> <masahiko.sawada@2ndquadrant.com> wrote:\n> > Here is the reproducer:\n>\n> What version of Postgres did you notice the actual customer issue on?\n> I ask because I wonder if the work on B-Tree indexes in Postgres 12\n> affects the precise behavior you get here with real world workloads.\n> It probably makes _bt_killitems() more effective with some workloads,\n> which naturally increases the likelihood of having multiple FPI issued\n> in the manner that you describe. OTOH, it might make it less likely\n> with low cardinality indexes, since large groups of garbage duplicate\n> tuples tend to get concentrated on just a few leaf pages.\n\nWe saw the issue on our PG11 clusters. The specific index we noticed\nin the wal dump (I don't think we confirmed if there were others) as\none on a `created_at` column, to give you an idea of cardinality.\n\n> > The inner test in the comment \"found the item\" never tests the item\n> > for being dead. So maybe we can add !ItemIdIsDead(iid) to that\n> > condition. 
But there still is a race condition of recording multiple\n> > FPIs can happen. Maybe a better solution is to change the lock to\n> > exclusive, at least when wal_log_hints = on, so that only one process\n> > can run this code -- the reduction in concurrency might be won back by\n> > the fact that we don't wal-log the page multiple times.\n>\n> I like the idea of checking !ItemIdIsDead(iid) as a further condition\n> of killing the item -- there is clearly no point in doing work to kill\n> an item that is already dead. I don't like the idea of using an\n> exclusive buffer lock (even if it's just with wal_log_hints = on),\n> though.\n\nI don't have a strong opinion on the lock.\n\nJames\n\n\n", "msg_date": "Thu, 9 Apr 2020 16:37:33 -0400", "msg_from": "James Coleman <jtc331@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Multiple FPI_FOR_HINT for the same block during killing btree\n index items" }, { "msg_contents": "On Thu, Apr 9, 2020 at 1:37 PM James Coleman <jtc331@gmail.com> wrote:\n> We saw the issue on our PG11 clusters. The specific index we noticed\n> in the wal dump (I don't think we confirmed if there were others) as\n> one on a `created_at` column, to give you an idea of cardinality.\n\nYou tend to get a lot of problems with indexes like that when there\nare consistent updates (actually, that's more of a thing with an\nupdated_at index). But non-HOT updates alone might result in what you\ncould describe as \"updates\" to the index.\n\nWith Postgres 11, a low cardinality index could place new/successor\nduplicate index tuples (those needed for non-HOT updates) on a more or\nless random leaf page (you'll recall that this is determined by the\nold \"getting tired\" logic). This is the kind of thing I had in mind\nwhen I asked Sawada-san about it.\n\nWas this a low cardinality index in the way I describe? 
If it was,\nthen we can hope (and maybe even verify) that the Postgres 12 work\nnoticeably ameliorates the problem.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Thu, 9 Apr 2020 17:25:08 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: Multiple FPI_FOR_HINT for the same block during killing btree\n index items" }, { "msg_contents": "On Thu, Apr 9, 2020 at 5:25 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> Was this a low cardinality index in the way I describe? If it was,\n> then we can hope (and maybe even verify) that the Postgres 12 work\n> noticeably ameliorates the problem.\n\nWhat I really meant was an index where hundreds or even thousands of\nrows for each distinct timestamp value are expected. Not an index\nwhere almost every row has a distinct timestamp value. Both timestamp\nindex patterns are common, obviously.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Thu, 9 Apr 2020 17:32:22 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: Multiple FPI_FOR_HINT for the same block during killing btree\n index items" }, { "msg_contents": "On Thu, Apr 9, 2020 at 8:32 PM Peter Geoghegan <pg@bowt.ie> wrote:\n>\n> On Thu, Apr 9, 2020 at 5:25 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> > Was this a low cardinality index in the way I describe? If it was,\n> > then we can hope (and maybe even verify) that the Postgres 12 work\n> > noticeably ameliorates the problem.\n>\n> What I really meant was an index where hundreds or even thousands of\n> rows for each distinct timestamp value are expected. Not an index\n> where almost every row has a distinct timestamp value. Both timestamp\n> index patterns are common, obviously.\n\nI'll try to run some numbers tomorrow to confirm, but I believe that\nthe created_at value is almost (if not completely) unique. 
So, no,\nit's not a low cardinality case like that.\n\nI believe the write pattern to this table likely looks like:\n- INSERT\n- UPDATE\n- DELETE\nfor every row. But tomorrow I can do some more digging if needed.\n\nJames\n\n\n", "msg_date": "Thu, 9 Apr 2020 21:47:27 -0400", "msg_from": "James Coleman <jtc331@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Multiple FPI_FOR_HINT for the same block during killing btree\n index items" }, { "msg_contents": "On Thu, Apr 9, 2020 at 6:47 PM James Coleman <jtc331@gmail.com> wrote:\n> I believe the write pattern to this table likely looks like:\n> - INSERT\n> - UPDATE\n> - DELETE\n> for every row. But tomorrow I can do some more digging if needed.\n\nThe pg_stats.null_frac for the column/index might be interesting here. I\nbelieve that Active Record will sometimes generate created_at columns\nthat sometimes end up containing NULL values. Not sure why.\n\n\n--\nPeter Geoghegan\n\n\n", "msg_date": "Thu, 9 Apr 2020 19:07:45 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: Multiple FPI_FOR_HINT for the same block during killing btree\n index items" }, { "msg_contents": "On Fri, 10 Apr 2020 at 04:05, Peter Geoghegan <pg@bowt.ie> wrote:\n>\n> On Wed, Apr 8, 2020 at 10:56 PM Masahiko Sawada\n> <masahiko.sawada@2ndquadrant.com> wrote:\n> > Here is the reproducer:\n>\n> What version of Postgres did you notice the actual customer issue on?\n> I ask because I wonder if the work on B-Tree indexes in Postgres 12\n> affects the precise behavior you get here with real world workloads.\n> It probably makes _bt_killitems() more effective with some workloads,\n> which naturally increases the likelihood of having multiple FPI issued\n> in the manner that you describe. 
OTOH, it might make it less likely\n> with low cardinality indexes, since large groups of garbage duplicate\n> tuples tend to get concentrated on just a few leaf pages.\n>\n> > The inner test in the comment \"found the item\" never tests the item\n> > for being dead. So maybe we can add !ItemIdIsDead(iid) to that\n> > condition. But there still is a race condition of recording multiple\n> > FPIs can happen. Maybe a better solution is to change the lock to\n> > exclusive, at least when wal_log_hints = on, so that only one process\n> > can run this code -- the reduction in concurrency might be won back by\n> > the fact that we don't wal-log the page multiple times.\n>\n> I like the idea of checking !ItemIdIsDead(iid) as a further condition\n> of killing the item -- there is clearly no point in doing work to kill\n> an item that is already dead. I don't like the idea of using an\n> exclusive buffer lock (even if it's just with wal_log_hints = on),\n> though.\n>\n\nOkay. I think only adding the check would also help with reducing the\nlikelihood. How about the changes for the current HEAD I've attached?\n\nRelated to this behavior on btree indexes, this can happen even on\nheaps during searching heap tuples. 
To reduce the likelihood of that\nmore generally I wonder if we can acquire a lock on buffer descriptor\nright before XLogSaveBufferForHint() and set a flag to the buffer\ndescriptor that indicates that we're about to log FPI for hint bit so\nthat concurrent process can be aware of that.\n\nRegards,\n\n-- \nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Fri, 10 Apr 2020 12:32:31 +0900", "msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: Multiple FPI_FOR_HINT for the same block during killing btree\n index items" }, { "msg_contents": "At Fri, 10 Apr 2020 12:32:31 +0900, Masahiko Sawada <masahiko.sawada@2ndquadrant.com> wrote in \n> On Fri, 10 Apr 2020 at 04:05, Peter Geoghegan <pg@bowt.ie> wrote:\n> >\n> > On Wed, Apr 8, 2020 at 10:56 PM Masahiko Sawada\n> > <masahiko.sawada@2ndquadrant.com> wrote:\n> > > Here is the reproducer:\n> >\n> > What version of Postgres did you notice the actual customer issue on?\n> > I ask because I wonder if the work on B-Tree indexes in Postgres 12\n> > affects the precise behavior you get here with real world workloads.\n> > It probably makes _bt_killitems() more effective with some workloads,\n> > which naturally increases the likelihood of having multiple FPI issued\n> > in the manner that you describe. OTOH, it might make it less likely\n> > with low cardinality indexes, since large groups of garbage duplicate\n> > tuples tend to get concentrated on just a few leaf pages.\n> >\n> > > The inner test in the comment \"found the item\" never tests the item\n> > > for being dead. So maybe we can add !ItemIdIsDead(iid) to that\n> > > condition. But there still is a race condition of recording multiple\n> > > FPIs can happen. 
Maybe a better solution is to change the lock to\n> > > exclusive, at least when wal_log_hints = on, so that only one process\n> > > can run this code -- the reduction in concurrency might be won back by\n> > > the fact that we don't wal-log the page multiple times.\n> >\n> > I like the idea of checking !ItemIdIsDead(iid) as a further condition\n> > of killing the item -- there is clearly no point in doing work to kill\n> > an item that is already dead. I don't like the idea of using an\n> > exclusive buffer lock (even if it's just with wal_log_hints = on),\n> > though.\n> >\n> \n> Okay. I think only adding the check would also help with reducing the\n> likelihood. How about the changes for the current HEAD I've attached?\n\nFWIW, looks good to me.\n\n> Related to this behavior on btree indexes, this can happen even on\n> heaps during searching heap tuples. To reduce the likelihood of that\n> more generally I wonder if we can acquire a lock on buffer descriptor\n> right before XLogSaveBufferForHint() and set a flag to the buffer\n> descriptor that indicates that we're about to log FPI for hint bit so\n> that concurrent process can be aware of that.\n\nMakes sense if the lock were acquired just before the \"BM_DIRTY |\nBM_JUST_DIRTIED) check. 
Could we use double-checking, as similar to\nthe patch for ItemIdIsDead()?\n\n> if ((pg_atomic_read_u32(&bufHdr->state) & (BM_DIRTY | BM_JUST_DIRTIED)) !=\n> (BM_DIRTY | BM_JUST_DIRTIED))\n> {\n...\n> * essential that CreateCheckpoint waits for virtual transactions\n> * rather than full transactionids.\n> */\n> /* blah, blah */ \n> buf_state = LockBufHdr(bufHdr);\n>\n> if (buf_state & (BM_ | BM_JUST) != (..))\n> {\n> MyProc->delayChkpt = delayChkpt = true;\n> lsn = XLogSaveBufferForHint(buffer, buffer_std);\n> }\n> }\n> else\n> buf_state = LockBuffer(bufHdr);\n \n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Fri, 10 Apr 2020 13:30:13 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Multiple FPI_FOR_HINT for the same block during killing btree\n index items" }, { "msg_contents": "On Thu, Apr 9, 2020 at 10:08 PM Peter Geoghegan <pg@bowt.ie> wrote:\n>\n> On Thu, Apr 9, 2020 at 6:47 PM James Coleman <jtc331@gmail.com> wrote:\n> > I believe the write pattern to this table likely looks like:\n> > - INSERT\n> > - UPDATE\n> > - DELETE\n> > for every row. But tomorrow I can do some more digging if needed.\n>\n> The pg_stats.null_frac for the column/index might be interesting here. I\n> believe that Active Record will sometimes generate created_at columns\n> that sometimes end up containing NULL values. Not sure why.\n\nnull_frac is 0 for created_at (what I expected). 
Also (under current\ndata) all created_at values are unique except a single row duplicate.\n\nThat being said, remember the write pattern above: every row gets\ndeleted eventually, so there'd be a lots of dead tuples overall.\n\nJames\n\n\n", "msg_date": "Fri, 10 Apr 2020 09:18:38 -0400", "msg_from": "James Coleman <jtc331@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Multiple FPI_FOR_HINT for the same block during killing btree\n index items" }, { "msg_contents": "On 2020-Apr-10, Masahiko Sawada wrote:\n\n> Okay. I think only adding the check would also help with reducing the\n> likelihood. How about the changes for the current HEAD I've attached?\n\nPushed this to all branches. (Branches 12 and older obviously needed an\nadjustment.) Thanks!\n\n> Related to this behavior on btree indexes, this can happen even on\n> heaps during searching heap tuples. To reduce the likelihood of that\n> more generally I wonder if we can acquire a lock on buffer descriptor\n> right before XLogSaveBufferForHint() and set a flag to the buffer\n> descriptor that indicates that we're about to log FPI for hint bit so\n> that concurrent process can be aware of that.\n\nI'm not sure how that helps; the other process would have to go back and\nredo their whole operation from scratch in order to find out whether\nthere's still something alive that needs killing.\n\nI think you need to acquire the exclusive lock sooner: if, when scanning\nthe page, you find a killable item, *then* upgrade the lock to exclusive\nand restart the scan. This means that we'll have to wait for any other\nprocess that's doing the scan, and they will all give up their share\nlock to wait for the exclusive lock they need. So the one that gets it\nfirst will do all the killing, log the page, then release the lock. At\nthat point the other processes will wake up and see that items have been\nkilled, so they will return having done nothing.\n\nLike the attached. 
I didn't verify that it works well or that it\nactually improves performance ...\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Fri, 15 May 2020 17:52:57 -0400", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Multiple FPI_FOR_HINT for the same block during killing btree\n index items" }, { "msg_contents": "Em sex., 15 de mai. de 2020 às 18:53, Alvaro Herrera <\nalvherre@2ndquadrant.com> escreveu:\n\n> On 2020-Apr-10, Masahiko Sawada wrote:\n>\n> > Okay. I think only adding the check would also help with reducing the\n> > likelihood. How about the changes for the current HEAD I've attached?\n>\n> Pushed this to all branches. (Branches 12 and older obviously needed an\n> adjustment.) Thanks!\n>\n> > Related to this behavior on btree indexes, this can happen even on\n> > heaps during searching heap tuples. To reduce the likelihood of that\n> > more generally I wonder if we can acquire a lock on buffer descriptor\n> > right before XLogSaveBufferForHint() and set a flag to the buffer\n> > descriptor that indicates that we're about to log FPI for hint bit so\n> > that concurrent process can be aware of that.\n>\n> I'm not sure how that helps; the other process would have to go back and\n> redo their whole operation from scratch in order to find out whether\n> there's still something alive that needs killing.\n>\n> I think you need to acquire the exclusive lock sooner: if, when scanning\n> the page, you find a killable item, *then* upgrade the lock to exclusive\n> and restart the scan. This means that we'll have to wait for any other\n> process that's doing the scan, and they will all give up their share\n> lock to wait for the exclusive lock they need. So the one that gets it\n> first will do all the killing, log the page, then release the lock. 
At\n> that point the other processes will wake up and see that items have been\n> killed, so they will return having done nothing.\n>\n> Like the attached. I didn't verify that it works well or that it\n> actually improves performance ...\n>\nThis is not related to your latest patch.\nBut I believe I can improve the performance.\n\nSo:\n1. If killedsomething is false\n2. Any killtuple is true and (not ItemIdIsDead(iid)) is false\n3. Nothing to be done.\n\nSo why do all the work and then discard it.\nWe can eliminate the current item much earlier, testing if it is already\ndead.\n\nregards,\nRanier VIlela", "msg_date": "Sat, 16 May 2020 13:28:28 -0300", "msg_from": "Ranier Vilela <ranier.vf@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Multiple FPI_FOR_HINT for the same block during killing btree\n index items" }, { "msg_contents": "Em sex., 15 de mai. de 2020 às 18:53, Alvaro Herrera <\nalvherre@2ndquadrant.com> escreveu:\n\n> On 2020-Apr-10, Masahiko Sawada wrote:\n>\n> > Okay. I think only adding the check would also help with reducing the\n> > likelihood. How about the changes for the current HEAD I've attached?\n>\n> Pushed this to all branches. (Branches 12 and older obviously needed an\n> adjustment.) Thanks!\n>\n> > Related to this behavior on btree indexes, this can happen even on\n> > heaps during searching heap tuples. 
To reduce the likelihood of that\n> > more generally I wonder if we can acquire a lock on buffer descriptor\n> > right before XLogSaveBufferForHint() and set a flag to the buffer\n> > descriptor that indicates that we're about to log FPI for hint bit so\n> > that concurrent process can be aware of that.\n>\n> I'm not sure how that helps; the other process would have to go back and\n> redo their whole operation from scratch in order to find out whether\n> there's still something alive that needs killing.\n>\n> I think you need to acquire the exclusive lock sooner: if, when scanning\n> the page, you find a killable item, *then* upgrade the lock to exclusive\n> and restart the scan. This means that we'll have to wait for any other\n> process that's doing the scan, and they will all give up their share\n> lock to wait for the exclusive lock they need. So the one that gets it\n> first will do all the killing, log the page, then release the lock. At\n> that point the other processes will wake up and see that items have been\n> killed, so they will return having done nothing.\n>\nRegarding the block, I disagree in part, because in the worst case,\nthe block can be requested in the last item analyzed, leading to redo all\nthe work from the beginning.\nIf we are in _bt_killitems it is because there is a high probability that\nthere will be items to be deleted,\nwhy not request the block soon, if this meets the conditions?\n\n1. XLogHintBitIsNeeded ()\n2.! AutoVacuumingActive ()\n3. New exclusive configuration variable option to activate the lock?\n\nMasahiko reported that it occurs only when (autovacuum_enabled = off);\n\nregards,\nRanier Vilela", "msg_date": "Sat, 16 May 2020 16:32:46 -0300", "msg_from": "Ranier Vilela <ranier.vf@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Multiple FPI_FOR_HINT for the same block during killing btree\n index items" } ]
[ { "msg_contents": ">On Wed, Apr 8, 2020 at 7:39 PM Juan José Santamaría Flecha\n\n>> Let me explain further, in pg_config_os.h you can check that the value of\n>> _WIN32_WINNT is solely based on checking _MSC_VER. This patch should also\n>> be meaningful for WIN32 builds using MinGW, or we might see this issue\n>> reappear in those systems if update the MIN_WINNT value to more current\n>> OS versions. So, I still think _WIN32_WINNT is a better option.\n>>\n>Thanks for explanation, I was not aware of that, you are right it make\n>sense to use \" _WIN32_WINNT\", Now I am using this only.\n\n>I still see the same last lines in both #ifdef blocks, and pgindent might\n>> change a couple of lines to:\n>> + MultiByteToWideChar(CP_ACP, 0, winlocname, -1, wc_locale_name,\n>> + LOCALE_NAME_MAX_LENGTH);\n>> +\n>> + if ((GetLocaleInfoEx(wc_locale_name, LOCALE_SNAME,\n>> + (LPWSTR)&buffer, LOCALE_NAME_MAX_LENGTH)) > 0)\n>> + {\n>>\n>Now I have resolved these comments also, Please check updated version of\n>the patch.\n\n>> Please open an item in the commitfest for this patch.\n>>\n>I have created with same title.\n\nHi,\n\nI have a few comments about the patch, if I may.\n\n1. Variable rc runs the risk of being used uninitialized.\n\n2. Variable loct has a redundant declaration ( = NULL).\n\n3. Return \"C\", does not solve the first case?\n\nAttached, your patch with those considerations.\n\nregards,\n\nRanier VIlela\n", "msg_date": "Thu, 9 Apr 2020 08:54:59 -0300", "msg_from": "Ranier Vilela <ranier.vf@gmail.com>", "msg_from_op": true, "msg_subject": "Re: PG compilation error with Visual Studio 2015/2017/2019" }, { "msg_contents": "On Thu, Apr 9, 2020 at 1:56 PM Ranier Vilela <ranier.vf@gmail.com> wrote:\n\n> Attached, your patch with those considerations.\n>\nI see no attachment.\n\nRegards\n", "msg_date": "Thu, 9 Apr 2020 14:14:31 +0200", "msg_from": "=?UTF-8?Q?Juan_Jos=C3=A9_Santamar=C3=ADa_Flecha?=\n <juanjo.santamaria@gmail.com>", "msg_from_op": false, "msg_subject": "Re: PG compilation error with Visual Studio 2015/2017/2019" }, { "msg_contents": "Em qui., 9 de abr. de 2020 às 09:14, Juan José Santamaría Flecha <\njuanjo.santamaria@gmail.com> escreveu:\n\n>\n> On Thu, Apr 9, 2020 at 1:56 PM Ranier Vilela <ranier.vf@gmail.com> wrote:\n>\n>> Attached, your patch with those considerations.\n>>\n> I see no attachment.\n>\nSorry, my mystake.\n\nregards,\nRanier Vilela", "msg_date": "Thu, 9 Apr 2020 10:19:29 -0300", "msg_from": "Ranier Vilela <ranier.vf@gmail.com>", "msg_from_op": true, "msg_subject": "Re: PG compilation error with Visual Studio 2015/2017/2019" } ]
[ { "msg_contents": "\nWe currently only run perlcritic at severity level 5, which is fairly\npermissive. I'd like to reduce that, ideally to, say, level 3, which is\nwhat I use for the buildfarm code.\n\nBut let's start by going to severity level 4. Give this perlcriticrc,\nderived from the buildfarm's:\n\n\n # for policy descriptions see\n # https://metacpan.org/release/Perl-Critic\n\n severity = 4\n\n theme = core\n\n # allow octal constants with leading zeros\n [-ValuesAndExpressions::ProhibitLeadingZeros]\n\n # allow assignments to %ENV and %SIG without 'local'\n [Variables::RequireLocalizedPunctuationVars]\n allow = %ENV %SIG\n\n # allow 'no warnings qw(once)\n [TestingAndDebugging::ProhibitNoWarnings]\n allow = once\n\n # allow opened files to stay open for more than 9 lines of code\n [-InputOutput::RequireBriefOpen]\n\nHere's a summary of the perlcritic warnings:\n\n\n      39 Always unpack @_ first\n      30 Code before warnings are enabled\n      12 Subroutine \"new\" called using indirect syntax\n       9 Multiple \"package\" declarations\n       9 Expression form of \"grep\"\n       7 Symbols are exported by default\n       5 Warnings disabled\n       4 Magic variable \"$/\" should be assigned as \"local\"\n       4 Comma used to separate statements\n       2 Readline inside \"for\" loop\n       2 Pragma \"constant\" used\n       2 Mixed high and low-precedence booleans\n       2 Don't turn off strict for large blocks of code\n       1 Magic variable \"@a\" should be assigned as \"local\"\n       1 Magic variable \"$|\" should be assigned as \"local\"\n       1 Magic variable \"$\\\" should be assigned as \"local\"\n       1 Magic variable \"$?\" should be assigned as \"local\"\n       1 Magic variable \"$,\" should be assigned as \"local\"\n       1 Magic variable \"$\"\" should be assigned as \"local\"\n       1 Expression form of \"map\"\n\nwhich isn't a huge number.\n\nI'm going to start posting patches to address these issues, and when\nwe're done 
we can lower the severity level and start again on the level\n3s :-)\n\n\ncheers\n\n\nandrew\n\n\n\n\n\n\n-- \n\nAndrew Dunstan https://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n\n", "msg_date": "Thu, 9 Apr 2020 11:44:11 -0400", "msg_from": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "cleaning perl code" }, { "msg_contents": "On Thu, Apr 9, 2020 at 11:44 AM Andrew Dunstan\n<andrew.dunstan@2ndquadrant.com> wrote:\n> We currently only run perlcritic at severity level 5, which is fairly\n> permissive. I'd like to reduce that, ideally to, say, level 3, which is\n> what I use for the buildfarm code.\n>\n> But let's start by going to severity level 4.\n\nI continue to be skeptical of perlcritic. I think it complains about a\nlot of things which don't matter very much. We should consider whether\nthe effort it takes to keep it warning-clean has proportionate\nbenefits.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Thu, 9 Apr 2020 13:47:08 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: cleaning perl code" }, { "msg_contents": "On 2020-04-09 19:47, Robert Haas wrote:\n> On Thu, Apr 9, 2020 at 11:44 AM Andrew Dunstan\n> <andrew.dunstan@2ndquadrant.com> wrote:\n>> We currently only run perlcritic at severity level 5, which is fairly\n>> permissive. I'd like to reduce that, ideally to, say, level 3, which is\n>> what I use for the buildfarm code.\n>>\n>> But let's start by going to severity level 4.\n> \n> I continue to be skeptical of perlcritic. I think it complains about a\n> lot of things which don't matter very much. We should consider whether\n> the effort it takes to keep it warning-clean has proportionate\n> benefits.\n\nLet's see what the patches look like. 
At least some of the warnings \nlook reasonable, especially in the sense that they are things casual \nPerl programmers might accidentally do wrong.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Thu, 9 Apr 2020 20:26:45 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: cleaning perl code" }, { "msg_contents": "\nOn 4/9/20 2:26 PM, Peter Eisentraut wrote:\n> On 2020-04-09 19:47, Robert Haas wrote:\n>> On Thu, Apr 9, 2020 at 11:44 AM Andrew Dunstan\n>> <andrew.dunstan@2ndquadrant.com> wrote:\n>>> We currently only run perlcritic at severity level 5, which is fairly\n>>> permissive. I'd like to reduce that, ideally to, say, level 3, which is\n>>> what I use for the buildfarm code.\n>>>\n>>> But let's start by going to severity level 4.\n>>\n>> I continue to be skeptical of perlcritic. I think it complains about a\n>> lot of things which don't matter very much. We should consider whether\n>> the effort it takes to keep it warning-clean has proportionate\n>> benefits.\n>\n> Let's see what the patches look like.  At least some of the warnings\n> look reasonable, especially in the sense that they are things casual\n> Perl programmers might accidentally do wrong.\n\n\n\nOK, I'll prep one or two. 
I used to be of Robert's opinion, but I've\ncome around some on it.\n\n\ncheers\n\n\nandrew\n\n-- \nAndrew Dunstan https://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n\n", "msg_date": "Thu, 9 Apr 2020 15:13:20 -0400", "msg_from": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: cleaning perl code" }, { "msg_contents": "On Thu, Apr 09, 2020 at 11:44:11AM -0400, Andrew Dunstan wrote:\n> ���� 39 Always unpack @_ first\n\nRequiring a \"my @args = @_\" does not improve this code:\n\nsub CreateSolution\n{\n ...\n\tif ($visualStudioVersion eq '12.00')\n\t{\n\t\treturn new VS2013Solution(@_);\n\t}\n\n> ���� 30 Code before warnings are enabled\n\nSounds good. We already require \"use strict\" before code. Requiring \"use\nwarnings\" in the exact same place does not impose much burden.\n\n> ���� 12 Subroutine \"new\" called using indirect syntax\n\nNo, thanks. \"new VS2013Solution(@_)\" and \"VS2013Solution->new(@_)\" are both\nfine; enforcing the latter is an ongoing waste of effort.\n\n> ����� 9 Multiple \"package\" declarations\n\nThis is good advice if you're writing for CPAN, but it would make PostgreSQL\ncode worse by having us split affiliated code across multiple files.\n\n> ����� 9 Expression form of \"grep\"\n\nNo, thanks. I'd be happier with the opposite, requiring grep(/x/, $arg)\ninstead of grep { /x/ } $arg. Neither is worth enforcing.\n\n> ����� 7 Symbols are exported by default\n\nThis is good advice if you're writing for CPAN. 
For us, it just adds typing.\n\n>       5 Warnings disabled\n>       4 Magic variable \"$/\" should be assigned as \"local\"\n>       4 Comma used to separate statements\n>       2 Readline inside \"for\" loop\n>       2 Pragma \"constant\" used\n>       2 Mixed high and low-precedence booleans\n>       2 Don't turn off strict for large blocks of code\n>       1 Magic variable \"@a\" should be assigned as \"local\"\n>       1 Magic variable \"$|\" should be assigned as \"local\"\n>       1 Magic variable \"$\\\" should be assigned as \"local\"\n>       1 Magic variable \"$?\" should be assigned as \"local\"\n>       1 Magic variable \"$,\" should be assigned as \"local\"\n>       1 Magic variable \"$\"\" should be assigned as \"local\"\n>       1 Expression form of \"map\"\n\nI looked less closely at the rest, but none give me a favorable impression.\n\n\nIn summary, among those warnings, I see non-negative value in \"Code before\nwarnings are enabled\" only. While we're changing this, I propose removing\nSubroutines::RequireFinalReturn. 
Implicit return values were not a material\nsource of PostgreSQL bugs, yet we've allowed this to litter our code:\n\n$ find src -name '*.p[lm]'| xargs grep -n '^.return;' | wc -l\n194\n\n\n", "msg_date": "Sat, 11 Apr 2020 04:30:14 +0000", "msg_from": "Noah Misch <noah@leadboat.com>", "msg_from_op": false, "msg_subject": "Re: cleaning perl code" }, { "msg_contents": "On 2020-04-11 06:30, Noah Misch wrote:\n> In summary, among those warnings, I see non-negative value in \"Code before\n> warnings are enabled\" only.\n\nNow that you put it like this, that was also my impression when I first \nintroduced the level 5 warnings and then decided to stop there.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Sat, 11 Apr 2020 10:06:51 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: cleaning perl code" }, { "msg_contents": "Noah Misch <noah@leadboat.com> writes:\n> In summary, among those warnings, I see non-negative value in \"Code before\n> warnings are enabled\" only. While we're changing this, I propose removing\n> Subroutines::RequireFinalReturn.\n\nIf it's possible to turn off just that warning, then +several.\nIt's routinely caused buildfarm failures, yet I can detect exactly\nno value in it. If there were sufficient cross-procedural analysis\nbacking it to detect whether any caller examines the subroutine's\nresult value, then it'd be worth having. 
But there isn't, so those\nextra returns are just pedantic verbosity.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 11 Apr 2020 11:14:52 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: cleaning perl code" }, { "msg_contents": "On 4/11/20 12:30 AM, Noah Misch wrote:\n> On Thu, Apr 09, 2020 at 11:44:11AM -0400, Andrew Dunstan wrote:\n>>      39 Always unpack @_ first\n> Requiring a \"my @args = @_\" does not improve this code:\n>\n> sub CreateSolution\n> {\n> ...\n> \tif ($visualStudioVersion eq '12.00')\n> \t{\n> \t\treturn new VS2013Solution(@_);\n> \t}\n>\n>>      30 Code before warnings are enabled\n> Sounds good. We already require \"use strict\" before code. Requiring \"use\n> warnings\" in the exact same place does not impose much burden.\n>\n>>      12 Subroutine \"new\" called using indirect syntax\n> No, thanks. \"new VS2013Solution(@_)\" and \"VS2013Solution->new(@_)\" are both\n> fine; enforcing the latter is an ongoing waste of effort.\n>\n>>       9 Multiple \"package\" declarations\n> This is good advice if you're writing for CPAN, but it would make PostgreSQL\n> code worse by having us split affiliated code across multiple files.\n>\n>>       9 Expression form of \"grep\"\n> No, thanks. I'd be happier with the opposite, requiring grep(/x/, $arg)\n> instead of grep { /x/ } $arg. Neither is worth enforcing.\n>\n>>       7 Symbols are exported by default\n> This is good advice if you're writing for CPAN. 
For us, it just adds typing.\n>\n>>       5 Warnings disabled\n>>       4 Magic variable \"$/\" should be assigned as \"local\"\n>>       4 Comma used to separate statements\n>>       2 Readline inside \"for\" loop\n>>       2 Pragma \"constant\" used\n>>       2 Mixed high and low-precedence booleans\n>>       2 Don't turn off strict for large blocks of code\n>>       1 Magic variable \"@a\" should be assigned as \"local\"\n>>       1 Magic variable \"$|\" should be assigned as \"local\"\n>>       1 Magic variable \"$\\\" should be assigned as \"local\"\n>>       1 Magic variable \"$?\" should be assigned as \"local\"\n>>       1 Magic variable \"$,\" should be assigned as \"local\"\n>>       1 Magic variable \"$\"\" should be assigned as \"local\"\n>>       1 Expression form of \"map\"\n> I looked less closely at the rest, but none give me a favorable impression.\n\n\n\nI don't have a problem with some of this. OTOH, it's nice to know what\nwe're ignoring and what we're not.\n\n\nWhat I have prepared is first a patch that lowers the severity level to\n3 but implements policy exceptions so that nothing is broken. Then 3\npatches. One fixes the missing warnings pragma and removes shebang -w\nswitches, so we are quite consistent about how we do this. I gather we\nare agreed about that one. The next one fixes those magic variable\nerror. That includes using some more idiomatic perl, and in one case\njust renaming a couple of variables that are fairly opaque anyway. The\nlast one fixes the mixture of high and low precedence boolean operators,\nthe inefficient <FOO> inside a foreach loop,  and the use of commas to\nseparate statements, and relaxes the policy about large blocks with 'no\nstrict'.\n\n\nSince I have written them they are attached, for posterity if nothing\nelse. :-)\n\n\n\n>\n>\n> In summary, among those warnings, I see non-negative value in \"Code before\n> warnings are enabled\" only. 
While we're changing this, I propose removing\n> Subroutines::RequireFinalReturn. Implicit return values were not a material\n> source of PostgreSQL bugs, yet we've allowed this to litter our code:\n>\n\nThat doesn't mean it won't be a source of problems in future, I've\nactually been bitten by this in the past.\n\n\ncheers\n\n\nandrew\n\n\n-- \nAndrew Dunstan https://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Sat, 11 Apr 2020 12:13:08 -0400", "msg_from": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: cleaning perl code" }, { "msg_contents": "\n\n> On Apr 11, 2020, at 9:13 AM, Andrew Dunstan <andrew.dunstan@2ndquadrant.com> wrote:\n\nHi Andrew. I appreciate your interest and efforts here. I hope you don't mind a few questions/observations about this effort:\n\n> \n> The\n> last one fixes the mixture of high and low precedence boolean operators,\n\nI did not spot examples of this in your diffs, but I assume you mean to prohibit conditionals like:\n\n if ($a || $b and $c || $d)\n\nAs I understand it, perl introduced low precedence operators precisely to allow this. Why disallow it?\n\n> and the use of commas to separate statements\n\nI don't understand the prejudice against commas used this way. What is wrong with:\n\n $i++, $j++ if defined $k;\n\nrather than:\n\n if (defined $k)\n {\n $i++;\n $j++;\n }\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n", "msg_date": "Sat, 11 Apr 2020 09:28:03 -0700", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: cleaning perl code" }, { "msg_contents": "\nOn 4/11/20 12:28 PM, Mark Dilger wrote:\n>\n>> On Apr 11, 2020, at 9:13 AM, Andrew Dunstan <andrew.dunstan@2ndquadrant.com> wrote:\n> Hi Andrew. I appreciate your interest and efforts here. 
I hope you don't mind a few questions/observations about this effort:\n\n\nNot at all.\n\n\n>\n>> The\n>> last one fixes the mixture of high and low precedence boolean operators,\n> I did not spot examples of this in your diffs, but I assume you mean to prohibit conditionals like:\n>\n> if ($a || $b and $c || $d)\n>\n> As I understand it, perl introduced low precedence operators precisely to allow this. Why disallow it?\n\n\nThe docs say:\n\n\n    Conway advises against combining the low-precedence booleans ( and\n    or not ) with the high-precedence boolean operators ( && || ! )\n    in the same expression. Unless you fully understand the differences\n    between the high and low-precedence operators, it is easy to\n    misinterpret expressions that use both. And even if you do\n    understand them, it is not always clear if the author actually\n    intended it.\n\n    next if not $foo || $bar;  # not ok\n    next if !$foo || $bar;     # ok\n    next if !( $foo || $bar ); # ok\n\n\nI don't feel terribly strongly about it, but personally I just about\nnever use the low precedence operators, and mostly prefer to resolve\nprecedence issues with parentheses.\n\n\n>\n>> and the use of commas to separate statements\n> I don't understand the prejudice against commas used this way. What is wrong with:\n>\n> $i++, $j++ if defined $k;\n>\n> rather than:\n>\n> if (defined $k)\n> {\n>     $i++;\n>     $j++;\n> }\n>\n\n\nI don't think the example is terribly clear. I have to look at it and\nthink \"Does it do $i++ if $k isn't defined?\"\n\nIn the cases we actually have there isn't even any shorthand advantage\nlike this. 
There are only a couple of cases.\n\n\ncheers\n\n\nandrew\n\n-- \nAndrew Dunstan https://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n\n", "msg_date": "Sat, 11 Apr 2020 12:47:55 -0400", "msg_from": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: cleaning perl code" }, { "msg_contents": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com> writes:\n> On 4/11/20 12:30 AM, Noah Misch wrote:\n>> In summary, among those warnings, I see non-negative value in \"Code before\n>> warnings are enabled\" only. While we're changing this, I propose removing\n>> Subroutines::RequireFinalReturn. Implicit return values were not a material\n>> source of PostgreSQL bugs, yet we've allowed this to litter our code:\n\n> That doesn't mean it won't be a source of problems in future, I've\n> actually been bitten by this in the past.\n\nYeah, as I recall, the reason for the restriction is that if you fall out\nwithout a \"return\", what's returned is the side-effect value of the last\nstatement, which might be fairly surprising. Adding explicit \"return;\"\nguarantees an undef result. So when this does prevent a bug it could\nbe a pretty hard-to-diagnose one. 
The problem is that it's a really\nverbose/pedantic requirement for subs that no one ever examines the\nresult value of.\n\nIs there a way to modify the test so that it only complains when\nthe final return is missing and there are other return(s) with values?\nThat would seem like a more narrowly tailored check.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 11 Apr 2020 12:48:50 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: cleaning perl code" }, { "msg_contents": "\nOn 4/11/20 12:48 PM, Tom Lane wrote:\n> Andrew Dunstan <andrew.dunstan@2ndquadrant.com> writes:\n>> On 4/11/20 12:30 AM, Noah Misch wrote:\n>>> In summary, among those warnings, I see non-negative value in \"Code before\n>>> warnings are enabled\" only. While we're changing this, I propose removing\n>>> Subroutines::RequireFinalReturn. Implicit return values were not a material\n>>> source of PostgreSQL bugs, yet we've allowed this to litter our code:\n>> That doesn't mean it won't be a source of problems in future, I've\n>> actually been bitten by this in the past.\n> Yeah, as I recall, the reason for the restriction is that if you fall out\n> without a \"return\", what's returned is the side-effect value of the last\n> statement, which might be fairly surprising. Adding explicit \"return;\"\n> guarantees an undef result. So when this does prevent a bug it could\n> be a pretty hard-to-diagnose one. The problem is that it's a really\n> verbose/pedantic requirement for subs that no one ever examines the\n> result value of.\n>\n> Is there a way to modify the test so that it only complains when\n> the final return is missing and there are other return(s) with values?\n> That would seem like a more narrowly tailored check.\n>\n> \t\t\t\n\n\n\nNot AFAICS:\n<https://metacpan.org/pod/Perl::Critic::Policy::Subroutines::RequireFinalReturn>\n\n\nThat would probably require writing a replacement module. 
Looking at the\nsource of this module I think it might be possible, although I don't\nknow much of the internals of perlcritic.\n\n\ncheers\n\n\nandrew\n\n\n-- \nAndrew Dunstan https://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n\n", "msg_date": "Sat, 11 Apr 2020 13:01:54 -0400", "msg_from": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: cleaning perl code" }, { "msg_contents": "\n\n> On Apr 11, 2020, at 9:47 AM, Andrew Dunstan <andrew.dunstan@2ndquadrant.com> wrote:\n> \n> \n> On 4/11/20 12:28 PM, Mark Dilger wrote:\n>> \n>>> On Apr 11, 2020, at 9:13 AM, Andrew Dunstan <andrew.dunstan@2ndquadrant.com> wrote:\n>> Hi Andrew. I appreciate your interest and efforts here. I hope you don't mind a few questions/observations about this effort:\n> \n> \n> Not at all.\n> \n> \n>> \n>>> The\n>>> last one fixes the mixture of high and low precedence boolean operators,\n>> I did not spot examples of this in your diffs, but I assume you mean to prohibit conditionals like:\n>> \n>> if ($a || $b and $c || $d)\n>> \n>> As I understand it, perl introduced low precedence operators precisely to allow this. Why disallow it?\n> \n> \n> The docs say:\n> \n> \n> Conway advises against combining the low-precedence booleans ( and\n> or not ) with the high-precedence boolean operators ( && || ! )\n> in the same expression. Unless you fully understand the differences\n> between the high and low-precedence operators, it is easy to\n> misinterpret expressions that use both. 
And even if you do\n> understand them, it is not always clear if the author actually\n> intended it.\n> \n> next if not $foo || $bar;  # not ok\n> next if !$foo || $bar;     # ok\n> next if !( $foo || $bar ); # ok\n\nI don't think any of those three are ok, from a code review perspective, but it's not because high and low precedence operators were intermixed.\n\n>> \n>>> and the use of commas to separate statements\n>> I don't understand the prejudice against commas used this way. What is wrong with:\n>> \n>> $i++, $j++ if defined $k;\n>> \n>> rather than:\n>> \n>> if (defined $k)\n>> {\n>>     $i++;\n>>     $j++;\n>> }\n>> \n\nIt works like the equivalent C code:\n\n  if (k)\n     i++, j++;\n\nwhich to my eyes is also fine.\n\nI'm less concerned with which perlcritic features you enable than I am with accidentally submitting perl which looks fine to me but breaks the build. I mostly use perl from within TAP tests, which I run locally before submission to the project. 
Can your changes be integrated into the TAP_TESTS makefile target so that I get local errors about this stuff and can fix it before submitting a regression test to -hackers?\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n", "msg_date": "Sat, 11 Apr 2020 10:27:58 -0700", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: cleaning perl code" }, { "msg_contents": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com> writes:\n> On 4/11/20 12:48 PM, Tom Lane wrote:\n>> Is there a way to modify the test so that it only complains when\n>> the final return is missing and there are other return(s) with values?\n>> That would seem like a more narrowly tailored check.\n\n> Not AFAICS:\n> <https://metacpan.org/pod/Perl::Critic::Policy::Subroutines::RequireFinalReturn>\n\nYeah, the list of all policies in the parent page doesn't offer any\npromising alternatives either :-(\n\nBTW, this bit in the policy's man page seems pretty disheartening:\n\n Be careful when fixing problems identified by this Policy; don't\n blindly put a return; statement at the end of every subroutine.\n\nsince I'd venture that's *exactly* what we've done every time perlcritic\nmoaned about this. I wonder what else the author expected would happen.\n\n> That would probably require writing a replacement module. 
Looking at the\n> source of this module I think it might be possible, although I don't\n> know much of the internals of perlcritic.\n\nI doubt we want to go maintaining our own perlcritic policies; aside from\nthe effort involved, it'd become that much harder for anyone to reproduce\nthe results.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 11 Apr 2020 13:31:16 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: cleaning perl code" }, { "msg_contents": "Mark Dilger <mark.dilger@enterprisedb.com> writes:\n> I'm less concerned with which perlcritic features you enable than I am with accidentally submitting perl which looks fine to me but breaks the build. I mostly use perl from within TAP tests, which I run locally before submission to the project. Can your changes be integrated into the TAP_TESTS makefile target so that I get local errors about this stuff and can fix it before submitting a regression test to -hackers?\n\nAs far as that goes, I think crake is just running\n\nsrc/tools/perlcheck/pgperlcritic\n\nwhich you can do for yourself as long as you've got perlcritic\ninstalled.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 11 Apr 2020 13:41:56 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: cleaning perl code" }, { "msg_contents": "On Sat, Apr 11, 2020 at 11:14:52AM -0400, Tom Lane wrote:\n> Noah Misch <noah@leadboat.com> writes:\n> > In summary, among those warnings, I see non-negative value in \"Code before\n> > warnings are enabled\" only. While we're changing this, I propose removing\n> > Subroutines::RequireFinalReturn.\n> \n> If it's possible to turn off just that warning, then +several.\n\nWe'd not get that warning if src/tools/perlcheck/pgperlcritic stopped enabling\nit by name, so it is possible to turn off by removing lines from that config.\n\n> It's routinely caused buildfarm failures, yet I can detect exactly\n> no value in it. 
If there were sufficient cross-procedural analysis\n> backing it to detect whether any caller examines the subroutine's\n> result value, then it'd be worth having. But there isn't, so those\n> extra returns are just pedantic verbosity.\n\nAgreed.\n\n\n", "msg_date": "Sun, 12 Apr 2020 00:26:32 -0700", "msg_from": "Noah Misch <noah@leadboat.com>", "msg_from_op": false, "msg_subject": "Re: cleaning perl code" }, { "msg_contents": "On Sat, Apr 11, 2020 at 12:13:08PM -0400, Andrew Dunstan wrote:\n> --- a/src/tools/msvc/Project.pm\n> +++ b/src/tools/msvc/Project.pm\n> @@ -420,13 +420,10 @@ sub read_file\n> {\n> \tmy $filename = shift;\n> \tmy $F;\n> -\tmy $t = $/;\n> -\n> -\tundef $/;\n> +\tlocal $/ = undef;\n> \topen($F, '<', $filename) || croak \"Could not open file $filename\\n\";\n> \tmy $txt = <$F>;\n> \tclose($F);\n> -\t$/ = $t;\n\n+1 for this and for the other three hunks like it. The resulting code is\nshorter and more robust, so this is a good one-time cleanup. It's not\nimportant to mandate this style going forward, so I wouldn't change\nperlcriticrc for this one.\n\n> --- a/src/tools/version_stamp.pl\n> +++ b/src/tools/version_stamp.pl\n> @@ -1,4 +1,4 @@\n> -#! /usr/bin/perl -w\n> +#! /usr/bin/perl\n> \n> #################################################################\n> # version_stamp.pl -- update version stamps throughout the source tree\n> @@ -21,6 +21,7 @@\n> #\n> \n> use strict;\n> +use warnings;\n\nThis and the other \"use warnings\" additions look good. 
I'm assuming you'd\nchange perlcriticrc like this:\n\n+[TestingAndDebugging::RequireUseWarnings]\n+severity = 5\n\n\n", "msg_date": "Sun, 12 Apr 2020 00:42:45 -0700", "msg_from": "Noah Misch <noah@leadboat.com>", "msg_from_op": false, "msg_subject": "Re: cleaning perl code" }, { "msg_contents": "On Sat, Apr 11, 2020 at 11:15 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Noah Misch <noah@leadboat.com> writes:\n> > In summary, among those warnings, I see non-negative value in \"Code before\n> > warnings are enabled\" only. While we're changing this, I propose removing\n> > Subroutines::RequireFinalReturn.\n>\n> If it's possible to turn off just that warning, then +several.\n> It's routinely caused buildfarm failures, yet I can detect exactly\n> no value in it. If there were sufficient cross-procedural analysis\n> backing it to detect whether any caller examines the subroutine's\n> result value, then it'd be worth having. But there isn't, so those\n> extra returns are just pedantic verbosity.\n\nWe've actually gone out of our way to enable that particular warning.\nSee src/tools/perlcheck/perlcriticrc.\n\nThe idea of that warning is not entirely without merit, but in\npractice it's usually pretty clear whether a function is intended to\nreturn anything or not, and it's unlikely that someone is going to\nrely on the return value when they really shouldn't be doing so. I'd\nventure to suggest that the language is lax about this sort of thing\nprecisely because it isn't very important, and thus not worth\nbothering users about.\n\nI agree with Noah's comment about CPAN: it would be worth being more\ncareful about things like this if we were writing code that was likely\nto be used by a wide variety of people and a lot of code over which we\nhave no control and which we do not get to even see. But that's not\nthe case here. 
It does not seem worth stressing the authors of TAP\ntests over such things.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Sun, 12 Apr 2020 15:22:11 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: cleaning perl code" }, { "msg_contents": "On 4/12/20 3:22 PM, Robert Haas wrote:\n> On Sat, Apr 11, 2020 at 11:15 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> Noah Misch <noah@leadboat.com> writes:\n>>> In summary, among those warnings, I see non-negative value in \"Code before\n>>> warnings are enabled\" only. While we're changing this, I propose removing\n>>> Subroutines::RequireFinalReturn.\n>>\n>> If it's possible to turn off just that warning, then +several.\n>> It's routinely caused buildfarm failures, yet I can detect exactly\n>> no value in it. If there were sufficient cross-procedural analysis\n>> backing it to detect whether any caller examines the subroutine's\n>> result value, then it'd be worth having. But there isn't, so those\n>> extra returns are just pedantic verbosity.\n> \n> I agree with Noah's comment about CPAN: it would be worth being more\n> careful about things like this if we were writing code that was likely\n> to be used by a wide variety of people and a lot of code over which we\n> have no control and which we do not get to even see. But that's not\n> the case here. It does not seem worth stressing the authors of TAP\n> tests over such things.\n\nFWIW, pgBackRest used Perl Critic when we were distributing Perl code \nbut stopped when our Perl code was only used for integration testing. \nPerhaps that was the wrong call but we decided the extra time required \nto run it was not worth the benefit. Most new test code is written in C \nand the Perl test code is primarily in maintenance mode now.\n\nWhen we did use Perl Critic we set it at level 1 (--brutal) and then \nwrote an exception file for the stuff we wanted to ignore. 
The advantage \nof this is that if new code violated a policy that did not already have \nan exception we could evaluate it and either add an exception or modify \nthe code. In practice this was pretty rare, but we also had a short \nexcuse for many exceptions and a list of exceptions that should be \nre-evaluated in the future.\n\nAbout the time we introduced Perl Critic we were already considering the \nC migration so most of the exceptions stayed.\n\nJust in case it is useful, I have attached our old policy file with \nexceptions and excuses (when we had one).\n\nRegards,\n-- \n-David\ndavid@pgmasters.net", "msg_date": "Sun, 12 Apr 2020 16:12:59 -0400", "msg_from": "David Steele <david@pgmasters.net>", "msg_from_op": false, "msg_subject": "Re: cleaning perl code" }, { "msg_contents": "\nOn 4/12/20 4:12 PM, David Steele wrote:\n> On 4/12/20 3:22 PM, Robert Haas wrote:\n>> On Sat, Apr 11, 2020 at 11:15 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>>> Noah Misch <noah@leadboat.com> writes:\n>>>> In summary, among those warnings, I see non-negative value in \"Code\n>>>> before\n>>>> warnings are enabled\" only.  While we're changing this, I propose\n>>>> removing\n>>>> Subroutines::RequireFinalReturn.\n>>>\n>>> If it's possible to turn off just that warning, then +several.\n>>> It's routinely caused buildfarm failures, yet I can detect exactly\n>>> no value in it.  If there were sufficient cross-procedural analysis\n>>> backing it to detect whether any caller examines the subroutine's\n>>> result value, then it'd be worth having.  But there isn't, so those\n>>> extra returns are just pedantic verbosity.\n>>\n>> I agree with Noah's comment about CPAN: it would be worth being more\n>> careful about things like this if we were writing code that was likely\n>> to be used by a wide variety of people and a lot of code over which we\n>> have no control and which we do not get to even see. But that's not\n>> the case here. 
It does not seem worth stressing the authors of TAP\n>> tests over such things.\n>\n> FWIW, pgBackRest used Perl Critic when we were distributing Perl code\n> but stopped when our Perl code was only used for integration testing.\n> Perhaps that was the wrong call but we decided the extra time required\n> to run it was not worth the benefit. Most new test code is written in\n> C and the Perl test code is primarily in maintenance mode now.\n>\n> When we did use Perl Critic we set it at level 1 (--brutal) and then\n> wrote an exception file for the stuff we wanted to ignore. The\n> advantage of this is that if new code violated a policy that did not\n> already have an exception we could evaluate it and either add an\n> exception or modify the code. In practice this was pretty rare, but we\n> also had a short excuse for many exceptions and a list of exceptions\n> that should be re-evaluated in the future.\n>\n> About the time we introduced Perl Critic we were already considering\n> the C migration so most of the exceptions stayed.\n>\n> Just in case it is useful, I have attached our old policy file with\n> exceptions and excuses (when we had one).\n>\n>\n\nThat's a pretty short list for --brutal, well done. I agree there is\nvalue in keeping documented the policies you're not complying with.\nMaybe the burden of that is too much for this use, that's up to the\nproject to decide.\n\nFor good or ill we now have a significant investment in perl code - I\njust looked and it's 180 files with 38,135 LOC, and that's not counting\nthe catalog data files, so we have some interest in keeping it fairly clean.\n\nI did something similar to what's above with the buildfarm code,\nalthough on checking now I find it's a bit out of date for the sev 1 and\n2 warnings, so I'm fixing that. 
Having said that, my normal target is\nlevel 3.\n\nThe absolutely minimal things I want to do are a) fix the code that\nwe're agreed on fixing (use of warnings, idiomatic use of $/), and b)\nfix the output format to include the name of the policy being violated.\n\n\ncheers\n\n\nandrew\n\n-- \nAndrew Dunstan https://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n\n", "msg_date": "Sun, 12 Apr 2020 18:24:15 -0400", "msg_from": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: cleaning perl code" }, { "msg_contents": "On 4/12/20 6:24 PM, Andrew Dunstan wrote:\n> On 4/12/20 4:12 PM, David Steele wrote:\n>>\n>> Just in case it is useful, I have attached our old policy file with\n>> exceptions and excuses (when we had one).\n> \n> That's a pretty short list for --brutal, well done. I agree there is\n> value in keeping documented the policies you're not complying with.\n> Maybe the burden of that is too much for this use, that's up to the\n> project to decide.\n\nThanks! Perl is, well Perl, and we made a lot of effort to keep it as \nclean and consistent as possible.\n\nObviously I'm +1 on documenting all the exceptions.\n\n> For good or ill we now have a significant investment in perl code - I\n> just looked and it's 180 files with 38,135 LOC, and that's not counting\n> the catalog data files, so we have some interest in keeping it fairly clean.\n\nAgreed. 
According to cloc pgBackRest still has 26,744 lines of Perl (not \nincluding comments or whitespace) so we're in the same boat.\n\n> The absolutely minimal things I want to do are a) fix the code that\n> we're agreed on fixing (use of warnings, idiomatic use of $/), and b)\n> fix the output format to include the name of the policy being violated.\n\nWe found limiting results and being very verbose about the violation was \nextremely helpful:\n\nperlcritic --quiet --verbose=8 --brutal --top=10 \\\n--verbose \"[%p] %f: %m at line %l, column %c. %e. (Severity: %s)\\n\"\n--profile=test/lint/perlcritic.policy \\\n<files>\n\n-- \n-David\ndavid@pgmasters.net\n\n\n", "msg_date": "Sun, 12 Apr 2020 18:45:44 -0400", "msg_from": "David Steele <david@pgmasters.net>", "msg_from_op": false, "msg_subject": "Re: cleaning perl code" }, { "msg_contents": "\nOn 4/12/20 3:42 AM, Noah Misch wrote:\n> On Sat, Apr 11, 2020 at 12:13:08PM -0400, Andrew Dunstan wrote:\n>> --- a/src/tools/msvc/Project.pm\n>> +++ b/src/tools/msvc/Project.pm\n>> @@ -420,13 +420,10 @@ sub read_file\n>> {\n>> \tmy $filename = shift;\n>> \tmy $F;\n>> -\tmy $t = $/;\n>> -\n>> -\tundef $/;\n>> +\tlocal $/ = undef;\n>> \topen($F, '<', $filename) || croak \"Could not open file $filename\\n\";\n>> \tmy $txt = <$F>;\n>> \tclose($F);\n>> -\t$/ = $t;\n> +1 for this and for the other three hunks like it. The resulting code is\n> shorter and more robust, so this is a good one-time cleanup. It's not\n> important to mandate this style going forward, so I wouldn't change\n> perlcriticrc for this one.\n>\n>> --- a/src/tools/version_stamp.pl\n>> +++ b/src/tools/version_stamp.pl\n>> @@ -1,4 +1,4 @@\n>> -#! /usr/bin/perl -w\n>> +#! /usr/bin/perl\n>> \n>> #################################################################\n>> # version_stamp.pl -- update version stamps throughout the source tree\n>> @@ -21,6 +21,7 @@\n>> #\n>> \n>> use strict;\n>> +use warnings;\n> This and the other \"use warnings\" additions look good. 
I'm assuming you'd\n> change perlcriticrc like this:\n>\n> +[TestingAndDebugging::RequireUseWarnings]\n> +severity = 5\n\n\n\nOK, I've committed all that stuff. I think that takes care of the\nnon-controversial part of what I proposed :-)\n\n\ncheers\n\n\nandrew\n\n\n-- \nAndrew Dunstan https://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n\n", "msg_date": "Mon, 13 Apr 2020 12:47:15 -0400", "msg_from": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: cleaning perl code" }, { "msg_contents": "On 4/13/20 12:47 PM, Andrew Dunstan wrote:\n>\n> OK, I've committed all that stuff. I think that takes care of the\n> non-controversial part of what I proposed :-)\n>\n>\n\nOK, it seems there is a majority of people commenting in this thread in\nfavor of not doing more except to reverse the policy of requiring\nsubroutine returns. I'll do that shortly. In the spirit of David\nSteele's contribution, here is a snippet that when added to the\nperlcriticrc would allow us to pass at the \"brutal\" setting (severity\n1). But I'm not proposing to add this, it's just here so anyone\ninterested can see what's involved.\n\nOne of the things that's a bit sad is that perlcritic doesn't generally\nlet you apply policies to a given set of files or files matching some\npattern. It would be nice, for instance, to be able to apply some\nadditional standards to strategic library files like PostgresNode.pm,\nTestLib.pm and Catalog.pm. There are good reasons as suggested upthread\nto apply higher standards to library files than to, say, a TAP test\nscript. The only easy way I can see to do that would be to have two\ndifferent perlcriticrc files and adjust pgperlcritic to make two runs.\nIf people think that's worth it I'll put a little work into it. 
If not,\nI'll just leave things here.\n\n\ncheers\n\n\nandrew\n\n\n-- \nAndrew Dunstan https://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Tue, 14 Apr 2020 11:57:01 -0400", "msg_from": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: cleaning perl code" }, { "msg_contents": "On 2020-Apr-14, Andrew Dunstan wrote:\n\n> One of the things that's a bit sad is that perlcritic doesn't generally\n> let you apply policies to a given set of files or files matching some\n> pattern. It would be nice, for instance, to be able to apply some\n> additional standards to strategic library files like PostgresNode.pm,\n> TestLib.pm and Catalog.pm. There are good reasons as suggested upthread\n> to apply higher standards to library files than to, say, a TAP test\n> script. The only easy way I can see to do that would be to have two\n> different perlcriticrc files and adjust pgperlcritic to make two runs.\n> If people think that's worth it I'll put a little work into it. If not,\n> I'll just leave things here.\n\nI think being more strict about it in strategic files (I'd say that's\nCatalog.pm plus src/test/perl/*.pm) might be a good idea. Maybe give it\na try and see what comes up.\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Tue, 14 Apr 2020 16:44:32 -0400", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: cleaning perl code" }, { "msg_contents": "On 4/14/20 4:44 PM, Alvaro Herrera wrote:\n> On 2020-Apr-14, Andrew Dunstan wrote:\n>\n>> One of the things that's a bit sad is that perlcritic doesn't generally\n>> let you apply policies to a given set of files or files matching some\n>> pattern. 
It would be nice, for instance, to be able to apply some\n> additional standards to strategic library files like PostgresNode.pm,\n> TestLib.pm and Catalog.pm. There are good reasons as suggested upthread\n> to apply higher standards to library files than to, say, a TAP test\n> script. The only easy way I can see to do that would be to have two\n> different perlcriticrc files and adjust pgperlcritic to make two runs.\n> If people think that's worth it I'll put a little work into it. If not,\n> I'll just leave things here.\n\nI think being more strict about it in strategic files (I'd say that's\nCatalog.pm plus src/test/perl/*.pm) might be a good idea. Maybe give it\na try and see what comes up.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Tue, 14 Apr 2020 16:44:32 -0400", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: cleaning perl code" }, { "msg_contents": "On 4/14/20 4:44 PM, Alvaro Herrera wrote:\n> On 2020-Apr-14, Andrew Dunstan wrote:\n>\n>> One of the things that's a bit sad is that perlcritic doesn't generally\n>> let you apply policies to a given set of files or files matching some\n>> pattern. 
There are good reasons as suggested upthread\n> >> to apply higher standards to library files than to, say, a TAP test\n> >> script. The only easy way I can see to do that would be to have two\n> >> different perlcriticrc files and adjust pgperlcritic to make two runs.\n> >> If people think that's worth it I'll put a little work into it. If not,\n> >> I'll just leave things here.\n> > I think being more strict about it in strategic files (I'd say that's\n> > Catalog.pm plus src/test/perl/*.pm) might be a good idea. Maybe give it\n> > a try and see what comes up.\n> \n> OK, in fact those files are in reasonably good shape. I also took a pass\n> through the library files in src/tools/msvc, which had a few more issues.\n\nIt would be an unpleasant surprise to cause a perlcritic buildfarm failure by\nmoving a function, verbatim, from a non-strategic file to a strategic file.\nHaving two Perl style regimes in one tree is itself a liability.\n\n> --- a/src/backend/catalog/Catalog.pm\n> +++ b/src/backend/catalog/Catalog.pm\n> @@ -67,7 +67,7 @@ sub ParseHeader\n> \t\tif (!$is_client_code)\n> \t\t{\n> \t\t\t# Strip C-style comments.\n> -\t\t\ts;/\\*(.|\\n)*\\*/;;g;\n> +\t\t\ts;/\\*(?:.|\\n)*\\*/;;g;\n\nThis policy against unreferenced groups makes the code harder to read, and the\nchance of preventing a bug is too low to justify that.\n\n> --- a/src/tools/perlcheck/pgperlcritic\n> +++ b/src/tools/perlcheck/pgperlcritic\n> @@ -14,7 +14,21 @@ PERLCRITIC=${PERLCRITIC:-perlcritic}\n> \n> . src/tools/perlcheck/find_perl_files\n> \n> -find_perl_files | xargs $PERLCRITIC \\\n> +flist=`mktemp`\n> +find_perl_files > $flist\n> +\n> +pattern='src/test/perl/|src/backend/catalog/Catalog.pm|src/tools/msvc/[^/]*.pm'\n\nI don't find these files to be especially strategic, and I'm mostly shrugging\nabout the stricter policy's effect on code quality. 
-1 for this patch.\n\n\n", "msg_date": "Wed, 15 Apr 2020 20:01:01 -0700", "msg_from": "Noah Misch <noah@leadboat.com>", "msg_from_op": false, "msg_subject": "Re: cleaning perl code" }, { "msg_contents": "\nOn 4/15/20 11:01 PM, Noah Misch wrote:\n> On Wed, Apr 15, 2020 at 03:43:36PM -0400, Andrew Dunstan wrote:\n>> On 4/14/20 4:44 PM, Alvaro Herrera wrote:\n>>> On 2020-Apr-14, Andrew Dunstan wrote:\n>>>> One of the things that's a bit sad is that perlcritic doesn't generally\n>>>> let you apply policies to a given set of files or files matching some\n>>>> pattern. It would be nice, for instance, to be able to apply some\n>>>> additional standards to strategic library files like PostgresNode.pm,\n>>>> TestLib.pm and Catalog.pm. There are good reasons as suggested upthread\n>>>> to apply higher standards to library files than to, say, a TAP test\n>>>> script. The only easy way I can see to do that would be to have two\n>>>> different perlcriticrc files and adjust pgperlcritic to make two runs.\n>>>> If people think that's worth it I'll put a little work into it. If not,\n>>>> I'll just leave things here.\n>>> I think being more strict about it in strategic files (I'd say that's\n>>> Catalog.pm plus src/test/perl/*.pm) might be a good idea. Maybe give it\n>>> a try and see what comes up.\n>> OK, in fact those files are in reasonably good shape. 
I also took a pass\n>> through the library files in src/tools/msvc, which had a few more issues.\n> It would be an unpleasant surprise to cause a perlcritic buildfarm failure by\n> moving a function, verbatim, from a non-strategic file to a strategic file.\n> Having two Perl style regimes in one tree is itself a liability.\n\n\nHonestly, I think you're reaching here.\n\n\n>\n>> --- a/src/backend/catalog/Catalog.pm\n>> +++ b/src/backend/catalog/Catalog.pm\n>> @@ -67,7 +67,7 @@ sub ParseHeader\n>> \t\tif (!$is_client_code)\n>> \t\t{\n>> \t\t\t# Strip C-style comments.\n>> -\t\t\ts;/\\*(.|\\n)*\\*/;;g;\n>> +\t\t\ts;/\\*(?:.|\\n)*\\*/;;g;\n> This policy against unreferenced groups makes the code harder to read, and the\n> chance of preventing a bug is too low to justify that.\n\n\n\nNon-capturing groups are also more efficient, and are something perl\nprogrammers should be familiar with.\n\n\nIn fact, there's a much better renovation of semantics of this\nparticular instance, which is to make . match \\n using the s modifier:\n\n\n    s;/\\*.*\\*/;;gs;\n\n\nIt would also be more robust using non-greedy matching:\n\n\n    s;/\\*.*?\\*/;;gs\n\n\nAfter I wrote the above I went and looked at what we do the buildfarm\ncode to strip comments when looking for typedefs, and it's exactly that,\nso at least I'm consistent :-)\n\n\nI don't care that much if we throw this whole thing away. 
This was sent\nin response to Alvaro's suggestion to \"give it a try and see what comes up\".\n\n\ncheers\n\n\nandrew\n\n\n-- \nAndrew Dunstan https://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n\n", "msg_date": "Thu, 16 Apr 2020 08:50:35 -0400", "msg_from": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: cleaning perl code" }, { "msg_contents": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com> writes:\n> On 4/15/20 11:01 PM, Noah Misch wrote:\n>> It would be an unpleasant surprise to cause a perlcritic buildfarm failure by\n>> moving a function, verbatim, from a non-strategic file to a strategic file.\n>> Having two Perl style regimes in one tree is itself a liability.\n\n> Honestly, I think you're reaching here.\n\nI think that argument is wrong, actually. Moving a function from a single\nuse-case into a library (with, clearly, the intention for it to have more\nuse-cases) is precisely the time when any weaknesses in its original\nimplementation might be exposed. So extra scrutiny seems well warranted.\n\nWhether the \"extra scrutiny\" involved in perlcritic's higher levels\nis actually worth anything is a different debate, though, and so far\nit's not looking like it's worth much :-(\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 16 Apr 2020 09:53:46 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: cleaning perl code" }, { "msg_contents": "On Thu, Apr 16, 2020 at 08:50:35AM -0400, Andrew Dunstan wrote:\n> \n> It would also be more robust using non-greedy matching:\n\nThis seems more important.\nI don't know how/where this is being used, but if it has input like:\n\n/* one */ \nsomething;\n/* two */\n\nWith the old expression 'something;' would be stripped away. \nIs that an issue where this is used? 
Why are we parsing\nthese headers?\n\nGarick\n\n", "msg_date": "Thu, 16 Apr 2020 14:20:53 +0000", "msg_from": "\"Hamlin, Garick L\" <ghamlin@isc.upenn.edu>", "msg_from_op": false, "msg_subject": "Re: cleaning perl code" }, { "msg_contents": "\nOn 4/16/20 10:20 AM, Hamlin, Garick L wrote:\n> On Thu, Apr 16, 2020 at 08:50:35AM -0400, Andrew Dunstan wrote:\n>> It would also be more robust using non-greedy matching:\n> This seems more important.\n> I don't know how/where this is being used, but if it has input like:\n>\n> /* one */ \n> something;\n> /* two */\n>\n> With the old expression 'something;' would be stripped away. \n> Is that an issue where this this is used? Why are we parsing\n> these headers?\n>\n\n\n\nIt's not quite as bad as that, because we're doing it line by line\nrather than on a whole file that's been slurped in. Multiline comments\nare handled using some redo logic. But\n\n\n    /* one */ something(); /* two */\n\n\nwould all be removed. Of course, we hope we don't have anything so\nhorrible, but still ...\n\n\ncheers\n\n\nandrew\n\n\n-- \nAndrew Dunstan https://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n\n", "msg_date": "Thu, 16 Apr 2020 10:34:39 -0400", "msg_from": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: cleaning perl code" }, { "msg_contents": "On 2020-Apr-16, Hamlin, Garick L wrote:\n\n> With the old expression 'something;' would be stripped away. \n> Is that an issue where this this is used? Why are we parsing\n> these headers?\n\nThese are files from which bootstrap catalog data is generated, which is\nwhy we parse from Perl; but also where C structs are declared, which is\nwhy they're C.\n\nI think switching to non-greedy is a win in itself. Non-capturing\nparens is probably a wash (this doesn't run often so the performance\nargument isn't very interesting).\n\nAn example. 
This eval in Catalog.pm\n\n+ ## no critic (ProhibitStringyEval)\n+ ## no critic (RequireCheckingReturnValueOfEval)\n+ eval '$hash_ref = ' . $_;\n\nis really weird stuff generally speaking, and the fact that we have to\nmark it specially for critic is a good indicator of that -- it serves as\ndocumentation. Catalog.pm is all a huge weird hack, but it's a critically\nimportant hack. Heck, what about RequireCheckingReturnValueOfEval --\nshould we instead consider actually checking the return value of eval?\nIt would seem to make sense, would it not? (Not for this patch, though\n-- I would be fine with just adding the nocritic line now, and removing\nit later while fixing that).\n\nAll in all, I think it's a positive value in having this code be checked\nwith a bit more strength -- checks that are pointless in, say, t/00*.pl\nprove files.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Thu, 16 Apr 2020 11:12:48 -0400", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: cleaning perl code" }, { "msg_contents": "\nOn 4/16/20 11:12 AM, Alvaro Herrera wrote:\n> On 2020-Apr-16, Hamlin, Garick L wrote:\n>\n>> With the old expression 'something;' would be stripped away. \n>> Is that an issue where this this is used? Why are we parsing\n>> these headers?\n> These are files from which bootstrap catalog data is generated, which is\n> why we parse from Perl; but also where C structs are declared, which is\n> why they're C.\n>\n> I think switching to non-greedy is a win in itself. 
Non-capturing\n> parens is probably a wash (this doesn't run often so the performance\n> argument isn't very interesting).\n\n\nYeah, I'm inclined to fix this independently of the perlcritic stuff.\nThe change is more readable and more correct as well as being perlcritic\nfriendly.\n\n\nI might take a closer look at Catalog.pm.\n\n\nMeanwhile, the other regex highlighted in the patch, in Solution.pm:\n\n\nif (/^AC_INIT\\(\\[([^\\]]+)\\], \\[([^\\]]+)\\], \\[([^\\]]+)\\], \\[([^\\]]*)\\],\n\\[([^\\]]+)\\]/)\n\n\nis sufficiently horrid that I think we should see if we can rewrite it,\nmaybe as an extended regex. And a better fix here instead of marking the\nfourth group as non-capturing would be simply to get rid of the parens\naltogether. They serve no purpose at all.\n\n\n>\n> An example. This eval in Catalog.pm\n>\n> + ## no critic (ProhibitStringyEval)\n> + ## no critic (RequireCheckingReturnValueOfEval)\n> + eval '$hash_ref = ' . $_;\n>\n> is really weird stuff generally speaking, and the fact that we have to\n> mark it specially for critic is a good indicator of that -- it serves as\n> documentation. Catalog.pm is all a huge weird hack, but it's a critically\n> important hack. Heck, what about RequireCheckingReturnValueOfEval --\n> should we instead consider actually checking the return value of eval?\n> It would seem to make sense, would it not? 
(Not for this patch, though\n> -- I would be fine with just adding the nocritic line now, and removing\n> it later while fixing that).\n\n\n+1\n\n\n>\n> All in all, I think it's a positive value in having this code be checked\n> with a bit more strength -- checks that are pointless in, say, t/00*.pl\n> prove files.\n\n\n\nthanks\n\n\ncheers\n\n\nandrew\n\n-- \nAndrew Dunstan https://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n\n", "msg_date": "Thu, 16 Apr 2020 17:07:44 -0400", "msg_from": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: cleaning perl code" }, { "msg_contents": "\n\n> On Apr 16, 2020, at 2:07 PM, Andrew Dunstan <andrew.dunstan@2ndquadrant.com> wrote:\n> \n> \n> On 4/16/20 11:12 AM, Alvaro Herrera wrote:\n>> On 2020-Apr-16, Hamlin, Garick L wrote:\n>> \n>>> With the old expression 'something;' would be stripped away. \n>>> Is that an issue where this this is used? Why are we parsing\n>>> these headers?\n>> These are files from which bootstrap catalog data is generated, which is\n>> why we parse from Perl; but also where C structs are declared, which is\n>> why they're C.\n>> \n>> I think switching to non-greedy is a win in itself. 
Non-capturing\n>> parens is probably a wash (this doesn't run often so the performance\n>> argument isn't very interesting).\n> \n> \n> Yeah, I'm inclined to fix this independently of the perlcritic stuff.\n> The change is more readable and more correct as well as being perlcritic\n> friendly.\n> \n> \n> I might take a closer look at Catalog.pm.\n> \n> \n> Meanwhile, the other regex highlighted in the patch, in Solution.pm:\n> \n> \n> if (/^AC_INIT\\(\\[([^\\]]+)\\], \\[([^\\]]+)\\], \\[([^\\]]+)\\], \\[([^\\]]*)\\],\n> \\[([^\\]]+)\\]/)\n> \n> \n> is sufficiently horrid that I think we should see if we can rewrite it,\n\n my $re = qr/\n \\[ # literal opening bracket\n ( # Capture anything but a closing bracket\n (?> # without backtracking\n [^\\]]+\n )\n )\n \\] # literal closing bracket\n /x;\n if (/^AC_INIT\\($re, $re, $re, $re, $re/)\n\n\n\n> maybe as an extended regex. And a better fix here instead of marking the\n> fourth group as non-capturing would be simply to get rid of the parens\n> altogether. The serve no purpose at all.\n\nBut then you'd have to use something else in position 4, which complicates the code.\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n", "msg_date": "Thu, 16 Apr 2020 14:35:46 -0700", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: cleaning perl code" }, { "msg_contents": "On Thu, Apr 16, 2020 at 09:53:46AM -0400, Tom Lane wrote:\n> Andrew Dunstan <andrew.dunstan@2ndquadrant.com> writes:\n> > On 4/15/20 11:01 PM, Noah Misch wrote:\n> >> It would be an unpleasant surprise to cause a perlcritic buildfarm failure by\n> >> moving a function, verbatim, from a non-strategic file to a strategic file.\n> >> Having two Perl style regimes in one tree is itself a liability.\n> \n> > Honestly, I think you're reaching here.\n> \n> I think that argument is wrong, actually. 
Moving a function from a single\n> use-case into a library (with, clearly, the intention for it to have more\n> use-cases) is precisely the time when any weaknesses in its original\n> implementation might be exposed. So extra scrutiny seems well warranted.\n\nMoving a function to a library does call for various scrutiny. I don't think\nit calls for replacing \"no warnings;\" with \"no warnings; ## no critic\", but\nthat observation is subordinate to your other point:\n\n> Whether the \"extra scrutiny\" involved in perlcritic's higher levels\n> is actually worth anything is a different debate, though, and so far\n> it's not looking like it's worth much :-(\n\nYeah, this is the central point. Many proposed style conformance changes are\n(a) double-entry bookkeeping to emphasize the author's sincerity and (b) regex\nperformance optimization. Those are not better for libraries than for\nnon-libraries, and I think they decrease code quality.\n\nEven if such policies were better for libraries, the proposed patch applies\nthem to .pm files with narrow audiences. If DBD::Pg were in this tree, that\nwould be a different conversation.\n\n\n", "msg_date": "Thu, 16 Apr 2020 23:56:49 -0700", "msg_from": "Noah Misch <noah@leadboat.com>", "msg_from_op": false, "msg_subject": "Re: cleaning perl code" } ]
[ { "msg_contents": "Hi hackers,\n\nWe found that several functions -- namely numeric_combine,\nnumeric_avg_combine, numeric_poly_combine, and int8_avg_combine -- are\nreturning NULL without signaling the nullity of datum in fcinfo.isnull.\nThis is obscured by the fact that the only functions in core (finalfunc\nfor various aggregates) that those return values feed into happen to\ntolerate (or rather, not quite distinguish) zero-but-not-NULL trans\nvalues.\n\nIn Greenplum, this behavior becomes problematic because Greenplum\nserializes internal trans values before spilling the hash table. The\nserial functions (numeric_serialize and friends) are strict functions\nthat will blow up when they are given null (either in the C sense or the\nSQL sense) inputs.\n\nIn Postgres if we change hash aggregation in the future to spill the\nhash table (vis-à-vis the input tuples), this issue would manifest\nitself in the final aggregate because we'll serialize the combined (and\nlikely incorrectly null) trans values.\n\nPlease find attached a small patch fixing said issue. 
Originally\nreported by Denis Smirnov over at\nhttps://github.com/greenplum-db/gpdb/pull/9878\n\nCheers,\nJesse and Deep", "msg_date": "Thu, 9 Apr 2020 16:22:11 -0700", "msg_from": "Jesse Zhang <sbjesse@gmail.com>", "msg_from_op": true, "msg_subject": "Properly mark NULL returns in numeric aggregates" }, { "msg_contents": "Hi,\n\nOn 2020-04-09 16:22:11 -0700, Jesse Zhang wrote:\n> We found that several functions -- namely numeric_combine,\n> numeric_avg_combine, numeric_poly_combine, and int8_avg_combine -- are\n> returning NULL without signaling the nullity of datum in fcinfo.isnull.\n> This is obscured by the fact that the only functions in core (finalfunc\n> for various aggregates) that those return values feed into happen to\n> tolerate (or rather, not quite distinguish) zero-but-not-NULL trans\n> values.\n\nShouldn't these just be marked as strict?\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Fri, 10 Apr 2020 12:14:04 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Properly mark NULL returns in numeric aggregates" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2020-04-09 16:22:11 -0700, Jesse Zhang wrote:\n>> We found that several functions -- namely numeric_combine,\n>> numeric_avg_combine, numeric_poly_combine, and int8_avg_combine -- are\n>> returning NULL without signaling the nullity of datum in fcinfo.isnull.\n>> This is obscured by the fact that the only functions in core (finalfunc\n>> for various aggregates) that those return values feed into happen to\n>> tolerate (or rather, not quite distinguish) zero-but-not-NULL trans\n>> values.\n\n> Shouldn't these just be marked as strict?\n\nNo, certainly not --- they need to be able to act on null inputs.\nThe question is how careful do we need to be about representing\nnull results as \"real\" nulls rather than NULL pointers.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 10 Apr 2020 16:19:56 -0400", "msg_from": "Tom 
Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Properly mark NULL returns in numeric aggregates" }, { "msg_contents": "Hi Andres,\n\nOn Fri, Apr 10, 2020 at 12:14 PM Andres Freund <andres@anarazel.de> wrote:\n>\n> Shouldn't these just be marked as strict?\n>\n\nAre you suggesting that because none of the corresponding trans\nfunctions (int8_avg_accum, int2_accum, and friends) ever output NULL?\nThat's what we thought, but then I realized that an input to a combine\nfunction is not necessarily an output from a trans function invocation:\nfor example, when there is a \"FILTER (WHERE ...)\" clause that filters\nout every tuple in a group, the partial aggregate might just throw a\nNULL state for the final aggregate to combine.\n\nOn the other hand, we examined the corresponding final functions\n(numeric_stddev_pop and friends), they all seem to carefully treat a\nNULL trans value the same as a \"zero input\" (as in, state.N == 0 &&\nstate.NaNcount ==0). That does suggest to me that it should be fine to\ndeclare those combine functions as strict (barring the restriction that\nthey should not be STRICT, anybody remembers why?).\n\nCheers,\nJesse and Deep\n\n\n", "msg_date": "Fri, 10 Apr 2020 15:01:43 -0700", "msg_from": "Jesse Zhang <sbjesse@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Properly mark NULL returns in numeric aggregates" }, { "msg_contents": "Jesse Zhang <sbjesse@gmail.com> writes:\n> On the other hand, we examined the corresponding final functions\n> (numeric_stddev_pop and friends), they all seem to carefully treat a\n> NULL trans value the same as a \"zero input\" (as in, state.N == 0 &&\n> state.NaNcount ==0). That does suggest to me that it should be fine to\n> declare those combine functions as strict (barring the restriction that\n> they should not be STRICT, anybody remembers why?).\n\nThey can't be strict because the initial iteration needs to produce\nsomething from a null state and non-null input. 
nodeAgg's default\nbehavior won't work for those because nodeAgg doesn't know how to\ncopy a value of type \"internal\".\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 10 Apr 2020 18:59:47 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Properly mark NULL returns in numeric aggregates" }, { "msg_contents": "On Fri, Apr 10, 2020 at 3:59 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> They can't be strict because the initial iteration needs to produce\n> something from a null state and non-null input. nodeAgg's default\n> behavior won't work for those because nodeAgg doesn't know how to\n> copy a value of type \"internal\".\n>\n> regards, tom lane\n\nAh, I think I get it. A copy must happen because the input is likely in\na shorter-lived memory context than the state, but nodeAgg's default\nbehavior of copying a by-value datum won't really copy the object\npointed to by the pointer wrapped in the datum of \"internal\" type, so we\ndefer to the combine function. Am I right? Then it follows kinda\nnaturally that those combine functions have been sloppy on arrival since\ncommit 11c8669c0cc .\n\n\nCheers,\nJesse\n\n\n", "msg_date": "Mon, 13 Apr 2020 10:34:00 -0700", "msg_from": "Jesse Zhang <sbjesse@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Properly mark NULL returns in numeric aggregates" }, { "msg_contents": "Jesse Zhang <sbjesse@gmail.com> writes:\n> On Fri, Apr 10, 2020 at 3:59 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> They can't be strict because the initial iteration needs to produce\n>> something from a null state and non-null input. nodeAgg's default\n>> behavior won't work for those because nodeAgg doesn't know how to\n>> copy a value of type \"internal\".\n\n> Ah, I think I get it. 
A copy must happen because the input is likely in\n> a shorter-lived memory context than the state, but nodeAgg's default\n> behavior of copying a by-value datum won't really copy the object\n> pointed to by the pointer wrapped in the datum of \"internal\" type, so we\n> defer to the combine function. Am I right? Then it follows kinda\n> naturally that those combine functions have been sloppy on arrival since\n> commit 11c8669c0cc .\n\nYeah, they're relying exactly on the assumption that nodeAgg is not\ngoing to try to copy a value declared \"internal\", and therefore they\ncan be loosey-goosey about whether the value pointer is null or not.\nHowever, if you want to claim that that's wrong, you have to explain\nwhy it's okay for some other code to be accessing a value that's\ndeclared \"internal\". I'd say that the meaning of that is precisely\n\"keepa u hands off\".\n\nIn the case at hand, the current situation is that we only expect the\nvalues returned by these combine functions to be read by the associated\nfinal functions, which are on board with the null-pointer representation\nof an empty result. Your argument is essentially that it should be\npossible to feed the values to the aggregate's associated serialization\nfunction as well. 
But the core code never does that, so I'm not convinced\nthat we should add it to the requirements; we'd be unable to test it.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 13 Apr 2020 14:13:40 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Properly mark NULL returns in numeric aggregates" }, { "msg_contents": "On Tue, 14 Apr 2020 at 06:14, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Yeah, they're relying exactly on the assumption that nodeAgg is not\n> going to try to copy a value declared \"internal\", and therefore they\n> can be loosey-goosey about whether the value pointer is null or not.\n> However, if you want to claim that that's wrong, you have to explain\n> why it's okay for some other code to be accessing a value that's\n> declared \"internal\". I'd say that the meaning of that is precisely\n> \"keepa u hands off\".\n>\n> In the case at hand, the current situation is that we only expect the\n> values returned by these combine functions to be read by the associated\n> final functions, which are on board with the null-pointer representation\n> of an empty result. Your argument is essentially that it should be\n> possible to feed the values to the aggregate's associated serialization\n> function as well. But the core code never does that, so I'm not convinced\n> that we should add it to the requirements; we'd be unable to test it.\n\nCasting my mind back to when I originally wrote that code, I attempted\nto do so in such a way so that it could one day be used for a 3-stage\naggregation. e.g Parallel Partial Aggregate -> Gather -> Combine\nSerial Aggregate on one node, then on some master node a Deserial\nCombine Finalize Aggregate. You're very right that we can't craft\nsuch a plan with today's master (We didn't even add a supporting enum\nfor it in AggSplit). 
However, it does appear that there are\nextensions or forks out there which attempt to use the code in this\nway, so it would be good to not leave those people out in the cold\nregarding this.\n\nFor testing, can't we just have an Assert() in\nadvance_transition_function that verifies isnull matches the\nnullability of the return value for INTERNAL returning transfns? i.e,\nthe attached\n\nI don't have a test case to hand that could cause this to fail, but it\nsounds like Jesse might.\n\nDavid", "msg_date": "Tue, 14 Apr 2020 17:46:45 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Properly mark NULL returns in numeric aggregates" }, { "msg_contents": "Hi David,\n\nOn Mon, Apr 13, 2020 at 10:46 PM David Rowley <dgrowleyml@gmail.com> wrote:\n>\n> On Tue, 14 Apr 2020 at 06:14, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > Yeah, they're relying exactly on the assumption that nodeAgg is not\n> > going to try to copy a value declared \"internal\", and therefore they\n> > can be loosey-goosey about whether the value pointer is null or not.\n> > However, if you want to claim that that's wrong, you have to explain\n> > why it's okay for some other code to be accessing a value that's\n> > declared \"internal\". I'd say that the meaning of that is precisely\n> > \"keepa u hands off\".\n> >\n> > In the case at hand, the current situation is that we only expect the\n> > values returned by these combine functions to be read by the associated\n> > final functions, which are on board with the null-pointer representation\n> > of an empty result. Your argument is essentially that it should be\n> > possible to feed the values to the aggregate's associated serialization\n> > function as well. 
But the core code never does that, so I'm not convinced\n> > that we should add it to the requirements; we'd be unable to test it.\n>\n> Casting my mind back to when I originally wrote that code, I attempted\n> to do so in such a way so that it could one day be used for a 3-stage\n> aggregation. e.g Parallel Partial Aggregate -> Gather -> Combine\n> Serial Aggregate on one node, then on some master node a Deserial\n> Combine Finalize Aggregate. You're very right that we can't craft\n> such a plan with today's master (We didn't even add a supporting enum\n> for it in AggSplit). However, it does appear that there are\n> extensions or forks out there which attempt to use the code in this\n> way, so it would be good to not leave those people out in the cold\n> regarding this.\n\nGreenplum plans split-aggregation quite similarly to Postgres: while\nit doesn't pass partial results through an intra-cluster \"Gather\" --\nusing a reshuffle-by-hash type operation instead -- Greenplum _does_\nsplit an aggregate into final and partial halves, running them on\ndifferent nodes. In short, the relationship among the combine, serial,\nand deserial functions is similar to what it is in Postgres today\n(serial->deserial->combine), in the context of splitting aggregates.\nThe current problem arises because Greenplum spills the hash table in\nhash aggregation (a diff we're working actively to upstream), a process\nin which we have to touch (read: serialize and copy) the internal trans\nvalues. However, we are definitely eyeing what you described as\nsomething to move towards.\n\nAs a fork, we'd like to carry as thin a diff as possible. 
So the current\nsituation is pretty much forcing us to diverge in the functions\nmentioned up-thread.\n\nIn hindsight, \"sloppy\" might not have been a wise choice of words,\napologies for the possible offense, David!\n\n>\n> For testing, can't we just have an Assert() in\n> advance_transition_function that verifies isnull matches the\n> nullability of the return value for INTERNAL returning transfns? i.e,\n> the attached\n>\n> I don't have a test case to hand that could cause this to fail, but it\n> sounds like Jesse might.\n\nOne easy way to cause this is \"sum(x) FILTER (WHERE false)\" which will\nfor sure make the partial results NULL. Is that what you're looking for?\nI'll be happy to send in the SQL.\n\nCheers,\nJesse\n\n\n", "msg_date": "Tue, 14 Apr 2020 08:31:25 -0700", "msg_from": "Jesse Zhang <sbjesse@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Properly mark NULL returns in numeric aggregates" }, { "msg_contents": "David Rowley <dgrowleyml@gmail.com> writes:\n> For testing, can't we just have an Assert() in\n> advance_transition_function that verifies isnull matches the\n> nullability of the return value for INTERNAL returning transfns? i.e,\n> the attached\n\nFTR, I do not like this Assert one bit. nodeAgg.c has NO business\ninquiring into the contents of internal-type Datums. It has even\nless business enforcing a particular Datum value for a SQL null ---\nwe have always, throughout the system, considered that if isnull\nis true then the contents of the Datum are unspecified. 
I think\nthis is much more likely to cause problems than solve any.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 14 Apr 2020 11:41:23 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Properly mark NULL returns in numeric aggregates" }, { "msg_contents": "On Wed, 15 Apr 2020 at 03:41, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> David Rowley <dgrowleyml@gmail.com> writes:\n> > For testing, can't we just have an Assert() in\n> > advance_transition_function that verifies isnull matches the\n> > nullability of the return value for INTERNAL returning transfns? i.e,\n> > the attached\n>\n> FTR, I do not like this Assert one bit. nodeAgg.c has NO business\n> inquiring into the contents of internal-type Datums. It has even\n> less business enforcing a particular Datum value for a SQL null ---\n> we have always, throughout the system, considered that if isnull\n> is true then the contents of the Datum are unspecified. I think\n> this is much more likely to cause problems than solve any.\n\nOK. the latter case could be ignored by adding an OR condition to the\nAssert to allow isnull == false cases to pass without any\nconsideration to the Datum value, but it sounds like you don't want to\ninsist that isnull == true returns NULL a pointer.\n\nFWIW, I agree with Jesse that having numeric_combine() return a NULL\npointer without properly setting the isnull flag is pretty bad and it\nshould be fixed regardless. Not fixing it, even in the absence of\nhaving a good way to test it just seems like we're leaving something\naround that we're going to trip up on in the future. Serialization\nfunctions crashing after receiving input from a combine function seems\npretty busted to me, regardless if there is a pathway for the\nfunctions to be called in that order in core or not. I'm not a fan of\nleaving it in just because testing for it might not be easy. 
One\nproblem with coming up with a way of testing from an SQL level will be\nthat we'll need to pick some aggregate functions that currently have\nthis issue and ensure they don't regress. There's not much we can do\nto ensure any new aggregates we might create in the future don't go and\nbreak this rule. That's why I thought that the Assert might be more\nuseful.\n\nI don't think it would be impossible to test this using an extension\nand using the create_upper_paths_hook. I see that test_rls_hooks\nwhich runs during make check-world does hook into the RLS hooks to\ntest some behaviour. I don't think it would be too tricky to have a\nhook implement a 3-stage aggregate plan with the middle stage doing a\ndeserial/combine/serial before passing to the Finalize Aggregate node.\nThat would allow us to ensure serial functions can accept the results\nfrom combine functions, which nothing in core currently does.\n\nDavid\n\n\n", "msg_date": "Wed, 15 Apr 2020 15:48:36 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Properly mark NULL returns in numeric aggregates" } ]
[ { "msg_contents": "Hi,\n\nWe use (an equivalent of) the PAUSE instruction in spin_delay() for\nIntel architectures. The goal is to slow down the spinlock tight loop\nand thus prevent it from eating CPU and causing CPU starvation, so\nthat other processes get their fair share of the CPU time. Intel\ndocumentation [1] clearly mentions this, along with other benefits of\nPAUSE, like, low power consumption, and avoidance of memory order\nviolation while exiting the loop.\n\nSimilar to PAUSE, the ARM architecture has YIELD instruction, which is\nalso clearly documented [2]. It explicitly says that it is a way to\nhint the CPU that it is being called in a spinlock loop and this\nprocess can be preempted out. But for ARM, we are not using any kind\nof spin delay.\n\nFor PG spinlocks, the goal of both of these instructions are the same,\nand also both architectures recommend using them in spinlock loops.\nAlso, I found multiple places where YIELD is already used in same\nsituations : Linux kernel [3] ; OpenJDK [4],[5]\n\nNow, for ARM implementations that don't implement YIELD, it runs as a\nno-op. Unfortunately the ARM machine I have does not implement YIELD.\nBut recently there has been some ARM implementations that are\nhyperthreaded, so they are expected to actually do the YIELD, although\nthe docs do not explicitly say that YIELD has to be implemented only\nby hyperthreaded implementations.\n\nI ran some pgbench tests to test PAUSE/YIELD on the respective\narchitectures, once with the instruction present, and once with the\ninstruction removed. Didn't see change in the TPS numbers; they were\nmore or less same. For Arm, this was expected because my ARM machine\ndoes not implement it.\n\nOn my Intel Xeon machine with 8 cores, I tried to test PAUSE also\nusing a sample C program (attached spin.c). 
Here, many child processes\n(much more than CPUs) wait in a tight loop for a shared variable to\nbecome 0, while the parent process continuously increments a sequence\nnumber for a fixed amount of time, after which, it sets the shared\nvariable to 0. The child's tight loop calls PAUSE in each iteration.\nWhat I hoped was that because of PAUSE in children, the parent process\nwould get more share of the CPU, due to which, in a given time, the\nsequence number will reach a higher value. Also, I expected the CPU\ncycles spent by child processes to drop down, thanks to PAUSE. None of\nthese happened. There was no change.\n\nPossibly, this testcase is not right. Probably the process preemption\noccurs only within the set of hyperthreads attached to a single core.\nAnd in my testcase, the parent process is the only one who is ready to\nrun. Still, I have anyway attached the program (spin.c) for archival;\nin case somebody with a YIELD-supporting ARM machine wants to use it\nto test YIELD.\n\nNevertheless, I think because we have clear documentation that\nstrongly recommends to use it, and because it has been used in other\nuse-cases such as linux kernel and JDK, we should start using YIELD\nfor spin_delay() in ARM.\n\nAttached is the trivial patch (spin_delay_for_arm.patch). To start\nwith, it contains changes only for aarch64. I haven't yet added\nchanges in configure[.in] for making sure yield compiles successfully\n(YIELD is present in manuals from ARMv6 onwards). 
Before that I\nthought of getting some comments; so didn't do configure changes yet.\n\n\n[1] https://c9x.me/x86/html/file_module_x86_id_232.html\n[2] https://developer.arm.com/docs/100076/0100/instruction-set-reference/a64-general-instructions/yield\n[3] https://elixir.bootlin.com/linux/latest/source/arch/arm64/include/asm/processor.h#L259\n[4] http://cr.openjdk.java.net/~dchuyko/8186670/yield/spinwait.html\n[5] http://mail.openjdk.java.net/pipermail/aarch64-port-dev/2017-August/004880.html\n\n\n--\nThanks,\n-Amit Khandekar\nHuawei Technologies", "msg_date": "Fri, 10 Apr 2020 13:09:13 +0530", "msg_from": "Amit Khandekar <amitdkhan.pg@gmail.com>", "msg_from_op": true, "msg_subject": "spin_delay() for ARM" }, { "msg_contents": "Hi,\n\nOn 2020-04-10 13:09:13 +0530, Amit Khandekar wrote:\n> On my Intel Xeon machine with 8 cores, I tried to test PAUSE also\n> using a sample C program (attached spin.c). Here, many child processes\n> (much more than CPUs) wait in a tight loop for a shared variable to\n> become 0, while the parent process continuously increments a sequence\n> number for a fixed amount of time, after which, it sets the shared\n> variable to 0. The child's tight loop calls PAUSE in each iteration.\n> What I hoped was that because of PAUSE in children, the parent process\n> would get more share of the CPU, due to which, in a given time, the\n> sequence number will reach a higher value. Also, I expected the CPU\n> cycles spent by child processes to drop down, thanks to PAUSE. None of\n> these happened. There was no change.\n\n> Possibly, this testcase is not right. Probably the process preemption\n> occurs only within the set of hyperthreads attached to a single core.\n> And in my testcase, the parent process is the only one who is ready to\n> run. Still, I have anyway attached the program (spin.c) for archival;\n> in case somebody with a YIELD-supporting ARM machine wants to use it\n> to test YIELD.\n\nPAUSE doesn't operate on the level of the CPU scheduler. 
So the OS won't\njust schedule another process - you won't see different CPU usage if you\nmeasure it purely as the time running. You should be able to see a\ndifference if you measure with a profiler that shows you data from the\nCPUs performance monitoring unit.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Fri, 10 Apr 2020 12:17:52 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: spin_delay() for ARM" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2020-04-10 13:09:13 +0530, Amit Khandekar wrote:\n>> On my Intel Xeon machine with 8 cores, I tried to test PAUSE also\n>> using a sample C program (attached spin.c).\n\n> PAUSE doesn't operate on the level of the CPU scheduler. So the OS won't\n> just schedule another process - you won't see different CPU usage if you\n> measure it purely as the time running. You should be able to see a\n> difference if you measure with a profiler that shows you data from the\n> CPUs performance monitoring unit.\n\nA more useful test would be to directly experiment with contended\nspinlocks. As I recall, we had some test cases laying about when\nwe were fooling with the spin delay stuff on Intel --- maybe\nresurrecting one of those would be useful?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 10 Apr 2020 16:22:09 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: spin_delay() for ARM" }, { "msg_contents": "I wrote:\n> A more useful test would be to directly experiment with contended\n> spinlocks. 
As I recall, we had some test cases laying about when\n> we were fooling with the spin delay stuff on Intel --- maybe\n> resurrecting one of those would be useful?\n\nThe last really significant performance testing we did in this area\nseems to have been in this thread:\n\nhttps://www.postgresql.org/message-id/flat/CA%2BTgmoZvATZV%2BeLh3U35jaNnwwzLL5ewUU_-t0X%3DT0Qwas%2BZdA%40mail.gmail.com\n\nA relevant point from that is Haas' comment\n\n I think optimizing spinlocks for machines with only a few CPUs is\n probably pointless. Based on what I've seen so far, spinlock\n contention even at 16 CPUs is negligible pretty much no matter what\n you do. Whether your implementation is fast or slow isn't going to\n matter, because even an inefficient implementation will account for\n only a negligible percentage of the total CPU time - much less than 1%\n - as opposed to a 64-core machine, where it's not that hard to find\n cases where spin-waits consume the *majority* of available CPU time\n (recall previous discussion of lseek).\n\nSo I wonder whether this patch is getting ahead of the game. It does\nseem that ARM systems with a couple dozen cores exist, but are they\ncommon enough to optimize for yet? Can we even find *one* to test on\nand verify that this is a win and not a loss? (Also, seeing that\nthere are so many different ARM vendors, results from just one\nchipset might not be too trustworthy ...)\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 10 Apr 2020 18:48:32 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: spin_delay() for ARM" }, { "msg_contents": "On Sat, 11 Apr 2020 at 00:47, Andres Freund <andres@anarazel.de> wrote:\n>\n> Hi,\n>\n> On 2020-04-10 13:09:13 +0530, Amit Khandekar wrote:\n> > On my Intel Xeon machine with 8 cores, I tried to test PAUSE also\n> > using a sample C program (attached spin.c). 
Here, many child processes\n> > (much more than CPUs) wait in a tight loop for a shared variable to\n> > become 0, while the parent process continuously increments a sequence\n> > number for a fixed amount of time, after which, it sets the shared\n> > variable to 0. The child's tight loop calls PAUSE in each iteration.\n> > What I hoped was that because of PAUSE in children, the parent process\n> > would get more share of the CPU, due to which, in a given time, the\n> > sequence number will reach a higher value. Also, I expected the CPU\n> > cycles spent by child processes to drop down, thanks to PAUSE. None of\n> > these happened. There was no change.\n>\n> > Possibly, this testcase is not right. Probably the process preemption\n> > occurs only within the set of hyperthreads attached to a single core.\n> > And in my testcase, the parent process is the only one who is ready to\n> > run. Still, I have anyway attached the program (spin.c) for archival;\n> > in case somebody with a YIELD-supporting ARM machine wants to use it\n> > to test YIELD.\n>\n> PAUSE doesn't operate on the level of the CPU scheduler. 
So the OS won't\n> just schedule another process - you won't see different CPU usage if you\n> measure it purely as the time running.\n\nYeah, I thought that the OS scheduling would be an *indirect* consequence\nof the pause because of it's slowing down the CPU, but looks like that does\nnot happen.\n\n\n> You should be able to see a\n> difference if you measure with a profiler that shows you data from the\n> CPUs performance monitoring unit.\nHmm, I had tried with perf and could see the pause itself consuming 5% cpu.\nBut I haven't yet played with per-process figures.\n\n\n\n-- \nThanks,\n-Amit Khandekar\nHuawei Technologies\n", "msg_date": "Mon, 13 Apr 2020 20:15:53 +0530", "msg_from": "Amit Khandekar <amitdkhan.pg@gmail.com>", "msg_from_op": true, "msg_subject": "Re: spin_delay() for ARM" }, { "msg_contents": "On Sat, 11 Apr 2020 at 04:18, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> I wrote:\n> > A more useful test would be to directly experiment with contended\n> > spinlocks. As I recall, we had some test cases laying about when\n> > we were fooling with the spin delay stuff on Intel --- maybe\n> > resurrecting one of those would be useful?\n>\n> The last really significant performance testing we did in this area\n> seems to have been in this thread:\n>\n>\nhttps://www.postgresql.org/message-id/flat/CA%2BTgmoZvATZV%2BeLh3U35jaNnwwzLL5ewUU_-t0X%3DT0Qwas%2BZdA%40mail.gmail.com\n>\n> A relevant point from that is Haas' comment\n>\n> I think optimizing spinlocks for machines with only a few CPUs is\n> probably pointless. Based on what I've seen so far, spinlock\n> contention even at 16 CPUs is negligible pretty much no matter what\n> you do.
Whether your implementation is fast or slow isn't going to\n> matter, because even an inefficient implementation will account for\n> only a negligible percentage of the total CPU time - much less than 1%\n> - as opposed to a 64-core machine, where it's not that hard to find\n> cases where spin-waits consume the *majority* of available CPU time\n> (recall previous discussion of lseek).\n\nYeah, will check if I find some machines with large cores.\n\n\n> So I wonder whether this patch is getting ahead of the game. It does\n> seem that ARM systems with a couple dozen cores exist, but are they\n> common enough to optimize for yet? Can we even find *one* to test on\n> and verify that this is a win and not a loss? (Also, seeing that\n> there are so many different ARM vendors, results from just one\n> chipset might not be too trustworthy ...)\n\nOk. Yes, it would be worth waiting to see if there are others in the\ncommunity with ARM systems that have implemented YIELD. May be after that\nwe might gain some confidence. I myself also hope that I will get one soon\nto test, but right now I have one that does not support it, so it will be\njust a no-op.\n\n-- \nThanks,\n-Amit Khandekar\nHuawei Technologies\n", "msg_date": "Mon, 13 Apr 2020 20:16:36 +0530", "msg_from": "Amit Khandekar <amitdkhan.pg@gmail.com>", "msg_from_op": true, "msg_subject": "Re: spin_delay() for ARM" }, { "msg_contents": "On Mon, 13 Apr 2020 at 20:16, Amit Khandekar <amitdkhan.pg@gmail.com> wrote:\n> On Sat, 11 Apr 2020 at 04:18, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >\n> > I wrote:\n> > > A more useful test would be to directly experiment with contended\n> > > spinlocks. As I recall, we had some test cases laying about when\n> > > we were fooling with the spin delay stuff on Intel --- maybe\n> > > resurrecting one of those would be useful?\n> >\n> > The last really significant performance testing we did in this area\n> > seems to have been in this thread:\n> >\n> > https://www.postgresql.org/message-id/flat/CA%2BTgmoZvATZV%2BeLh3U35jaNnwwzLL5ewUU_-t0X%3DT0Qwas%2BZdA%40mail.gmail.com\n> >\n> > A relevant point from that is Haas' comment\n> >\n> > I think optimizing spinlocks for machines with only a few CPUs is\n> > probably pointless. Based on what I've seen so far, spinlock\n> > contention even at 16 CPUs is negligible pretty much no matter what\n> > you do. Whether your implementation is fast or slow isn't going to\n> > matter, because even an inefficient implementation will account for\n> > only a negligible percentage of the total CPU time - much less than 1%\n> > - as opposed to a 64-core machine, where it's not that hard to find\n> > cases where spin-waits consume the *majority* of available CPU time\n> > (recall previous discussion of lseek).\n>\n> Yeah, will check if I find some machines with large cores.\n\nI got hold of a 32 CPUs VM (actually it was a 16-core, but being\nhyperthreaded, CPUs were 32).\nIt was an Intel Xeon , 3Gz CPU. 15G available memory.
Hypervisor :\nKVM. Single NUMA node.\nPG parameters changed : shared_buffer: 8G ; max_connections : 1000\n\nI compared pgbench results with HEAD versus PAUSE removed like this :\n perform_spin_delay(SpinDelayStatus *status)\n {\n- /* CPU-specific delay each time through the loop */\n- SPIN_DELAY();\n\nRan with increasing number of parallel clients :\npgbench -S -c $num -j $num -T 60 -M prepared\nBut couldn't find any significant change in the TPS numbers with or\nwithout PAUSE:\n\nClients HEAD Without_PAUSE\n8 244446 247264\n16 399939 399549\n24 454189 453244\n32 1097592 1098844\n40 1090424 1087984\n48 1068645 1075173\n64 1035035 1039973\n96 976578 970699\n\nMay be it will indeed show some difference only with around 64 cores,\nor perhaps a bare metal machine will help; but as of now I didn't get\nsuch a machine. Anyways, I thought why not archive the results with\nwhatever I have.\n\nNot relevant to the PAUSE stuff .... Note that when the parallel\nclients reach from 24 to 32 (which equals the machine CPUs), the TPS\nshoots from 454189 to 1097592 which is more than double speed gain\nwith just a 30% increase in parallel sessions. I was not expecting\nthis much speed gain, because, with contended scenario already pgbench\nprocesses are already taking around 20% of the total CPU time of\npgbench run. May be later on, I will get a chance to run with some\ncustomized pgbench script that runs a server function which keeps on\nrunning an index scan on pgbench_accounts, so as to make pgbench\nclients almost idle.\n\nThanks\n-Amit Khandekar\n\n\n", "msg_date": "Thu, 16 Apr 2020 12:48:18 +0530", "msg_from": "Amit Khandekar <amitdkhan.pg@gmail.com>", "msg_from_op": true, "msg_subject": "Re: spin_delay() for ARM" }, { "msg_contents": "čt 16. 4. 
2020 v 9:18 odesílatel Amit Khandekar <amitdkhan.pg@gmail.com>\nnapsal:\n\n> On Mon, 13 Apr 2020 at 20:16, Amit Khandekar <amitdkhan.pg@gmail.com>\n> wrote:\n> > On Sat, 11 Apr 2020 at 04:18, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > >\n> > > I wrote:\n> > > > A more useful test would be to directly experiment with contended\n> > > > spinlocks. As I recall, we had some test cases laying about when\n> > > > we were fooling with the spin delay stuff on Intel --- maybe\n> > > > resurrecting one of those would be useful?\n> > >\n> > > The last really significant performance testing we did in this area\n> > > seems to have been in this thread:\n> > >\n> > >\n> https://www.postgresql.org/message-id/flat/CA%2BTgmoZvATZV%2BeLh3U35jaNnwwzLL5ewUU_-t0X%3DT0Qwas%2BZdA%40mail.gmail.com\n> > >\n> > > A relevant point from that is Haas' comment\n> > >\n> > > I think optimizing spinlocks for machines with only a few CPUs is\n> > > probably pointless. Based on what I've seen so far, spinlock\n> > > contention even at 16 CPUs is negligible pretty much no matter what\n> > > you do. Whether your implementation is fast or slow isn't going to\n> > > matter, because even an inefficient implementation will account for\n> > > only a negligible percentage of the total CPU time - much less\n> than 1%\n> > > - as opposed to a 64-core machine, where it's not that hard to find\n> > > cases where spin-waits consume the *majority* of available CPU time\n> > > (recall previous discussion of lseek).\n> >\n> > Yeah, will check if I find some machines with large cores.\n>\n> I got hold of a 32 CPUs VM (actually it was a 16-core, but being\n> hyperthreaded, CPUs were 32).\n> It was an Intel Xeon , 3Gz CPU. 15G available memory. Hypervisor :\n> KVM. 
Single NUMA node.\n> PG parameters changed : shared_buffer: 8G ; max_connections : 1000\n>\n> I compared pgbench results with HEAD versus PAUSE removed like this :\n> perform_spin_delay(SpinDelayStatus *status)\n> {\n> - /* CPU-specific delay each time through the loop */\n> - SPIN_DELAY();\n>\n> Ran with increasing number of parallel clients :\n> pgbench -S -c $num -j $num -T 60 -M prepared\n> But couldn't find any significant change in the TPS numbers with or\n> without PAUSE:\n>\n> Clients HEAD Without_PAUSE\n> 8 244446 247264\n> 16 399939 399549\n> 24 454189 453244\n> 32 1097592 1098844\n> 40 1090424 1087984\n> 48 1068645 1075173\n> 64 1035035 1039973\n> 96 976578 970699\n>\n> May be it will indeed show some difference only with around 64 cores,\n> or perhaps a bare metal machine will help; but as of now I didn't get\n> such a machine. Anyways, I thought why not archive the results with\n> whatever I have.\n>\n> Not relevant to the PAUSE stuff .... Note that when the parallel\n> clients reach from 24 to 32 (which equals the machine CPUs), the TPS\n> shoots from 454189 to 1097592 which is more than double speed gain\n> with just a 30% increase in parallel sessions. I was not expecting\n> this much speed gain, because, with contended scenario already pgbench\n> processes are already taking around 20% of the total CPU time of\n> pgbench run. May be later on, I will get a chance to run with some\n> customized pgbench script that runs a server function which keeps on\n> running an index scan on pgbench_accounts, so as to make pgbench\n> clients almost idle.\n>\n\nwhat I know, pgbench cannot be used for testing spinlocks problems.\n\nMaybe you can see this issue when a) use higher number clients - hundreds,\nthousands. Decrease share memory, so there will be press on related spin\nlock.\n\nRegards\n\nPavel\n\n\n> Thanks\n> -Amit Khandekar\n>\n>\n>\n", "msg_date": "Thu, 16 Apr 2020 09:32:34 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": false, "msg_subject": "Re: spin_delay() for ARM" }, { "msg_contents": "On Thu, 16 Apr 2020 at 10:33, Pavel Stehule <pavel.stehule@gmail.com> wrote:\n> what I know, pgbench cannot be used for testing spinlocks problems.\n>\n> Maybe you can see this issue when a) use higher number clients - hundreds, thousands. Decrease share memory, so there will be press on related spin lock.\n\nThere really aren't many spinlocks left that could be tickled by a\nnormal workload. I looked for a way to trigger spinlock contention\nwhen I prototyped a patch to replace spinlocks with futexes. The only\none that I could figure out a way to make contended was the lock\nprotecting parallel btree scan. A highly parallel index only scan on a\nfully cached index should create at least some spinlock contention.\n\nRegards,\nAnts Aasma\n\n\n", "msg_date": "Fri, 17 Apr 2020 16:59:46 +0300", "msg_from": "Ants Aasma <ants@cybertec.at>", "msg_from_op": false, "msg_subject": "Re: spin_delay() for ARM" }, { "msg_contents": "On Thu, Apr 16, 2020 at 3:18 AM Amit Khandekar <amitdkhan.pg@gmail.com> wrote:\n> Not relevant to the PAUSE stuff .... Note that when the parallel\n> clients reach from 24 to 32 (which equals the machine CPUs), the TPS\n> shoots from 454189 to 1097592 which is more than double speed gain\n> with just a 30% increase in parallel sessions.\n\nI've seen stuff like this too.
For instance, check out the graph from\nthis 2012 blog post:\n\nhttp://rhaas.blogspot.com/2012/04/did-i-say-32-cores-how-about-64.html\n\nYou can see that the performance growth is basically on a straight\nline up to about 16 cores, but then it kinks downward until about 28,\nafter which it kinks sharply upward until about 36 cores.\n\nI think this has something to do with the process scheduling behavior\nof Linux, because I vaguely recall some discussion where somebody did\nbenchmarking on the same hardware on both Linux and one of the BSD\nsystems, and the effect didn't appear on BSD. They had other problems,\nlike a huge drop-off at higher core counts, but they didn't have that\neffect.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Fri, 17 Apr 2020 13:24:09 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: spin_delay() for ARM" }, { "msg_contents": "On Sat, Apr 18, 2020 at 2:00 AM Ants Aasma <ants@cybertec.at> wrote:\n> On Thu, 16 Apr 2020 at 10:33, Pavel Stehule <pavel.stehule@gmail.com> wrote:\n> > what I know, pgbench cannot be used for testing spinlocks problems.\n> >\n> > Maybe you can see this issue when a) use higher number clients - hundreds, thousands. Decrease share memory, so there will be press on related spin lock.\n>\n> There really aren't many spinlocks left that could be tickled by a\n> normal workload. I looked for a way to trigger spinlock contention\n> when I prototyped a patch to replace spinlocks with futexes. The only\n> one that I could figure out a way to make contended was the lock\n> protecting parallel btree scan. A highly parallel index only scan on a\n> fully cached index should create at least some spinlock contention.\n\nI suspect the snapshot-too-old \"mutex_threshold\" spinlock can become\ncontended under workloads that generate a high rate of\nheap_page_prune_opt() calls with old_snapshot_threshold enabled. 
One\nway to do that is with a bunch of concurrent index scans that hit the\nheap in random order. Some notes about that:\n\nhttps://www.postgresql.org/message-id/flat/CA%2BhUKGKT8oTkp5jw_U4p0S-7UG9zsvtw_M47Y285bER6a2gD%2Bg%40mail.gmail.com\n\n\n", "msg_date": "Sat, 18 Apr 2020 09:59:52 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: spin_delay() for ARM" }, { "msg_contents": "On Sat, 18 Apr 2020 at 03:30, Thomas Munro <thomas.munro@gmail.com> wrote:\n>\n> On Sat, Apr 18, 2020 at 2:00 AM Ants Aasma <ants@cybertec.at> wrote:\n> > On Thu, 16 Apr 2020 at 10:33, Pavel Stehule <pavel.stehule@gmail.com> wrote:\n> > > what I know, pgbench cannot be used for testing spinlocks problems.\n> > >\n> > > Maybe you can see this issue when a) use higher number clients - hundreds, thousands. Decrease share memory, so there will be press on related spin lock.\n> >\n> > There really aren't many spinlocks left that could be tickled by a\n> > normal workload. I looked for a way to trigger spinlock contention\n> > when I prototyped a patch to replace spinlocks with futexes. The only\n> > one that I could figure out a way to make contended was the lock\n> > protecting parallel btree scan. A highly parallel index only scan on a\n> > fully cached index should create at least some spinlock contention.\n>\n> I suspect the snapshot-too-old \"mutex_threshold\" spinlock can become\n> contended under workloads that generate a high rate of\n> heap_page_prune_opt() calls with old_snapshot_threshold enabled. One\n> way to do that is with a bunch of concurrent index scans that hit the\n> heap in random order. Some notes about that:\n>\n> https://www.postgresql.org/message-id/flat/CA%2BhUKGKT8oTkp5jw_U4p0S-7UG9zsvtw_M47Y285bER6a2gD%2Bg%40mail.gmail.com\n\nThanks all for the inputs. 
Will keep these two particular scenarios in\nmind, and try to get some bandwidth on this soon.\n\n\n-- \nThanks,\n-Amit Khandekar\nHuawei Technologies\n\n\n", "msg_date": "Tue, 21 Apr 2020 09:55:10 +0530", "msg_from": "Amit Khandekar <amitdkhan.pg@gmail.com>", "msg_from_op": true, "msg_subject": "Re: spin_delay() for ARM" } ]
[ { "msg_contents": "Hi,\n\nI have noticed that attempting to use pg_basebackup from HEAD leads to\nfailures when using it with backend versions from 12 and older:\n$ pg_basebackup -D hoge\npg_basebackup: error: backup manifests are not supported by server\nversion 12beta2\npg_basebackup: removing data directory \"hoge\"\n\nThis is a bit backwards with what we did in the past to maintain\ncompatibility silently when possible, for example look at the handling\nof temporary replication slots. Instead of an error when means to\nforce users to have to specify --no-manifest in this case, shouldn't\nwe silently disable the generation of the backup manifest? We know\nthat this option won't work on older server versions anyway.\n\nThanks,\n--\nMichael", "msg_date": "Fri, 10 Apr 2020 17:09:10 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": true, "msg_subject": "pg_basebackup, manifests and backends older than ~12" }, { "msg_contents": "On 4/10/20 4:09 AM, Michael Paquier wrote:\n> \n> I have noticed that attempting to use pg_basebackup from HEAD leads to\n> failures when using it with backend versions from 12 and older:\n> $ pg_basebackup -D hoge\n> pg_basebackup: error: backup manifests are not supported by server\n> version 12beta2\n> pg_basebackup: removing data directory \"hoge\"\n> \n> This is a bit backwards with what we did in the past to maintain\n> compatibility silently when possible, for example look at the handling\n> of temporary replication slots. Instead of an error when means to\n> force users to have to specify --no-manifest in this case, shouldn't\n> we silently disable the generation of the backup manifest? We know\n> that this option won't work on older server versions anyway.\n\nI'm a bit conflicted here. 
I see where you are coming from, but given \nthat writing a manifest is now the default I'm not sure silently \nskipping it is ideal.\n\nRegards,\n-- \n-David\ndavid@pgmasters.net\n\n\n", "msg_date": "Fri, 10 Apr 2020 16:32:08 -0400", "msg_from": "David Steele <david@pgmasters.net>", "msg_from_op": false, "msg_subject": "Re: pg_basebackup, manifests and backends older than ~12" }, { "msg_contents": "Greetings,\n\n* David Steele (david@pgmasters.net) wrote:\n> On 4/10/20 4:09 AM, Michael Paquier wrote:\n> >I have noticed that attempting to use pg_basebackup from HEAD leads to\n> >failures when using it with backend versions from 12 and older:\n> >$ pg_basebackup -D hoge\n> >pg_basebackup: error: backup manifests are not supported by server\n> >version 12beta2\n> >pg_basebackup: removing data directory \"hoge\"\n> >\n> >This is a bit backwards with what we did in the past to maintain\n> >compatibility silently when possible, for example look at the handling\n> >of temporary replication slots. Instead of an error when means to\n> >force users to have to specify --no-manifest in this case, shouldn't\n> >we silently disable the generation of the backup manifest? We know\n> >that this option won't work on older server versions anyway.\n> \n> I'm a bit conflicted here. I see where you are coming from, but given that\n> writing a manifest is now the default I'm not sure silently skipping it is\n> ideal.\n\nIt's only the default in v13.. 
Surely when we connect to a v12 or\nearlier system we should just keep working and accept that we don't get\na manifest as part of that.\n\nThanks,\n\nStephen", "msg_date": "Fri, 10 Apr 2020 16:41:10 -0400", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: pg_basebackup, manifests and backends older than ~12" }, { "msg_contents": "On 4/10/20 4:41 PM, Stephen Frost wrote:\n> Greetings,\n> \n> * David Steele (david@pgmasters.net) wrote:\n>> On 4/10/20 4:09 AM, Michael Paquier wrote:\n>>> I have noticed that attempting to use pg_basebackup from HEAD leads to\n>>> failures when using it with backend versions from 12 and older:\n>>> $ pg_basebackup -D hoge\n>>> pg_basebackup: error: backup manifests are not supported by server\n>>> version 12beta2\n>>> pg_basebackup: removing data directory \"hoge\"\n>>>\n>>> This is a bit backwards with what we did in the past to maintain\n>>> compatibility silently when possible, for example look at the handling\n>>> of temporary replication slots. Instead of an error when means to\n>>> force users to have to specify --no-manifest in this case, shouldn't\n>>> we silently disable the generation of the backup manifest? We know\n>>> that this option won't work on older server versions anyway.\n>>\n>> I'm a bit conflicted here. I see where you are coming from, but given that\n>> writing a manifest is now the default I'm not sure silently skipping it is\n>> ideal.\n> \n> It's only the default in v13.. Surely when we connect to a v12 or\n> earlier system we should just keep working and accept that we don't get\n> a manifest as part of that.\n\nYeah, OK. 
It's certainly better than forcing the user to disable \nmanifests, which might also disable them for v13 clusters.\n\n-- \n-David\ndavid@pgmasters.net\n\n\n", "msg_date": "Fri, 10 Apr 2020 16:44:34 -0400", "msg_from": "David Steele <david@pgmasters.net>", "msg_from_op": false, "msg_subject": "Re: pg_basebackup, manifests and backends older than ~12" }, { "msg_contents": "Hi,\n\nOn 2020-04-10 16:32:08 -0400, David Steele wrote:\n> On 4/10/20 4:09 AM, Michael Paquier wrote:\n> > \n> > I have noticed that attempting to use pg_basebackup from HEAD leads to\n> > failures when using it with backend versions from 12 and older:\n> > $ pg_basebackup -D hoge\n> > pg_basebackup: error: backup manifests are not supported by server\n> > version 12beta2\n> > pg_basebackup: removing data directory \"hoge\"\n> > \n> > This is a bit backwards with what we did in the past to maintain\n> > compatibility silently when possible, for example look at the handling\n> > of temporary replication slots. Instead of an error when means to\n> > force users to have to specify --no-manifest in this case, shouldn't\n> > we silently disable the generation of the backup manifest? We know\n> > that this option won't work on older server versions anyway.\n> \n> I'm a bit conflicted here. I see where you are coming from, but given that\n> writing a manifest is now the default I'm not sure silently skipping it is\n> ideal.\n\nI think we at the very least should add a hint about how to perform a\nbackup without a manifest.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Fri, 10 Apr 2020 17:41:18 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: pg_basebackup, manifests and backends older than ~12" }, { "msg_contents": "On Fri, Apr 10, 2020 at 04:44:34PM -0400, David Steele wrote:\n> On 4/10/20 4:41 PM, Stephen Frost wrote:\n>> It's only the default in v13.. 
Surely when we connect to a v12 or\n>> earlier system we should just keep working and accept that we don't get\n>> a manifest as part of that.\n> \n> Yeah, OK. It's certainly better than forcing the user to disable manifests,\n> which might also disable them for v13 clusters.\n\nExactly. My point is exactly that. The current code would force\nusers maintaining scripts with pg_basebackup to use --no-manifest if\nsuch a script runs with older versions of Postgres, but we should\nencourage users not do to that because we want them to use manifests\nwith backend versions where they are supported.\n--\nMichael", "msg_date": "Sun, 12 Apr 2020 08:08:17 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": true, "msg_subject": "Re: pg_basebackup, manifests and backends older than ~12" }, { "msg_contents": "On Sun, Apr 12, 2020 at 08:08:17AM +0900, Michael Paquier wrote:\n> Exactly. My point is exactly that. The current code would force\n> users maintaining scripts with pg_basebackup to use --no-manifest if\n> such a script runs with older versions of Postgres, but we should\n> encourage users not do to that because we want them to use manifests\n> with backend versions where they are supported.\n\nPlease note that I have added an open item for this thread, and\nattached is a proposal of patch. While reading the code, I have\nnoticed that the minimum version handling is not consistent with the\nother MINIMUM_VERSION_*, so I have added one for manifests.\n--\nMichael", "msg_date": "Mon, 13 Apr 2020 09:56:02 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": true, "msg_subject": "Re: pg_basebackup, manifests and backends older than ~12" }, { "msg_contents": "At Mon, 13 Apr 2020 09:56:02 +0900, Michael Paquier <michael@paquier.xyz> wrote in \n> On Sun, Apr 12, 2020 at 08:08:17AM +0900, Michael Paquier wrote:\n> > Exactly. My point is exactly that. 
The current code would force\n> > users maintaining scripts with pg_basebackup to use --no-manifest if\n> > such a script runs with older versions of Postgres, but we should\n> > encourage users not do to that because we want them to use manifests\n> > with backend versions where they are supported.\n> \n> Please note that I have added an open item for this thread, and\n> attached is a proposal of patch. While reading the code, I have\n> noticed that the minimum version handling is not consistent with the\n> other MINIMUM_VERSION_*, so I have added one for manifests.\n\nSince I'm not sure about the work flow that contains taking a\nbasebackup from a server of a different version, I'm not sure which is\nbetter between silently disabling and erroring out. However, it seems\nto me, the option for replication slot is a choice of the way the tool\nworks which doesn't affect the result itself, but that for backup\nmanifest is about what the resulting backup contains. Therefore I\nthink it is better that pg_basebackup in PG13 should error out if the\nsource server doesn't support backup manifest but --no-manifest is not\nspecfied, and show how to accomplish their wants (, though I don't see\nthe wants clearly).\n\n$ pg_basebackup ...\npg_basebackup: error: backup manifest is available from servers running PostgreSQL 13 or later\nTry --no-manifest to take a backup from this server.\n\n\nBy the way, if I specified --manifest-checksums, it complains about\nincompatible options with a message that would look strange to the\nuser.\n\npg_basebackup: error: --no-manifest and --manifest-checksums are incompatible options\n\n(\"I didn't specified such an option..\")\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Mon, 13 Apr 2020 11:52:51 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pg_basebackup, manifests and backends older than ~12" }, { "msg_contents": "On Mon, Apr 
13, 2020 at 11:52:51AM +0900, Kyotaro Horiguchi wrote:\n> Since I'm not sure about the work flow that contains taking a\n> basebackup from a server of a different version, I'm not sure which is\n> better between silently disabling and erroring out. However, it seems\n> to me, the option for replication slot is a choice of the way the tool\n> works which doesn't affect the result itself, but that for backup\n> manifest is about what the resulting backup contains. Therefore I\n> think it is better that pg_basebackup in PG13 should error out if the\n> source server doesn't support backup manifest but --no-manifest is not\n> specfied, and show how to accomplish their wants (, though I don't see\n> the wants clearly).\n\nNot sure what Robert and other authors of the feature think about\nthat. What I am rather afraid of is somebody deciding to patch a\nscript aimed at working across multiple backend versions to add\nunconditionally --no-manifest all the time, even for v13. That would\nkill the purpose of encouraging the use of manifests.\n\n> By the way, if I specified --manifest-checksums, it complains about\n> incompatible options with a message that would look strange to the\n> user.\n> \n> pg_basebackup: error: --no-manifest and --manifest-checksums are incompatible options\n> \n> (\"I didn't specified such an option..\")\n\nHow did you trigger that? 
I am able to only see this failure when\nusing --manifest-checksums and --no-manifest together.\n--\nMichael", "msg_date": "Mon, 13 Apr 2020 13:51:07 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": true, "msg_subject": "Re: pg_basebackup, manifests and backends older than ~12" }, { "msg_contents": "At Mon, 13 Apr 2020 13:51:07 +0900, Michael Paquier <michael@paquier.xyz> wrote in \n> On Mon, Apr 13, 2020 at 11:52:51AM +0900, Kyotaro Horiguchi wrote:\n> > Since I'm not sure about the work flow that contains taking a\n> > basebackup from a server of a different version, I'm not sure which is\n> > better between silently disabling and erroring out. However, it seems\n> > to me, the option for replication slot is a choice of the way the tool\n> > works which doesn't affect the result itself, but that for backup\n> > manifest is about what the resulting backup contains. Therefore I\n> > think it is better that pg_basebackup in PG13 should error out if the\n> > source server doesn't support backup manifest but --no-manifest is not\n> > specfied, and show how to accomplish their wants (, though I don't see\n> > the wants clearly).\n> \n> Not sure what Robert and other authors of the feature think about\n> that. What I am rather afraid of is somebody deciding to patch a\n> script aimed at working across multiple backend versions to add\n> unconditionally --no-manifest all the time, even for v13. That would\n> kill the purpose of encouraging the use of manifests.\n\nI don't object that since I'm not sure about the use case of\ncross-version pg_basebackup.\n\n\n> > By the way, if I specified --manifest-checksums, it complains about\n> > incompatible options with a message that would look strange to the\n> > user.\n> > \n> > pg_basebackup: error: --no-manifest and --manifest-checksums are incompatible options\n> > \n> > (\"I didn't specified such an option..\")\n> \n> How did you trigger that? 
I am able to only see this failure when\n> using --manifest-checksums and --no-manifest together.\n\nMmm. Sorry for the noise. I might ran unpatched version for the time.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Mon, 13 Apr 2020 17:49:24 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pg_basebackup, manifests and backends older than ~12" }, { "msg_contents": "On Sun, Apr 12, 2020 at 8:56 PM Michael Paquier <michael@paquier.xyz> wrote:\n> On Sun, Apr 12, 2020 at 08:08:17AM +0900, Michael Paquier wrote:\n> > Exactly. My point is exactly that. The current code would force\n> > users maintaining scripts with pg_basebackup to use --no-manifest if\n> > such a script runs with older versions of Postgres, but we should\n> > encourage users not do to that because we want them to use manifests\n> > with backend versions where they are supported.\n>\n> Please note that I have added an open item for this thread, and\n> attached is a proposal of patch. While reading the code, I have\n> noticed that the minimum version handling is not consistent with the\n> other MINIMUM_VERSION_*, so I have added one for manifests.\n\nI think that this patch is incorrect. I have no objection to\nintroducing MINIMUM_VERSION_FOR_MANIFESTS, but this is not OK:\n\n- else\n- {\n- if (serverMajor < 1300)\n- manifest_clause = \"\";\n- else\n- manifest_clause = \"MANIFEST 'no'\";\n- }\n\nIt seems to me that this will break --no-manifest option on v13.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Mon, 13 Apr 2020 11:13:06 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pg_basebackup, manifests and backends older than ~12" }, { "msg_contents": "On Mon, Apr 13, 2020 at 11:13:06AM -0400, Robert Haas wrote:\n> I think that this patch is incorrect. 
I have no objection to\n> introducing MINIMUM_VERSION_FOR_MANIFESTS, but this is not OK:\n> \n> - else\n> - {\n> - if (serverMajor < 1300)\n> - manifest_clause = \"\";\n> - else\n> - manifest_clause = \"MANIFEST 'no'\";\n> - }\n> \n> It seems to me that this will break --no-manifest option on v13.\n\nWell, the documentation tells me that as of protocol.sgml:\n\"For compatibility with previous releases, the default is\n<literal>MANIFEST 'no'</literal>.\"\n\nThe code also tells me that, in line with the docs:\nstatic void\nparse_basebackup_options(List *options, basebackup_options *opt)\n[...]\n MemSet(opt, 0, sizeof(*opt));\n opt->manifest = MANIFEST_OPTION_NO;\n\nAnd there is also a TAP test for that when passing down --no-manifest,\nwhich should not create a backup manifest:\n$node->command_ok(\n [\n 'pg_basebackup', '-D', \"$tempdir/backup2\", '--no-manifest',\n '--waldir', \"$tempdir/xlog2\"\n ],\n\nSo, it seems to me that it is fine to remove this block, as when\n--no-manifest is used, then \"manifest\" gets set to false, and then it\ndoes not matter if the MANIFEST clause is added or not as we'd just\nrely on the default. Keeping the block would matter if you want to\nmake the code more robust to a change of the default value in the\nBASE_BACKUP query though, and its logic is not incorrect either. 
So,\nif you wish to keep it, that's fine by me, but it looks cleaner to me\nto remove it and more consistent with the other options like MAX_RATE,\nTABLESPACE_MAP, etc.\n--\nMichael", "msg_date": "Tue, 14 Apr 2020 07:26:50 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": true, "msg_subject": "Re: pg_basebackup, manifests and backends older than ~12" }, { "msg_contents": "On 2020-Apr-13, Michael Paquier wrote:\n\n> On Mon, Apr 13, 2020 at 11:52:51AM +0900, Kyotaro Horiguchi wrote:\n> > Since I'm not sure about the work flow that contains taking a\n> > basebackup from a server of a different version, I'm not sure which is\n> > better between silently disabling and erroring out. However, it seems\n> > to me, the option for replication slot is a choice of the way the tool\n> > works which doesn't affect the result itself, but that for backup\n> > manifest is about what the resulting backup contains. Therefore I\n> > think it is better that pg_basebackup in PG13 should error out if the\n> > source server doesn't support backup manifest but --no-manifest is not\n> > specfied, and show how to accomplish their wants (, though I don't see\n> > the wants clearly).\n> \n> Not sure what Robert and other authors of the feature think about\n> that. What I am rather afraid of is somebody deciding to patch a\n> script aimed at working across multiple backend versions to add\n> unconditionally --no-manifest all the time, even for v13. 
That would\n> kill the purpose of encouraging the use of manifests.\n\nI agree, I think forcing users to specify --no-manifest when run on old\nservers will cause users to write bad scripts; I vote for silently\ndisabling checksums.\n\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Mon, 13 Apr 2020 19:04:20 -0400", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: pg_basebackup, manifests and backends older than ~12" }, { "msg_contents": "On Mon, Apr 13, 2020 at 6:26 PM Michael Paquier <michael@paquier.xyz> wrote:\n> Well, the documentation tells me that as of protocol.sgml:\n> \"For compatibility with previous releases, the default is\n> <literal>MANIFEST 'no'</literal>.\"\n>\n> The code also tells me that, in line with the docs:\n> static void\n> parse_basebackup_options(List *options, basebackup_options *opt)\n> [...]\n> MemSet(opt, 0, sizeof(*opt));\n> opt->manifest = MANIFEST_OPTION_NO;\n>\n> And there is also a TAP test for that when passing down --no-manifest,\n> which should not create a backup manifest:\n> $node->command_ok(\n> [\n> 'pg_basebackup', '-D', \"$tempdir/backup2\", '--no-manifest',\n> '--waldir', \"$tempdir/xlog2\"\n> ],\n>\n> So, it seems to me that it is fine to remove this block, as when\n> --no-manifest is used, then \"manifest\" gets set to false, and then it\n> does not matter if the MANIFEST clause is added or not as we'd just\n> rely on the default. Keeping the block would matter if you want to\n> make the code more robust to a change of the default value in the\n> BASE_BACKUP query though, and its logic is not incorrect either. So,\n> if you wish to keep it, that's fine by me, but it looks cleaner to me\n> to remove it and more consistent with the other options like MAX_RATE,\n> TABLESPACE_MAP, etc.\n\nOh, hmm. 
Maybe I'm getting confused with a previous version of the\npatch that behaved differently.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Mon, 13 Apr 2020 19:55:07 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pg_basebackup, manifests and backends older than ~12" }, { "msg_contents": "On Mon, Apr 13, 2020 at 07:55:07PM -0400, Robert Haas wrote:\n> Oh, hmm. Maybe I'm getting confused with a previous version of the\n> patch that behaved differently.\n\nNo problem. If you prefer keeping this part of the code, that's fine\nby me. If you think that the patch is suited as-is, including\nsilencing the error forcing to use --no-manifest on server versions\nolder than v13, I am fine to help out and apply it myself, but I am\nalso fine if you wish to take care of it by yourself.\n--\nMichael", "msg_date": "Tue, 14 Apr 2020 09:23:05 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": true, "msg_subject": "Re: pg_basebackup, manifests and backends older than ~12" }, { "msg_contents": "On Mon, Apr 13, 2020 at 8:23 PM Michael Paquier <michael@paquier.xyz> wrote:\n> On Mon, Apr 13, 2020 at 07:55:07PM -0400, Robert Haas wrote:\n> > Oh, hmm. Maybe I'm getting confused with a previous version of the\n> > patch that behaved differently.\n>\n> No problem. If you prefer keeping this part of the code, that's fine\n> by me. 
If you think that the patch is suited as-is, including\n> silencing the error forcing to use --no-manifest on server versions\n> older than v13, I am fine to help out and apply it myself, but I am\n> also fine if you wish to take care of it by yourself.\n\nFeel free to go ahead.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Tue, 14 Apr 2020 15:13:39 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pg_basebackup, manifests and backends older than ~12" }, { "msg_contents": "On Tue, Apr 14, 2020 at 03:13:39PM -0400, Robert Haas wrote:\n> Feel free to go ahead.\n\nThanks, let's do it then. If you have any objections about any parts\nof the patch, of course please feel free.\n--\nMichael", "msg_date": "Wed, 15 Apr 2020 07:39:26 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": true, "msg_subject": "Re: pg_basebackup, manifests and backends older than ~12" }, { "msg_contents": "On Mon, Apr 13, 2020 at 07:04:20PM -0400, Alvaro Herrera wrote:\n> I agree, I think forcing users to specify --no-manifest when run on old\n> servers will cause users to write bad scripts; I vote for silently\n> disabling checksums.\n\nOkay, thanks. Are there any other opinions?\n--\nMichael", "msg_date": "Wed, 15 Apr 2020 07:41:40 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": true, "msg_subject": "Re: pg_basebackup, manifests and backends older than ~12" }, { "msg_contents": "Michael Paquier <michael@paquier.xyz> writes:\n> On Mon, Apr 13, 2020 at 07:04:20PM -0400, Alvaro Herrera wrote:\n>> I agree, I think forcing users to specify --no-manifest when run on old\n>> servers will cause users to write bad scripts; I vote for silently\n>> disabling checksums.\n\n> Okay, thanks. 
Are there any other opinions?\n\nFWIW, I concur with silently disabling the feature if the source\nserver can't support it.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 14 Apr 2020 20:09:22 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: pg_basebackup, manifests and backends older than ~12" }, { "msg_contents": "On Tue, Apr 14, 2020 at 08:09:22PM -0400, Tom Lane wrote:\n> Michael Paquier <michael@paquier.xyz> writes:\n>> On Mon, Apr 13, 2020 at 07:04:20PM -0400, Alvaro Herrera wrote:\n>>> I agree, I think forcing users to specify --no-manifest when run on old\n>>> servers will cause users to write bad scripts; I vote for silently\n>>> disabling checksums.\n> \n>> Okay, thanks. Are there any other opinions?\n> \n> FWIW, I concur with silently disabling the feature if the source\n> server can't support it.\n\nThanks. I have applied the patch, then.\n--\nMichael", "msg_date": "Thu, 16 Apr 2020 14:23:05 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": true, "msg_subject": "Re: pg_basebackup, manifests and backends older than ~12" } ]
[ { "msg_contents": "I've noticed that Postgres doesn't have support for DATETIMEOFFSET (or \nany functional equivalent data type) yet. Is this on the roadmap to \nimplement? I find it a very useful data type that I use all over the \nplace in TSQL databases.\n\n-- \nBest regards,\nJeremy Morton (Jez)\n\n\n", "msg_date": "Fri, 10 Apr 2020 09:34:51 +0100", "msg_from": "Jeremy Morton <postgres@game-point.net>", "msg_from_op": true, "msg_subject": "Support for DATETIMEOFFSET" }, { "msg_contents": "On 4/10/20 10:34 AM, Jeremy Morton wrote:\n> I've noticed that Postgres doesn't have support for DATETIMEOFFSET (or \n> any functional equivalent data type) yet.  Is this on the roadmap to \n> implement?  I find it a very useful data type that I use all over the \n> place in TSQL databases.\n\nHi,\n\nI do not think anyone is working on such a type. And personally I think \nsuch a type is better suite for an extension rather than for core \nPostgreSQL. For most applications the timestamptz and date types are \nenough to solve everything time related (with some use of the timestamp \ntype when doing calculations), but there are niche applications where \nother temporal types can be very useful, but I personally do not think \nthose are common enough for inclusion in core PostgreSQL.\n\nI suggest writing an extension with this type and see if there is any \ninterest in it.\n\nAndreas\n\n\n", "msg_date": "Fri, 10 Apr 2020 14:05:38 +0200", "msg_from": "Andreas Karlsson <andreas@proxel.se>", "msg_from_op": false, "msg_subject": "Re: Support for DATETIMEOFFSET" }, { "msg_contents": "Oh well. Guess I keep using SQL Server then. datetimeoffset makes it \nimpossible for developers to make the mistake of forgetting to use UTC \ninstead of local datetime, and for that reason alone it makes it \ninvaluable in my opinion. 
It should be used universally instead of \ndatetime.\n\n-- \nBest regards,\nJeremy Morton (Jez)\n\nAndreas Karlsson wrote:\n> On 4/10/20 10:34 AM, Jeremy Morton wrote:\n>> I've noticed that Postgres doesn't have support for DATETIMEOFFSET \n>> (or any functional equivalent data type) yet.  Is this on the \n>> roadmap to implement?  I find it a very useful data type that I use \n>> all over the place in TSQL databases.\n> \n> Hi,\n> \n> I do not think anyone is working on such a type.  And personally I \n> think such a type is better suite for an extension rather than for \n> core PostgreSQL. For most applications the timestamptz and date types \n> are enough to solve everything time related (with some use of the \n> timestamp type when doing calculations), but there are niche \n> applications where other temporal types can be very useful, but I \n> personally do not think those are common enough for inclusion in core \n> PostgreSQL.\n> \n> I suggest writing an extension with this type and see if there is any \n> interest in it.\n> \n> Andreas\n> \n> \n> \n\n\n", "msg_date": "Fri, 10 Apr 2020 14:19:09 +0100", "msg_from": "Jeremy Morton <admin@game-point.net>", "msg_from_op": false, "msg_subject": "Re: Support for DATETIMEOFFSET" }, { "msg_contents": "Jeremy Morton <admin@game-point.net> writes:\n> Oh well. Guess I keep using SQL Server then. datetimeoffset makes it \n> impossible for developers to make the mistake of forgetting to use UTC \n> instead of local datetime,\n\nReally? That would be a remarkable feat for a mere datatype to\naccomplish.\n\n> and for that reason alone it makes it \n> invaluable in my opinion. 
It should be used universally instead of \n> datetime.\n\nWhat's it do that timestamptz together with setting timezone to UTC\ndoesn't?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 10 Apr 2020 10:07:09 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Support for DATETIMEOFFSET" }, { "msg_contents": "\n> On Apr 10, 2020, at 8:19 AM, Jeremy Morton <admin@game-point.net> wrote:\n> \n> Oh well. Guess I keep using SQL Server then. datetimeoffset makes it impossible for developers to make the mistake of forgetting to use UTC instead of local datetime, and for that reason alone it makes it invaluable in my opinion. It should be used universally instead of datetime.\n\n1. Not sure I understand. I’ve never used datetimeoffset so please bear with me. How does storing a time zone with the date time “make it impossible for developers to make the mistake….”\n\n2. I usually work with timestamps that have input and output across multiple time zones, why would one store a time zone in the database? If I need a local time, then postgres does that automatically. \n\n3. At the end of the day a point in time in UTC is about as clear as it is possible to make it.\n\nNot trying to be difficult, just trying to understand.\n\nNeil\n\n> \n> -- \n> Best regards,\n> Jeremy Morton (Jez)\n> \n> Andreas Karlsson wrote:\n>> On 4/10/20 10:34 AM, Jeremy Morton wrote:\n>>> I've noticed that Postgres doesn't have support for DATETIMEOFFSET (or any functional equivalent data type) yet. Is this on the roadmap to implement? I find it a very useful data type that I use all over the place in TSQL databases.\n>> Hi,\n>> I do not think anyone is working on such a type. And personally I think such a type is better suite for an extension rather than for core PostgreSQL. 
For most applications the timestamptz and date types are enough to solve everything time related (with some use of the timestamp type when doing calculations), but there are niche applications where other temporal types can be very useful, but I personally do not think those are common enough for inclusion in core PostgreSQL.\n>> I suggest writing an extension with this type and see if there is any interest in it.\n>> Andreas\n> \n> \n\n\n\n", "msg_date": "Fri, 10 Apr 2020 09:24:11 -0500", "msg_from": "Neil <neil@fairwindsoft.com>", "msg_from_op": false, "msg_subject": "Re: Support for DATETIMEOFFSET" }, { "msg_contents": "On 4/10/20 3:19 PM, Jeremy Morton wrote:\n> Oh well.  Guess I keep using SQL Server then.  datetimeoffset makes it \n> impossible for developers to make the mistake of forgetting to use UTC \n> instead of local datetime, and for that reason alone it makes it \n> invaluable in my opinion.  It should be used universally instead of \n> datetime.\n\nI think that the timestamptz type already helps out a lot with that \nsince it accepts input strings with a time zone offest (e.g. '2020-04-10 \n17:19:39+02') and converts it to UTC after parsing the timestamp. In \nfact I would argue that it does so with fewer pitfalls than the \ndatetimeoffset type since with timestamptz everything you read will have \nthe same time zone while when you read a datetimeoffset column you will \nget the time zone used by the application which inserted it originally, \nand if e.g. one of the application servers have a different time zone \n(let's say the sysadmin forgot to set it to UTC and it runs in local \ntime) you will get a mix which will make bugs hard to spot.\n\nI am not saying there isn't a use case for something like \ndatetimeoffset, I think that there is. For example in some kind of \ncalendar or scheduling application. 
But as a generic type for storing \npoints in time we already have timestamptz which is easy to use and \nhandles most of the common use cases, e.g. storing when an event happened.\n\nAndreas\n\n\n\n", "msg_date": "Fri, 10 Apr 2020 17:19:57 +0200", "msg_from": "Andreas Karlsson <andreas@proxel.se>", "msg_from_op": false, "msg_subject": "Re: Support for DATETIMEOFFSET" }, { "msg_contents": "Neil wrote:\n> \n>> On Apr 10, 2020, at 8:19 AM, Jeremy Morton <admin@game-point.net> wrote:\n>>\n>> Oh well. Guess I keep using SQL Server then. datetimeoffset makes it impossible for developers to make the mistake of forgetting to use UTC instead of local datetime, and for that reason alone it makes it invaluable in my opinion. It should be used universally instead of datetime.\n> \n> 1. Not sure I understand. I’ve never used datetimeoffset so please bear with me. How does storing a time zone with the date time “make it impossible for developers to make the mistake….”\n\nAt just about every development shop I've worked for, I've seen \ndevelopers use methods to get a local DateTime - both in the DB and in \nthe code - such as DateTime.Now, and throw it at a DateTime field. \nHeck, even I've occasionally forgotten to use .UtcNow. With \nDateTimeOffset.Now, you can't go wrong. You get the UTC time, and the \noffset. I've taken to using it 100% of the time. It's just really handy.\n\n-- \nBest regards,\nJeremy Morton (Jez)\n\n\n", "msg_date": "Sat, 11 Apr 2020 00:10:54 +0100", "msg_from": "Jeremy Morton <admin@game-point.net>", "msg_from_op": false, "msg_subject": "Re: Support for DATETIMEOFFSET" }, { "msg_contents": "\n> On Apr 10, 2020, at 6:10 PM, Jeremy Morton <admin@game-point.net> wrote:\n> \n> Neil wrote:\n>>> On Apr 10, 2020, at 8:19 AM, Jeremy Morton <admin@game-point.net> wrote:\n>>> \n>>> Oh well. Guess I keep using SQL Server then. 
datetimeoffset makes it impossible for developers to make the mistake of forgetting to use UTC instead of local datetime, and for that reason alone it makes it invaluable in my opinion. It should be used universally instead of datetime.\n>> 1. Not sure I understand. I’ve never used datetimeoffset so please bear with me. How does storing a time zone with the date time “make it impossible for developers to make the mistake….”\n> \n> At just about every development shop I've worked for, I've seen developers use methods to get a local DateTime - both in the DB and in the code - such as DateTime.Now, and throw it at a DateTime field. Heck, even I've occasionally forgotten to use .UtcNow. With DateTimeOffset.Now, you can't go wrong. You get the UTC time, and the offset. I've taken to using it 100% of the time. It’s just really handy.\n> \n\nIn PostgreSQL there are two types; timestamp and timestamptz. If you use timestamptz then all time stamps coming into the database with time zones will be converted to and stored in UTC in the database and all times coming out of the database will have the local time zone of the server unless otherwise requested.\n\nNot sure how that is error prone. Maybe you are working around a problem that does not exist in PostgreSQL.\n\nIf you use timestamp type (not timestamptz) then all input output time zone conversions are ignored (time zone is truncated) and sure problems can occur. That is why there is very little use of the timestamp type.\n\nNeil\nhttps:://www.fairwindsoft.com \n\n", "msg_date": "Sat, 11 Apr 2020 13:43:28 -0500", "msg_from": "Neil <neil@fairwindsoft.com>", "msg_from_op": false, "msg_subject": "Re: Support for DATETIMEOFFSET" }, { "msg_contents": "Jeremy Morton <admin@game-point.net> writes:\n> At just about every development shop I've worked for, I've seen \n> developers use methods to get a local DateTime - both in the DB and in \n> the code - such as DateTime.Now, and throw it at a DateTime field. 
\n> Heck, even I've occasionally forgotten to use .UtcNow. With \n> DateTimeOffset.Now, you can't go wrong. You get the UTC time, and the \n> offset. I've taken to using it 100% of the time. It's just really handy.\n\nIt sounds like what you are describing is a client-side problem, not\na server issue. If you have such a thing in the client code, why\ncan't it readily be mapped to timestamptz storage in the server?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sun, 12 Apr 2020 10:25:40 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Support for DATETIMEOFFSET" }, { "msg_contents": "Andreas Karlsson wrote:\n> On 4/10/20 3:19 PM, Jeremy Morton wrote:\n>> Oh well.  Guess I keep using SQL Server then.  datetimeoffset makes \n>> it impossible for developers to make the mistake of forgetting to \n>> use UTC instead of local datetime, and for that reason alone it \n>> makes it invaluable in my opinion.  It should be used universally \n>> instead of datetime.\n> \n> I think that the timestamptz type already helps out a lot with that \n> since it accepts input strings with a time zone offest (e.g. \n> '2020-04-10 17:19:39+02') and converts it to UTC after parsing the \n> timestamp. In fact I would argue that it does so with fewer pitfalls \n> than the datetimeoffset type since with timestamptz everything you \n> read will have the same time zone while when you read a datetimeoffset \n> column you will get the time zone used by the application which \n> inserted it originally, and if e.g. one of the application servers \n> have a different time zone (let's say the sysadmin forgot to set it to \n> UTC and it runs in local time) you will get a mix which will make bugs \n> hard to spot.\n\nI don't understand how that makes bugs hard to spot. 
And if the \"mix\" \nis confusing, you could easily set up a view that converts all the \ndatetimeoffset's to UTC datetimes.\n\n> I am not saying there isn't a use case for something like \n> datetimeoffset, I think that there is. For example in some kind of \n\nSurely the fact that you'll lose data if you try to store a common \n.NET datatype with any kind of ORM (eg. EF, which is pretty popular) \nright now, using \"the world's most advanced open source relational \ndatabase\", is reason enough to support it?\n\n-- \nBest regards,\nJeremy Morton (Jez)\n\n\n", "msg_date": "Fri, 17 Apr 2020 10:00:10 +0100", "msg_from": "Jeremy Morton <postgres@game-point.net>", "msg_from_op": true, "msg_subject": "Re: Support for DATETIMEOFFSET" }, { "msg_contents": "On 4/17/20 11:00 AM, Jeremy Morton wrote:\n>> I am not saying there isn't a use case for something like \n>> datetimeoffset, I think that there is. For example in some kind of \n> \n> Surely the fact that you'll lose data if you try to store a common .NET \n> datatype with any kind of ORM (eg. EF, which is pretty popular) right \n> now, using \"the world's most advanced open source relational database\", \n> is reason enough to support it?\n\nNo, because if PostgreSQL started adding supports for all data types in \nall standard libraries of all programming languages it would become \nvirtually unusable. What if PostgreSQL shipped with 8 or 9 different \ntimestamp types? How would the users be able to pick which one to use? 
\nIt is better to have a few types which cover the use cases of most users \nand then let extension authors add more specialized types.\n\nAndreas\n\n\n", "msg_date": "Fri, 17 Apr 2020 11:57:12 +0200", "msg_from": "Andreas Karlsson <andreas@proxel.se>", "msg_from_op": false, "msg_subject": "Re: Support for DATETIMEOFFSET" }, { "msg_contents": "Jeremy Morton <postgres@game-point.net> writes:\n> Surely the fact that you'll lose data if you try to store a common \n> .NET datatype with any kind of ORM (eg. EF, which is pretty popular) \n> right now, using \"the world's most advanced open source relational \n> database\", is reason enough to support it?\n\nIf the ORM somehow prevents you from using timestamptz, that's a\nbug in the ORM. If it doesn't, the above is just a hysterical\nclaim with no factual foundation.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 17 Apr 2020 09:22:16 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Support for DATETIMEOFFSET" }, { "msg_contents": "How could the ORM use timestamptz when that doesn't actually store \nboth a datetime and an offset?\n\n-- \nBest regards,\nJeremy Morton (Jez)\n\nTom Lane wrote:\n> Jeremy Morton <postgres@game-point.net> writes:\n>> Surely the fact that you'll lose data if you try to store a common\n>> .NET datatype with any kind of ORM (eg. EF, which is pretty popular)\n>> right now, using \"the world's most advanced open source relational\n>> database\", is reason enough to support it?\n> \n> If the ORM somehow prevents you from using timestamptz, that's a\n> bug in the ORM. 
If it doesn't, the above is just a hysterical\n> claim with no factual foundation.\n> \n> \t\t\tregards, tom lane\n> \n\n\n", "msg_date": "Fri, 17 Apr 2020 14:36:03 +0100", "msg_from": "Jeremy Morton <admin@game-point.net>", "msg_from_op": false, "msg_subject": "Re: Support for DATETIMEOFFSET" }, { "msg_contents": "On 2020-Apr-17, Jeremy Morton wrote:\n\n> How could the ORM use timestamptz when that doesn't actually store both a\n> datetime and an offset?\n\nThere are lots of ways in which timestamptz can be used. The most\ntypical one is to rely on the TimeZone configuration parameter; another\nvery typical one is to have a zone specification at the end of the\ntimestamp literal such as \"+03\" or \"Europe/Madrid\", as Andreas Karlsson\nalready mentioned. In addition to those, the \"AT TIME ZONE\" operator\ncan be used with a bare timestamp.\n\nThe main point of the timestamptz type is that both the input and output\nare timezone-aware. This timezone is not *stored*, but in most cases it\ndoesn't need to be. I have never seen a case where an application\nneeded a timezone to be *stored* together with each timestamp value.\nIt's just not useful.\n\nIf you want to set up an output timezone, you can set it for each\nspecific user (for example). Then all timestamps you show to that user\nwill use that timezone. It's a very easy and convenient thing.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Fri, 17 Apr 2020 19:25:11 -0400", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Support for DATETIMEOFFSET" } ]
[ { "msg_contents": "Over at https://www.postgresql.org/message-id/172c9d9b-1d0a-1b94-1456-376b1e017322@2ndquadrant.com\nPeter Eisentraut suggests that pg_validatebackup should be called\npg_verifybackup, with corresponding terminology changes throughout the\ncode and documentation.\n\nHere's a patch for that. I'd like to commit this quickly or abandon in\nquickly, because large renaming patches like this are a pain to\nmaintain. I believe that there was a mild consensus in favor of this\non that thread, so I plan to go forward unless somebody shows up\npretty quickly to object.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company", "msg_date": "Fri, 10 Apr 2020 11:04:14 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "pg_validatebackup -> pg_verifybackup?" }, { "msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> Over at https://www.postgresql.org/message-id/172c9d9b-1d0a-1b94-1456-376b1e017322@2ndquadrant.com\n> Peter Eisentraut suggests that pg_validatebackup should be called\n> pg_verifybackup, with corresponding terminology changes throughout the\n> code and documentation.\n\n> Here's a patch for that. I'd like to commit this quickly or abandon in\n> quickly, because large renaming patches like this are a pain to\n> maintain. I believe that there was a mild consensus in favor of this\n> on that thread, so I plan to go forward unless somebody shows up\n> pretty quickly to object.\n\n+1, let's get it done.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 10 Apr 2020 11:37:02 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: pg_validatebackup -> pg_verifybackup?" 
}, { "msg_contents": "On 4/10/20 11:37 AM, Tom Lane wrote:\n> Robert Haas <robertmhaas@gmail.com> writes:\n>> Over at https://www.postgresql.org/message-id/172c9d9b-1d0a-1b94-1456-376b1e017322@2ndquadrant.com\n>> Peter Eisentraut suggests that pg_validatebackup should be called\n>> pg_verifybackup, with corresponding terminology changes throughout the\n>> code and documentation.\n> \n>> Here's a patch for that. I'd like to commit this quickly or abandon in\n>> quickly, because large renaming patches like this are a pain to\n>> maintain. I believe that there was a mild consensus in favor of this\n>> on that thread, so I plan to go forward unless somebody shows up\n>> pretty quickly to object.\n> \n> +1, let's get it done.\n\nI'm not sure that Peter suggested verify was the correct name, he just \npointed out that verify and validate are not necessarily the same thing \n(and that we should be consistent in the docs one way or the other). \nIt'd be nice if Peter (now CC'd) commented since he's the one who \nbrought it up.\n\nHaving said that, I'm +1 on verify.\n\nRegards,\n-- \n-David\ndavid@pgmasters.net\n\n\n", "msg_date": "Fri, 10 Apr 2020 14:56:48 -0400", "msg_from": "David Steele <david@pgmasters.net>", "msg_from_op": false, "msg_subject": "Re: pg_validatebackup -> pg_verifybackup?" }, { "msg_contents": "David Steele <david@pgmasters.net> writes:\n> Having said that, I'm +1 on verify.\n\nMe too, if only because it's shorter.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 10 Apr 2020 15:27:05 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: pg_validatebackup -> pg_verifybackup?" 
}, { "msg_contents": "On 4/10/20 3:27 PM, Tom Lane wrote:\n> David Steele <david@pgmasters.net> writes:\n>> Having said that, I'm +1 on verify.\n> \n> Me too, if only because it's shorter.\n\nI also think it is (probably) more correct but failing that it is \n*definitely* shorter!\n\n-- \n-David\ndavid@pgmasters.net\n\n\n", "msg_date": "Fri, 10 Apr 2020 15:29:34 -0400", "msg_from": "David Steele <david@pgmasters.net>", "msg_from_op": false, "msg_subject": "Re: pg_validatebackup -> pg_verifybackup?" }, { "msg_contents": "Hi,\n\nOn 2020-04-10 14:56:48 -0400, David Steele wrote:\n> On 4/10/20 11:37 AM, Tom Lane wrote:\n> > Robert Haas <robertmhaas@gmail.com> writes:\n> > > Over at https://www.postgresql.org/message-id/172c9d9b-1d0a-1b94-1456-376b1e017322@2ndquadrant.com\n> > > Peter Eisentraut suggests that pg_validatebackup should be called\n> > > pg_verifybackup, with corresponding terminology changes throughout the\n> > > code and documentation.\n> > \n> > > Here's a patch for that. I'd like to commit this quickly or abandon in\n> > > quickly, because large renaming patches like this are a pain to\n> > > maintain. I believe that there was a mild consensus in favor of this\n> > > on that thread, so I plan to go forward unless somebody shows up\n> > > pretty quickly to object.\n> > \n> > +1, let's get it done.\n> \n> I'm not sure that Peter suggested verify was the correct name, he just\n> pointed out that verify and validate are not necessarily the same thing (and\n> that we should be consistent in the docs one way or the other). It'd be nice\n> if Peter (now CC'd) commented since he's the one who brought it up.\n> \n> Having said that, I'm +1 on verify.\n\nFWIW, I still think it's a mistake to accumulate all these bespoke\ntools. We should go towards having one tool that can verify checksums,\nvalidate backup manifests etc. 
Partially because it's more discoverable,\nbut also because it allows to verify multiple such properties in a\nsingle pass, rather than reading the huge base backup twice.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Fri, 10 Apr 2020 12:40:50 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: pg_validatebackup -> pg_verifybackup?" }, { "msg_contents": "Greetings,\n\n* Andres Freund (andres@anarazel.de) wrote:\n> On 2020-04-10 14:56:48 -0400, David Steele wrote:\n> > On 4/10/20 11:37 AM, Tom Lane wrote:\n> > > Robert Haas <robertmhaas@gmail.com> writes:\n> > > > Over at https://www.postgresql.org/message-id/172c9d9b-1d0a-1b94-1456-376b1e017322@2ndquadrant.com\n> > > > Peter Eisentraut suggests that pg_validatebackup should be called\n> > > > pg_verifybackup, with corresponding terminology changes throughout the\n> > > > code and documentation.\n> > > \n> > > > Here's a patch for that. I'd like to commit this quickly or abandon in\n> > > > quickly, because large renaming patches like this are a pain to\n> > > > maintain. I believe that there was a mild consensus in favor of this\n> > > > on that thread, so I plan to go forward unless somebody shows up\n> > > > pretty quickly to object.\n> > > \n> > > +1, let's get it done.\n> > \n> > I'm not sure that Peter suggested verify was the correct name, he just\n> > pointed out that verify and validate are not necessarily the same thing (and\n> > that we should be consistent in the docs one way or the other). It'd be nice\n> > if Peter (now CC'd) commented since he's the one who brought it up.\n> > \n> > Having said that, I'm +1 on verify.\n> \n> FWIW, I still think it's a mistake to accumulate all these bespoke\n> tools. We should go towards having one tool that can verify checksums,\n> validate backup manifests etc. 
Partially because it's more discoverable,\n> but also because it allows to verify multiple such properties in a\n> single pass, rather than reading the huge base backup twice.\n\nWould be kinda neat to have a single tool for doing backups and restores\ntoo, as well as validating backup manifests and checksums, that can back\nup to s3 or to a remote system with ssh, has multiple compression\noptions and a pretty sound architecture that's all written in C and is\nOSS.\n\nI also agree with Tom/David that verify probably makes sense for this\ncommand, in its current form at least.\n\nThanks,\n\nStephen", "msg_date": "Fri, 10 Apr 2020 15:46:42 -0400", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: pg_validatebackup -> pg_verifybackup?" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> FWIW, I still think it's a mistake to accumulate all these bespoke\n> tools. We should go towards having one tool that can verify checksums,\n> validate backup manifests etc. Partially because it's more discoverable,\n> but also because it allows to verify multiple such properties in a\n> single pass, rather than reading the huge base backup twice.\n\nWell, we're not getting there for v13. Are you proposing that this\npatch just be reverted because it doesn't do everything at once?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 10 Apr 2020 16:13:18 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: pg_validatebackup -> pg_verifybackup?" }, { "msg_contents": "Hi,\n\nOn 2020-04-10 16:13:18 -0400, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > FWIW, I still think it's a mistake to accumulate all these bespoke\n> > tools. We should go towards having one tool that can verify checksums,\n> > validate backup manifests etc. 
Partially because it's more discoverable,\n> > but also because it allows to verify multiple such properties in a\n> > single pass, rather than reading the huge base backup twice.\n> \n> Well, we're not getting there for v13. Are you proposing that this\n> patch just be reverted because it doesn't do everything at once?\n\nNo. I suggest choosing a name that's compatible with moving more\ncapabilities under the same umbrella at a later time (and I suggested\nthe same pre freeze too). Possibly adding a toplevel --verify-manifest\noption as the only change besides naming.\n\nAndres\n\n\n", "msg_date": "Fri, 10 Apr 2020 13:35:25 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: pg_validatebackup -> pg_verifybackup?" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2020-04-10 16:13:18 -0400, Tom Lane wrote:\n>> Well, we're not getting there for v13. Are you proposing that this\n>> patch just be reverted because it doesn't do everything at once?\n\n> No. I suggest choosing a name that's compatible with moving more\n> capabilities under the same umbrella at a later time (and I suggested\n> the same pre freeze too). Possibly adding a toplevel --verify-manifest\n> option as the only change besides naming.\n\nIt doesn't really seem like either name is problematic from that\nstandpoint? \"Verify backup\" isn't prejudging what aspect of the\nbackup is going to be verified, AFAICS.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 10 Apr 2020 16:40:02 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: pg_validatebackup -> pg_verifybackup?" }, { "msg_contents": "Hi,\n\nOn 2020-04-10 16:40:02 -0400, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > On 2020-04-10 16:13:18 -0400, Tom Lane wrote:\n> >> Well, we're not getting there for v13. Are you proposing that this\n> >> patch just be reverted because it doesn't do everything at once?\n> \n> > No. 
I suggest choosing a name that's compatible with moving more\n> > capabilities under the same umbrella at a later time (and I suggested\n> > the same pre freeze too). Possibly adding a toplevel --verify-manifest\n> > option as the only change besides naming.\n> \n> It doesn't really seem like either name is problematic from that\n> standpoint? \"Verify backup\" isn't prejudging what aspect of the\n> backup is going to be verified, AFAICS.\n\nMy point is that I'd eventually like to see the same tool also be usable\nto just verify the checksums of a normal, non-backup, data directory.\n\nWe shouldn't end up with pg_verifybackup, pg_checksums,\npg_dbdir_checknofilesmissing, pg_checkpageheaders, ...\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Fri, 10 Apr 2020 14:19:59 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: pg_validatebackup -> pg_verifybackup?" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2020-04-10 16:40:02 -0400, Tom Lane wrote:\n>> It doesn't really seem like either name is problematic from that\n>> standpoint? \"Verify backup\" isn't prejudging what aspect of the\n>> backup is going to be verified, AFAICS.\n\n> My point is that I'd eventually like to see the same tool also be usable\n> to just verify the checksums of a normal, non-backup, data directory.\n\nMeh. I would argue that that's an actively BAD idea. The use-cases\nare entirely different, the implementation is going to be quite a lot\ndifferent, the relevant options are going to be quite a lot different.\nIt will not be better for either implementors or users to force those\ninto the same executable.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 10 Apr 2020 17:23:58 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: pg_validatebackup -> pg_verifybackup?" 
}, { "msg_contents": "Hi,\n\nOn 2020-04-10 17:23:58 -0400, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > On 2020-04-10 16:40:02 -0400, Tom Lane wrote:\n> >> It doesn't really seem like either name is problematic from that\n> >> standpoint? \"Verify backup\" isn't prejudging what aspect of the\n> >> backup is going to be verified, AFAICS.\n> \n> > My point is that I'd eventually like to see the same tool also be usable\n> > to just verify the checksums of a normal, non-backup, data directory.\n> \n> Meh. I would argue that that's an actively BAD idea. The use-cases\n> are entirely different, the implementation is going to be quite a lot\n> different, the relevant options are going to be quite a lot different.\n> It will not be better for either implementors or users to force those\n> into the same executable.\n\nI don't agree with any of that. Combining the manifest validation with\nchecksum validation halves the IO. It allows to offload some of the\nexpense of verifying page level checksums from the primary.\n\nAnd all of the operations require iterating through data directories,\nclassify files that are part / not part of a normal data directory, etc.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Fri, 10 Apr 2020 14:48:05 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: pg_validatebackup -> pg_verifybackup?" }, { "msg_contents": "On Fri, Apr 10, 2020 at 5:24 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Meh. I would argue that that's an actively BAD idea. The use-cases\n> are entirely different, the implementation is going to be quite a lot\n> different, the relevant options are going to be quite a lot different.\n> It will not be better for either implementors or users to force those\n> into the same executable.\n\nI think Andres has a point, but on balance I am more inclined to agree\nwith you. 
It may be that in the future it will make sense to organize\nthings differently, but I would rather arrange them according to what\nmakes sense now, and then change it later, even though that makes for\nsome user-visible churn. The thing is that we don't really know what's\ngoing to happen in future releases, and our track record when we try\nto guess is very poor. Creating stuff that we hope will get extended\nto do this or that in a future release can end up looking really\nhalf-baked if the code doesn't get extended in the anticipated\ndirection.\n\nI *would* like to find a way to address the proliferation of binaries,\nbecause I've got other things I'd like to do that would require\ncreating still more of them, and until we come up with a scalable\nsolution that makes everybody happy, there's going to be progressively\nmore complaining every time. One possible solution is to adopt the\n'git' approach and decide we're going to have one 'pg' command (or\nwhatever we call it). I think the way that 'git' does it is that all\nof the real binaries are stored in a directory that users are not\nexpected to have in their path, and the 'git' wrapper just looks for\none based on the name of the subcommand. So, if you say 'git thunk',\nit goes and searches the private bin directory for an executable\ncalled 'git-thunk'. We could easily do this for nearly everything 'pg'\nrelated, so:\n\nclusterdb -> pg clusterdb\npg_ctl -> pg ctl\npg_resetwal -> pg resetwal\netc.\n\nI think we'd want psql to still be separate, but I'm not sure we'd\nneed any other exceptions. In a lot of cases it won't lead to any more\ntyping because the current command is pg_whatever and with this\napproach you'd just type a space instead of an underscore. The\n\"legacy\" cases that don't start with \"pg\" would get a bit longer, but\nI wouldn't lose a lot of sleep over that myself.\n\nThere are other possibilities too. 
We could try to pick out individual\ngroups of commands to merge, rather than having a unified framework\nfor everything. For instance, we could turn\n{cluster,create,drop,reindex,vacuum}db into one utility,\n{create,drop}user into another, pg_dump{,all} into a third, and so on.\nBut this seems like it would require making a lot more awkward policy\ndecisions, so I don't think it's a great plan.\n\nStill, we need to agree on something. It won't do to tell people that\nwe're not going to commit patches to add more functionality to\nPostgreSQL because it would involve adding more binaries. Any\nparticular piece of functionality may draw substantive objections, and\nthat's fine, but we shouldn't stifle development categorically because\nwe can't agree on how to clean up the namespace pollution.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Sat, 11 Apr 2020 16:09:27 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: pg_validatebackup -> pg_verifybackup?" }, { "msg_contents": "On 2020-Apr-11, Robert Haas wrote:\n\n> I *would* like to find a way to address the proliferation of binaries,\n> because I've got other things I'd like to do that would require\n> creating still more of them, and until we come up with a scalable\n> solution that makes everybody happy, there's going to be progressively\n> more complaining every time. One possible solution is to adopt the\n> 'git' approach and decide we're going to have one 'pg' command (or\n> whatever we call it). 
I think the way that 'git' does it is that all\n> of the real binaries are stored in a directory that users are not\n> expected to have in their path, and the 'git' wrapper just looks for\n> one based on the name of the subcommand.\n\nI like this idea so much that I already proposed it in the past[1], so +1.\n\n[1] https://postgr.es/m/20160826202911.GA320593@alvherre.pgsql\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Sat, 11 Apr 2020 17:50:56 -0400", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: pg_validatebackup -> pg_verifybackup?" }, { "msg_contents": "On Fri, Apr 10, 2020 at 02:48:05PM -0700, Andres Freund wrote:\n> I don't agree with any of that. Combining the manifest validation with\n> checksum validation halves the IO. It allows to offload some of the\n> expense of verifying page level checksums from the primary.\n> \n> And all of the operations require iterating through data directories,\n> classify files that are part / not part of a normal data directory, etc.\n\nThe last time we had the idea to use _verify_ in a tool name, the same\ntool has been renamed one year after as we found new use cases for\nit, aka pg_checksums. Cannot the same be said with pg_validatebackup?\nIt seems to me that it could be interesting for some users to build a\nmanifest after a backup is taken, using something like a --build\noption with pg_validatebackup.\n--\nMichael", "msg_date": "Sun, 12 Apr 2020 08:21:16 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: pg_validatebackup -> pg_verifybackup?" 
}, { "msg_contents": "On Sat, Apr 11, 2020 at 05:50:56PM -0400, Alvaro Herrera wrote:\n> On 2020-Apr-11, Robert Haas wrote:\n>> I *would* like to find a way to address the proliferation of binaries,\n>> because I've got other things I'd like to do that would require\n>> creating still more of them, and until we come up with a scalable\n>> solution that makes everybody happy, there's going to be progressively\n>> more complaining every time. One possible solution is to adopt the\n>> 'git' approach and decide we're going to have one 'pg' command (or\n>> whatever we call it). I think the way that 'git' does it is that all\n>> of the real binaries are stored in a directory that users are not\n>> expected to have in their path, and the 'git' wrapper just looks for\n>> one based on the name of the subcommand.\n> \n> I like this idea so much that I already proposed it in the past[1], so +1.\n> \n> [1] https://postgr.es/m/20160826202911.GA320593@alvherre.pgsql\n\nYeah, their stuff is nice. Another nice thing is that git has the\npossibility to scan as well for custom scripts as long as they respect\nthe naming convention, like having a custom script called \"git-foo\",\nthat can be called as \"git foo\".\n--\nMichael", "msg_date": "Sun, 12 Apr 2020 08:36:25 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: pg_validatebackup -> pg_verifybackup?" }, { "msg_contents": "On Sat, 11 Apr 2020 at 19:36, Michael Paquier <michael@paquier.xyz> wrote:\n\n\n> Yeah, their stuff is nice. Another nice thing is that git has the\n> possibility to scan as well for custom scripts as long as they respect\n> the naming convention, like having a custom script called \"git-foo\",\n> that can be called as \"git foo\".\n>\n\n… which could be installed by an extension perhaps. Wait, that doesn't\nquite work because it's usually one bin directory per version, not per\ndatabase. 
Still maybe something can be done with that idea.", "msg_date": "Sat, 11 Apr 2020 20:07:08 -0400", "msg_from": "Isaac Morland <isaac.morland@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pg_validatebackup -> pg_verifybackup?" }, { "msg_contents": "On Sat, Apr 11, 2020 at 5:51 PM Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\n> I like this idea so much that I already proposed it in the past[1], so +1.\n>\n> [1] https://postgr.es/m/20160826202911.GA320593@alvherre.pgsql\n\nHey, look at that. I think I had some vague recollection of a prior\nproposal, but I couldn't remember exactly who or exactly what had been\nproposed. I do think that pg_ctl is too long a prefix, though. People\ncan get used to typing 'pg createdb' instead of 'createdb' but 'pg_ctl\ncreatedb' seems like too much. At least, it would very very quickly\ncause me to install aliases.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Sun, 12 Apr 2020 10:19:07 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: pg_validatebackup -> pg_verifybackup?" 
}, { "msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> On Sat, Apr 11, 2020 at 5:51 PM Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\n>> I like this idea so much that I already proposed it in the past[1], so +1.\n>> \n>> [1] https://postgr.es/m/20160826202911.GA320593@alvherre.pgsql\n\n> Hey, look at that. I think I had some vague recollection of a prior\n> proposal, but I couldn't remember exactly who or exactly what had been\n> proposed. I do think that pg_ctl is too long a prefix, though. People\n> can get used to typing 'pg createdb' instead of 'createdb' but 'pg_ctl\n> createdb' seems like too much. At least, it would very very quickly\n> cause me to install aliases.\n\nYeah, I'd be happier with \"pg\" than \"pg_ctl\" as well. But it's so\nshort that I wonder if some other software has already adopted it.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sun, 12 Apr 2020 10:57:59 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: pg_validatebackup -> pg_verifybackup?" }, { "msg_contents": "On Sun, Apr 12, 2020 at 4:58 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Robert Haas <robertmhaas@gmail.com> writes:\n> > On Sat, Apr 11, 2020 at 5:51 PM Alvaro Herrera <alvherre@2ndquadrant.com>\n> wrote:\n> >> I like this idea so much that I already proposed it in the past[1], so\n> +1.\n> >>\n> >> [1] https://postgr.es/m/20160826202911.GA320593@alvherre.pgsql\n>\n> > Hey, look at that. I think I had some vague recollection of a prior\n> > proposal, but I couldn't remember exactly who or exactly what had been\n> > proposed. I do think that pg_ctl is too long a prefix, though. People\n> > can get used to typing 'pg createdb' instead of 'createdb' but 'pg_ctl\n> > createdb' seems like too much. At least, it would very very quickly\n> > cause me to install aliases.\n>\n> Yeah, I'd be happier with \"pg\" than \"pg_ctl\" as well. 
But it's so\n> short that I wonder if some other software has already adopted it.\n>\n\nThere's https://en.wikipedia.org/wiki/Pg_(Unix).\n\nSo it's been removed from posix, but not unlikely to be around. For\nexample, I see it on a server with Debian 9 (Stretch) or Ubuntu 16.04 which\nis still well in support (but not on a RedHat from the same era).\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/ <http://www.hagander.net/>\n Work: https://www.redpill-linpro.com/ <http://www.redpill-linpro.com/>", "msg_date": "Sun, 12 Apr 2020 17:02:18 +0200", "msg_from": "Magnus Hagander <magnus@hagander.net>", "msg_from_op": false, "msg_subject": "Re: pg_validatebackup -> pg_verifybackup?" 
}, { "msg_contents": "On Sun, Apr 12, 2020 at 11:02 AM Magnus Hagander <magnus@hagander.net> wrote:\n> There's https://en.wikipedia.org/wiki/Pg_(Unix).\n>\n> So it's been removed from posix, but not unlikely to be around. For example, I see it on a server with Debian 9 (Stretch) or Ubuntu 16.04 which is still well in support (but not on a RedHat from the same era).\n\nWell, if it's around on older distros, but not in the newest versions,\nI think we should try to lay speedy claim to the name before something\nelse does, because any other name we pick is going to be longer or\nless intuitive or, most likely, both. There's no guarantee that\nPostgreSQL 14 would even get packaged for older distros, anyway, or at\nleast not by the OS provider.\n\nWe could also have an alternate name, like pgsql, and make 'pg' a\nsymlink to it that packagers can choose to omit. (I would prefer pgsql\nto pg_ctl, both because I think it's confusing to adopt the name of an\nexisting tool as the meta-command and also because the underscore\nrequires pressing two keys at once, which is slightly slower to type).\nBut there is no way anyone who is a serious user is going to be happy\nwith a five-character meta-command name that requires six key-presses\nto enter (cf. cvs, git, hg, yum, pip, apt, ...).\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Sun, 12 Apr 2020 11:21:50 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: pg_validatebackup -> pg_verifybackup?" }, { "msg_contents": "On Sun, Apr 12, 2020 at 5:22 PM Robert Haas <robertmhaas@gmail.com> wrote:\n\n> On Sun, Apr 12, 2020 at 11:02 AM Magnus Hagander <magnus@hagander.net>\n> wrote:\n> > There's https://en.wikipedia.org/wiki/Pg_(Unix).\n> >\n> > So it's been removed from posix, but not unlikely to be around. 
For\n> example, I see it on a server with Debian 9 (Stretch) or Ubuntu 16.04 which\n> is still well in support (but not on a RedHat from the same era).\n>\n> Well, if it's around on older distros, but not in the newest versions,\n> I think we should try to lay speedy claim to the name before something\n> else does, because any other name we pick is going to be longer or\n> less intuitive or, most likely, both. There's no guarantee that\n> PostgreSQL 14 would even get packaged for older distros, anyway, or at\n> least not by the OS provider.\n>\n\nIt definitely won't be by the OS provider, however it will be by *us*. Our\napt and yum repositories support all our versions on \"all supported\"\nversions of the upstream distros. So we should at least have a plan and a\nstory for how to deal with that, and make sure all our own packagers deal\nwith it the same way.\n\n\nWe could also have an alternate name, like pgsql, and make 'pg' a\n> symlink to it that packagers can choose to omit. (I would prefer pgsql\n> to pg_ctl, both because I think it's confusing to adopt the name of an\n> existing tool as the meta-command and also because the underscore\n> requires pressing two keys at once, which is slightly slower to type).\n> But there is no way anyone who is a serious user is going to be happy\n> with a five-character meta-command name that requires six key-presses\n> to enter (cf. cvs, git, hg, yum, pip, apt, ...).\n>\n\nAgreed, pgsql would certainly be better than pg_ctl.\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/ <http://www.hagander.net/>\n Work: https://www.redpill-linpro.com/ <http://www.redpill-linpro.com/>", "msg_date": "Sun, 12 Apr 2020 17:29:55 +0200", "msg_from": "Magnus Hagander <magnus@hagander.net>", "msg_from_op": false, "msg_subject": "Re: pg_validatebackup -> pg_verifybackup?" }, { "msg_contents": "Hi,\n\nOn 2020-04-12 10:57:59 -0400, Tom Lane wrote:\n> Yeah, I'd be happier with \"pg\" than \"pg_ctl\" as well. 
There's a few modules\nin various languages called 'pg', but that's not a problem.\n\nI personally think it might be a good idea to 'claim' the pg binary\nsoon, so that doesn't change. Even if we should support a command or two\nthrough it initially (e.g. pg_ctl ... -> pg ctl ...).\n\nRegards,\n\nAndres Freund\n\n\n", "msg_date": "Sun, 12 Apr 2020 12:32:51 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: pg_validatebackup -> pg_verifybackup?" }, { "msg_contents": "On 2020-04-12 11:21:50 -0400, Robert Haas wrote:\n> We could also have an alternate name, like pgsql, and make 'pg' a\n> symlink to it that packagers can choose to omit.\n\nWe could even name the non-abbreviated binary postgres :).\n\nSure, that'd cause a bit more trouble upgrading for people that scripted\nstarting postgres without going through pg_ctl or such, but OTOH it'd\nnot cause new naming conflicts. And all that'd be needed to fix the\nstart script would be 's/postgres/postgres server/'.\n\n- Andres\n\n\n", "msg_date": "Sun, 12 Apr 2020 12:39:12 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: pg_validatebackup -> pg_verifybackup?" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2020-04-12 11:21:50 -0400, Robert Haas wrote:\n>> We could also have an alternate name, like pgsql, and make 'pg' a\n>> symlink to it that packagers can choose to omit.\n\n> We could even name the non-abbreviated binary postgres :).\n\nI shudder to imagine the confusion that would result.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sun, 12 Apr 2020 16:07:15 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: pg_validatebackup -> pg_verifybackup?" 
}, { "msg_contents": "Hi,\n\nSorry this email is not a discussion about word selection.\nSince part of the manual had left pg_validatebackup in commit dbc60c5593f26dc777a3be032bff4fb4eab1ddd1.\nI've attached a patch to fix this.\n\nRegards,\nNoriyoshi Shinoda\n\n-----Original Message-----\nFrom: Tom Lane [mailto:tgl@sss.pgh.pa.us] \nSent: Monday, April 13, 2020 5:07 AM\nTo: Andres Freund <andres@anarazel.de>\nCc: Robert Haas <robertmhaas@gmail.com>; Magnus Hagander <magnus@hagander.net>; Alvaro Herrera <alvherre@2ndquadrant.com>; David Steele <david@pgmasters.net>; pgsql-hackers@postgresql.org; Peter Eisentraut <peter.eisentraut@2ndquadrant.com>\nSubject: Re: pg_validatebackup -> pg_verifybackup?\n\nAndres Freund <andres@anarazel.de> writes:\n> On 2020-04-12 11:21:50 -0400, Robert Haas wrote:\n>> We could also have an alternate name, like pgsql, and make 'pg' a \n>> symlink to it that packagers can choose to omit.\n\n> We could even name the non-abbreviated binary postgres :).\n\nI shudder to imagine the confusion that would result.\n\n\t\t\tregards, tom lane", "msg_date": "Mon, 13 Apr 2020 02:20:03 +0000", "msg_from": "\"Shinoda, Noriyoshi (PN Japan A&PS Delivery)\"\n\t<noriyoshi.shinoda@hpe.com>", "msg_from_op": false, "msg_subject": "RE: pg_validatebackup -> pg_verifybackup?" }, { "msg_contents": "I recklessly join the discussion about naming.\n\nAt Sun, 12 Apr 2020 17:29:55 +0200, Magnus Hagander <magnus@hagander.net> wrote in \n> Agreed, pgsql would certainly be better than pg_ctl.\n\nI like pgsql. And if we are going to join to the THREE-LETTERS\nCONTROLLER COMMANDS ALLIANCE, pgc (pg controller) might be a candidate\nof the controller program.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Mon, 13 Apr 2020 15:45:56 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pg_validatebackup -> pg_verifybackup?" 
}, { "msg_contents": "On Sun, Apr 12, 2020 at 10:20 PM Shinoda, Noriyoshi (PN Japan A&PS\nDelivery) <noriyoshi.shinoda@hpe.com> wrote:\n> Sorry this email is not a discussion about word selection.\n> Since part of the manual had left pg_validatebackup in commit dbc60c5593f26dc777a3be032bff4fb4eab1ddd1.\n> I've attached a patch to fix this.\n\nCommitted, thanks.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Mon, 13 Apr 2020 10:56:11 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: pg_validatebackup -> pg_verifybackup?" } ]
[ { "msg_contents": "Hi hackers,\r\n\r\nI think I've found a small bug in pg_dump that could cause some schema\r\nprivileges to be missed. In short, if you've renamed a schema that\r\nhas an entry in pg_init_privs, pg_dump will skip dumping the initial\r\nACL for the schema. This results in missing privileges on restore.\r\n\r\nI've attached a small patch with a test case to handle this. This\r\npatch fixes the problem by adjusting the LEFT JOIN on pg_init_privs to\r\nonly match for schemas that match the default system names. I've only\r\nincluded 'public' and 'pg_catalog' for now, since AFAICT those are the\r\nonly two system schemas with corresponding pg_init_privs entries for\r\nwhich pg_dump dumps ACLs. Also, I haven't attempted to handle the\r\ncase where an extension schema with a pg_init_privs entry has been\r\nrenamed. Perhaps a sturdier approach would be to adjust the way\r\npg_init_privs is maintained, but that might be too invasive.\r\n\r\nEven with this patch, I think there are still some interesting corner\r\ncases involving the 'public' schema (e.g. recreating it, changing its\r\nownership). I don't know if it's worth trying to address all these\r\ncorner cases with special system schemas, but the first one I\r\nmentioned seemed simple enough to fix.\r\n\r\nNathan", "msg_date": "Fri, 10 Apr 2020 20:14:58 +0000", "msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>", "msg_from_op": true, "msg_subject": "pg_dump issue with renamed system schemas" }, { "msg_contents": "\"Bossart, Nathan\" <bossartn@amazon.com> writes:\n> I think I've found a small bug in pg_dump that could cause some schema\n> privileges to be missed. In short, if you've renamed a schema that\n> has an entry in pg_init_privs, pg_dump will skip dumping the initial\n> ACL for the schema. 
This results in missing privileges on restore.\n\nThis seems like a special case of the issue discussed in\n\nhttps://www.postgresql.org/message-id/flat/f85991ad-bbd4-ad57-fde4-e12f0661dbf0@postgrespro.ru\n\nAFAICT we didn't think we'd found a satisfactory solution yet.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 10 Apr 2020 16:26:45 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: pg_dump issue with renamed system schemas" } ]
[ { "msg_contents": "Hi\n\nNow, the content of redirect output has two parts\n\n1. tabular output\n2. cmd tags\n\nThere is a problem with command tags, because it is specific kind of\ninformation and can be nice if can be redirected to stdout every time like\n\\h output. There can be new psql variable like REDIRECTED_OUTPUT with\npossibilities (all, tabular)\n\nWhat do you think about this?\n\nPavel", "msg_date": "Sat, 11 Apr 2020 08:54:55 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "proposal - psql - possibility to redirect only tabular output" }, { "msg_contents": "so 11. 4. 2020 v 8:54 odesílatel Pavel Stehule <pavel.stehule@gmail.com>\nnapsal:\n\n> Hi\n>\n> Now, the content of redirect output has two parts\n>\n> 1. tabular output\n> 2. cmd tags\n>\n> There is a problem with command tags, because it is specific kind of\n> information and can be nice if can be redirected to stdout every time like\n> \\h output. There can be new psql variable like REDIRECTED_OUTPUT with\n> possibilities (all, tabular)\n>\n> What do you think about this?\n>\n\nor different method - set target of status row - with result (default) or\nstdout (terminal)\n\npatch assigned\n\nWhen I pin status rows just to stdout, then redirected output contains only\nquery results\n\nRegards\n\nPavel\n\n\n> Pavel\n>", "msg_date": "Sat, 11 Apr 2020 10:21:32 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: proposal - psql - possibility to redirect only tabular output" }, { "msg_contents": "On 2020-04-11 10:21, Pavel Stehule wrote:\n> so 11. 
4. 2020 v 8:54 odesílatel Pavel Stehule \n> <pavel.stehule@gmail.com>\n> napsal:\n\n> [psql-status-target.patch]\n\nHi Pavel,\n\nThis looks interesting, and I built an instance with the patch to try \nit, but I can't figure out how to use it.\n\nCan you perhaps give a few or even just one example?\n\nthanks!\n\nErik Rijkers\n\n\n", "msg_date": "Sat, 11 Apr 2020 11:04:21 +0200", "msg_from": "Erik Rijkers <er@xs4all.nl>", "msg_from_op": false, "msg_subject": "Re: proposal - psql - possibility to redirect only tabular output" }, { "msg_contents": "so 11. 4. 2020 v 11:04 odesílatel Erik Rijkers <er@xs4all.nl> napsal:\n\n> On 2020-04-11 10:21, Pavel Stehule wrote:\n> > so 11. 4. 2020 v 8:54 odesílatel Pavel Stehule\n> > <pavel.stehule@gmail.com>\n> > napsal:\n>\n> > [psql-status-target.patch]\n>\n> Hi Pavel,\n>\n> This looks interesting, and I built an instance with the patch to try\n> it, but I can't figure out how to use it.\n>\n> Can you perhaps give a few or even just one example?\n>\n\nMain motivation for this patch is working with psql for writing and editing\nqueries, and browsing result in second terminal with pspg or any other\nsimilar tool (tail, ...). The advantage of this setup is possibility to see\nsql and query result together. I use terminal multiplicator (Tilix), but it\ncan be used without it.\n\nSo example with pspg (should be some fresh release)\n\n1. create fifo used for communication - mkfifo ~/pipe\n\n2. run in one terminal pspg - pspg -f ~/pipe --hold-stream=2\n\n3. run psql in other terminal\n\npsql\n\\o ~/pipe\nCREATE TABLE foo(a int);\nINSERT INTO foo VALUES(10);\n-- in default case, the status row \"CREATE\", \"INSERT\" is redirected to\n\"browser terminal\" and it doesn't look well (and it is not user friendly).\n\nwith patched version you can\n\n\\set STATUS_TARGET stdout\n-- after this setting, the status row will be displayed in psql terminal\n\nRegards\n\nPavel\n\n\n\n\n> thanks!\n>\n> Erik Rijkers\n>", "msg_date": "Sat, 11 Apr 2020 11:19:19 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: proposal - psql - possibility to redirect only tabular output" }, { "msg_contents": "Hello,\n\nOn Sat, Apr 11, 2020 at 6:20 PM Pavel Stehule <pavel.stehule@gmail.com> wrote:\n> Main motivation for this patch is working with psql for writing and editing queries, and browsing result in second terminal with pspg or any other similar tool (tail, ...). The advantage of this setup is possibility to see sql and query result together. 
I use terminal multiplicator (Tilix), but it can be used without it.\n>\n> So example with pspg (should be some fresh release)\n>\n> 1. create fifo used for communication - mkfifo ~/pipe\n>\n> 2. run in one terminal pspg - pspg -f ~/pipe --hold-stream=2\n>\n> 3. run psql in other terminal\n\nThe patch looks interesting. As far as I understand the purpose of the\npatch is to hide status messages from result output.\nSo maybe it would be enough just to hide status messages at all. There\nis the QUIET variable for that. The main advantage of this variable is\nthat it hides a status of \"\\lo_\" commands, for example, as well as a\nstatus of utility commands. So the QUIET variable covers more use\ncases already.\n\n-- \nArtur\n\n\n", "msg_date": "Fri, 3 Jul 2020 19:16:17 +0900", "msg_from": "Artur Zakirov <zaartur@gmail.com>", "msg_from_op": false, "msg_subject": "Re: proposal - psql - possibility to redirect only tabular output" }, { "msg_contents": "pá 3. 7. 2020 v 12:16 odesílatel Artur Zakirov <zaartur@gmail.com> napsal:\n\n> Hello,\n>\n> On Sat, Apr 11, 2020 at 6:20 PM Pavel Stehule <pavel.stehule@gmail.com>\n> wrote:\n> > Main motivation for this patch is working with psql for writing and\n> editing queries, and browsing result in second terminal with pspg or any\n> other similar tool (tail, ...). The advantage of this setup is possibility\n> to see sql and query result together. I use terminal multiplicator (Tilix),\n> but it can be used without it.\n> >\n> > So example with pspg (should be some fresh release)\n> >\n> > 1. create fifo used for communication - mkfifo ~/pipe\n> >\n> > 2. run in one terminal pspg - pspg -f ~/pipe --hold-stream=2\n> >\n> > 3. run psql in other terminal\n>\n> The patch looks interesting. As far as I understand the purpose of the\n> patch is to hide status messages from result output.\n> So maybe it would be enough just to hide status messages at all. There\n> is the QUIET variable for that. 
The main advantage of this variable is\n> that it hides a status of \"\\lo_\" commands, for example, as well as a\n> status of utility commands. So the QUIET variable covers more use\n> cases already.\n>\n\nThe quiet mode isn't exactly what I want (it can be used as a workaround -\nand now, pspg https://github.com/okbob/pspg knows a format of status line\nand can work it).\n\nI would like to see a status row. For me it is a visual check so some\nstatements like INSERT or UPDATE was done successfully. But I would not\nsend it to the terminal with an active tabular pager.\n\nPavel\n\n\n\n\n> --\n> Artur\n>", "msg_date": "Sat, 4 Jul 2020 05:27:40 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: proposal - psql - possibility to redirect only tabular output" }, { "msg_contents": "On 7/3/20 11:27 PM, Pavel Stehule wrote:\n> \n> pá 3. 7. 2020 v 12:16 odesílatel Artur Zakirov <zaartur@gmail.com \n> <mailto:zaartur@gmail.com>> napsal:\n> \n> The patch looks interesting. As far as I understand the purpose of the\n> patch is to hide status messages from result output.\n> So maybe it would be enough just to hide status messages at all. There\n> is the QUIET variable for that. The main advantage of this variable is\n> that it hides a status of \"\\lo_\" commands, for example, as well as a\n> status of utility commands. So the QUIET variable covers more use\n> cases already.\n> \n> \n> The quiet mode isn't exactly what I want (it can be used as a workaround \n> - and now, pspg https://github.com/okbob/pspg \n> <https://github.com/okbob/pspg> knows a format of status line and can \n> work it).\n> \n> I would like to see a status row. For me it is a visual check so some \n> statements like INSERT or UPDATE was done successfully. But I would not \n> send it to the terminal with an active tabular pager.\n\nIt's been quite a while since the patch has seen any review. It's not \nclear that this is enough of an improvement over QUIET to be worthwhile.\n\nI think it would be best to close this patch at the end of the CF if \nthere is no further reviewer/committer interest.\n\nRegards,\n-- \n-David\ndavid@pgmasters.net\n\n\n", "msg_date": "Wed, 3 Mar 2021 11:58:02 -0500", "msg_from": "David Steele <david@pgmasters.net>", "msg_from_op": false, "msg_subject": "Re: proposal - psql - possibility to redirect only tabular output" }, { "msg_contents": "st 3. 3. 
2021 v 17:58 odesílatel David Steele <david@pgmasters.net> napsal:\n\n> On 7/3/20 11:27 PM, Pavel Stehule wrote:\n> >\n> > pá 3. 7. 2020 v 12:16 odesílatel Artur Zakirov <zaartur@gmail.com\n> > <mailto:zaartur@gmail.com>> napsal:\n> >\n> >     The patch looks interesting. As far as I understand the purpose of\n> the\n> >     patch is to hide status messages from result output.\n> >     So maybe it would be enough just to hide status messages at all.\n> There\n> >     is the QUIET variable for that. The main advantage of this variable\n> is\n> >     that it hides a status of \"\\lo_\" commands, for example, as well as a\n> >     status of utility commands. So the QUIET variable covers more use\n> >     cases already.\n> >\n> >\n> > The quiet mode isn't exactly what I want (it can be used as a workaround\n> > - and now, pspg https://github.com/okbob/pspg\n> > <https://github.com/okbob/pspg> knows a format of status line and can\n> > work it).\n> >\n> > I would like to see a status row. For me it is a visual check so some\n> > statements like INSERT or UPDATE was done successfully. But I would not\n> > send it to the terminal with an active tabular pager.\n>\n> It's been quite a while since the patch has seen any review. It's not\n> clear that this is enough of an improvement over QUIET to be worthwhile.\n>\n> I think it would be best to close this patch at the end of the CF if\n> there is no further reviewer/committer interest.\n>\n\nok\n\nThank you\n\nPavel\n\n\n> Regards,\n> --\n> -David\n> david@pgmasters.net\n>", "msg_date": "Wed, 3 Mar 2021 18:47:28 +0100", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: proposal - psql - possibility to redirect only tabular output" } ]
[ { "msg_contents": "I just fixed a relcache leak that I accidentally introduced \n(5a1d0c9925). Because it was a TAP test involving replication workers, \nyou don't see the usual warning anywhere unless you specifically check \nthe log files manually.\n\nHow about a compile-time option to turn all the warnings in resowner.c \ninto errors? This could be enabled automatically by --enable-cassert, \nsimilar to other defines that that option enables.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Sat, 11 Apr 2020 10:09:59 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "relcache leak warnings vs. errors" }, { "msg_contents": "On Sat, Apr 11, 2020 at 10:09:59AM +0200, Peter Eisentraut wrote:\n> I just fixed a relcache leak that I accidentally introduced (5a1d0c9925).\n> Because it was a TAP test involving replication workers, you don't see the\n> usual warning anywhere unless you specifically check the log files manually.\n> \n> How about a compile-time option to turn all the warnings in resowner.c into\n> errors? This could be enabled automatically by --enable-cassert, similar to\n> other defines that that option enables.\n> \n\n+1. Would it be worthwhile to do the same in e.g. aset.c (for\nMEMORY_CONTEXT_CHECKING case), or more generally for stuff in\nsrc/backend/utils?\n\n\n", "msg_date": "Sat, 11 Apr 2020 10:28:41 +0200", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: relcache leak warnings vs. errors" }, { "msg_contents": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com> writes:\n> How about a compile-time option to turn all the warnings in resowner.c \n> into errors? This could be enabled automatically by --enable-cassert, \n> similar to other defines that that option enables.\n\n[ itch... 
] Those calls occur post-commit; throwing an error there\nis really a mess, which is why it's only WARNING now.\n\nI guess you could make them PANICs, but it would be an option that nobody\ncould possibly want to have enabled in anything resembling production.\nSo I'm kind of -0.5 on making --enable-cassert do it automatically.\nAlthough I suppose that it's not really worse than other assertion\nfailures.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 11 Apr 2020 10:54:49 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: relcache leak warnings vs. errors" }, { "msg_contents": "Hi,\n\nOn 2020-04-11 10:54:49 -0400, Tom Lane wrote:\n> Peter Eisentraut <peter.eisentraut@2ndquadrant.com> writes:\n> > How about a compile-time option to turn all the warnings in resowner.c \n> > into errors? This could be enabled automatically by --enable-cassert, \n> > similar to other defines that that option enables.\n> \n> [ itch... ] Those calls occur post-commit; throwing an error there\n> is really a mess, which is why it's only WARNING now.\n\n> I guess you could make them PANICs, but it would be an option that nobody\n> could possibly want to have enabled in anything resembling production.\n> So I'm kind of -0.5 on making --enable-cassert do it automatically.\n> Although I suppose that it's not really worse than other assertion\n> failures.\n\nI'd much rather see this throw an assertion than the current\nbehaviour. But I'm wondering if there's a chance we can throw an error\nin non-assert builds without adding too much complexity to the error\npaths. Could we perhaps throw the error a bit later during the commit\nprocessing?\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 13 Apr 2020 13:00:25 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: relcache leak warnings vs. 
errors" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2020-04-11 10:54:49 -0400, Tom Lane wrote:\n>> I guess you could make them PANICs, but it would be an option that nobody\n>> could possibly want to have enabled in anything resembling production.\n>> So I'm kind of -0.5 on making --enable-cassert do it automatically.\n>> Although I suppose that it's not really worse than other assertion\n>> failures.\n\n> I'd much rather see this throw an assertion than the current\n> behaviour. But I'm wondering if there's a chance we can throw an error\n> in non-assert builds without adding too much complexity to the error\n> paths. Could we perhaps throw the error a bit later during the commit\n> processing?\n\nAny error post-commit is a semantic disaster.\n\nI guess that an assertion wouldn't be so awful, if people would rather\ndo it like that in debug builds.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 13 Apr 2020 16:22:26 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: relcache leak warnings vs. errors" }, { "msg_contents": "On Mon, Apr 13, 2020 at 04:22:26PM -0400, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n>> I'd much rather see this throw an assertion than the current\n>> behaviour. But I'm wondering if there's a chance we can throw an error\n>> in non-assert builds without adding too much complexity to the error\n>> paths. Could we perhaps throw the error a bit later during the commit\n>> processing?\n> \n> Any error post-commit is a semantic disaster.\n\nYes, I can immediately think of two problems in the very recent\nhistory where this has bitten.\n\n> I guess that an assertion wouldn't be so awful, if people would rather\n> do it like that in debug builds.\n\nWARNING is useful mainly for tests where the output is checked, like\nthe main regression test suite. 
Now that TAP scenarios get more and\nmore complex, +1 on the addition of an assertion for a hard failure.\nI don't think either that's worth controlling with a developer GUC.\n--\nMichael", "msg_date": "Tue, 14 Apr 2020 10:57:06 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: relcache leak warnings vs. errors" } ]
[ { "msg_contents": "Short version:\n\nIn what I'm currently working on I had a few questions about arrays\nand the execExpr/execExprInterp framework that didn't seem obviously\nanswered in the code or README.\n\n- Does the execExpr/execExprInterp framework allow a scalar array op\nto get an already expanded array (unless I'm missing something we\ncan't easily lookup a given index in a flattened array)?\n- If not, is there a way in that framework to know if the array expr\nhas stayed the same through multiple evaluations of the expression\ntree (i.e., so you could expand and sort it just once)?\n\nLong version/background:\n\nI've been looking at how, when a query like:\nselect * from t where i in (<long array>);\nexecutes as a seq scan, the execution time increases linearly with\nrespect to the length of the array. That's because in\nexecExprInterp.c's ExecEvalScalarArrayOp() we do a linear search\nthrough the array.\n\nIn contrast, when setting up a btree scan with a similar saop, we\nfirst sort the array, remove duplicates, and remove nulls. Of course\nwith btree scans this has other values, like allowing us to return\nresults in array order.\n\nI've been considering approaches to improve the seq scan case. We might:\n- At plan time rewrite as a hash join to a deduped array values\nrelation (gross, but included here since that's an approach we can\ntake rewriting the SQL itself as a proof of concept).\n- At execution time build a hash and lookup.\n- At execution time sort the array and binary search through it.\n\nI've been working on the last approach to see what results I might get\n(it seemed like the easiest to hack together). Putting that together\nleft me with the questions mentioned above in the \"short version\".\n\nObviously if anyone has thoughts on the above approaches I'd be\ninterested in that too.\n\nSide question: when we do:\narr = DatumGetArrayTypeP(*op->resvalue);\nin ExecEvalScalarArrayOp() is that going to be expensive each time\nthrough a seq scan? 
Or is it (likely) going to resolve to an already\nin-memory array and effectively be the cost of retrieving that\npointer?\n\nJames\n\n\n", "msg_date": "Sat, 11 Apr 2020 08:58:46 -0400", "msg_from": "James Coleman <jtc331@gmail.com>", "msg_from_op": true, "msg_subject": "execExprInterp() questions / How to improve scalar array op expr\n eval?" }, { "msg_contents": "Hi,\n\n\nTom, CCing you because of expanded datum question at bottom.\n\n\nOn 2020-04-11 08:58:46 -0400, James Coleman wrote:\n> - Does the execExpr/execExprInterp framework allow a scalar array op\n> to get an already expanded array (unless I'm missing something we\n> can't easily lookup a given index in a flattened array)?\n\nWell, I'm not quite sure what you're thinking of. If the input is\nconstant, then expression initialization can do just about everything it\nwants. Including preprocessing the array into whatever form it wants.\nBut there's no logic for doing preprocessing whenever values change.\n\n\n> - If not, is there a way in that framework to know if the array expr\n> has stayed the same through multiple evaluations of the expression\n> tree (i.e., so you could expand and sort it just once)?\n\nNo.\n\n\n> Long version/background:\n> \n> I've been looking at how a query like:\n> select * from t where i in (<long array>);\n> executes as a seq scan the execution time increases linearly with\n> respect to the length of the array. That's because in\n> execExprInterp.c's ExecEvalScalarArrayOp() we do a linear search\n> through the array.\n\nIs \"<long array>\" constant in the cases you're interested in? Because\nit's a heck of a lot easier to add an optimization for that case, than\nadding runtime tracking the array values by comparing the whole array\nfor equality with the last - the comparisons of the whole array could\neasily end up adding more cost than what's being saved.\n\n\n> I've been considering approaches to improve the seq scan case. 
We might:\n> - At plan time rewrite as a hash join to a deduped array values\n> relation (gross, but included here since that's an approach we can\n> take rewriting the SQL itself as a proof of concept).\n> - At execution time build a hash and lookup.\n> - At execution time sort the array and binary search through it.\n> \n> I've been working other last approach to see what results I might get\n> (it seemed like the easiest to hack together). Putting that together\n> left me with the questions mentioned above in the \"short version\".\n> \n> Obviously if anyone has thoughts on the above approaches I'd be\n> interested in that too.\n\nIf you're content with optimizing constant arrays, I'd go for detecting\nthat case in the T_ScalarArrayOpExpr case in ExecInitExprRec(), and\npreprocessing the array into an optimized form. Probably with a separate\nopcode for execution.\n\n\n> Side question: when we do:\n> arr = DatumGetArrayTypeP(*op->resvalue);\n> in ExecEvalScalarArrayOp() is that going to be expensive each time\n> through a seq scan? Or is it (likely) going to resolve to an already\n> in-memory array and effectively be the cost of retrieving that\n> pointer?\n\nIt Depends TM. For the constant case it's likely going to be cheap-ish,\nbecause it'll not be toasted. For the case where it's the return value\nfrom a subquery or something, you cannot assume it won't change between\ncalls.\n\nI think the worst case here is something like a nestloop, where the\ninner side does foo IN (outer.column). If I recall the code correctly,\nwe'll currently end up detoasting the array value every single\niteration.\n\nI wonder if it would be a good idea to change ExecEvalParamExec and\nExecEvalParamExtern to detoast the to-be-returned value. If we change\nthe value that's stored in econtext->ecxt_param_exec_vals /\necontext->ecxt_param_list_info, we'd avoid repeated detoasting.\n\nIt'd be nice if we somehow could make the expanded datum machinery work\nhere. 
I'm not quite seeing how though?\n\nCrazy idea: I have a patch to make execExprInterp largely use\nNullableDatum. Tom and I had theorized a while ago about adding\nadditional fields in the padding that currently exists in it. I wonder\nif we could utilize a bit in there to allow to expand in-place?\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Sat, 11 Apr 2020 11:01:32 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: execExprInterp() questions / How to improve scalar array op expr\n eval?" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2020-04-11 08:58:46 -0400, James Coleman wrote:\n>> - Does the execExpr/execExprInterp framework allow a scalar array op\n>> to get an already expanded array (unless I'm missing something we\n>> can't easily lookup a given index in a flattened array)?\n\n> Well, I'm not quite sure what you're thinking of. If the input is\n> constant, then expression initialization can do just about everything it\n> wants. Including preprocessing the array into whatever form it wants.\n> But there's no logic for doing preprocessing whenever values change.\n\nFor the most part it seems like this is asking the question at the wrong\nlevel. It's not execExpr's job to determine the form of either values\ncoming in from \"outside\" (Vars from table rows, or Params from elsewhere)\nor the results of intermediate functions/operators.\n\nIn the specific case of an array-valued (or record-valued) Const node,\nyou could imagine having a provision to convert the array/record to an\nexpanded datum at execution start, or maybe better on first use. I'm\nnot sure how to tell whether that's actually a win though. It could\neasily be a net loss if the immediate consumer of the value wants a\nflat datum.\n\nIt seems like this might be somewhat related to the currently-moribund\npatch to allow caching of the values of stable subexpressions from\none execution to the next. 
If we had that infrastructure you could\nimagine extending it to allow caching the expanded not flat form of\nsome datums. Again I'm not entirely sure what would drive the choice.\n\n> I wonder if it would be a good idea to change ExecEvalParamExec and\n> ExecEvalParamExtern to detoast the to-be-returned value. If we change\n> the value that's stored in econtext->ecxt_param_exec_vals /\n> econtext->ecxt_param_list_info, we'd avoid repeated detaosting.\n\nI'd think about attaching that to the nestloop param mechanism not\nExecEvalParam in general.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 11 Apr 2020 15:33:09 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: execExprInterp() questions / How to improve scalar array op expr\n eval?" }, { "msg_contents": "On Sat, Apr 11, 2020 at 2:01 PM Andres Freund <andres@anarazel.de> wrote:\n>\n> Hi,\n>\n>\n> Tom, CCing you because of expanded datum question at bottom.\n>\n>\n> On 2020-04-11 08:58:46 -0400, James Coleman wrote:\n> > - Does the execExpr/execExprInterp framework allow a scalar array op\n> > to get an already expanded array (unless I'm missing something we\n> > can't easily lookup a given index in a flattened array)?\n>\n> Well, I'm not quite sure what you're thinking of. If the input is\n> constant, then expression initialization can do just about everything it\n> wants. Including preprocessing the array into whatever form it wants.\n> But there's no logic for doing preprocessing whenever values change.\n>\n>\n> > - If not, is there a way in that framework to know if the array expr\n> > has stayed the same through multiple evaluations of the expression\n> > tree (i.e., so you could expand and sort it just once)?\n>\n> No.\n\nOk. Seems like it'd be likely to be interesting (maybe in other places\ntoo?) 
to know if:\n- Something is actually a param that can change, and,\n- When (perhaps by some kind of flag or counter) it has changed.\n\n> > Long version/background:\n> >\n> > I've been looking at how a query like:\n> > select * from t where i in (<long array>);\n> > executes as a seq scan the execution time increases linearly with\n> > respect to the length of the array. That's because in\n> > execExprInterp.c's ExecEvalScalarArrayOp() we do a linear search\n> > through the array.\n>\n> Is \"<long array>\" constant in the cases you're interested in? Because\n> it's a heck of a lot easier to add an optimization for that case, than\n> adding runtime tracking the array values by comparing the whole array\n> for equality with the last - the comparisons of the whole array could\n> easily end up adding more cost than what's being saved.\n\nIn the simplest case, yes, it's a constant, though it'd be obviously\nbetter if it weren't limited to that. There are many cases where a\nlong array can come from a subplan but we can easily tell by looking\nat the SQL that it will only ever execute once. An unimaginative case\nis something like:\nselect * from t where a in (select i from generate_series(0,10000) n(i));\n\n> > I've been considering approaches to improve the seq scan case. We might:\n> > - At plan time rewrite as a hash join to a deduped array values\n> > relation (gross, but included here since that's an approach we can\n> > take rewriting the SQL itself as a proof of concept).\n> > - At execution time build a hash and lookup.\n> > - At execution time sort the array and binary search through it.\n> >\n> > I've been working other last approach to see what results I might get\n> > (it seemed like the easiest to hack together). 
Putting that together\n> > left me with the questions mentioned above in the \"short version\".\n> >\n> > Obviously if anyone has thoughts on the above approaches I'd be\n> > interested in that too.\n>\n> If you're content with optimizing constant arrays, I'd go for detecting\n> that case in the T_ScalarArrayOpExpr case in ExecInitExprRec(), and\n> preprocessing the array into an optimized form. Probably with a separate\n> opcode for execution.\n\nAt minimum constants are a good first place to try it out. Thanks for\nthe pointers.\n\n> > Side question: when we do:\n> > arr = DatumGetArrayTypeP(*op->resvalue);\n> > in ExecEvalScalarArrayOp() is that going to be expensive each time\n> > through a seq scan? Or is it (likely) going to resolve to an already\n> > in-memory array and effectively be the cost of retrieving that\n> > pointer?\n>\n> It Depends TM. For the constant case it's likely going to be cheap-ish,\n> because it'll not be toasted. For the case where it's the return value\n> from a subquery or something, you cannot assume it won't change between\n> calls.\n\nBack to what I was saying earlier. Perhaps some kind of mechanism so\nwe can know that is a better place to start. Perhaps something from\nthe patch Tom referenced would help kickstart that. I'll take a look.\n\n> I think the worst case here is something like a nestloop, where the\n> inner side does foo IN (outer.column). If I recall the code correctly,\n> we'll currently end up detoasting the array value every single\n> iteration.\n\nOuch. Seems like that could be a significant cost in some queries?\n\n> I wonder if it would be a good idea to change ExecEvalParamExec and\n> ExecEvalParamExtern to detoast the to-be-returned value. If we change\n> the value that's stored in econtext->ecxt_param_exec_vals /\n> econtext->ecxt_param_list_info, we'd avoid repeated detaosting.\n>\n> It'd be nice if we somehow could make the expanded datum machinery work\n> here. 
I'm not quite seeing how though?\n\nI'm not yet familiar enough with it to comment.\n\n> Crazy idea: I have a patch to make execExprInterp largely use\n> NullableDatum. Tom and I had theorized a while ago about adding\n> additional fields in the padding that currently exists in it. I wonder\n> if we could utilize a bit in there to allow to expand in-place?\n\nEffectively to store the pointers to, for example, the expanded array?\n\nJames\n\n\n", "msg_date": "Sat, 11 Apr 2020 15:53:11 -0400", "msg_from": "James Coleman <jtc331@gmail.com>", "msg_from_op": true, "msg_subject": "Re: execExprInterp() questions / How to improve scalar array op expr\n eval?" }, { "msg_contents": "Hi,\n\nOn 2020-04-11 15:33:09 -0400, Tom Lane wrote:\n> For the most part it seems like this is asking the question at the wrong\n> level. It's not execExpr's job to determine the form of either values\n> coming in from \"outside\" (Vars from table rows, or Params from elsewhere)\n> or the results of intermediate functions/operators.\n\nWe don't really have good place to do optimizations to transform an\nexpression to be better for an \"upper\" expression node.\n\n\n> In the specific case of an array-valued (or record-valued) Const node,\n> you could imagine having a provision to convert the array/record to an\n> expanded datum at execution start, or maybe better on first use. I'm\n> not sure how to tell whether that's actually a win though. It could\n> easily be a net loss if the immediate consumer of the value wants a\n> flat datum.\n\nWith execution start you do mean ExecInitExpr()? Or later? I see little\nreason not to do such an optimization during expression\ninitialization. It's not going to add much if any overhead compared to\nall the rest of the costs to change a Const array into a different\nform.\n\n\n> It seems like this might be somewhat related to the currently-moribund\n> patch to allow caching of the values of stable subexpressions from\n> one execution to the next. 
If we had that infrastructure you could\n> imagine extending it to allow caching the expanded not flat form of\n> some datums. Again I'm not entirely sure what would drive the choice.\n\n> > I wonder if it would be a good idea to change ExecEvalParamExec and\n> > ExecEvalParamExtern to detoast the to-be-returned value. If we change\n> > the value that's stored in econtext->ecxt_param_exec_vals /\n> > econtext->ecxt_param_list_info, we'd avoid repeated detaosting.\n> \n> I'd think about attaching that to the nestloop param mechanism not\n> ExecEvalParam in general.\n\nThat was my first thought too - but especially if there's multiple\nvariables detoasting all of them might be a waste if the params are not\ndereferenced. So doing it the first time a parameter is used seemed like\nit'd be much more likely to be beneficial.\n\nBut even there it could be a waste, because e.g. a length comparison\nalone is enough to determine inequality. Which is why I was wondering\nabout somehow being able to detoast the parameter \"in place\" in the\nparams arrays.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Sat, 11 Apr 2020 12:57:40 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: execExprInterp() questions / How to improve scalar array op expr\n eval?" }, { "msg_contents": "On Sat, Apr 11, 2020 at 3:33 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Andres Freund <andres@anarazel.de> writes:\n> > On 2020-04-11 08:58:46 -0400, James Coleman wrote:\n> >> - Does the execExpr/execExprInterp framework allow a scalar array op\n> >> to get an already expanded array (unless I'm missing something we\n> >> can't easily lookup a given index in a flattened array)?\n>\n> > Well, I'm not quite sure what you're thinking of. If the input is\n> > constant, then expression initialization can do just about everything it\n> > wants. 
Including preprocessing the array into whatever form it wants.\n> > But there's no logic for doing preprocessing whenever values change.\n>\n> For the most part it seems like this is asking the question at the wrong\n> level. It's not execExpr's job to determine the form of either values\n> coming in from \"outside\" (Vars from table rows, or Params from elsewhere)\n> or the results of intermediate functions/operators.\n\nRight, though I didn't know if the expr interpretation by any chance\nhad expanded arrays already available in some cases that we could take\nadvantage of. A bit of a shot in the dark as I try to grok how this\nall fits together.\n\n> In the specific case of an array-valued (or record-valued) Const node,\n> you could imagine having a provision to convert the array/record to an\n> expanded datum at execution start, or maybe better on first use. I'm\n> not sure how to tell whether that's actually a win though. It could\n> easily be a net loss if the immediate consumer of the value wants a\n> flat datum.\n>\n> It seems like this might be somewhat related to the currently-moribund\n> patch to allow caching of the values of stable subexpressions from\n> one execution to the next. If we had that infrastructure you could\n> imagine extending it to allow caching the expanded not flat form of\n> some datums. Again I'm not entirely sure what would drive the choice.\n\nI'll have to look into that patch to see if it sparks any ideas (or if\nit's worth working on for its own merits).\n\n> > I wonder if it would be a good idea to change ExecEvalParamExec and\n> > ExecEvalParamExtern to detoast the to-be-returned value. 
If we change\n> > the value that's stored in econtext->ecxt_param_exec_vals /\n> > econtext->ecxt_param_list_info, we'd avoid repeated detaosting.\n>\n> I'd think about attaching that to the nestloop param mechanism not\n> ExecEvalParam in general.\n\nRevealing my ignorance here, but is nestloop the only case where we\nhave params like that?\n\nJames\n\n\n", "msg_date": "Sat, 11 Apr 2020 15:57:55 -0400", "msg_from": "James Coleman <jtc331@gmail.com>", "msg_from_op": true, "msg_subject": "Re: execExprInterp() questions / How to improve scalar array op expr\n eval?" }, { "msg_contents": "On Sat, Apr 11, 2020 at 3:57 PM James Coleman <jtc331@gmail.com> wrote:\n> ..\n> > It seems like this might be somewhat related to the currently-moribund\n> > patch to allow caching of the values of stable subexpressions from\n> > one execution to the next. If we had that infrastructure you could\n> > imagine extending it to allow caching the expanded not flat form of\n> > some datums. Again I'm not entirely sure what would drive the choice.\n>\n> I'll have to look into that patch to see if it sparks any ideas (or if\n> it's worth working on for its own merits).\n\nIs this the patch [1] you're thinking of?\n\nJames\n\n[1]: https://www.postgresql.org/message-id/flat/da87bb6a014e029176a04f6e50033cfb%40postgrespro.ru\n\n\n", "msg_date": "Sat, 11 Apr 2020 16:58:45 -0400", "msg_from": "James Coleman <jtc331@gmail.com>", "msg_from_op": true, "msg_subject": "Re: execExprInterp() questions / How to improve scalar array op expr\n eval?" }, { "msg_contents": "James Coleman <jtc331@gmail.com> writes:\n>>> It seems like this might be somewhat related to the currently-moribund\n>>> patch to allow caching of the values of stable subexpressions from\n>>> one execution to the next.\n\n> Is this the patch [1] you're thinking of?\n> [1]: https://www.postgresql.org/message-id/flat/da87bb6a014e029176a04f6e50033cfb%40postgrespro.ru\n\nYeah. 
I was just digging for that in the archives, and also came across\nthis older patch:\n\nhttps://www.postgresql.org/message-id/CABRT9RBdRFS8sQNsJHxZOhC0tJe1x2jnomiz%3DFOhFkS07yRwQA%40mail.gmail.com\n\nwhich doesn't seem to have gone anywhere but might still contain\nuseful ideas.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 11 Apr 2020 17:03:50 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: execExprInterp() questions / How to improve scalar array op expr\n eval?" }, { "msg_contents": "Hi,\n\nOn 2020-04-11 15:53:11 -0400, James Coleman wrote:\n> On Sat, Apr 11, 2020 at 2:01 PM Andres Freund <andres@anarazel.de> wrote:\n> > > - If not, is there a way in that framework to know if the array expr\n> > > has stayed the same through multiple evaluations of the expression\n> > > tree (i.e., so you could expand and sort it just once)?\n> >\n> > No.\n> \n> Ok. Seems like it'd be likely to be interesting (maybe in other places\n> too?) to know if:\n> - Something is actually a param that can change, and,\n> - When (perhaps by some kind of flag or counter) it has changed.\n\nWe do have the param logic inside the executor, which does signal which\nparams have changed. It's just independent of expression evaluation.\n\nI'm not convinced (or well, even doubtful) this is something we want to\nhave at the expression evaluation level.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Sat, 11 Apr 2020 14:32:47 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: execExprInterp() questions / How to improve scalar array op expr\n eval?" 
}, { "msg_contents": "On Sat, Apr 11, 2020 at 5:32 PM Andres Freund <andres@anarazel.de> wrote:\n>\n> Hi,\n>\n> On 2020-04-11 15:53:11 -0400, James Coleman wrote:\n> > On Sat, Apr 11, 2020 at 2:01 PM Andres Freund <andres@anarazel.de> wrote:\n> > > > - If not, is there a way in that framework to know if the array expr\n> > > > has stayed the same through multiple evaluations of the expression\n> > > > tree (i.e., so you could expand and sort it just once)?\n> > >\n> > > No.\n> >\n> > Ok. Seems like it'd be likely to be interesting (maybe in other places\n> > too?) to know if:\n> > - Something is actually a param that can change, and,\n> > - When (perhaps by some kind of flag or counter) it has changed.\n>\n> We do have the param logic inside the executor, which does signal which\n> params have changed. It's just independent of expression evaluation.\n>\n> I'm not convinced (or well, even doubtful) this is something we want to\n> have at the expression evaluation level.\n\nPerhaps I'll discover the reason as I internalize the code, but could\nyou expound a bit? Is that because you believe there's a better way to\noptimize subexpressions that don't change? Or that it's likely to add\na lot of cost to non-optimized cases?\n\nJames\n\n\n", "msg_date": "Sun, 12 Apr 2020 08:55:44 -0400", "msg_from": "James Coleman <jtc331@gmail.com>", "msg_from_op": true, "msg_subject": "Re: execExprInterp() questions / How to improve scalar array op expr\n eval?" }, { "msg_contents": "Hi,\n\nOn 2020-04-12 08:55:44 -0400, James Coleman wrote:\n> On Sat, Apr 11, 2020 at 5:32 PM Andres Freund <andres@anarazel.de> wrote:\n> > On 2020-04-11 15:53:11 -0400, James Coleman wrote:\n> > > On Sat, Apr 11, 2020 at 2:01 PM Andres Freund <andres@anarazel.de> wrote:\n> > > > > - If not, is there a way in that framework to know if the array expr\n> > > > > has stayed the same through multiple evaluations of the expression\n> > > > > tree (i.e., so you could expand and sort it just once)?\n\n> > > Ok. 
Seems like it'd be likely to be interesting (maybe in other places\n> > > too?) to know if:\n> > > - Something is actually a param that can change, and,\n> > > - When (perhaps by some kind of flag or counter) it has changed.\n> >\n> > We do have the param logic inside the executor, which does signal which\n> > params have changed. It's just independent of expression evaluation.\n> >\n> > I'm not convinced (or well, even doubtful) this is something we want to\n> > have at the expression evaluation level.\n> \n> Perhaps I'll discover the reason as I internalize the code, but could\n> you expound a bit? Is that because you believe there's a better way to\n> optimize subexpressions that don't change? Or that it's likely to add\n> a lot of cost to non-optimized cases?\n\nI think, if you're putting it into expression evaluation itself, the\nlikelihood of causing slowdowns outside of the cases you're trying to\noptimize is much higher than likely the gain. Checks whether variables\nhaven't changed aren't free.\n\nSo, while I think it makes sense to optimize a constant array for a SAO\ninside expression initialization (possibly with a different opcode), I\ndon't think runtime checking logic to see whether the array is still the\nsame in ExecEvalScalarArrayOp() or any related place is likely to be a\ngood idea.\n\nIf - I am not really convinced it's worth it - we really want to\noptimize SAO arrays that can change at runtime, I suspect it'd be better\nif we extended the param mechanism so there's a 'postprocessing' step\nfor params that changed.\n\nWhich then would have to apply the expression sub-tree that applies to\nthe Param (i.e. ScalarArrayOpExpr->args) into some place that is\naccessible (or, even better, is directly accessed) by\nExecEvalScalarArrayOp().\n\n\nI think you'll also have to be careful about whether using binary search\nagainst the array will always come out top. 
I'd expect it to be worse\nfor the pretty common case of below 8 elements in the IN() or such.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Sun, 12 Apr 2020 11:24:52 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: execExprInterp() questions / How to improve scalar array op expr\n eval?" }, { "msg_contents": "I've read through all of the previous discussions related to stable\nsubexpression caching, and I'm planning to send a summary email with\nall of those links in one place.\n\nBut I also happened to stumble upon mention in the TODO of some email\ndiscussion way back in 2007 where Tom suggested [1] we should really\ntry planning scalar array ops (particularly those with large IN lists)\nas `IN (VALUES ...)`.\n\nThat actually would solve the specific case I'd had this problem with\n(seq scan on a large constant array IN expression). Ideally any query\nwith forms like:\nselect * from t where a in (1, 2,...)\nselect * from t where a in ((select i from x))\nwould always be isomorphic in planning. But thinking about this\novernight and scanning through things quickly this morning, I have a\nfeeling that'd be 1.) a pretty significant undertaking, and 2.) likely\nto explode the number of plans considered.\n\nAlso I don't know if there's a good place to slot that into planning.\nDo either of you happen to have any pointers into places that do\nsimilar kinds of rewrites I could look at? 
And in those cases do we\nnormally always rewrite or do we consider both styles independently?\n\nI suppose _only_ handling the case where a `IN (VALUES ...)` replaces\na seq scan with a scalar array op might be somewhat easier...but feels\nlike it leaves a lot of holes.\n\nI'm still at the point where I'm trying to determine if any of the\nabove (subexpression caching, saop optimization only on constants,\nre-planning as `IN (VALUES ...)`) is something reasonable enough\nrelative to the amount of effort to be worth working on.\n\nJames\n\n[1]: https://www.postgresql.org/message-id/19001.1178823208%40sss.pgh.pa.us\n\n\n", "msg_date": "Mon, 13 Apr 2020 10:40:52 -0400", "msg_from": "James Coleman <jtc331@gmail.com>", "msg_from_op": true, "msg_subject": "Re: execExprInterp() questions / How to improve scalar array op expr\n eval?" }, { "msg_contents": "On Mon, Apr 13, 2020 at 10:40 AM James Coleman <jtc331@gmail.com> wrote:\n>\n> I've read through all of the previous discussions related to stable\n> subexpression caching, and I'm planning to send a summary email with\n> all of those links in one place.\n>\n> But I also happened to stumble upon mention in the TODO of some email\n> discussion way back in 2007 where Tom suggested [1] we should really\n> try planning scalar array ops (particularly those with large IN lists)\n> as `IN (VALUES ...)`.\n>\n> That actually would solve the specific case I'd had this problem with\n> (seq scan on a large constant array IN expression). Ideally any query\n> with forms like:\n> select * from t where a in (1, 2,...)\n> select * from t where a in ((select i from x))\n> would always be isomorphic in planning. But thinking about this\n> overnight and scanning through things quickly this morning, I have a\n> feeling that'd be 1.) a pretty significant undertaking, and 2.) 
likely\n> to explode the number of plans considered.\n>\n> Also I don't know if there's a good place to slot that into planning.\n> Do either of you happen to have any pointers into places that do\n> similar kinds of rewrites I could look at? And in those cases do we\n> normally always rewrite or do we consider both styles independently?\n> ...\n> [1]: https://www.postgresql.org/message-id/19001.1178823208%40sss.pgh.pa.us\n\nI've kept reading the code and thinking this over some more, and it\nseems like implementing this would require one of:\n\n1. Tracking on a path whether or not it handles a given IN qual,\ngenerating paths with and without that qual, tracking the cheapest for\neach of those, and then running the join search with and without the\nVALUES RTEs in play...in short that seems...not really feasible.\n\n2. Teaching set_plain_rel_pathlist to run with and without the IN\nquals, and run a join search for just that base rel and the VALUES\nRTE, and determine which is lower cost. This seems like it'd be easier\nto get working, but would have the limitation of not fully considering\nall JOIN combinations including VALUES RTEs, and would still build more\nthan twice as many base rel paths as we do now. I suppose it could\npredicate all of this on some kind of heuristic about saop cost (e.g.,\nsize of the array).\n\nBut while the purist in me finds this attractive -- it means people\nshouldn't have to know to try both IN (<list>) and IN (VALUES ...) --\nit seems like it'd be a large effort with relatively little gain. My\nconclusion currently is that I believe we'd get 90% of the speed\nimprovement (at least) merely by optimizing scalar array ops, and,\npossibly (to expand beyond constants), work on the broader effort of\ncaching stable subexpressions with preprocessing support. 
And I have\nto assume the IN (<list>) case is a far more commonly used SQL clause\nanyway.\n\nI do wonder though if we could automatically convert some IN clauses\nwith subqueries into saops (parameterized, or even better, in some\ncases, quasi-consts lazily evaluated)...\n\nJames\n\n\n", "msg_date": "Fri, 17 Apr 2020 20:43:37 -0400", "msg_from": "James Coleman <jtc331@gmail.com>", "msg_from_op": true, "msg_subject": "Re: execExprInterp() questions / How to improve scalar array op expr\n eval?" } ]
[ { "msg_contents": "Adding -hackers, originally forgotten.\n\nOn Sat, Apr 11, 2020 at 10:26:39PM +0200, Tomas Vondra wrote:\n> Thanks! I'll investigate.\n> \n> On Sat, Apr 11, 2020 at 02:19:52PM -0500, Justin Pryzby wrote:\n> > frequent crash looks like:\n> > \n> > #0 __GI_raise (sig=sig@entry=6) at ../sysdeps/unix/sysv/linux/raise.c:51\n> > #1 0x00007fb0a0cda801 in __GI_abort () at abort.c:79\n> > #2 0x00007fb0a21ec425 in ExceptionalCondition (conditionName=conditionName@entry=0x7fb0a233a2ed \"relid > 0\", errorType=errorType@entry=0x7fb0a224701d \"FailedAssertion\",\n> > fileName=fileName@entry=0x7fb0a2340ce8 \"relnode.c\", lineNumber=lineNumber@entry=379) at assert.c:67\n> > #3 0x00007fb0a2010d3a in find_base_rel (root=root@entry=0x7fb0a2de2d00, relid=<optimized out>) at relnode.c:379\n> > #4 0x00007fb0a2199666 in examine_variable (root=root@entry=0x7fb0a2de2d00, node=node@entry=0x7fb0a2e65eb8, varRelid=varRelid@entry=0, vardata=vardata@entry=0x7ffe7b549e60) at selfuncs.c:4600\n> > #5 0x00007fb0a219e2ed in estimate_num_groups (root=root@entry=0x7fb0a2de2d00, groupExprs=0x7fb0a2e69118, input_rows=input_rows@entry=2, pgset=pgset@entry=0x0) at selfuncs.c:3279\n> > #6 0x00007fb0a1fc198b in cost_incremental_sort (path=path@entry=0x7fb0a2e69080, root=root@entry=0x7fb0a2de2d00, pathkeys=pathkeys@entry=0x7fb0a2e68b28, presorted_keys=presorted_keys@entry=3,\n> > input_startup_cost=103.73424154497742, input_total_cost=<optimized out>, input_tuples=2, width=480, comparison_cost=comparison_cost@entry=0, sort_mem=4096, limit_tuples=-1) at costsize.c:1832\n> > #7 0x00007fb0a2007f63 in create_incremental_sort_path (root=root@entry=0x7fb0a2de2d00, rel=rel@entry=0x7fb0a2e67a38, subpath=subpath@entry=0x7fb0a2e681a0, pathkeys=0x7fb0a2e68b28,\n> > presorted_keys=3, limit_tuples=limit_tuples@entry=-1) at pathnode.c:2793\n> > #8 0x00007fb0a1fe97cb in create_ordered_paths (limit_tuples=-1, target_parallel_safe=true, target=0x7fb0a2e65568, input_rel=<optimized out>, 
root=0x7fb0a2de2d00) at planner.c:5029\n> > #9 grouping_planner (root=root@entry=0x7fb0a2de2d00, inheritance_update=inheritance_update@entry=false, tuple_fraction=<optimized out>, tuple_fraction@entry=0) at planner.c:2254\n> > #10 0x00007fb0a1fecd5c in subquery_planner (glob=<optimized out>, parse=parse@entry=0x7fb0a2db7840, parent_root=parent_root@entry=0x7fb0a2dad650, hasRecursion=hasRecursion@entry=false,\n> > tuple_fraction=0) at planner.c:1015\n> > #11 0x00007fb0a1fbc286 in set_subquery_pathlist (rte=<optimized out>, rti=<optimized out>, rel=0x7fb0a2db3598, root=0x7fb0a2dad650) at allpaths.c:2303\n> > #12 set_rel_size (root=root@entry=0x7fb0a2dad650, rel=rel@entry=0x7fb0a2db1670, rti=rti@entry=2, rte=<optimized out>) at allpaths.c:422\n> > #13 0x00007fb0a1fbecad in set_base_rel_sizes (root=<optimized out>) at allpaths.c:323\n> > #14 make_one_rel (root=root@entry=0x7fb0a2dad650, joinlist=joinlist@entry=0x7fb0a2db76b8) at allpaths.c:185\n> > #15 0x00007fb0a1fe4a2b in query_planner (root=root@entry=0x7fb0a2dad650, qp_callback=qp_callback@entry=0x7fb0a1fe52c0 <standard_qp_callback>, qp_extra=qp_extra@entry=0x7ffe7b54a510)\n> > at planmain.c:269\n> > #16 0x00007fb0a1fea0b8 in grouping_planner (root=root@entry=0x7fb0a2dad650, inheritance_update=inheritance_update@entry=false, tuple_fraction=<optimized out>, tuple_fraction@entry=0)\n> > at planner.c:2058\n> > #17 0x00007fb0a1fecd5c in subquery_planner (glob=glob@entry=0x7fb0a2dab480, parse=parse@entry=0x7fb0a2d48410, parent_root=parent_root@entry=0x0, hasRecursion=hasRecursion@entry=false,\n> > tuple_fraction=tuple_fraction@entry=0) at planner.c:1015\n> > #18 0x00007fb0a1fee1df in standard_planner (parse=0x7fb0a2d48410, query_string=<optimized out>, cursorOptions=256, boundParams=<optimized out>) at planner.c:405\n> > \n> > Minimal query like:\n> > \n> > explain SELECT * FROM information_schema.transforms AS ref_1 RIGHT JOIN (SELECT 1 FROM pg_catalog.pg_namespace TABLESAMPLE SYSTEM (7.2))AS sample_2 ON 
(ref_1.specific_name is NULL);\n> > \n> > -- \n> > Justin\n\n\n", "msg_date": "Sat, 11 Apr 2020 16:46:39 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": true, "msg_subject": "Re: sqlsmith crash incremental sort" }, { "msg_contents": "Hi,\n\nI've looked into this a bit, and at first I thought that maybe the \nissue is in how cost_incremental_sort picks the EC members. It simply does this:\n\n EquivalenceMember *member = (EquivalenceMember *)\n linitial(key->pk_eclass->ec_members);\n\nso I was speculating that maybe there are multiple EC members and the\none we need is not the first one. That would have been easy to fix.\n\nBut that doesn't seem to be the case - in this example the EC only has a\nsingle EC member anyway.\n\n (gdb) p key->pk_eclass->ec_members\n $14 = (List *) 0x12eb958\n (gdb) p *key->pk_eclass->ec_members\n $15 = {type = T_List, length = 1, max_length = 5, elements = 0x12eb970, initial_elements = 0x12eb970}\n\nand the member is a Var with varno=0 (with a RelabelType on top, but \nthat's irrelevant).\n\n (gdb) p *(Var*)((RelabelType*)member->em_expr)->arg\n $12 = {xpr = {type = T_Var}, varno = 0, varattno = 1, vartype = 12445, vartypmod = -1, varcollid = 950, varlevelsup = 0, varnosyn = 0, varattnosyn = 1, location = -1}\n\nwhich then triggers the assert in find_base_rel. When looking for other\nplaces calling estimate_num_groups I found this in prepunion.c:\n\n * XXX you don't really want to know about this: we do the estimation\n * using the subquery's original targetlist expressions, not the\n * subroot->processed_tlist which might seem more appropriate. The\n * reason is that if the subquery is itself a setop, it may return a\n * processed_tlist containing \"varno 0\" Vars generated by\n * generate_append_tlist, and those would confuse estimate_num_groups\n * mightily. 
We ought to get rid of the \"varno 0\" hack, but that\n * requires a redesign of the parsetree representation of setops, so\n * that there can be an RTE corresponding to each setop's output.\n\nwhich seems pretty similar to the issue at hand, because the subpath is\nT_UpperUniquePath (not sure if that passes as setop, but the symptoms \nmatch nicely).\n\nNot sure what to do about it in cost_incremental_sort, though :-(\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Sun, 12 Apr 2020 00:44:45 +0200", "msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: sqlsmith crash incremental sort" }, { "msg_contents": "On Sun, Apr 12, 2020 at 12:44:45AM +0200, Tomas Vondra wrote:\n>Hi,\n>\n>I've looked into this a bit, and at first I thought that maybe the \n>issue is in how cost_incremental_sort picks the EC members. It simply \n>does this:\n>\n> EquivalenceMember *member = (EquivalenceMember *)\n> linitial(key->pk_eclass->ec_members);\n>\n>so I was speculating that maybe there are multiple EC members and the\n>one we need is not the first one. That would have been easy to fix.\n>\n>But that doesn't seem to be the case - in this example the EC ony has a\n>single EC member anyway.\n>\n> (gdb) p key->pk_eclass->ec_members\n> $14 = (List *) 0x12eb958\n> (gdb) p *key->pk_eclass->ec_members\n> $15 = {type = T_List, length = 1, max_length = 5, elements = 0x12eb970, initial_elements = 0x12eb970}\n>\n>and the member is a Var with varno=0 (with a RelabelType on top, but \n>that's irrelevant).\n>\n> (gdb) p *(Var*)((RelabelType*)member->em_expr)->arg\n> $12 = {xpr = {type = T_Var}, varno = 0, varattno = 1, vartype = 12445, vartypmod = -1, varcollid = 950, varlevelsup = 0, varnosyn = 0, varattnosyn = 1, location = -1}\n>\n>which then triggers the assert in find_base_rel. 
When looking for other\n>places calling estimate_num_groups I found this in prepunion.c:\n>\n> * XXX you don't really want to know about this: we do the estimation\n> * using the subquery's original targetlist expressions, not the\n> * subroot->processed_tlist which might seem more appropriate. The\n> * reason is that if the subquery is itself a setop, it may return a\n> * processed_tlist containing \"varno 0\" Vars generated by\n> * generate_append_tlist, and those would confuse estimate_num_groups\n> * mightily. We ought to get rid of the \"varno 0\" hack, but that\n> * requires a redesign of the parsetree representation of setops, so\n> * that there can be an RTE corresponding to each setop's output.\n>\n>which seems pretty similar to the issue at hand, because the subpath is\n>T_UpperUniquePath (not sure if that passes as setop, but the symptoms \n>match nicely).\n>\n>Not sure what to do about it in cost_incremental_sort, though :-(\n>\n\nI've been messing with this the whole day, without much progress :-(\n\nI'm 99.9999% sure it's the same issue described by the quoted comment,\nbecause the plan looks like this:\n\n Nested Loop Left Join\n -> Sample Scan on pg_namespace\n Sampling: system ('7.2'::real)\n -> Incremental Sort\n Sort Key: ...\n Presorted Key: ...\n -> Unique\n -> Sort\n Sort Key: ...\n -> Append\n -> Nested Loop\n ...\n -> Nested Loop\n ...\n\nso yeah, the plan does have set operations, and generate_append_tlist\ndoes generate Vars with varno == 0, causing this issue.\n\nBut I'm not entirely sure what to do about it in cost_incremental_sort.\nThe comment (introduced by 89deca582a in 2017) suggests a proper fix\nwould require redesigning the parsetree representation of setops, and\nit's a bit too late for that.\n\nSo I wonder what a possible solution might look like. 
I was hoping we\nmight grab the original target list and use that, similarly to\nrecurse_set_operations, but I'm not sure how/where to get it.\n\nAnother option is to use something as simple as checking for Vars with\nvarno==0 in cost_incremental_sort() and ignoring them somehow. We could\nsimply use some arbitrary estimate - by assuming the rows are unique or\nsomething like that. Yes, I agree it's pretty ugly and I'd much rather\nfind a way to generate something sensible, but I'm not even sure we can\ngenerate good estimate when doing UNION of data from different relations\nand so on. The attached (ugly) patch does this.\n\nJustin, can you try if this resolves the crashes or if there's something\nelse going on?\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Mon, 13 Apr 2020 02:09:43 +0200", "msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: sqlsmith crash incremental sort" }, { "msg_contents": "On Mon, Apr 13, 2020 at 02:09:43AM +0200, Tomas Vondra wrote:\n> Justin, can you try if this resolves the crashes or if there's something\n> else going on?\n\nWith your patch, this no longer crashes:\n|explain SELECT * FROM information_schema.transforms AS ref_1 RIGHT JOIN (SELECT 1 FROM pg_catalog.pg_namespace TABLESAMPLE SYSTEM (7.2))AS sample_2 ON (ref_1.specific_name is NULL);\n..and sqlsmith survived 20min, which is a good sign.\n\npg_ctl -c -D pgsql13.dat start -o '-c port=1234 -c log_min_messages=fatal'\nsqlsmith --target='host=/tmp port=1234 dbname=postgres' --verbose\n\nPreviously, I changed find_base_rel()'s Assert to an if(){elog}, so I know from\nanother 12 sqlsmith-hours that there's no other crash occurring frequently.\n\n(I hadn't used sqlsmith before this weekend, and was excited when I first saw\nthat it'd crashed overnight, and very surprised (after enabling core dumps) to\nsee that it crashed within 
~10min.)\n\n-- \nJustin\n\n\n", "msg_date": "Sun, 12 Apr 2020 19:41:54 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": true, "msg_subject": "Re: sqlsmith crash incremental sort" }, { "msg_contents": "On Sun, Apr 12, 2020 at 8:09 PM Tomas Vondra\n<tomas.vondra@2ndquadrant.com> wrote:\n>\n> On Sun, Apr 12, 2020 at 12:44:45AM +0200, Tomas Vondra wrote:\n> >Hi,\n> >\n> >I've looked into this a bit, and at first I thought that maybe the\n> >issue is in how cost_incremental_sort picks the EC members. It simply\n> >does this:\n> >\n> > EquivalenceMember *member = (EquivalenceMember *)\n> > linitial(key->pk_eclass->ec_members);\n> >\n> >so I was speculating that maybe there are multiple EC members and the\n> >one we need is not the first one. That would have been easy to fix.\n> >\n> >But that doesn't seem to be the case - in this example the EC ony has a\n> >single EC member anyway.\n> >\n> > (gdb) p key->pk_eclass->ec_members\n> > $14 = (List *) 0x12eb958\n> > (gdb) p *key->pk_eclass->ec_members\n> > $15 = {type = T_List, length = 1, max_length = 5, elements = 0x12eb970, initial_elements = 0x12eb970}\n> >\n> >and the member is a Var with varno=0 (with a RelabelType on top, but\n> >that's irrelevant).\n> >\n> > (gdb) p *(Var*)((RelabelType*)member->em_expr)->arg\n> > $12 = {xpr = {type = T_Var}, varno = 0, varattno = 1, vartype = 12445, vartypmod = -1, varcollid = 950, varlevelsup = 0, varnosyn = 0, varattnosyn = 1, location = -1}\n> >\n> >which then triggers the assert in find_base_rel. When looking for other\n> >places calling estimate_num_groups I found this in prepunion.c:\n> >\n> > * XXX you don't really want to know about this: we do the estimation\n> > * using the subquery's original targetlist expressions, not the\n> > * subroot->processed_tlist which might seem more appropriate. 
The\n> > * reason is that if the subquery is itself a setop, it may return a\n> > * processed_tlist containing \"varno 0\" Vars generated by\n> > * generate_append_tlist, and those would confuse estimate_num_groups\n> > * mightily. We ought to get rid of the \"varno 0\" hack, but that\n> > * requires a redesign of the parsetree representation of setops, so\n> > * that there can be an RTE corresponding to each setop's output.\n> >\n> >which seems pretty similar to the issue at hand, because the subpath is\n> >T_UpperUniquePath (not sure if that passes as setop, but the symptoms\n> >match nicely).\n> >\n> >Not sure what to do about it in cost_incremental_sort, though :-(\n> >\n>\n> I've been messing with this the whole day, without much progress :-(\n>\n> I'm 99.9999% sure it's the same issue described by the quoted comment,\n> because the plan looks like this:\n>\n> Nested Loop Left Join\n> -> Sample Scan on pg_namespace\n> Sampling: system ('7.2'::real)\n> -> Incremental Sort\n> Sort Key: ...\n> Presorted Key: ...\n> -> Unique\n> -> Sort\n> Sort Key: ...\n> -> Append\n> -> Nested Loop\n> ...\n> -> Nested Loop\n> ...\n>\n> so yeah, the plan does have set operations, and generate_append_tlist\n> does generate Vars with varno == 0, causing this issue.\n\nThis is a bit of an oddly shaped plan anyway, right? 
In an ideal world\nthe sort for the unique would have knowledge about what would be\nuseful for the parent node, and we wouldn't need the incremental sort\nat all.\n\nI'm not sure that that kind of thing is really a new problem, though,\nand it might not even be entirely possible to fix directly by trying\nto push down knowledge about useful sort keys to whatever created that\nsort path; it might only be fixable by having the incremental sort (or\neven regular sort) path creation know to \"subsume\" a sort underneath\nit.\n\nAnyway, I think that's a bit off topic, but it stood out to me.\n\n> But I'm not entirely sure what to do about it in cost_incremental_sort.\n> The comment (introduced by 89deca582a in 2017) suggests a proper fix\n> would require redesigning the parsetree representation of setops, and\n> it's a bit too late for that.\n>\n> So I wonder what a possible solution might look like. I was hoping we\n> might grab the original target list and use that, similarly to\n> recurse_set_operations, but I'm not sure how/where to get it.\n\nThis is also not an area I'm familiar with. Reading through the\nprepunion.c code alongside cost_incremental_sort, it seems that we\ndon't have access to the same level of information as the prepunion\ncode (i.e., we're only looking at the result of the union, not the\ncomponents of it), and trying descend down into it seems even more\ngross, so, see below...\n\n> Another option is to use something as simple as checking for Vars with\n> varno==0 in cost_incremental_sort() and ignoring them somehow. We could\n> simply use some arbitrary estimate - by assuming the rows are unique or\n> something like that. Yes, I agree it's pretty ugly and I'd much rather\n> find a way to generate something sensible, but I'm not even sure we can\n> generate good estimate when doing UNION of data from different relations\n> and so on. 
The attached (ugly) patch does this.\n\n...therefore I think this is worth proceeding with.\n\nJames\n\n\n", "msg_date": "Tue, 14 Apr 2020 13:16:33 -0400", "msg_from": "James Coleman <jtc331@gmail.com>", "msg_from_op": false, "msg_subject": "Re: sqlsmith crash incremental sort" }, { "msg_contents": "On Tue, Apr 14, 2020 at 01:16:33PM -0400, James Coleman wrote:\n>On Sun, Apr 12, 2020 at 8:09 PM Tomas Vondra\n><tomas.vondra@2ndquadrant.com> wrote:\n>>\n>> On Sun, Apr 12, 2020 at 12:44:45AM +0200, Tomas Vondra wrote:\n>> >Hi,\n>> >\n>> >I've looked into this a bit, and at first I thought that maybe the\n>> >issue is in how cost_incremental_sort picks the EC members. It simply\n>> >does this:\n>> >\n>> > EquivalenceMember *member = (EquivalenceMember *)\n>> > linitial(key->pk_eclass->ec_members);\n>> >\n>> >so I was speculating that maybe there are multiple EC members and the\n>> >one we need is not the first one. That would have been easy to fix.\n>> >\n>> >But that doesn't seem to be the case - in this example the EC ony has a\n>> >single EC member anyway.\n>> >\n>> > (gdb) p key->pk_eclass->ec_members\n>> > $14 = (List *) 0x12eb958\n>> > (gdb) p *key->pk_eclass->ec_members\n>> > $15 = {type = T_List, length = 1, max_length = 5, elements = 0x12eb970, initial_elements = 0x12eb970}\n>> >\n>> >and the member is a Var with varno=0 (with a RelabelType on top, but\n>> >that's irrelevant).\n>> >\n>> > (gdb) p *(Var*)((RelabelType*)member->em_expr)->arg\n>> > $12 = {xpr = {type = T_Var}, varno = 0, varattno = 1, vartype = 12445, vartypmod = -1, varcollid = 950, varlevelsup = 0, varnosyn = 0, varattnosyn = 1, location = -1}\n>> >\n>> >which then triggers the assert in find_base_rel. 
When looking for other\n>> >places calling estimate_num_groups I found this in prepunion.c:\n>> >\n>> > * XXX you don't really want to know about this: we do the estimation\n>> > * using the subquery's original targetlist expressions, not the\n>> > * subroot->processed_tlist which might seem more appropriate. The\n>> > * reason is that if the subquery is itself a setop, it may return a\n>> > * processed_tlist containing \"varno 0\" Vars generated by\n>> > * generate_append_tlist, and those would confuse estimate_num_groups\n>> > * mightily. We ought to get rid of the \"varno 0\" hack, but that\n>> > * requires a redesign of the parsetree representation of setops, so\n>> > * that there can be an RTE corresponding to each setop's output.\n>> >\n>> >which seems pretty similar to the issue at hand, because the subpath is\n>> >T_UpperUniquePath (not sure if that passes as setop, but the symptoms\n>> >match nicely).\n>> >\n>> >Not sure what to do about it in cost_incremental_sort, though :-(\n>> >\n>>\n>> I've been messing with this the whole day, without much progress :-(\n>>\n>> I'm 99.9999% sure it's the same issue described by the quoted comment,\n>> because the plan looks like this:\n>>\n>> Nested Loop Left Join\n>> -> Sample Scan on pg_namespace\n>> Sampling: system ('7.2'::real)\n>> -> Incremental Sort\n>> Sort Key: ...\n>> Presorted Key: ...\n>> -> Unique\n>> -> Sort\n>> Sort Key: ...\n>> -> Append\n>> -> Nested Loop\n>> ...\n>> -> Nested Loop\n>> ...\n>>\n>> so yeah, the plan does have set operations, and generate_append_tlist\n>> does generate Vars with varno == 0, causing this issue.\n>\n>This is a bit of an oddly shaped plan anyway, right? In an ideal world\n>the sort for the unique would have knowledge about what would be\n>useful for the parent node, and we wouldn't need the incremental sort\n>at all.\n>\n\nWell, yeah. 
The problem is the Unique simply compares the columns in the\norder it sees them, and it does not match the column order desired by\nincremental sort. But we don't push down this information at all :-(\n\nIn fact, there may be other reasons to reorder the comparisons, e.g.\nwhen the cost is different for different columns. There was a patch by\nTeodor, IIRC, doing exactly that.\n\n>I'm not sure that that kind of thing is really a new problem, though,\n>and it might not even be entirely possible to fix directly by trying\n>to push down knowledge about useful sort keys to whatever created that\n>sort path; it might only be fixable by having the incremental sort (or\n>even regular sort) path creation know to \"subsume\" a sort underneath\n>it.\n>\n>Anyway, I think that's a bit off topic, but it stood out to me.\n>\n\nIt's not a new problem. It's an optimization we don't have.\n\n>> But I'm not entirely sure what to do about it in cost_incremental_sort.\n>> The comment (introduced by 89deca582a in 2017) suggests a proper fix\n>> would require redesigning the parsetree representation of setops, and\n>> it's a bit too late for that.\n>>\n>> So I wonder what a possible solution might look like. I was hoping we\n>> might grab the original target list and use that, similarly to\n>> recurse_set_operations, but I'm not sure how/where to get it.\n>\n>This is also not an area I'm familiar with. Reading through the\n>prepunion.c code alongside cost_incremental_sort, it seems that we\n>don't have access to the same level of information as the prepunion\n>code (i.e., we're only looking at the result of the union, not the\n>components of it), and trying descend down into it seems even more\n>gross, so, see below...\n>\n\nYeah. And I'm not even sure having that information would allow good\nestimates e.g. 
for UNIONs of multiple relations etc.\n\n>> Another option is to use something as simple as checking for Vars with\n>> varno==0 in cost_incremental_sort() and ignoring them somehow. We could\n>> simply use some arbitrary estimate - by assuming the rows are unique or\n>> something like that. Yes, I agree it's pretty ugly and I'd much rather\n>> find a way to generate something sensible, but I'm not even sure we can\n>> generate good estimate when doing UNION of data from different relations\n>> and so on. The attached (ugly) patch does this.\n>\n>...therefore I think this is worth proceeding with.\n>\n\nOK, then the question is what estimate to use in this case. Should we\nassume 1 group or uniqueness? I'd assume a single group produces costs\nslightly above regular sort, right?\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Wed, 15 Apr 2020 16:47:12 +0200", "msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: sqlsmith crash incremental sort" }, { "msg_contents": "On Wed, Apr 15, 2020 at 10:47 AM Tomas Vondra\n<tomas.vondra@2ndquadrant.com> wrote:\n>\n> On Tue, Apr 14, 2020 at 01:16:33PM -0400, James Coleman wrote:\n> >On Sun, Apr 12, 2020 at 8:09 PM Tomas Vondra\n> ><tomas.vondra@2ndquadrant.com> wrote:\n> >>\n> >> On Sun, Apr 12, 2020 at 12:44:45AM +0200, Tomas Vondra wrote:\n> >> >Hi,\n> >> >\n> >> >I've looked into this a bit, and at first I thought that maybe the\n> >> >issue is in how cost_incremental_sort picks the EC members. It simply\n> >> >does this:\n> >> >\n> >> > EquivalenceMember *member = (EquivalenceMember *)\n> >> > linitial(key->pk_eclass->ec_members);\n> >> >\n> >> >so I was speculating that maybe there are multiple EC members and the\n> >> >one we need is not the first one. 
That would have been easy to fix.\n> >> >\n> >> >But that doesn't seem to be the case - in this example the EC ony has a\n> >> >single EC member anyway.\n> >> >\n> >> > (gdb) p key->pk_eclass->ec_members\n> >> > $14 = (List *) 0x12eb958\n> >> > (gdb) p *key->pk_eclass->ec_members\n> >> > $15 = {type = T_List, length = 1, max_length = 5, elements = 0x12eb970, initial_elements = 0x12eb970}\n> >> >\n> >> >and the member is a Var with varno=0 (with a RelabelType on top, but\n> >> >that's irrelevant).\n> >> >\n> >> > (gdb) p *(Var*)((RelabelType*)member->em_expr)->arg\n> >> > $12 = {xpr = {type = T_Var}, varno = 0, varattno = 1, vartype = 12445, vartypmod = -1, varcollid = 950, varlevelsup = 0, varnosyn = 0, varattnosyn = 1, location = -1}\n> >> >\n> >> >which then triggers the assert in find_base_rel. When looking for other\n> >> >places calling estimate_num_groups I found this in prepunion.c:\n> >> >\n> >> > * XXX you don't really want to know about this: we do the estimation\n> >> > * using the subquery's original targetlist expressions, not the\n> >> > * subroot->processed_tlist which might seem more appropriate. The\n> >> > * reason is that if the subquery is itself a setop, it may return a\n> >> > * processed_tlist containing \"varno 0\" Vars generated by\n> >> > * generate_append_tlist, and those would confuse estimate_num_groups\n> >> > * mightily. 
We ought to get rid of the \"varno 0\" hack, but that\n> >> > * requires a redesign of the parsetree representation of setops, so\n> >> > * that there can be an RTE corresponding to each setop's output.\n> >> >\n> >> >which seems pretty similar to the issue at hand, because the subpath is\n> >> >T_UpperUniquePath (not sure if that passes as setop, but the symptoms\n> >> >match nicely).\n> >> >\n> >> >Not sure what to do about it in cost_incremental_sort, though :-(\n> >> >\n> >>\n> >> I've been messing with this the whole day, without much progress :-(\n> >>\n> >> I'm 99.9999% sure it's the same issue described by the quoted comment,\n> >> because the plan looks like this:\n> >>\n> >> Nested Loop Left Join\n> >> -> Sample Scan on pg_namespace\n> >> Sampling: system ('7.2'::real)\n> >> -> Incremental Sort\n> >> Sort Key: ...\n> >> Presorted Key: ...\n> >> -> Unique\n> >> -> Sort\n> >> Sort Key: ...\n> >> -> Append\n> >> -> Nested Loop\n> >> ...\n> >> -> Nested Loop\n> >> ...\n> >>\n> >> so yeah, the plan does have set operations, and generate_append_tlist\n> >> does generate Vars with varno == 0, causing this issue.\n> >\n> >This is a bit of an oddly shaped plan anyway, right? In an ideal world\n> >the sort for the unique would have knowledge about what would be\n> >useful for the parent node, and we wouldn't need the incremental sort\n> >at all.\n> >\n>\n> Well, yeah. The problem is the Unique simply compares the columns in the\n> order it sees them, and it does not match the column order desired by\n> incremental sort. But we don't push down this information at all :-(\n>\n> In fact, there may be other reasons to reorder the comparisons, e.g.\n> when the cost is different for different columns. 
There was a patch by\n> Teodor IIRC correctly doing exactly that.\n>\n> >I'm not sure that that kind of thing is really a new problem, though,\n> >and it might not even be entirely possible to fix directly by trying\n> >to push down knowledge about useful sort keys to whatever created that\n> >sort path; it might only be fixable by having the incremental sort (or\n> >even regular sort) path creation know to \"subsume\" a sort underneath\n> >it.\n> >\n> >Anyway, I think that's a bit off topic, but it stood out to me.\n> >\n>\n> It's not a new problem. It's an optimization we don't have.\n>\n> >> But I'm not entirely sure what to do about it in cost_incremental_sort.\n> >> The comment (introduced by 89deca582a in 2017) suggests a proper fix\n> >> would require redesigning the parsetree representation of setops, and\n> >> it's a bit too late for that.\n> >>\n> >> So I wonder what a possible solution might look like. I was hoping we\n> >> might grab the original target list and use that, similarly to\n> >> recurse_set_operations, but I'm not sure how/where to get it.\n> >\n> >This is also not an area I'm familiar with. Reading through the\n> >prepunion.c code alongside cost_incremental_sort, it seems that we\n> >don't have access to the same level of information as the prepunion\n> >code (i.e., we're only looking at the result of the union, not the\n> >components of it), and trying descend down into it seems even more\n> >gross, so, see below...\n> >\n>\n> Yeah. And I'm not even sure having that information would allow good\n> estimates e.g. for UNIONs of multiple relations etc.\n>\n> >> Another option is to use something as simple as checking for Vars with\n> >> varno==0 in cost_incremental_sort() and ignoring them somehow. We could\n> >> simply use some arbitrary estimate - by assuming the rows are unique or\n> >> something like that. 
Yes, I agree it's pretty ugly and I'd much rather\n> >> find a way to generate something sensible, but I'm not even sure we can\n> >> generate good estimate when doing UNION of data from different relations\n> >> and so on. The attached (ugly) patch does this.\n> >\n> >...therefore I think this is worth proceeding with.\n> >\n>\n> OK, then the question is what estimate to use in this case. Should we\n> assume 1 group or uniqueness? I'd assume a single group produces costs\n> slightly above regular sort, right?\n\nOriginally I'd intuitively leaned towards assuming they were unique.\nBut that would be the best case for memory/disk space usage, for\nexample, and the costing for incremental sort is always (at least\nmildly) higher than regular sort if the number of groups is 1. That\nalso guarantees the startup cost is higher than regular sort also.\n\nSo I think using a number of groups estimate of 1, we just wouldn't\nchoose an incremental sort ever in this case.\n\nMaybe that's the right choice? It'd certainly be the conservative\nchoice. 
What are your thoughts on the trade-offs there?\n\nJames\n\n\n", "msg_date": "Wed, 15 Apr 2020 11:26:12 -0400", "msg_from": "James Coleman <jtc331@gmail.com>", "msg_from_op": false, "msg_subject": "Re: sqlsmith crash incremental sort" }, { "msg_contents": "On Mon, Apr 13, 2020 at 8:09 AM Tomas Vondra <tomas.vondra@2ndquadrant.com>\nwrote:\n\n>\n> I've been messing with this the whole day, without much progress :-(\n>\n> I'm 99.9999% sure it's the same issue described by the quoted comment,\n> because the plan looks like this:\n>\n> Nested Loop Left Join\n> -> Sample Scan on pg_namespace\n> Sampling: system ('7.2'::real)\n> -> Incremental Sort\n> Sort Key: ...\n> Presorted Key: ...\n> -> Unique\n> -> Sort\n> Sort Key: ...\n> -> Append\n> -> Nested Loop\n> ...\n> -> Nested Loop\n> ...\n>\n> so yeah, the plan does have set operations, and generate_append_tlist\n> does generate Vars with varno == 0, causing this issue.\n>\n\nAfter some digging I believe here is what happened.\n\n1. For the UNION query, we build an upper rel of UPPERREL_SETOP and\ngenerate Append path for it. Since Append doesn't actually evaluate its\ntargetlist, we generate 'varno 0' Vars for its targetlist. (setrefs.c\nwould just replace them with OUTER_VAR when adjusting the final plan so\nthis usually does not cause problems.)\n\n2. To remove duplicates for UNION, we use hash/sort to unique-ify the\nresult. If sort is chosen, we add Sort path and then Unique path above\nAppend path, with pathkeys made from Append's targetlist.\n\n3. Also the Append's targetlist would be built into\nroot->processed_tlist and with that we calculate root->sort_pathkeys.\n\n4. When handling ORDER BY clause, we figure out the pathkeys of\nUnique->Sort->Append path share some same prefix with\nroot->sort_pathkeys and thus incremental sort would be considered.\n\n5. 
When calculating cost for incremental sort, estimate_num_groups does\nnot cope with 'varno 0' Vars extracted from root->sort_pathkeys.\n\n\nWith this scenario, here is a simple recipe:\n\ncreate table foo(a int, b int, c int);\nset enable_hashagg to off;\nexplain select * from foo union select * from foo order by 1,3;\n\nThanks\nRichard", "msg_date": "Thu, 16 Apr 2020 16:44:10 +0800", "msg_from": "Richard Guo <guofenglinux@gmail.com>", "msg_from_op": false, "msg_subject": "Re: sqlsmith crash incremental sort" }, { "msg_contents": "On Wed, Apr 15, 2020 at 10:47 PM Tomas Vondra <tomas.vondra@2ndquadrant.com>\nwrote:\n\n>\n> Well, yeah. The problem is the Unique simply compares the columns in the\n> order it sees them, and it does not match the column order desired by\n> incremental sort. But we don't push down this information at all :-(\n>\n\nThis is a nice optimization better to have. 
Since the 'Sort and Unique'\nwould unique-ify the result of a UNION by sorting on all columns, why\nnot we adjust the sort order trying to match parse->sortClause so that\nwe can avoid the final sort node?\n\nDoing that we can transform plan from:\n\n# explain (costs off) select * from foo union select * from foo order by\n1,3;\n QUERY PLAN\n-----------------------------------------------\n Incremental Sort\n Sort Key: foo.a, foo.c\n Presorted Key: foo.a\n -> Unique\n -> Sort\n Sort Key: foo.a, foo.b, foo.c\n -> Append\n -> Seq Scan on foo\n -> Seq Scan on foo foo_1\n(9 rows)\n\nTo:\n\n# explain (costs off) select * from foo union select * from foo order by\n1,3;\n QUERY PLAN\n-----------------------------------------\n Unique\n -> Sort\n Sort Key: foo.a, foo.c, foo.b\n -> Append\n -> Seq Scan on foo\n -> Seq Scan on foo foo_1\n(6 rows)\n\nThanks\nRichard", "msg_date": "Thu, 16 Apr 2020 18:35:20 +0800", "msg_from": "Richard Guo <guofenglinux@gmail.com>", "msg_from_op": false, "msg_subject": "Re: sqlsmith crash incremental sort" }, { "msg_contents": "On Thu, Apr 16, 2020 at 6:35 PM Richard Guo <guofenglinux@gmail.com> wrote:\n\n>\n> On Wed, Apr 15, 2020 at 10:47 PM Tomas Vondra <\n> tomas.vondra@2ndquadrant.com> wrote:\n>\n>>\n>> Well, yeah. The problem is the Unique simply compares the columns in the\n>> order it sees them, and it does not match the column order desired by\n>> incremental sort. But we don't push down this information at all :-(\n>>\n>\n> This is a nice optimization better to have. 
Since the 'Sort and Unique'\n> would unique-ify the result of a UNION by sorting on all columns, why\n> not we adjust the sort order trying to match parse->sortClause so that\n> we can avoid the final sort node?\n>\n> Doing that we can transform plan from:\n>\n> # explain (costs off) select * from foo union select * from foo order by\n> 1,3;\n> QUERY PLAN\n> -----------------------------------------------\n> Incremental Sort\n> Sort Key: foo.a, foo.c\n> Presorted Key: foo.a\n> -> Unique\n> -> Sort\n> Sort Key: foo.a, foo.b, foo.c\n> -> Append\n> -> Seq Scan on foo\n> -> Seq Scan on foo foo_1\n> (9 rows)\n>\n> To:\n>\n> # explain (costs off) select * from foo union select * from foo order by\n> 1,3;\n> QUERY PLAN\n> -----------------------------------------\n> Unique\n> -> Sort\n> Sort Key: foo.a, foo.c, foo.b\n> -> Append\n> -> Seq Scan on foo\n> -> Seq Scan on foo foo_1\n> (6 rows)\n>\n>\nAttached is what I'm thinking about this optimization. Does it make any\nsense?\n\nThanks\nRichard", "msg_date": "Thu, 16 Apr 2020 20:21:51 +0800", "msg_from": "Richard Guo <guofenglinux@gmail.com>", "msg_from_op": false, "msg_subject": "Re: sqlsmith crash incremental sort" }, { "msg_contents": "On Thu, Apr 16, 2020 at 04:44:10PM +0800, Richard Guo wrote:\n>On Mon, Apr 13, 2020 at 8:09 AM Tomas Vondra <tomas.vondra@2ndquadrant.com>\n>wrote:\n>\n>>\n>> I've been messing with this the whole day, without much progress :-(\n>>\n>> I'm 99.9999% sure it's the same issue described by the quoted comment,\n>> because the plan looks like this:\n>>\n>> Nested Loop Left Join\n>> -> Sample Scan on pg_namespace\n>> Sampling: system ('7.2'::real)\n>> -> Incremental Sort\n>> Sort Key: ...\n>> Presorted Key: ...\n>> -> Unique\n>> -> Sort\n>> Sort Key: ...\n>> -> Append\n>> -> Nested Loop\n>> ...\n>> -> Nested Loop\n>> ...\n>>\n>> so yeah, the plan does have set operations, and generate_append_tlist\n>> does generate Vars with varno == 0, causing this issue.\n>>\n>\n>After some digging I believe 
here is what happened.\n>\n>1. For the UNION query, we build an upper rel of UPPERREL_SETOP and\n>generate Append path for it. Since Append doesn't actually evaluate its\n>targetlist, we generate 'varno 0' Vars for its targetlist. (setrefs.c\n>would just replace them with OUTER_VAR when adjusting the final plan so\n>this usually does not cause problems.)\n>\n>2. To remove duplicates for UNION, we use hash/sort to unique-ify the\n>result. If sort is chosen, we add Sort path and then Unique path above\n>Append path, with pathkeys made from Append's targetlist.\n>\n>3. Also the Append's targetlist would be built into\n>root->processed_tlist and with that we calculate root->sort_pathkeys.\n>\n>4. When handling ORDER BY clause, we figure out the pathkeys of\n>Unique->Sort->Append path share some same prefix with\n>root->sort_pathkeys and thus incremental sort would be considered.\n>\n>5. When calculating cost for incremental sort, estimate_num_groups does\n>not cope with 'varno 0' Vars extracted from root->sort_pathkeys.\n>\n\nRight.\n\n>\n>With this scenario, here is a simple recipe:\n>\n>create table foo(a int, b int, c int);\n>set enable_hashagg to off;\n>explain select * from foo union select * from foo order by 1,3;\n>\n\nYep, that's a much simpler query / plan. Thanks.\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services \n\n\n", "msg_date": "Thu, 16 Apr 2020 14:51:01 +0200", "msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: sqlsmith crash incremental sort" }, { "msg_contents": "On Thu, Apr 16, 2020 at 8:22 AM Richard Guo <guofenglinux@gmail.com> wrote:\n>\n>\n> On Thu, Apr 16, 2020 at 6:35 PM Richard Guo <guofenglinux@gmail.com> wrote:\n>>\n>>\n>> On Wed, Apr 15, 2020 at 10:47 PM Tomas Vondra <tomas.vondra@2ndquadrant.com> wrote:\n>>>\n>>>\n>>> Well, yeah. 
The problem is the Unique simply compares the columns in the\n>>> order it sees them, and it does not match the column order desired by\n>>> incremental sort. But we don't push down this information at all :-(\n>>\n>>\n>> This is a nice optimization better to have. Since the 'Sort and Unique'\n>> would unique-ify the result of a UNION by sorting on all columns, why\n>> not we adjust the sort order trying to match parse->sortClause so that\n>> we can avoid the final sort node?\n>>\n>> Doing that we can transform plan from:\n>>\n>> # explain (costs off) select * from foo union select * from foo order by 1,3;\n>> QUERY PLAN\n>> -----------------------------------------------\n>> Incremental Sort\n>> Sort Key: foo.a, foo.c\n>> Presorted Key: foo.a\n>> -> Unique\n>> -> Sort\n>> Sort Key: foo.a, foo.b, foo.c\n>> -> Append\n>> -> Seq Scan on foo\n>> -> Seq Scan on foo foo_1\n>> (9 rows)\n>>\n>> To:\n>>\n>> # explain (costs off) select * from foo union select * from foo order by 1,3;\n>> QUERY PLAN\n>> -----------------------------------------\n>> Unique\n>> -> Sort\n>> Sort Key: foo.a, foo.c, foo.b\n>> -> Append\n>> -> Seq Scan on foo\n>> -> Seq Scan on foo foo_1\n>> (6 rows)\n>>\n>\n> Attached is what I'm thinking about this optimization. 
Does it make any\n> sense?\n\nShouldn't this go on either a new thread or on the thread for the\npatch Tomas was referencing (by Teodor I believe)?\n\nOr are you saying you believe this patch guarantees we never see this\nproblem in incremental sort costing?\n\nJames\n\n\n", "msg_date": "Thu, 16 Apr 2020 12:04:03 -0400", "msg_from": "James Coleman <jtc331@gmail.com>", "msg_from_op": false, "msg_subject": "Re: sqlsmith crash incremental sort" }, { "msg_contents": "On Thu, Apr 16, 2020 at 12:04:03PM -0400, James Coleman wrote:\n>On Thu, Apr 16, 2020 at 8:22 AM Richard Guo <guofenglinux@gmail.com> wrote:\n>>\n>>\n>> On Thu, Apr 16, 2020 at 6:35 PM Richard Guo <guofenglinux@gmail.com> wrote:\n>>>\n>>>\n>>> On Wed, Apr 15, 2020 at 10:47 PM Tomas Vondra <tomas.vondra@2ndquadrant.com> wrote:\n>>>>\n>>>>\n>>>> Well, yeah. 
Since the 'Sort and Unique'\n>>> would unique-ify the result of a UNION by sorting on all columns, why\n>>> not we adjust the sort order trying to match parse->sortClause so that\n>>> we can avoid the final sort node?\n>>>\n>>> Doing that we can transform plan from:\n>>>\n>>> # explain (costs off) select * from foo union select * from foo order by 1,3;\n>>> QUERY PLAN\n>>> -----------------------------------------------\n>>> Incremental Sort\n>>> Sort Key: foo.a, foo.c\n>>> Presorted Key: foo.a\n>>> -> Unique\n>>> -> Sort\n>>> Sort Key: foo.a, foo.b, foo.c\n>>> -> Append\n>>> -> Seq Scan on foo\n>>> -> Seq Scan on foo foo_1\n>>> (9 rows)\n>>>\n>>> To:\n>>>\n>>> # explain (costs off) select * from foo union select * from foo order by 1,3;\n>>> QUERY PLAN\n>>> -----------------------------------------\n>>> Unique\n>>> -> Sort\n>>> Sort Key: foo.a, foo.c, foo.b\n>>> -> Append\n>>> -> Seq Scan on foo\n>>> -> Seq Scan on foo foo_1\n>>> (6 rows)\n>>>\n>>\n>> Attached is what I'm thinking about this optimization. Does it make any\n>> sense?\n>\n>Shouldn't this go one either a new thread or on the thread for the\n>patch Tomas was referencing (by Teodor I believe)?\n>\n\nFWIW the optimization I had in mind is this:\n\n https://commitfest.postgresql.org/21/1651/\n\nI now realize that was about GROUP BY, but it's not all that different\nand the concerns will / should be fairly similar, I think.\n\nIMO simply tweaking the sort keys to match the upper parts of the plan\nis probably way too simplistic, I'm afraid. For example, if the Unique\nsignificantly reduces cardinality, then the cost of the additional sort\nis much less important. 
It may be much better to optimize the \"large\"\nsort of the whole data set, either by reordering the columns as proposed\nby Teodor in his patch (by number of distinct values and/or cost of the\ncomparison function).\n\nFurthermore, this is one of the places that is not using incremental\nsort yet - I can easily imagine doing something like this:\n\n\n Sort\n -> Unique\n -> Incremental Sort\n\t -> ...\n\ncould be a massive win. So I think we can't just rejigger the sort keys\narbitrarily, we should / need to consider those alternatives.\n\n>Or are you saying you believe this patch guarantees we never see this\n>problem in incremental sort costing?\n>\n\nYeah, that's not entirely clear to me. But maybe it shows us where to\nget the unprocessed target list?\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Thu, 16 Apr 2020 20:44:16 +0200", "msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: sqlsmith crash incremental sort" }, { "msg_contents": "On Thu, Apr 16, 2020 at 08:44:16PM +0200, Tomas Vondra wrote:\n>On Thu, Apr 16, 2020 at 12:04:03PM -0400, James Coleman wrote:\n>>On Thu, Apr 16, 2020 at 8:22 AM Richard Guo <guofenglinux@gmail.com> wrote:\n>>>\n>>>\n>>>On Thu, Apr 16, 2020 at 6:35 PM Richard Guo <guofenglinux@gmail.com> wrote:\n>>>>\n>>>>\n>>>>On Wed, Apr 15, 2020 at 10:47 PM Tomas Vondra <tomas.vondra@2ndquadrant.com> wrote:\n>>>>>\n>>>>>\n>>>>>Well, yeah. The problem is the Unique simply compares the columns in the\n>>>>>order it sees them, and it does not match the column order desired by\n>>>>>incremental sort. But we don't push down this information at all :-(\n>>>>\n>>>>\n>>>>This is a nice optimization better to have. 
Since the 'Sort and Unique'\n>>>>would unique-ify the result of a UNION by sorting on all columns, why\n>>>>not we adjust the sort order trying to match parse->sortClause so that\n>>>>we can avoid the final sort node?\n>>>>\n>>>>Doing that we can transform plan from:\n>>>>\n>>>># explain (costs off) select * from foo union select * from foo order by 1,3;\n>>>> QUERY PLAN\n>>>>-----------------------------------------------\n>>>> Incremental Sort\n>>>> Sort Key: foo.a, foo.c\n>>>> Presorted Key: foo.a\n>>>> -> Unique\n>>>> -> Sort\n>>>> Sort Key: foo.a, foo.b, foo.c\n>>>> -> Append\n>>>> -> Seq Scan on foo\n>>>> -> Seq Scan on foo foo_1\n>>>>(9 rows)\n>>>>\n>>>>To:\n>>>>\n>>>># explain (costs off) select * from foo union select * from foo order by 1,3;\n>>>> QUERY PLAN\n>>>>-----------------------------------------\n>>>> Unique\n>>>> -> Sort\n>>>> Sort Key: foo.a, foo.c, foo.b\n>>>> -> Append\n>>>> -> Seq Scan on foo\n>>>> -> Seq Scan on foo foo_1\n>>>>(6 rows)\n>>>>\n>>>\n>>>Attached is what I'm thinking about this optimization. Does it make any\n>>>sense?\n>>\n>>Shouldn't this go one either a new thread or on the thread for the\n>>patch Tomas was referencing (by Teodor I believe)?\n>>\n>\n>FWIW the optimization I had in mind is this:\n>\n> https://commitfest.postgresql.org/21/1651/\n>\n>I now realize that was about GROUP BY, but it's not all that different\n>and the concerns will / should be fairly similar, I think.\n>\n>IMO simply tweaking the sort keys to match the upper parts of the plan\n>is probably way too simplistic, I'm afraid. For example, if the Unique\n>significantly reduces cardinality, then the cost of the additional sort\n>is much less important. 
It may be much better to optimize the \"large\"\n>sort of the whole data set, either by reordering the columns as proposed\n>by Teodor in his patch (by number of distinct values and/or cost of the\n>comparison function function).\n>\n>Furthermore, this is one of the places that is not using incremental\n>sort yet - I can easily imagine doing something like this:\n>\n>\n> Sort\n> -> Unique\n> -> Incremenal Sort\n>\t -> ...\n>\n>could be a massive win. So I think we can't just rejigger the sort keys\n>abitrarily, we should / need to consider those alternatives.\n>\n>>Or are you saying you believe this patch guarantees we never see this\n>>problem in incremental sort costing?\n>>\n>\n>Yeah, that's not entirely close to me. But maybe it shows us where we to\n>get the unprocessed target list?\n>\n\nI think at the very least this needs to apply the same change also to\ngenerate_nonunion_paths, because otherwise this fails because of the\nsame issue:\n\n set enable_hashagg = off;\n explain select * from foo except select * from foo order by 1, 3;\n\nI'm still of the opinion that this is really an optimization and\nbehavior change, and I feel rather uneasy about pushing it post feature\nfreeze without appropriate review. I also think we really ought to\nconsider how would this work with the other optimizations I outlined\nelsewhere in this thread (comparison costs, ...).\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Fri, 17 Apr 2020 01:13:00 +0200", "msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: sqlsmith crash incremental sort" }, { "msg_contents": "On Wed, Apr 15, 2020 at 11:26:12AM -0400, James Coleman wrote:\n>On Wed, Apr 15, 2020 at 10:47 AM Tomas Vondra\n><tomas.vondra@2ndquadrant.com> wrote:\n>>\n>> ...\n>>\n>> Yeah. And I'm not even sure having that information would allow good\n>> estimates e.g. 
for UNIONs of multiple relations etc.\n>>\n>> >> Another option is to use something as simple as checking for Vars with\n>> >> varno==0 in cost_incremental_sort() and ignoring them somehow. We could\n>> >> simply use some arbitrary estimate - by assuming the rows are unique or\n>> >> something like that. Yes, I agree it's pretty ugly and I'd much rather\n>> >> find a way to generate something sensible, but I'm not even sure we can\n>> >> generate good estimate when doing UNION of data from different relations\n>> >> and so on. The attached (ugly) patch does this.\n>> >\n>> >...therefore I think this is worth proceeding with.\n>> >\n>>\n>> OK, then the question is what estimate to use in this case. Should we\n>> assume 1 group or uniqueness? I'd assume a single group produces costs\n>> slightly above regular sort, right?\n>\n>Originally I'd intuitively leaned towards assuming they were unique.\n>But that would be the best case for memory/disk space usage, for\n>example, and the costing for incremental sort is always (at least\n>mildly) higher than regular sort if the number of groups is 1. That\n>also guarantees the startup cost is higher than regular sort also.\n>\n>So I think using a number of groups estimate of 1, we just wouldn't\n>choose an incremental sort ever in this case.\n>\n>Maybe that's the right choice? It'd certainly be the conservative\n>choice. What are your thoughts on the trade-offs there?\n>\n\nI think we have essentially three options:\n\n1) assuming there's just a single group\n\nThis should produce cost estimate higher than plain sort, disabling\nincremental sort. I'd say this is \"worst case\" assumption. 
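(To make the intuition concrete, here is a toy comparison-count model in Python - emphatically not the real formulas in cost_incremental_sort(), and the per-group overhead constant is made up:)

```python
import math

def plain_sort_cost(n):
    # textbook comparison-sort cost: n * log2(n)
    return n * math.log2(n) if n > 1 else 0.0

def incr_sort_cost(n, ngroups, per_group_overhead=2.0):
    # incremental sort processes each presorted group as a separate
    # small sort; each batch pays a (made-up) startup overhead
    group_size = n / ngroups
    return ngroups * (plain_sort_cost(group_size) + per_group_overhead)

n = 100000
# a single group means doing the same full sort plus the batch
# overhead, so the estimate is always slightly above a plain sort
print(incr_sort_cost(n, 1) > plain_sort_cost(n))    # True
# with many groups the smaller per-batch sorts win
print(incr_sort_cost(n, 100) < plain_sort_cost(n))  # True
```

So under assumption (1) the incremental path never looks cheaper than a plain sort, which is what makes it the conservative choice.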
I think this\nmight be overly pessimistic, though.\n\n2) assuming each row is a separate group\n\nIf (1) is worst case scenario, then this is probably the best case,\nparticularly when the query is sensitive to startup cost.\n\n\n3) something in between\n\nIf (1) and (2) are worst/best-case scenarios, maybe we should pick\nsomething in between. We have DEFAULT_NUM_DISTINCT (200) which\nessentially says \"we don't know what the number of groups is\" so maybe\nwe should use that. Another option would be nrows/10, which is the cap\nwe use in estimate_num_groups without extended stats.\n\n\nI was leaning towards (1) as \"worst case\" choice seems natural to\nprevent possible regressions. But consider this:\n\n create table small (a int);\n create table large (a int);\n \n insert into small select mod(i,10) from generate_series(1,100) s(i);\n insert into large select mod(i,10) from generate_series(1,100000) s(i);\n\n analyze small;\n analyze large;\n\n explain select i from large union select i from small;\n\n QUERY PLAN\n -------------------------------------------------------------------------------\n Unique (cost=11260.35..11760.85 rows=100100 width=4)\n -> Sort (cost=11260.35..11510.60 rows=100100 width=4)\n Sort Key: large.i\n -> Append (cost=0.00..2946.50 rows=100100 width=4)\n -> Seq Scan on large (cost=0.00..1443.00 rows=100000 width=4)\n -> Seq Scan on small (cost=0.00..2.00 rows=100 width=4)\n (6 rows)\n\nThe estimate of the number of groups is clearly bogus - we know there are\nonly 10 groups in each relation, but here we end up with 100100.\n\nSo perhaps we should do (2) to keep the behavior consistent?\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Fri, 17 Apr 2020 02:54:20 +0200", "msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: sqlsmith crash incremental sort" }, { "msg_contents": "On Thu, Apr 16, 2020 
at 8:54 PM Tomas Vondra\n<tomas.vondra@2ndquadrant.com> wrote:\n>\n> On Wed, Apr 15, 2020 at 11:26:12AM -0400, James Coleman wrote:\n> >On Wed, Apr 15, 2020 at 10:47 AM Tomas Vondra\n> ><tomas.vondra@2ndquadrant.com> wrote:\n> >>\n> >> ...\n> >>\n> >> Yeah. And I'm not even sure having that information would allow good\n> >> estimates e.g. for UNIONs of multiple relations etc.\n> >>\n> >> >> Another option is to use something as simple as checking for Vars with\n> >> >> varno==0 in cost_incremental_sort() and ignoring them somehow. We could\n> >> >> simply use some arbitrary estimate - by assuming the rows are unique or\n> >> >> something like that. Yes, I agree it's pretty ugly and I'd much rather\n> >> >> find a way to generate something sensible, but I'm not even sure we can\n> >> >> generate good estimate when doing UNION of data from different relations\n> >> >> and so on. The attached (ugly) patch does this.\n> >> >\n> >> >...therefore I think this is worth proceeding with.\n> >> >\n> >>\n> >> OK, then the question is what estimate to use in this case. Should we\n> >> assume 1 group or uniqueness? I'd assume a single group produces costs\n> >> slightly above regular sort, right?\n> >\n> >Originally I'd intuitively leaned towards assuming they were unique.\n> >But that would be the best case for memory/disk space usage, for\n> >example, and the costing for incremental sort is always (at least\n> >mildly) higher than regular sort if the number of groups is 1. That\n> >also guarantees the startup cost is higher than regular sort also.\n> >\n> >So I think using a number of groups estimate of 1, we just wouldn't\n> >choose an incremental sort ever in this case.\n> >\n> >Maybe that's the right choice? It'd certainly be the conservative\n> >choice. 
What are your thoughts on the trade-offs there?\n> >\n>\n> I think we have essentially three options:\n>\n> 1) assuming there's just a single group\n>\n> This should produce cost estimate higher than plain sort, disabling\n> incremental sort. I'd say this is \"worst case\" assumption. I think this\n> might be overly pessimistic, though.\n>\n> 2) assuming each row is a separate group\n>\n> If (1) is worst case scenario, then this is probably the best case,\n> particularly when the query is sensitive to startup cost.\n>\n>\n> 3) something in between\n>\n> If (1) and (2) are worst/best-case scenarios, maybe we should pick\n> something in between. We have DEFAULT_NUM_DISTINCT (200) which\n> essentially says \"we don't know what the number of groups is\" so maybe\n> we should use that. Another option would be nrows/10, which is the cap\n> we use in estimate_num_groups without extended stats.\n>\n>\n> I was leaning towards (1) as \"worst case\" choice seems natural to\n> prevent possible regressions. 
But consider this:\n>\n> create table small (a int);\n> create table large (a int);\n>\n> insert into small select mod(i,10) from generate_series(1,100) s(i);\n> insert into large select mod(i,10) from generate_series(1,100000) s(i);\n>\n> analyze small;\n> analyze large;\n>\n> explain select i from large union select i from small;\n>\n> QUERY PLAN\n> -------------------------------------------------------------------------------\n> Unique (cost=11260.35..11760.85 rows=100100 width=4)\n> -> Sort (cost=11260.35..11510.60 rows=100100 width=4)\n> Sort Key: large.i\n> -> Append (cost=0.00..2946.50 rows=100100 width=4)\n> -> Seq Scan on large (cost=0.00..1443.00 rows=100000 width=4)\n> -> Seq Scan on small (cost=0.00..2.00 rows=100 width=4)\n> (6 rows)\n>\n> The estimate fo number of groups is clearly bogus - we know there are\n> only 10 groups in each relation, but here we end up with 100100.\n>\n> So perhaps we should do (2) to keep the behavior consistent?\n\nFirst of all, I agree that we shouldn't (at this point in the cycle)\ntry to apply pathkey reordering etc. We already have ideas about\nadditional ways to apply and improve usage of incremental sort, and it\nseem like this naturally fits into that list.\n\nSecond of all, I like the idea of keeping it consistent (even if\nconsistency boils down to \"we're just guessing\").\n\nJames\n\n\n", "msg_date": "Thu, 16 Apr 2020 21:02:39 -0400", "msg_from": "James Coleman <jtc331@gmail.com>", "msg_from_op": false, "msg_subject": "Re: sqlsmith crash incremental sort" }, { "msg_contents": "Tomas Vondra <tomas.vondra@2ndquadrant.com> writes:\n> I think we have essentially three options:\n> 1) assuming there's just a single group\n> 2) assuming each row is a separate group\n> 3) something in between\n> If (1) and (2) are worst/best-case scenarios, maybe we should pick\n> something in between. 
We have DEFAULT_NUM_DISTINCT (200) which\n> essentially says \"we don't know what the number of groups is\" so maybe\n> we should use that.\n\nI wouldn't recommend picking either the best or worst cases.\n\nPossibly DEFAULT_NUM_DISTINCT is a sane choice, though it's fair to\nwonder if it's quite applicable to the case where we already know\nwe've grouped by some columns.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 16 Apr 2020 21:26:40 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: sqlsmith crash incremental sort" }, { "msg_contents": "On Fri, Apr 17, 2020 at 2:44 AM Tomas Vondra <tomas.vondra@2ndquadrant.com>\nwrote:\n\n> On Thu, Apr 16, 2020 at 12:04:03PM -0400, James Coleman wrote:\n> >On Thu, Apr 16, 2020 at 8:22 AM Richard Guo <guofenglinux@gmail.com>\n> wrote:\n> >> On Thu, Apr 16, 2020 at 6:35 PM Richard Guo <guofenglinux@gmail.com>\n> wrote:\n> >>\n> >> Attached is what I'm thinking about this optimization. Does it make any\n> >> sense?\n> >\n> >Shouldn't this go on either a new thread or on the thread for the\n> >patch Tomas was referencing (by Teodor I believe)?\n> >\n>\n> FWIW the optimization I had in mind is this:\n>\n> https://commitfest.postgresql.org/21/1651/\n>\n> I now realize that was about GROUP BY, but it's not all that different\n> and the concerns will / should be fairly similar, I think.\n>\n\nThanks for pointing out this thread. Very helpful.\n\n\n>\n> IMO simply tweaking the sort keys to match the upper parts of the plan\n> is probably way too simplistic, I'm afraid. For example, if the Unique\n> significantly reduces cardinality, then the cost of the additional sort\n> is much less important.
It may be much better to optimize the \"large\"\n> sort of the whole data set, either by reordering the columns as proposed\n> by Teodor in his patch (by number of distinct values and/or cost of the\n> comparison function).\n>\n\nSince we don't have Teodor's patch for now, I think it is a clear win if\nwe can reorder the sort keys in 'Unique/SetOp->Sort->Append' path to\navoid a final Sort/Incremental Sort node, because currently the 'Sort\nand Unique/SetOp' above Append simply performs sorting in the order of\ncolumns it sees. I think this is the same logic we do for mergejoin\npaths that we try to match the requested query_pathkeys to avoid a\nsecond sort.\n\n\n>\n> Furthermore, this is one of the places that is not using incremental\n> sort yet - I can easily imagine doing something like this:\n>\n>\n> Sort\n> -> Unique\n> -> Incremental Sort\n> -> ...\n>\n> could be a massive win. So I think we can't just rejigger the sort keys\n> arbitrarily, we should / need to consider those alternatives.\n>\n\nThis optimization would only apply to 'Unique/SetOp->Sort->Append' path.\nI don't think it will affect our choice of incremental sort in other\ncases. For example, with this optimization, we still can choose\nincremental sort:\n\n# explain (costs off)\nselect * from\n (select distinct a, c from (select * from foo order by 1,3) as sub) as\nsub1\norder by 2,1;\n QUERY PLAN\n-------------------------------------------------------\n Sort\n Sort Key: sub.c, sub.a\n -> Unique\n -> Subquery Scan on sub\n -> Incremental Sort\n Sort Key: foo.a, foo.c\n Presorted Key: foo.a\n -> Index Scan using foo_a on foo\n(8 rows)\n\n\n>\n> >Or are you saying you believe this patch guarantees we never see this\n> >problem in incremental sort costing?\n> >\n>\n> Yeah, that's not entirely clear to me. But maybe it shows us where to\n> get the unprocessed target list?\n>\n>\nI'm not sure if there are other cases that we would build\ntargetlist/pathkeys out of 'varno 0' Vars.
But for this case here, the\n'Unique/SetOp->Sort' above Append would sort the result of Append on all\ncolumns, in the arbitrary order as it sees, (not based on any statistics\nas Teodor's patch does), we can always reorder the sort keys trying to\nmatch with result ordering requirements, thus avoiding the final\nSort/Incremental Sort node. So that we can prevent this problem in\nincremental sort costing for this case.\n\nAm I missing something? Please correct me.\n\nThanks\nRichard\n\n", "msg_date": "Fri, 17 Apr 2020 16:41:34 +0800", "msg_from": "Richard Guo <guofenglinux@gmail.com>", "msg_from_op": false, "msg_subject": "Re: sqlsmith crash incremental sort" }, { "msg_contents": "On Fri, Apr 17, 2020 at 7:13 AM Tomas Vondra <tomas.vondra@2ndquadrant.com>\nwrote:\n\n> On Thu, Apr 16, 2020 at 08:44:16PM +0200, Tomas Vondra wrote:\n> >\n> >Yeah, that's not entirely clear to me. But maybe it shows us where to\n> >get the unprocessed target list?\n> >\n>\n> I think at the very least this needs to apply the same change also to\n> generate_nonunion_paths, because otherwise this fails because of the\n> same issue:\n>\n> set enable_hashagg = off;\n> explain select * from foo except select * from foo order by 1, 3;\n>\n\nAh yes, that's what I'll have to do to cope with EXCEPT/INTERSECT.\n\nThanks\nRichard\n\n", "msg_date": "Fri, 17 Apr 2020 17:03:34 +0800", "msg_from": "Richard Guo <guofenglinux@gmail.com>", "msg_from_op": false, "msg_subject": "Re: sqlsmith crash incremental sort" }, { "msg_contents": "On Thu, Apr 16, 2020 at 9:26 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Tomas Vondra <tomas.vondra@2ndquadrant.com> writes:\n> > I think we have essentially three options:\n> > 1) assuming there's just a single group\n> > 2) assuming each row is a separate group\n> > 3) something in between\n> > If (1) and (2) are worst/best-case scenarios, maybe we should pick\n> > something in between. We have DEFAULT_NUM_DISTINCT (200) which\n> > essentially says \"we don't know what the number of groups is\" so maybe\n> > we should use that.\n>\n> I wouldn't recommend picking either the best or worst cases.\n>\n> Possibly DEFAULT_NUM_DISTINCT is a sane choice, though it's fair to\n> wonder if it's quite applicable to the case where we already know\n> we've grouped by some columns.\n\nDo you think defining a new default, say,\nDEFAULT_NUM_DISTINCT_PRESORTED is preferred then?
And choose some\nvalue like \"1/2 of the normal DEFAULT_NUM_DISTINCT groups\" or some\nsuch?\n\nJames\n\n\n", "msg_date": "Sat, 18 Apr 2020 14:23:25 -0400", "msg_from": "James Coleman <jtc331@gmail.com>", "msg_from_op": false, "msg_subject": "Re: sqlsmith crash incremental sort" }, { "msg_contents": "On Sat, Apr 18, 2020 at 02:23:25PM -0400, James Coleman wrote:\n>On Thu, Apr 16, 2020 at 9:26 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>>\n>> Tomas Vondra <tomas.vondra@2ndquadrant.com> writes:\n>> > I think we have essentially three options:\n>> > 1) assuming there's just a single group\n>> > 2) assuming each row is a separate group\n>> > 3) something in between\n>> > If (1) and (2) are worst/best-case scenarios, maybe we should pick\n>> > something in between. We have DEFAULT_NUM_DISTINCT (200) which\n>> > essentially says \"we don't know what the number of groups is\" so maybe\n>> > we should use that.\n>>\n>> I wouldn't recommend picking either the best or worst cases.\n>>\n>> Possibly DEFAULT_NUM_DISTINCT is a sane choice, though it's fair to\n>> wonder if it's quite applicable to the case where we already know\n>> we've grouped by some columns.\n>\n>Do you think defining a new default, say,\n>DEFAULT_NUM_DISTINCT_PRESORTED is preferred then? And choose some\n>value like \"1/2 of the normal DEFAULT_NUM_DISTINCT groups\" or some\n>such?\n>\n\nIf we had a better intuition what a better value is, maybe.
But I don't\nthink we have that at all, so I'd just use the existing one.\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Mon, 20 Apr 2020 01:47:29 +0200", "msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: sqlsmith crash incremental sort" }, { "msg_contents": "On Mon, Apr 20, 2020 at 01:47:29AM +0200, Tomas Vondra wrote:\n>On Sat, Apr 18, 2020 at 02:23:25PM -0400, James Coleman wrote:\n>>On Thu, Apr 16, 2020 at 9:26 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>>>\n>>>Tomas Vondra <tomas.vondra@2ndquadrant.com> writes:\n>>>> I think we have essentially three options:\n>>>> 1) assuming there's just a single group\n>>>> 2) assuming each row is a separate group\n>>>> 3) something in between\n>>>> If (1) and (2) are worst/best-case scenarios, maybe we should pick\n>>>> something in between. We have DEFAULT_NUM_DISTINCT (200) which\n>>>> essentially says \"we don't know what the number of groups is\" so maybe\n>>>> we should use that.\n>>>\n>>>I wouldn't recommend picking either the best or worst cases.\n>>>\n>>>Possibly DEFAULT_NUM_DISTINCT is a sane choice, though it's fair to\n>>>wonder if it's quite applicable to the case where we already know\n>>>we've grouped by some columns.\n>>\n>>Do you think defining a new default, say,\n>>DEFAULT_NUM_DISTINCT_PRESORTED is preferred then? And choose some\n>>value like \"1/2 of the normal DEFAULT_NUM_DISTINCT groups\" or some\n>>such?\n>>\n>\n>If we had a better intuition what a better value is, maybe. But I don't\n>think we have that at all, so I'd just use the existing one.\n>\n\nI've pushed a fix with the DEFAULT_NUM_DISTINCT. The input comes from a\nset operation (which is where we call generate_append_tlist), so it's\nprobably fairly unique, so maybe we should use input_tuples.
But it's\nnot guaranteed, so DEFAULT_NUM_DISTINCT seems reasonably defensive.\n\nOne detail I've changed is that instead of matching the expression\ndirectly to a Var, it now calls pull_varnos() to also detect Vars\nsomewhere deeper. Looking at examine_variable() it calls find_base_rel\nfor such case too, but I haven't tried constructing a query triggering\nthe issue.\n\nOne improvement I can think of is handling lists with only some\nexpressions containing varno 0. We could still call estimate_num_groups\nfor expressions with varno != 0, and multiply that by the estimate for\nthe other part (be it DEFAULT_NUM_DISTINCT). This might produce a higher\nestimate than just using DEFAULT_NUM_DISTINCT directly, resulting in a\nlower incremental sort cost. But it's not clear to me if this can even\nhappen - AFAICS either all Vars have varno 0 or none, so I haven't done\nthis.\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Thu, 23 Apr 2020 00:59:19 +0200", "msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: sqlsmith crash incremental sort" }, { "msg_contents": "On Thu, Apr 23, 2020 at 6:59 AM Tomas Vondra <tomas.vondra@2ndquadrant.com>\nwrote:\n\n> I've pushed a fix with the DEFAULT_NUM_DISTINCT. The input comes from a\n> set operation (which is where we call generate_append_tlist), so it's\n> probably fairly unique, so maybe we should use input_tuples. But it's\n> not guaranteed, so DEFAULT_NUM_DISTINCT seems reasonably defensive.\n>\n\nThanks for the fix. Verified that the crash has been fixed.\n\n\n>\n> One detail I've changed is that instead of matching the expression\n> directly to a Var, it now calls pull_varnos() to also detect Vars\n> somewhere deeper.
Looking at examine_variable() it calls find_base_rel\n> for such case too, but I haven't tried constructing a query triggering\n> the issue.\n>\n\nA minor comment is that I don't think we need to strip relabel\nexplicitly before calling pull_varnos(), because this function would\nrecurse into T_RelabelType nodes.\n\nAlso do we need to call bms_free(varnos) for each pathkey here to avoid\nwaste of memory?\n\n\n>\n> One improvement I can think of is handling lists with only some\n> expressions containing varno 0. We could still call estimate_num_groups\n> for expressions with varno != 0, and multiply that by the estimate for\n> the other part (be it DEFAULT_NUM_DISTINCT). This might produce a higher\n> estimate than just using DEFAULT_NUM_DISTINCT directly, resulting in a\n> lower incremental sort cost. But it's not clear to me if this can even\n> happen - AFAICS either all Vars have varno 0 or none, so I haven't done\n> this.\n>\n\nI don't think this case would happen either.\n\nThanks\nRichard\n\n", "msg_date": "Thu, 23 Apr 2020 15:28:21 +0800", "msg_from": "Richard Guo <guofenglinux@gmail.com>", "msg_from_op": false, "msg_subject": "Re: sqlsmith crash incremental sort" }, { "msg_contents": "On Thu, Apr 23, 2020 at 03:28:21PM +0800, Richard Guo wrote:\n>On Thu, Apr 23, 2020 at 6:59 AM Tomas Vondra <tomas.vondra@2ndquadrant.com>\n>wrote:\n>\n>> I've pushed a fix with the DEFAULT_NUM_DISTINCT. The input comes from a\n>> set operation (which is where we call generate_append_tlist), so it's\n>> probably fairly unique, so maybe we should use input_tuples. But it's\n>> not guaranteed, so DEFAULT_NUM_DISTINCT seems reasonably defensive.\n>>\n>\n>Thanks for the fix. Verified that the crash has been fixed.\n>\n>\n>>\n>> One detail I've changed is that instead of matching the expression\n>> directly to a Var, it now calls pull_varnos() to also detect Vars\n>> somewhere deeper. Looking at examine_variable() it calls find_base_rel\n>> for such case too, but I haven't tried constructing a query triggering\n>> the issue.\n>>\n>\n>A minor comment is that I don't think we need to strip relabel\n>explicitly before calling pull_varnos(), because this function would\n>recurse into T_RelabelType nodes.\n>\n\nHmmm, yeah. I think you're right that's unnecessary. I misread the\nwalker function, I think.\n\n>Also do we need to call bms_free(varnos) for each pathkey here to avoid\n>waste of memory?\n>\n\nI don't think so.
It wouldn't hurt, but we don't do that for other\npull_varnos calls either AFAICS.\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Thu, 23 Apr 2020 11:57:00 +0200", "msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: sqlsmith crash incremental sort" }, { "msg_contents": "Tomas Vondra <tomas.vondra@2ndquadrant.com> writes:\n> On Thu, Apr 23, 2020 at 03:28:21PM +0800, Richard Guo wrote:\n>> A minor comment is that I don't think we need to strip relabel\n>> explicitly before calling pull_varnos(), because this function would\n>> recurse into T_RelabelType nodes.\n\n> Hmmm, yeah. I think you're right that's unnecessary. I misread the\n> walker function, I think.\n\n+1, might as well simplify the code.\n\n>> Also do we need to call bms_free(varnos) for each pathkey here to avoid\n>> waste of memory?\n\n> I don't think so. It wouldn't hurt, but we don't do that for other\n> pull_varnos calls either AFAICS.\n\nYeah, the planner is generally pretty profligate of memory, and these\nbitmaps aren't likely to be huge anyway.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 23 Apr 2020 10:56:42 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: sqlsmith crash incremental sort" } ]
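Stepping back from the thread above: the committed behavior — fall back to DEFAULT_NUM_DISTINCT whenever the pathkey expressions contain varno-0 Vars (e.g. targetlists built by generate_append_tlist for set operations), instead of consulting per-relation statistics — can be sketched as follows. This is only an illustrative model written for this summary; the function and parameter names below are made up and are not the planner's actual API:

```c
/* Illustrative sketch (not PostgreSQL source) of the group-count
 * fallback discussed in this thread.  Vars with varno 0 have no base
 * relation whose statistics could be consulted (find_base_rel() would
 * fail on them), so the incremental sort costing falls back to a
 * defensive default guess rather than assuming the best or worst case. */

#define DEFAULT_NUM_DISTINCT 200 /* PostgreSQL's "we don't know" guess */

/* has_varno_zero: nonzero when any pathkey Var has varno 0.
 * stats_estimate: the value estimate_num_groups() would have given. */
double
estimate_presorted_groups(int has_varno_zero, double stats_estimate)
{
    if (has_varno_zero)
        return DEFAULT_NUM_DISTINCT;
    return stats_estimate;
}
```

As the thread notes, this is neither the best-case (one row per group) nor the worst-case (one group total) assumption — just the same "unknown cardinality" default the planner already uses elsewhere.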
[ { "msg_contents": "Hi,\n\nI am experimenting with postgres and am wondering if there is any tutorial\non how to properly add a new command to postgres.\n\nI want to add a new constraint on \"CREATE ROLE\" that requires an integer,\nit has an identifier that is not a known (reserved or unreserved keyword)\nin postgres, say we call it TestPatrick. In other words, I want to do this\n\"CREATE ROLE X TestPatrick=10\". I am having an issue with having postgres\nrecognize my new syntax.\n\nI have seen this video: https://www.youtube.com/watch?v=uSEXTcEiXGQ and was\nable to have my postgres compile with my added word (modified gram.y,\nkwlist.h, gram.cpp etc based on the video). However, when I use my syntax\non a client session, it still doesn't recognize my syntax... Are there any\nspecific lexer changes I need to make? I followed the example of CONNECTION\nLIMIT and tried to mimic it for Create ROLE.\n\nBest,\nPatrick\n\n", "msg_date": "Sun, 12 Apr 2020 22:46:17 -0400", "msg_from": "Patrick REED <patrickreed352@gmail.com>", "msg_from_op": true, "msg_subject": "Lexer issues" }, { "msg_contents": "Hello,\n\nOn Mon, Apr 13, 2020 at 4:04 PM Patrick REED <patrickreed352@gmail.com> wrote:\n>\n> I am experimenting with postgres and am wondering if there is any tutorial on how to properly add a new command to postgres.\n>\n> I want to add a new constraint on \"CREATE ROLE\" that requires an integer, it has an identifier that is not a known (reserved or unreserved keyword) in postgres, say we call it TestPatrick. In other words, I want to do this \"CREATE ROLE X TestPatrick=10\". I am having an issue with having postgres recognize my new syntax.\n>\n> I have seen this video: https://www.youtube.com/watch?v=uSEXTcEiXGQ and was able to have my postgres compile with my added word (modified gram.y, kwlist.h, gram.cpp etc based on the video). However, when I use my syntax on a client session, it still doesn't recognize my syntax... Are there any specific lexer changes I need to make? I followed the example of CONNECTION LIMIT and tried to mimic it for Create ROLE.\n\nI'd think that if you can get a successful compilation with a modified\ngram.y (and any kwlist change needed) the new syntax should be\naccepted (at least up to the parser, whether the utility command is\nproperly handled is another thing), since there's a single version of\nthe CreateRoleStmt. Is there any chance that you're somehow\nconnecting to something else than the freshly make-install-ed binary,\nor that the error is coming from a later stage than parsing?\n", "msg_date": "Tue, 14 Apr 2020 11:00:21 +0200", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Lexer issues" }, { "msg_contents": "Hi Julien,\n\nSorry for the late reply. I was able to solve the issue. It had to do with\nthe extra syntax I had introduced in gram.y.
However, since you mentioned\nthe utility command, can you elaborate a bit more on that?\n\nThanks,\nPatrick\n\nOn Tue, Apr 14, 2020 at 5:00 AM Julien Rouhaud <rjuju123@gmail.com> wrote:\n\n> Hello,\n>\n> On Mon, Apr 13, 2020 at 4:04 PM Patrick REED <patrickreed352@gmail.com>\n> wrote:\n> >\n> > I am experimenting with postgres and am wondering if there is any\n> tutorial on how to properly add a new command to postgres.\n> >\n> > I want to add a new constraint on \"CREATE ROLE\" that requires an\n> integer, it has an identifier that is not a known (reserved or unreserved\n> keyword) in postgres, say we call it TestPatrick. In other words, I want to\n> do this \"CREATE ROLE X TestPatrick=10\". I am having an issue with having\n> postgres recognize my new syntax.\n> >\n> > I have seen this video: https://www.youtube.com/watch?v=uSEXTcEiXGQ and\n> was able to have my postgres compile with my added word (modified\n> gram.y, kwlist.h, gram.cpp etc based on the video). However, when I use my\n> syntax on a client session, it still doesn't recognize my syntax... Are\n> there any specific lexer changes I need to make? I followed the example of\n> CONNECTION LIMIT and tried to mimic it for Create ROLE.\n>\n> I'd think that if you can get a successful compilation with a modified\n> gram.y (and any kwlist change needed) the new syntax should be\n> accepted (at least up to the parser, whether the utility command is\n> properly handled is another thing), since there's a single version of\n> the CreateRoleStmt. Is there any chance that you're somehow\n> connecting to something else than the freshly make-install-ed binary,\n> or that the error is coming from a later stage than parsing?\n>\n\n", "msg_date": "Thu, 16 Apr 2020 21:57:02 -0400", "msg_from": "Patrick REED <patrickreed352@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Lexer issues" }, { "msg_contents": "On Fri, Apr 17, 2020 at 3:57 AM Patrick REED <patrickreed352@gmail.com> wrote:\n>\n> Hi Julien,\n>\n> Sorry for the late reply. I was able to solve the issue.
It had to do with the extra syntax I had introduced in gram.y. However, since you mentioned the utility command, can you elaborate a bit more on that?\n\nUtility commands are basically everything except DML, with each\ncommand having its own set of function(s) to actually implement it.\nSo if you modify the parser you want to make sure that those functions\nare also modified to accept the changes. The main entry point is\nProcessUtility() in utility.c. Or maybe you wanted more specific\ninformation?\n\n\n", "msg_date": "Fri, 17 Apr 2020 15:24:35 +0200", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Lexer issues" } ]
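For readers following this thread, the shape of the grammar change being discussed is roughly as below. This is an untested, illustrative sketch only — the keyword and option names are hypothetical stand-ins, the exact PG_KEYWORD macro arguments depend on the branch, and a real change must also list the token in the unreserved_keyword production and teach CreateRole() in src/backend/commands/user.c (reached via ProcessUtility()) about the new option:

```
/* src/include/parser/kwlist.h -- keep this list alphabetical */
PG_KEYWORD("testpatrick", TESTPATRICK, UNRESERVED_KEYWORD)

/* src/backend/parser/gram.y -- add an alternative to CreateOptRoleElem,
 * mirroring how CONNECTION LIMIT is handled */
CreateOptRoleElem:
      /* ... existing alternatives ... */
      | TESTPATRICK Iconst
          {
              $$ = makeDefElem("testpatrick",
                               (Node *) makeInteger($2), @1);
          }
      ;
```

No flex/lexer change is needed for a plain keyword: the scanner already turns identifiers into keyword tokens via kwlist.h. If the build succeeds but the syntax is still rejected, the usual culprits are a stale installed binary or the keyword missing from the unreserved_keyword list.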
[ { "msg_contents": "Hi \n\nI found an exception using the latest master branch of PGSQL and wanted to check if it was a bug\n\nPlease refer to the scenario below\n1)initdb ./initdb -k -D data\n2)Connect to server using single user mode ( ./postgres --single -D data postgres) and create a table\n ./postgres --single -D data Postgres\n\nPostgreSQL stand-alone backend 13devel\nbackend>create table t_create_by_standalone(n int);\n\n--Press Ctrl+D to exit\n\n3) use pg_ctl to start the database\npg_ctl -D data -c start\n\n4) use psql to connect to the database, and create a table\ncreate table t_create_by_psql(n int);\n\n5) check the pg_type info\npostgres=# select oid,relname,reltype from pg_class where relname like 't_create%';\n oid | relname | reltype \n-------+------------------------+---------\n 13581 | t_create_by_standalone | 13582\n 16384 | t_create_by_psql | 16386\n(2 rows)\n\npostgres=# SELECT oid,typname, typarray FROM pg_catalog.pg_type WHERE oid in (13582,16386);\n oid | typname | typarray \n-------+------------------------+----------\n 13582 | t_create_by_standalone | 0\n 16386 | t_create_by_psql | 16385\n(2 rows)\n\nUse single user mode (t_create_by_standalone) typarray = 0, but use psql t_create_by_psql typarray has oid.\n\nIs there something wrong in having different catalog information for the same SQL?\n\n\n", "msg_date": "Mon, 13 Apr 2020 16:25:02 +0800", "msg_from": "wenjing <wjzeng2012@gmail.com>", "msg_from_op": true, "msg_subject": "[bug] Table not have typarray when created by single user mode" }, { "msg_contents": "wenjing <wjzeng2012@gmail.com> writes:\n> Use single user mode (t_create_by_standalone) typarray = 0, but use psql t_create_by_psql typarray has oid.\n\nThat's because of this in heap_create_with_catalog:\n\n /*\n * Decide whether to create an array type over the relation's rowtype. We\n * do not create any array types for system catalogs (ie, those made\n * during initdb). We do not create them where the use of a relation as\n * such is an implementation detail: toast tables, sequences and indexes.\n */\n if (IsUnderPostmaster && (relkind == RELKIND_RELATION ||\n relkind == RELKIND_VIEW ||\n relkind == RELKIND_MATVIEW ||\n relkind == RELKIND_FOREIGN_TABLE ||\n relkind == RELKIND_COMPOSITE_TYPE ||\n relkind == RELKIND_PARTITIONED_TABLE))\n new_array_oid = AssignTypeArrayOid();\n\nAdmittedly, \"!IsUnderPostmaster\" is not exactly the same thing as \"running\nduring initdb\", but I do not consider this a bug.
You generally should\nnot be using single-user mode for anything except disaster recovery.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 13 Apr 2020 10:51:16 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [bug] Table not have typarray when created by single user mode" }, { "msg_contents": "> 2020年4月13日 下午10:51,Tom Lane <tgl@sss.pgh.pa.us> 写道:\n> \n> wenjing <wjzeng2012@gmail.com> writes:\n>> Use single user mode (t_create_by_standalone) typarray = 0, but use psql t_create_by_psql typarray has oid.\n> \n> That's because of this in heap_create_with_catalog:\n> \n> /*\n> * Decide whether to create an array type over the relation's rowtype. We\n> * do not create any array types for system catalogs (ie, those made\n> * during initdb). We do not create them where the use of a relation as\n> * such is an implementation detail: toast tables, sequences and indexes.\n> */\n> if (IsUnderPostmaster && (relkind == RELKIND_RELATION ||\n> relkind == RELKIND_VIEW ||\n> relkind == RELKIND_MATVIEW ||\n> relkind == RELKIND_FOREIGN_TABLE ||\n> relkind == RELKIND_COMPOSITE_TYPE ||\n> relkind == RELKIND_PARTITIONED_TABLE))\n> new_array_oid = AssignTypeArrayOid();\n> \n> Admittedly, \"!IsUnderPostmaster\" is not exactly the same thing as \"running\n> during initdb\", but I do not consider this a bug. You generally should\n> not be using single-user mode for anything except disaster recovery.\n> \n> \t\t\tregards, tom lane\n\n\nThanks for explain. I can understand your point.\n\nHowever, if such a table exists, an error with pg_upgrade is further raised\n\n./initdb -k -D datanew\n./pg_upgrade -d data -d datanew - b. 
-b.\n\nRestoring database schemas in the new cluster\n postgres \n*failure*\n\nConsult the last few lines of \"pg_upgrade_dump_13580.log\" for\nthe probable cause of the failure.\nFailure, exiting\n\npg_restore: from TOC entry 200; 1259 13581 TABLE t_create_by_standalone wenjing\npg_restore: error: could not execute query: ERROR: pg_type array OID value not set when in binary upgrade mode\nCommand was:\n-- For binary upgrade, must preserve pg_type oid\nSELECT pg_catalog.binary_upgrade_set_next_pg_type_oid('13582'::pg_catalog.oid);\n\nI wonder if there are any restrictions that need to be put in somewhere?\n\n\n\nWenjing", "msg_date": "Tue, 14 Apr 2020 11:16:36 +0800", "msg_from": "wenjing <wjzeng2012@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [bug] Table not have typarray when created by single user mode" }, { "msg_contents": "On 2020-Apr-14, wenjing wrote:\n\n> However, if such a table exists, an error with pg_upgrade is further raised\n> \n> ./initdb -k -D datanew\n> ./pg_upgrade -d data -d datanew - b. -b.\n> \n> Restoring database schemas in the new cluster\n> postgres \n> *failure*\n> \n> Consult the last few lines of \"pg_upgrade_dump_13580.log\" for\n> the probable cause of the failure.\n> Failure, exiting\n> \n> pg_restore: from TOC entry 200; 1259 13581 TABLE t_create_by_standalone wenjing\n> pg_restore: error: could not execute query: ERROR: pg_type array OID value not set when in binary upgrade mode\n\nMaybe the solution is to drop the table before pg_upgrade.\n\n(Perhaps in --check mode pg_upgrade could warn you about such\nsituations. 
But then, should it warn you specifically about random\n other instances of catalog corruption?)\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Tue, 14 Apr 2020 14:00:19 -0400", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: [bug] Table not have typarray when created by single user mode" }, { "msg_contents": "> 2020年4月15日 上午2:00,Alvaro Herrera <alvherre@2ndquadrant.com> 写道:\n> \n> On 2020-Apr-14, wenjing wrote:\n> \n>> However, if such a table exists, an error with pg_upgrade is further raised\n>> \n>> ./initdb -k -D datanew\n>> ./pg_upgrade -d data -d datanew - b. -b.\n>> \n>> Restoring database schemas in the new cluster\n>> postgres \n>> *failure*\n>> \n>> Consult the last few lines of \"pg_upgrade_dump_13580.log\" for\n>> the probable cause of the failure.\n>> Failure, exiting\n>> \n>> pg_restore: from TOC entry 200; 1259 13581 TABLE t_create_by_standalone wenjing\n>> pg_restore: error: could not execute query: ERROR: pg_type array OID value not set when in binary upgrade mode\n> \n> Maybe the solution is to drop the table before pg_upgrade.\n> \n> (Perhaps in --check mode pg_upgrade could warn you about such\n> situations. 
But then, should it warn you specifically about random\n> other instances of catalog corruption?)\n\nI fixed the problem along your lines.\nPlease check.\n\n\nWenjing\n\n\n\n\n\n> \n> -- \n> Álvaro Herrera https://www.2ndQuadrant.com/\n> PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Tue, 19 May 2020 16:19:56 +0800", "msg_from": "wenjing zeng <wjzeng2012@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [bug] Table not have typarray when created by single user mode" }, { "msg_contents": "Alvaro Herrera <alvherre@2ndquadrant.com> 于2020年5月19日周二 下午5:18写道:\n\n> On 2020-Apr-14, wenjing wrote:\n>\n> > However, if such a table exists, an error with pg_upgrade is further\n> raised\n> >\n> > ./initdb -k -D datanew\n> > ./pg_upgrade -d data -d datanew - b. -b.\n> >\n> > Restoring database schemas in the new cluster\n> > postgres\n> > *failure*\n> >\n> > Consult the last few lines of \"pg_upgrade_dump_13580.log\" for\n> > the probable cause of the failure.\n> > Failure, exiting\n> >\n> > pg_restore: from TOC entry 200; 1259 13581 TABLE t_create_by_standalone\n> wenjing\n> > pg_restore: error: could not execute query: ERROR: pg_type array OID\n> value not set when in binary upgrade mode\n>\n> Maybe the solution is to drop the table before pg_upgrade.\n>\n> Hi, I don't understand about dropping the table.\nAlthough single-user mode is used for bootstrapping by initdb. Sometimes it\nis used for debugging or disaster recovery.\nHowever, it is still possible for users to process data in this mode. 
If\nonly the table is deleted,\nI worry that it will cause inconvenience to the user.\nI don't understand why we must be IsUnderPostmaster to create an array type\ntoo.\nIf we could create an array type in single-user mode, there is not this\nissue.\n\nRegards,\n\n--\nShawn Wang\n\n", "msg_date": "Tue, 19 May 2020 17:40:38 +0800", "msg_from": "shawn wang <shawn.wang.pg@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [bug] Table not have typarray when created by single user mode" }, { "msg_contents": "On 2020-May-19, shawn wang wrote:\n\n> Although single-user mode is used for bootstrapping by initdb. 
Sometimes it\n> is used for debugging or disaster recovery.\n> However, it is still possible for users to process data in this mode. If\n> only the table is deleted,\n> I worry that it will cause inconvenience to the user.\n> I don't understand why we must be IsUnderPostmaster to create an array type\n> too.\n> If we could create an array type in single-user mode, there is not this\n> issue.\n\nLooking at the code again, there is one other possible solution: remove\nthe ereport(ERROR) from AssignTypeArrayOid. This means that we'll\nreturn InvalidOid and the array type will be marked as 0 in the upgraded\ncluster ... which is exactly the case in the original server. (Of\ncourse, when array_oid is returned as invalid, the creation of the array\nshould be skipped, in callers of AssignTypeArrayOid.)\n\nI think the argument to have that error check there, is that it's a\ncross-check to avoid pg_upgrade bugs for cases where\nbinary_upgrade_next_array_type_oid is not set when it should have been\nset. But I think we've hammered the pg_upgrade code sufficiently now,\nthat we don't need that check anymore. Any bugs that result in that\nbehavior will be very evident by lack of consistency on some upgrade\nanyway.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Tue, 19 May 2020 11:32:39 -0400", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: [bug] Table not have typarray when created by single user mode" }, { "msg_contents": "Alvaro Herrera <alvherre@2ndquadrant.com> writes:\n> I think the argument to have that error check there, is that it's a\n> cross-check to avoid pg_upgrade bugs for cases where\n> binary_upgrade_next_array_type_oid is not set when it should have been\n> set. But I think we've hammered the pg_upgrade code sufficiently now,\n> that we don't need that check anymore. 
Any bugs that result in that\n> behavior will be very evident by lack of consistency on some upgrade\n> anyway.\n\nI don't buy that argument at all; that's a pretty critical cross-check\nIMO, because it's quite important that pg_upgrade control all type OIDs\nassigned in the new cluster. And I think it's probably easier to\nbreak than you're hoping :-(\n\nI think a safer fix is to replace the IsUnderPostmaster check in\nheap_create_with_catalog with !IsBootstrapProcessingMode() or the\nlike. That would have the result that we'd create array types for\nthe information_schema views, as well as the system views made in\nsystem_views.sql, which is slightly annoying but probably no real\nharm in the big scheme of things. (I wonder if we ought to reverse\nthe sense of the adjacent relkind check, turning it into a blacklist,\nwhile at it.)\n\nI remain however of the opinion that doing this sort of thing in\nsingle-user mode, or really much of anything beyond emergency\nvacuuming, is unwise.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 19 May 2020 12:09:28 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [bug] Table not have typarray when created by single user mode" }, { "msg_contents": "Tom Lane <tgl@sss.pgh.pa.us> 于2020年5月20日周三 上午12:09写道:\n\n> I think a safer fix is to replace the IsUnderPostmaster check in\n> heap_create_with_catalog with !IsBootstrapProcessingMode() or the\n> like. That would have the result that we'd create array types for\n> the information_schema views, as well as the system views made in\n> system_views.sql, which is slightly annoying but probably no real\n> harm in the big scheme of things. 
(I wonder if we ought to reverse\n> the sense of the adjacent relkind check, turning it into a blacklist,\n> while at it.)\n>\n\nThank you for the explanation.\nI prefer to change the conditions too.\n\n\n>\n> I remain however of the opinion that doing this sort of thing in\n> single-user mode, or really much of anything beyond emergency\n> vacuuming, is unwise.\n>\n\nI do agree with you, but there is no clear point in the document (maybe I\ndid not read it all),\nit is recommended to make it clear in the document.\n\nRegards,\n--\nShawn Wang\n\n", "msg_date": "Wed, 20 May 2020 16:15:32 +0800", "msg_from": "shawn wang <shawn.wang.pg@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [bug] Table not have typarray when created by single user mode" }, { "msg_contents": "> 2020年5月20日 上午12:09,Tom Lane <tgl@sss.pgh.pa.us> 写道:\n> \n> Alvaro Herrera <alvherre@2ndquadrant.com> writes:\n>> I think the argument to have that error check there, is that it's a\n>> cross-check to avoid pg_upgrade bugs for cases where\n>> binary_upgrade_next_array_type_oid is not set when it should have been\n>> set. But I think we've hammered the pg_upgrade code sufficiently now,\n>> that we don't need that check anymore. Any bugs that result in that\n>> behavior will be very evident by lack of consistency on some upgrade\n>> anyway.\n> \n> I don't buy that argument at all; that's a pretty critical cross-check\n> IMO, because it's quite important that pg_upgrade control all type OIDs\n> assigned in the new cluster. And I think it's probably easier to\n> break than you're hoping :-(\n> \n> I think a safer fix is to replace the IsUnderPostmaster check in\n> heap_create_with_catalog with !IsBootstrapProcessingMode() or the\n> like. That would have the result that we'd create array types for\n> the information_schema views, as well as the system views made in\n> system_views.sql, which is slightly annoying but probably no real\n> harm in the big scheme of things. 
(I wonder if we ought to reverse\n> the sense of the adjacent relkind check, turning it into a blacklist,\n> while at it.)\nThanks for your help, This method passed all regression tests and pg_upgrade checks.\nIt looks perfect.\n\n\n\nWenjing\n\n\n\n\n\n\n> \n> I remain however of the opinion that doing this sort of thing in\n> single-user mode, or really much of anything beyond emergency\n> vacuuming, is unwise.\n> \n> \t\t\tregards, tom lane", "msg_date": "Thu, 21 May 2020 15:28:28 +0800", "msg_from": "wenjing zeng <wjzeng2012@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [bug] Table not have typarray when created by single user mode" }, { "msg_contents": "The following review has been posted through the commitfest application:\nmake installcheck-world: tested, passed\nImplements feature: tested, passed\nSpec compliant: tested, passed\nDocumentation: not tested\n\nI verified and found no problems.", "msg_date": "Thu, 21 May 2020 08:10:12 +0000", "msg_from": "Shawn Wang <shawn.wang.pg@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [bug] Table not have typarray when created by single user mode" }, { "msg_contents": "On Wed, May 20, 2020 at 1:45 PM shawn wang <shawn.wang.pg@gmail.com> wrote:\n>\n> Tom Lane <tgl@sss.pgh.pa.us> 于2020年5月20日周三 上午12:09写道:\n>>\n>>\n>> I remain however of the opinion that doing this sort of thing in\n>> single-user mode, or really much of anything beyond emergency\n>> vacuuming, is unwise.\n>\n>\n> I do agree with you, but there is no clear point in the document (maybe I did not read it all),\n> it is recommended to make it clear in the document.\n>\n\nIt seems to be indicated in the docs [1] (see the paragraph starting\nwith \"The postgres command can also be called in single-user mode\n...\") that it is used for debugging or disaster recovery.\n\n[1] - https://www.postgresql.org/docs/devel/app-postgres.html\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 21 May 
2020 17:07:29 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [bug] Table not have typarray when created by single user mode" }, { "msg_contents": "Amit Kapila <amit.kapila16@gmail.com> 于2020年5月21日周四 下午7:37写道:\n\n> On Wed, May 20, 2020 at 1:45 PM shawn wang <shawn.wang.pg@gmail.com>\n> wrote:\n\n\nThank you for your reply.\n\n>\n>\nIt seems to be indicated in the docs [1] (see the paragraph starting\n> with \"The postgres command can also be called in single-user mode\n> ...\") that it is used for debugging or disaster recovery.\n>\n> [1] - https://www.postgresql.org/docs/devel/app-postgres.html\n\n\nThe description here is what users should do, and my suggestion is to\nclearly indicate what users cannot do.\n\nRegards,\n--\nShawn Wang\n\n", "msg_date": "Mon, 25 May 2020 11:37:42 +0800", "msg_from": "shawn wang <shawn.wang.pg@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [bug] Table not have typarray when created by single user mode" }, { "msg_contents": "I poked at this patch a bit, and was reminded of the real reason why\nwe'd skipped making these array types in the first place: it bloats\npg_type noticeably. As of HEAD, a freshly initialized database has\n411 rows in pg_type. As written this patch results in 543 entries,\nor a 32% increase. That seems like kind of a lot. On the other hand,\nin the big scheme of things maybe it's negligible. 
pg_type is still\nfar from the largest catalog:\n\npostgres=# select relname, relpages from pg_class order by 2 desc;\n relname | relpages \n-----------------------------------------------+----------\n pg_proc | 81\n pg_toast_2618 | 60\n pg_depend | 59\n pg_attribute | 53\n pg_depend_reference_index | 44\n pg_description | 36\n pg_depend_depender_index | 35\n pg_collation | 32\n pg_proc_proname_args_nsp_index | 32\n pg_description_o_c_o_index | 21\n pg_statistic | 19\n pg_attribute_relid_attnam_index | 15\n pg_operator | 14\n pg_type | 14 <--- up from 10\n pg_class | 13\n pg_rewrite | 12\n pg_proc_oid_index | 11\n ...\n\nHowever, if we're going to go this far, I think there's a good\ncase to be made for going all the way and eliminating the policy\nof not making array types for system catalogs. That was never\nanything but a wart justified by space savings in pg_type, and\nthis patch already kills most of the space savings. If we\ndrop the system-state test in heap_create_with_catalog altogether,\nwe end up with 601 initial pg_type entries. That still leaves\nthe four bootstrap catalogs without array types, because they are\nnot created by heap_create_with_catalog; but we can manually add\nthose too for a total of 605 initial entries. (That brings initial\npg_type to 14 pages as I show above; I think it was 13 with the\noriginal version of the patch.)\n\nIn short, if we're gonna do this, I think we should do it like\nthe attached. Or we could do nothing, but there is some appeal\nto removing this old inconsistency.\n\n\t\t\tregards, tom lane", "msg_date": "Wed, 01 Jul 2020 12:47:19 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [bug] Table not have typarray when created by single user mode" }, { "msg_contents": "I wrote:\n> However, if we're going to go this far, I think there's a good\n> case to be made for going all the way and eliminating the policy\n> of not making array types for system catalogs. 
That was never\n> anything but a wart justified by space savings in pg_type, and\n> this patch already kills most of the space savings. If we\n> drop the system-state test in heap_create_with_catalog altogether,\n> we end up with 601 initial pg_type entries. That still leaves\n> the four bootstrap catalogs without array types, because they are\n> not created by heap_create_with_catalog; but we can manually add\n> those too for a total of 605 initial entries. (That brings initial\n> pg_type to 14 pages as I show above; I think it was 13 with the\n> original version of the patch.)\n> In short, if we're gonna do this, I think we should do it like\n> the attached. Or we could do nothing, but there is some appeal\n> to removing this old inconsistency.\n\nI pushed that, but while working on it I had a further thought:\nwhy is it that we create composite types but not arrays over those\ntypes for *any* relkinds? That is, we could create even more\nconsistency, as well as buying back some of the pg_type bloat added\nhere, by not creating pg_type entries at all for toast tables or\nsequences. A little bit of hacking later, I have the attached.\n\nOne could argue it either way as to whether sequences should have\ncomposite types. 
It's possible to demonstrate queries that will\nfail without one:\n\nregression=# create sequence seq1;\nCREATE SEQUENCE\nregression=# select s from seq1 s;\nERROR: relation \"seq1\" does not have a composite type\n\nbut it's pretty hard to believe anyone's using that in practice.\nAlso, we've talked more than once about changing the implementation\nof sequences to not have a relation per sequence, in which case the\nability to do something like the above would go away anyway.\n\nComments?\n\n\t\t\tregards, tom lane", "msg_date": "Mon, 06 Jul 2020 16:22:39 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [bug] Table not have typarray when created by single user mode" }, { "msg_contents": "Thank you very much Tom. It looks perfect.\n I don't have any more questions.\n\n\nWenjing\n\n\n> 2020年7月7日 上午4:22,Tom Lane <tgl@sss.pgh.pa.us> 写道:\n> \n> I wrote:\n>> However, if we're going to go this far, I think there's a good\n>> case to be made for going all the way and eliminating the policy\n>> of not making array types for system catalogs. That was never\n>> anything but a wart justified by space savings in pg_type, and\n>> this patch already kills most of the space savings. If we\n>> drop the system-state test in heap_create_with_catalog altogether,\n>> we end up with 601 initial pg_type entries. That still leaves\n>> the four bootstrap catalogs without array types, because they are\n>> not created by heap_create_with_catalog; but we can manually add\n>> those too for a total of 605 initial entries. (That brings initial\n>> pg_type to 14 pages as I show above; I think it was 13 with the\n>> original version of the patch.)\n>> In short, if we're gonna do this, I think we should do it like\n>> the attached. 
Or we could do nothing, but there is some appeal\n>> to removing this old inconsistency.\n> \n> I pushed that, but while working on it I had a further thought:\n> why is it that we create composite types but not arrays over those\n> types for *any* relkinds? That is, we could create even more\n> consistency, as well as buying back some of the pg_type bloat added\n> here, by not creating pg_type entries at all for toast tables or\n> sequences. A little bit of hacking later, I have the attached.\n> \n> One could argue it either way as to whether sequences should have\n> composite types. It's possible to demonstrate queries that will\n> fail without one:\n> \n> regression=# create sequence seq1;\n> CREATE SEQUENCE\n> regression=# select s from seq1 s;\n> ERROR: relation \"seq1\" does not have a composite type\n> \n> but it's pretty hard to believe anyone's using that in practice.\n> Also, we've talked more than once about changing the implementation\n> of sequences to not have a relation per sequence, in which case the\n> ability to do something like the above would go away anyway.\n> \n> Comments?\n> \n> \t\t\tregards, tom lane\n> \n> diff --git a/doc/src/sgml/catalogs.sgml b/doc/src/sgml/catalogs.sgml\n> index 003d278370..7471ba53f2 100644\n> --- a/doc/src/sgml/catalogs.sgml\n> +++ b/doc/src/sgml/catalogs.sgml\n> @@ -1895,7 +1895,8 @@ SCRAM-SHA-256$<replaceable>&lt;iteration count&gt;</replaceable>:<replaceable>&l\n> </para>\n> <para>\n> The OID of the data type that corresponds to this table's row type,\n> - if any (zero for indexes, which have no <structname>pg_type</structname> entry)\n> + if any (zero for indexes, sequences, and toast tables, which have\n> + no <structname>pg_type</structname> entry)\n> </para></entry>\n> </row>\n> \n> diff --git a/src/backend/catalog/heap.c b/src/backend/catalog/heap.c\n> index fd04e82b20..ae509d9d49 100644\n> --- a/src/backend/catalog/heap.c\n> +++ b/src/backend/catalog/heap.c\n> @@ -1118,8 +1118,6 @@ 
heap_create_with_catalog(const char *relname,\n> \tOid\t\t\texisting_relid;\n> \tOid\t\t\told_type_oid;\n> \tOid\t\t\tnew_type_oid;\n> -\tObjectAddress new_type_addr;\n> -\tOid\t\t\tnew_array_oid = InvalidOid;\n> \tTransactionId relfrozenxid;\n> \tMultiXactId relminmxid;\n> \n> @@ -1262,44 +1260,45 @@ heap_create_with_catalog(const char *relname,\n> \tnew_rel_desc->rd_rel->relrewrite = relrewrite;\n> \n> \t/*\n> -\t * Decide whether to create an array type over the relation's rowtype.\n> -\t * Array types are made except where the use of a relation as such is an\n> +\t * Decide whether to create a pg_type entry for the relation's rowtype.\n> +\t * These types are made except where the use of a relation as such is an\n> \t * implementation detail: toast tables, sequences and indexes.\n> \t */\n> \tif (!(relkind == RELKIND_SEQUENCE ||\n> \t\t relkind == RELKIND_TOASTVALUE ||\n> \t\t relkind == RELKIND_INDEX ||\n> \t\t relkind == RELKIND_PARTITIONED_INDEX))\n> -\t\tnew_array_oid = AssignTypeArrayOid();\n> -\n> -\t/*\n> -\t * Since defining a relation also defines a complex type, we add a new\n> -\t * system type corresponding to the new relation. 
The OID of the type can\n> -\t * be preselected by the caller, but if reltypeid is InvalidOid, we'll\n> -\t * generate a new OID for it.\n> -\t *\n> -\t * NOTE: we could get a unique-index failure here, in case someone else is\n> -\t * creating the same type name in parallel but hadn't committed yet when\n> -\t * we checked for a duplicate name above.\n> -\t */\n> -\tnew_type_addr = AddNewRelationType(relname,\n> -\t\t\t\t\t\t\t\t\t relnamespace,\n> -\t\t\t\t\t\t\t\t\t relid,\n> -\t\t\t\t\t\t\t\t\t relkind,\n> -\t\t\t\t\t\t\t\t\t ownerid,\n> -\t\t\t\t\t\t\t\t\t reltypeid,\n> -\t\t\t\t\t\t\t\t\t new_array_oid);\n> -\tnew_type_oid = new_type_addr.objectId;\n> -\tif (typaddress)\n> -\t\t*typaddress = new_type_addr;\n> -\n> -\t/*\n> -\t * Now make the array type if wanted.\n> -\t */\n> -\tif (OidIsValid(new_array_oid))\n> \t{\n> +\t\tOid\t\t\tnew_array_oid;\n> +\t\tObjectAddress new_type_addr;\n> \t\tchar\t *relarrayname;\n> \n> +\t\t/*\n> +\t\t * We'll make an array over the composite type, too. For largely\n> +\t\t * historical reasons, the array type's OID is assigned first.\n> +\t\t */\n> +\t\tnew_array_oid = AssignTypeArrayOid();\n> +\n> +\t\t/*\n> +\t\t * The OID of the composite type can be preselected by the caller, but\n> +\t\t * if reltypeid is InvalidOid, we'll generate a new OID for it.\n> +\t\t *\n> +\t\t * NOTE: we could get a unique-index failure here, in case someone\n> +\t\t * else is creating the same type name in parallel but hadn't\n> +\t\t * committed yet when we checked for a duplicate name above.\n> +\t\t */\n> +\t\tnew_type_addr = AddNewRelationType(relname,\n> +\t\t\t\t\t\t\t\t\t\t relnamespace,\n> +\t\t\t\t\t\t\t\t\t\t relid,\n> +\t\t\t\t\t\t\t\t\t\t relkind,\n> +\t\t\t\t\t\t\t\t\t\t ownerid,\n> +\t\t\t\t\t\t\t\t\t\t reltypeid,\n> +\t\t\t\t\t\t\t\t\t\t new_array_oid);\n> +\t\tnew_type_oid = new_type_addr.objectId;\n> +\t\tif (typaddress)\n> +\t\t\t*typaddress = new_type_addr;\n> +\n> +\t\t/* Now create the array type. 
*/\n> \t\trelarrayname = makeArrayTypeName(relname, relnamespace);\n> \n> \t\tTypeCreate(new_array_oid,\t/* force the type's OID to this */\n> @@ -1336,6 +1335,14 @@ heap_create_with_catalog(const char *relname,\n> \n> \t\tpfree(relarrayname);\n> \t}\n> +\telse\n> +\t{\n> +\t\t/* Caller should not be expecting a type to be created. */\n> +\t\tAssert(reltypeid == InvalidOid);\n> +\t\tAssert(typaddress == NULL);\n> +\n> +\t\tnew_type_oid = InvalidOid;\n> +\t}\n> \n> \t/*\n> \t * now create an entry in pg_class for the relation.\n> diff --git a/src/backend/catalog/toasting.c b/src/backend/catalog/toasting.c\n> index 3f7ab8d389..8b8888af5e 100644\n> --- a/src/backend/catalog/toasting.c\n> +++ b/src/backend/catalog/toasting.c\n> @@ -34,9 +34,6 @@\n> #include \"utils/rel.h\"\n> #include \"utils/syscache.h\"\n> \n> -/* Potentially set by pg_upgrade_support functions */\n> -Oid\t\t\tbinary_upgrade_next_toast_pg_type_oid = InvalidOid;\n> -\n> static void CheckAndCreateToastTable(Oid relOid, Datum reloptions,\n> \t\t\t\t\t\t\t\t\t LOCKMODE lockmode, bool check);\n> static bool create_toast_table(Relation rel, Oid toastOid, Oid toastIndexOid,\n> @@ -135,7 +132,6 @@ create_toast_table(Relation rel, Oid toastOid, Oid toastIndexOid,\n> \tRelation\ttoast_rel;\n> \tRelation\tclass_rel;\n> \tOid\t\t\ttoast_relid;\n> -\tOid\t\t\ttoast_typid = InvalidOid;\n> \tOid\t\t\tnamespaceid;\n> \tchar\t\ttoast_relname[NAMEDATALEN];\n> \tchar\t\ttoast_idxname[NAMEDATALEN];\n> @@ -181,8 +177,7 @@ create_toast_table(Relation rel, Oid toastOid, Oid toastIndexOid,\n> \t\t * problem that it might take up an OID that will conflict with some\n> \t\t * old-cluster table we haven't seen yet.\n> \t\t */\n> -\t\tif (!OidIsValid(binary_upgrade_next_toast_pg_class_oid) ||\n> -\t\t\t!OidIsValid(binary_upgrade_next_toast_pg_type_oid))\n> +\t\tif (!OidIsValid(binary_upgrade_next_toast_pg_class_oid))\n> \t\t\treturn false;\n> \t}\n> \n> @@ -234,17 +229,6 @@ create_toast_table(Relation rel, Oid toastOid, Oid 
toastIndexOid,\n> \telse\n> \t\tnamespaceid = PG_TOAST_NAMESPACE;\n> \n> -\t/*\n> -\t * Use binary-upgrade override for pg_type.oid, if supplied. We might be\n> -\t * in the post-schema-restore phase where we are doing ALTER TABLE to\n> -\t * create TOAST tables that didn't exist in the old cluster.\n> -\t */\n> -\tif (IsBinaryUpgrade && OidIsValid(binary_upgrade_next_toast_pg_type_oid))\n> -\t{\n> -\t\ttoast_typid = binary_upgrade_next_toast_pg_type_oid;\n> -\t\tbinary_upgrade_next_toast_pg_type_oid = InvalidOid;\n> -\t}\n> -\n> \t/* Toast table is shared if and only if its parent is. */\n> \tshared_relation = rel->rd_rel->relisshared;\n> \n> @@ -255,7 +239,7 @@ create_toast_table(Relation rel, Oid toastOid, Oid toastIndexOid,\n> \t\t\t\t\t\t\t\t\t\t namespaceid,\n> \t\t\t\t\t\t\t\t\t\t rel->rd_rel->reltablespace,\n> \t\t\t\t\t\t\t\t\t\t toastOid,\n> -\t\t\t\t\t\t\t\t\t\t toast_typid,\n> +\t\t\t\t\t\t\t\t\t\t InvalidOid,\n> \t\t\t\t\t\t\t\t\t\t InvalidOid,\n> \t\t\t\t\t\t\t\t\t\t rel->rd_rel->relowner,\n> \t\t\t\t\t\t\t\t\t\t table_relation_toast_am(rel),\n> diff --git a/src/backend/commands/tablecmds.c b/src/backend/commands/tablecmds.c\n> index f79044f39f..4b2548f33f 100644\n> --- a/src/backend/commands/tablecmds.c\n> +++ b/src/backend/commands/tablecmds.c\n> @@ -12564,8 +12564,7 @@ ATExecChangeOwner(Oid relationOid, Oid newOwnerId, bool recursing, LOCKMODE lock\n> \t\t/*\n> \t\t * Also change the ownership of the table's row type, if it has one\n> \t\t */\n> -\t\tif (tuple_class->relkind != RELKIND_INDEX &&\n> -\t\t\ttuple_class->relkind != RELKIND_PARTITIONED_INDEX)\n> +\t\tif (OidIsValid(tuple_class->reltype))\n> \t\t\tAlterTypeOwnerInternal(tuple_class->reltype, newOwnerId);\n> \n> \t\t/*\n> @@ -15009,9 +15008,10 @@ AlterTableNamespaceInternal(Relation rel, Oid oldNspOid, Oid nspOid,\n> \tAlterRelationNamespaceInternal(classRel, RelationGetRelid(rel), oldNspOid,\n> \t\t\t\t\t\t\t\t nspOid, true, objsMoved);\n> \n> -\t/* Fix the table's row type too */\n> 
-\tAlterTypeNamespaceInternal(rel->rd_rel->reltype,\n> -\t\t\t\t\t\t\t nspOid, false, false, objsMoved);\n> +\t/* Fix the table's row type too, if it has one */\n> +\tif (OidIsValid(rel->rd_rel->reltype))\n> +\t\tAlterTypeNamespaceInternal(rel->rd_rel->reltype,\n> +\t\t\t\t\t\t\t\t nspOid, false, false, objsMoved);\n> \n> \t/* Fix other dependent stuff */\n> \tif (rel->rd_rel->relkind == RELKIND_RELATION ||\n> @@ -15206,11 +15206,11 @@ AlterSeqNamespaces(Relation classRel, Relation rel,\n> \t\t\t\t\t\t\t\t\t true, objsMoved);\n> \n> \t\t/*\n> -\t\t * Sequences have entries in pg_type. We need to be careful to move\n> -\t\t * them to the new namespace, too.\n> +\t\t * Sequences used to have entries in pg_type, but no longer do. If we\n> +\t\t * ever re-instate that, we'll need to move the pg_type entry to the\n> +\t\t * new namespace, too (using AlterTypeNamespaceInternal).\n> \t\t */\n> -\t\tAlterTypeNamespaceInternal(RelationGetForm(seqRel)->reltype,\n> -\t\t\t\t\t\t\t\t newNspOid, false, false, objsMoved);\n> +\t\tAssert(RelationGetForm(seqRel)->reltype == InvalidOid);\n> \n> \t\t/* Now we can close it. Keep the lock till end of transaction. 
*/\n> \t\trelation_close(seqRel, NoLock);\n> diff --git a/src/backend/nodes/makefuncs.c b/src/backend/nodes/makefuncs.c\n> index b442b5a29e..49de285f01 100644\n> --- a/src/backend/nodes/makefuncs.c\n> +++ b/src/backend/nodes/makefuncs.c\n> @@ -145,8 +145,10 @@ makeWholeRowVar(RangeTblEntry *rte,\n> \t\t\t/* relation: the rowtype is a named composite type */\n> \t\t\ttoid = get_rel_type_id(rte->relid);\n> \t\t\tif (!OidIsValid(toid))\n> -\t\t\t\telog(ERROR, \"could not find type OID for relation %u\",\n> -\t\t\t\t\t rte->relid);\n> +\t\t\t\tereport(ERROR,\n> +\t\t\t\t\t\t(errcode(ERRCODE_WRONG_OBJECT_TYPE),\n> +\t\t\t\t\t\t errmsg(\"relation \\\"%s\\\" does not have a composite type\",\n> +\t\t\t\t\t\t\t\tget_rel_name(rte->relid))));\n> \t\t\tresult = makeVar(varno,\n> \t\t\t\t\t\t\t InvalidAttrNumber,\n> \t\t\t\t\t\t\t toid,\n> diff --git a/src/backend/utils/adt/pg_upgrade_support.c b/src/backend/utils/adt/pg_upgrade_support.c\n> index 18f2ee8226..14d9eb2b5b 100644\n> --- a/src/backend/utils/adt/pg_upgrade_support.c\n> +++ b/src/backend/utils/adt/pg_upgrade_support.c\n> @@ -51,17 +51,6 @@ binary_upgrade_set_next_array_pg_type_oid(PG_FUNCTION_ARGS)\n> \tPG_RETURN_VOID();\n> }\n> \n> -Datum\n> -binary_upgrade_set_next_toast_pg_type_oid(PG_FUNCTION_ARGS)\n> -{\n> -\tOid\t\t\ttypoid = PG_GETARG_OID(0);\n> -\n> -\tCHECK_IS_BINARY_UPGRADE;\n> -\tbinary_upgrade_next_toast_pg_type_oid = typoid;\n> -\n> -\tPG_RETURN_VOID();\n> -}\n> -\n> Datum\n> binary_upgrade_set_next_heap_pg_class_oid(PG_FUNCTION_ARGS)\n> {\n> diff --git a/src/bin/pg_dump/pg_dump.c b/src/bin/pg_dump/pg_dump.c\n> index a41a3db876..ee0947dda7 100644\n> --- a/src/bin/pg_dump/pg_dump.c\n> +++ b/src/bin/pg_dump/pg_dump.c\n> @@ -272,7 +272,7 @@ static void binary_upgrade_set_type_oids_by_type_oid(Archive *fout,\n> \t\t\t\t\t\t\t\t\t\t\t\t\t PQExpBuffer upgrade_buffer,\n> \t\t\t\t\t\t\t\t\t\t\t\t\t Oid pg_type_oid,\n> \t\t\t\t\t\t\t\t\t\t\t\t\t bool force_array_type);\n> -static bool 
binary_upgrade_set_type_oids_by_rel_oid(Archive *fout,\n> +static void binary_upgrade_set_type_oids_by_rel_oid(Archive *fout,\n> \t\t\t\t\t\t\t\t\t\t\t\t\tPQExpBuffer upgrade_buffer, Oid pg_rel_oid);\n> static void binary_upgrade_set_pg_class_oids(Archive *fout,\n> \t\t\t\t\t\t\t\t\t\t\t PQExpBuffer upgrade_buffer,\n> @@ -4493,7 +4493,7 @@ binary_upgrade_set_type_oids_by_type_oid(Archive *fout,\n> \tdestroyPQExpBuffer(upgrade_query);\n> }\n> \n> -static bool\n> +static void\n> binary_upgrade_set_type_oids_by_rel_oid(Archive *fout,\n> \t\t\t\t\t\t\t\t\t\tPQExpBuffer upgrade_buffer,\n> \t\t\t\t\t\t\t\t\t\tOid pg_rel_oid)\n> @@ -4501,48 +4501,23 @@ binary_upgrade_set_type_oids_by_rel_oid(Archive *fout,\n> \tPQExpBuffer upgrade_query = createPQExpBuffer();\n> \tPGresult *upgrade_res;\n> \tOid\t\t\tpg_type_oid;\n> -\tbool\t\ttoast_set = false;\n> \n> -\t/*\n> -\t * We only support old >= 8.3 for binary upgrades.\n> -\t *\n> -\t * We purposefully ignore toast OIDs for partitioned tables; the reason is\n> -\t * that versions 10 and 11 have them, but 12 does not, so emitting them\n> -\t * causes the upgrade to fail.\n> -\t */\n> \tappendPQExpBuffer(upgrade_query,\n> -\t\t\t\t\t \"SELECT c.reltype AS crel, t.reltype AS trel \"\n> +\t\t\t\t\t \"SELECT c.reltype AS crel \"\n> \t\t\t\t\t \"FROM pg_catalog.pg_class c \"\n> -\t\t\t\t\t \"LEFT JOIN pg_catalog.pg_class t ON \"\n> -\t\t\t\t\t \" (c.reltoastrelid = t.oid AND c.relkind <> '%c') \"\n> \t\t\t\t\t \"WHERE c.oid = '%u'::pg_catalog.oid;\",\n> -\t\t\t\t\t RELKIND_PARTITIONED_TABLE, pg_rel_oid);\n> +\t\t\t\t\t pg_rel_oid);\n> \n> \tupgrade_res = ExecuteSqlQueryForSingleRow(fout, upgrade_query->data);\n> \n> \tpg_type_oid = atooid(PQgetvalue(upgrade_res, 0, PQfnumber(upgrade_res, \"crel\")));\n> \n> -\tbinary_upgrade_set_type_oids_by_type_oid(fout, upgrade_buffer,\n> -\t\t\t\t\t\t\t\t\t\t\t pg_type_oid, false);\n> -\n> -\tif (!PQgetisnull(upgrade_res, 0, PQfnumber(upgrade_res, \"trel\")))\n> -\t{\n> -\t\t/* Toast tables do 
not have pg_type array rows */\n> -\t\tOid\t\t\tpg_type_toast_oid = atooid(PQgetvalue(upgrade_res, 0,\n> -\t\t\t\t\t\t\t\t\t\t\t\t\t\t PQfnumber(upgrade_res, \"trel\")));\n> -\n> -\t\tappendPQExpBufferStr(upgrade_buffer, \"\\n-- For binary upgrade, must preserve pg_type toast oid\\n\");\n> -\t\tappendPQExpBuffer(upgrade_buffer,\n> -\t\t\t\t\t\t \"SELECT pg_catalog.binary_upgrade_set_next_toast_pg_type_oid('%u'::pg_catalog.oid);\\n\\n\",\n> -\t\t\t\t\t\t pg_type_toast_oid);\n> -\n> -\t\ttoast_set = true;\n> -\t}\n> +\tif (OidIsValid(pg_type_oid))\n> +\t\tbinary_upgrade_set_type_oids_by_type_oid(fout, upgrade_buffer,\n> +\t\t\t\t\t\t\t\t\t\t\t\t pg_type_oid, false);\n> \n> \tPQclear(upgrade_res);\n> \tdestroyPQExpBuffer(upgrade_query);\n> -\n> -\treturn toast_set;\n> }\n> \n> static void\n> diff --git a/src/include/catalog/binary_upgrade.h b/src/include/catalog/binary_upgrade.h\n> index 12d94fe1b3..02fecb90f7 100644\n> --- a/src/include/catalog/binary_upgrade.h\n> +++ b/src/include/catalog/binary_upgrade.h\n> @@ -16,7 +16,6 @@\n> \n> extern PGDLLIMPORT Oid binary_upgrade_next_pg_type_oid;\n> extern PGDLLIMPORT Oid binary_upgrade_next_array_pg_type_oid;\n> -extern PGDLLIMPORT Oid binary_upgrade_next_toast_pg_type_oid;\n> \n> extern PGDLLIMPORT Oid binary_upgrade_next_heap_pg_class_oid;\n> extern PGDLLIMPORT Oid binary_upgrade_next_index_pg_class_oid;\n> diff --git a/src/include/catalog/pg_proc.dat b/src/include/catalog/pg_proc.dat\n> index 38295aca48..a995a104b6 100644\n> --- a/src/include/catalog/pg_proc.dat\n> +++ b/src/include/catalog/pg_proc.dat\n> @@ -10306,10 +10306,6 @@\n> proname => 'binary_upgrade_set_next_array_pg_type_oid', provolatile => 'v',\n> proparallel => 'r', prorettype => 'void', proargtypes => 'oid',\n> prosrc => 'binary_upgrade_set_next_array_pg_type_oid' },\n> -{ oid => '3585', descr => 'for use by pg_upgrade',\n> - proname => 'binary_upgrade_set_next_toast_pg_type_oid', provolatile => 'v',\n> - proparallel => 'r', prorettype => 'void', 
proargtypes => 'oid',\n> - prosrc => 'binary_upgrade_set_next_toast_pg_type_oid' },\n> { oid => '3586', descr => 'for use by pg_upgrade',\n> proname => 'binary_upgrade_set_next_heap_pg_class_oid', provolatile => 'v',\n> proparallel => 'r', prorettype => 'void', proargtypes => 'oid',\n> diff --git a/src/pl/plpgsql/src/pl_comp.c b/src/pl/plpgsql/src/pl_comp.c\n> index 828ff5a288..e7f4a5f291 100644\n> --- a/src/pl/plpgsql/src/pl_comp.c\n> +++ b/src/pl/plpgsql/src/pl_comp.c\n> @@ -1778,6 +1778,7 @@ PLpgSQL_type *\n> plpgsql_parse_wordrowtype(char *ident)\n> {\n> \tOid\t\t\tclassOid;\n> +\tOid\t\t\ttypOid;\n> \n> \t/*\n> \t * Look up the relation. Note that because relation rowtypes have the\n> @@ -1792,8 +1793,16 @@ plpgsql_parse_wordrowtype(char *ident)\n> \t\t\t\t(errcode(ERRCODE_UNDEFINED_TABLE),\n> \t\t\t\t errmsg(\"relation \\\"%s\\\" does not exist\", ident)));\n> \n> +\t/* Some relkinds lack type OIDs */\n> +\ttypOid = get_rel_type_id(classOid);\n> +\tif (!OidIsValid(typOid))\n> +\t\tereport(ERROR,\n> +\t\t\t\t(errcode(ERRCODE_WRONG_OBJECT_TYPE),\n> +\t\t\t\t errmsg(\"relation \\\"%s\\\" does not have a composite type\",\n> +\t\t\t\t\t\tident)));\n> +\n> \t/* Build and return the row type struct */\n> -\treturn plpgsql_build_datatype(get_rel_type_id(classOid), -1, InvalidOid,\n> +\treturn plpgsql_build_datatype(typOid, -1, InvalidOid,\n> \t\t\t\t\t\t\t\t makeTypeName(ident));\n> }\n> \n> @@ -1806,6 +1815,7 @@ PLpgSQL_type *\n> plpgsql_parse_cwordrowtype(List *idents)\n> {\n> \tOid\t\t\tclassOid;\n> +\tOid\t\t\ttypOid;\n> \tRangeVar *relvar;\n> \tMemoryContext oldCxt;\n> \n> @@ -1825,10 +1835,18 @@ plpgsql_parse_cwordrowtype(List *idents)\n> \t\t\t\t\t\t -1);\n> \tclassOid = RangeVarGetRelid(relvar, NoLock, false);\n> \n> +\t/* Some relkinds lack type OIDs */\n> +\ttypOid = get_rel_type_id(classOid);\n> +\tif (!OidIsValid(typOid))\n> +\t\tereport(ERROR,\n> +\t\t\t\t(errcode(ERRCODE_WRONG_OBJECT_TYPE),\n> +\t\t\t\t errmsg(\"relation \\\"%s\\\" does not have a 
composite type\",\n> +\t\t\t\t\t\tstrVal(lsecond(idents)))));\n> +\n> \tMemoryContextSwitchTo(oldCxt);\n> \n> \t/* Build and return the row type struct */\n> -\treturn plpgsql_build_datatype(get_rel_type_id(classOid), -1, InvalidOid,\n> +\treturn plpgsql_build_datatype(typOid, -1, InvalidOid,\n> \t\t\t\t\t\t\t\t makeTypeNameFromNameList(idents));\n> }\n> \n\n\n\n", "msg_date": "Wed, 8 Jul 2020 19:24:52 +0800", "msg_from": "wenjing zeng <wjzeng2012@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [bug] Table not have typarray when created by single user mode" } ]
[ { "msg_contents": "I'm not a fan of error messages like\n\n relation \"%s\" is not a table, foreign table, or materialized view\n\nIt doesn't tell me what's wrong, it only tells me what else could have \nworked. It's also tedious to maintain and the number of combinations \ngrows over time.\n\nThis was discussed many years ago in [0], with the same arguments, and \nthere appeared to have been general agreement to change this, but then \nthe thread stalled somehow on some technical details.\n\nAttached is another attempt to improve this. I have rewritten the \nprimary error messages using the principle of \"cannot do this with that\" \nand then added a detail message to show what relkind the object has. \nFor example:\n\n-ERROR: relation \"ti\" is not a table, foreign table, or materialized view\n+ERROR: cannot define statistics for relation \"ti\"\n+DETAIL: \"ti\" is an index.\n\nand\n\n-ERROR: \"test_foreign_table\" is not a table, materialized view, or \nTOAST table\n+ERROR: relation \"test_foreign_table\" does not have a visibility map\n+DETAIL: \"test_foreign_table\" is a foreign table.\n\nYou can see more instances of this in the test diffs in the attached patch.\n\nIn passing, I also changed a few places to use the RELKIND_HAS_STORAGE() \nmacro. This is related because it allows writing more helpful error \nmessages, such as in pgstatindex.c.\n\nOne question on a detail arose:\n\ncheck_relation_relkind() in pg_visibility.c accepts RELKIND_RELATION, \nRELKIND_MATVIEW, and RELKIND_TOASTVALUE, but pgstatapprox.c only accepts \nRELKIND_RELATION and RELKIND_MATVIEW, even though they both look for a \nvisibility map. Is that an intentional omission? 
If so, it should be \ncommented better.\n\n\n[0]: \nhttps://www.postgresql.org/message-id/flat/AANLkTimR_sZ_wKd1cgqVG1PEvTvdr9j7zD%2B3_NPvfaa_%40mail.gmail.com\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Mon, 13 Apr 2020 15:54:48 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "wrong relkind error messages" }, { "msg_contents": "On Mon, Apr 13, 2020 at 9:55 AM Peter Eisentraut\n<peter.eisentraut@2ndquadrant.com> wrote:\n> Attached is another attempt to improve this.\n\nNice effort. Most of these seem like clear improvements, but some I don't like:\n\n+ errmsg(\"relation \\\"%s\\\" is of unsupported kind\",\n+ RelationGetRelationName(rel)),\n+ errdetail_relkind(RelationGetRelationName(rel), rel->rd_rel->relkind)));\n\nIt would help to work \"pgstattuple\" into the message somehow. \"cannot\nuse pgstattuple on relation \\\"%s\\\"\", perhaps?\n\n+ ereport(ERROR,\n+ (errcode(ERRCODE_WRONG_OBJECT_TYPE),\n+ errmsg(\"action cannot be performed on relation \\\"%s\\\"\",\n+ RelationGetRelationName(rel)),\n\nSuper-vague.\n\n+ errmsg(\"cannot set relation options of relation \\\"%s\\\"\",\n+ RelationGetRelationName(rel)),\n\nI suggest \"cannot set options for relation \\\"%s\\\"\"; that is, use \"for\"\ninstead of \"of\", and don't say \"relation\" twice.\n\n+ errmsg(\"cannot create trigger on relation \\\"%s\\\"\",\n+ RelationGetRelationName(rel)),\n+ errmsg(\"relation \\\"%s\\\" cannot have triggers\",\n+ RelationGetRelationName(rel)),\n+ errmsg(\"relation \\\"%s\\\" cannot have triggers\",\n+ rv->relname),\n\nMaybe use the second wording for all three? 
And similarly for rules?\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Mon, 13 Apr 2020 11:06:57 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: wrong relkind error messages" }, { "msg_contents": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com> writes:\n> I'm not a fan of error messages like\n> relation \"%s\" is not a table, foreign table, or materialized view\n\nAgreed, they're not great.\n\n> For example:\n\n> -ERROR: relation \"ti\" is not a table, foreign table, or materialized view\n> +ERROR: cannot define statistics for relation \"ti\"\n> +DETAIL: \"ti\" is an index.\n\nI see where you'e going, and it seems like a generally-better idea,\nbut I feel like this phrasing is omitting some critical background\ninformation that users don't necessarily have. At the very least\nit's not stating clearly that the failure is *because* ti is an\nindex. More generally, the whole concept that statistics can only\nbe defined for certain kinds of relations has disappeared from view.\nI fear that users who're less deeply into Postgres hacking than we\nare might not have that concept at all, or at least it might not\ncome to mind immediately when they get this message.\n\nFixing this while avoiding your concern about proliferation of messages\nseems a bit difficult though. The best I can do after a couple minutes'\nthought is\n\nERROR: cannot define statistics for relation \"ti\"\nDETAIL: \"ti\" is an index, and this operation is not supported for that\nkind of relation.\n\nwhich seems a little long and awkward. 
Another idea is\n\nERROR: cannot define statistics for relation \"ti\"\nDETAIL: This operation is not supported for indexes.\n\nwhich still leaves implicit that \"ti\" is an index, but probably that's\nsomething the user can figure out.\n\nMaybe someone else can do better?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 13 Apr 2020 11:13:15 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: wrong relkind error messages" }, { "msg_contents": "\nOn 4/13/20 11:13 AM, Tom Lane wrote:\n>\n> ERROR: cannot define statistics for relation \"ti\"\n> DETAIL: This operation is not supported for indexes.\n>\n> which still leaves implicit that \"ti\" is an index, but probably that's\n> something the user can figure out.\n>\n\n+1 for this. It's clear and succinct.\n\n\ncheers\n\n\nandrew\n\n\n-- \nAndrew Dunstan https://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n\n", "msg_date": "Mon, 13 Apr 2020 11:17:54 -0400", "msg_from": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: wrong relkind error messages" }, { "msg_contents": "On Mon, Apr 13, 2020 at 11:13:15AM -0400, Tom Lane wrote:\n> Fixing this while avoiding your concern about proliferation of messages\n> seems a bit difficult though. The best I can do after a couple minutes'\n> thought is\n> \n> ERROR: cannot define statistics for relation \"ti\"\n> DETAIL: \"ti\" is an index, and this operation is not supported for that\n> kind of relation.\n> \n> which seems a little long and awkward. Another idea is\n> \n> ERROR: cannot define statistics for relation \"ti\"\n> DETAIL: This operation is not supported for indexes.\n> \n> which still leaves implicit that \"ti\" is an index, but probably that's\n> something the user can figure out.\n> \n> Maybe someone else can do better?\n\n\"This operation is not supported for put_relkind_here \\\"%s\\\".\"? 
I\nthink that it is better to provide a relation name in the error\nmessage (even optionally a namespace). That's less to guess for the\nuser.\n\n+int\n+errdetail_relkind(const char *relname, char relkind)\n+{\n+ switch (relkind)\n+ {\n+ case RELKIND_RELATION:\n+ return errdetail(\"\\\"%s\\\" is a table.\", relname);\n+ case RELKIND_INDEX:\nIt seems to me that we should optionally add the namespace in the\nerror message, or just have a separate routine for that. I think that\nit would be useful in some cases (see for example the part about the\nstatistics in the patch), still annoying in some others (instability\nin test output for temporary schemas for example) so there is a point\nfor both in my view.\n\n- if (rel->rd_rel->relkind != RELKIND_VIEW &&\n- rel->rd_rel->relkind != RELKIND_COMPOSITE_TYPE &&\n- rel->rd_rel->relkind != RELKIND_FOREIGN_TABLE &&\n- rel->rd_rel->relkind != RELKIND_PARTITIONED_TABLE)\n- {\n+ if (RELKIND_HAS_STORAGE(rel->rd_rel->relkind))\n RelationDropStorage(rel);\nThese should be applied separately in my opinion. Nice catch.\n\n- errmsg(\"\\\"%s\\\" is not a table, view, materialized view, sequence, or foreign table\",\n- rv->relname)));\n+ errmsg(\"cannot change schema of relation \\\"%s\\\"\",\n+ rv->relname),\n+ (relkind == RELKIND_INDEX || relkind == RELKIND_PARTITIONED_INDEX ? errhint(\"Change the schema of the table instead.\") :\n+ (relkind == RELKIND_COMPOSITE_TYPE ? 
errhint(\"Use ALTER TYPE instead.\") : 0))));\n\nThis is not great style either and reduces readability, so I would\nrecommend to split the errhint generation using a switch/case.\n\n+ ereport(ERROR,\n+ (errcode(ERRCODE_WRONG_OBJECT_TYPE),\n+ errmsg(\"action cannot be performed on relation\n\\\"%s\\\"\",\n+ RelationGetRelationName(rel)),\nEchoing Robert upthread, \"action\" is not really useful for the user,\nand it seems to me that it should be reworked as \"cannot perform foo\non relation \\\"hoge\\\"\"\n\n+ errmsg(\"relation \\\"%s\\\" does not support comments\",\n+ RelationGetRelationName(relation)),\nThis is not project-style as full sentences cannot be used in error\nmessages, no? The former is not that good either, still, while this\nis getting touched... Say, \"cannot use COMMENT on relation \\\"%s\\\"\"?\n\nOverall +1 on the patch by the way. Thanks for sending something to\nimprove the situation\n--\nMichael", "msg_date": "Tue, 14 Apr 2020 10:32:38 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: wrong relkind error messages" }, { "msg_contents": "On 2020-Apr-14, Michael Paquier wrote:\n\n> On Mon, Apr 13, 2020 at 11:13:15AM -0400, Tom Lane wrote:\n\n> > ERROR: cannot define statistics for relation \"ti\"\n> > DETAIL: This operation is not supported for indexes.\n> > \n> > which still leaves implicit that \"ti\" is an index, but probably that's\n> > something the user can figure out.\n> > \n> > Maybe someone else can do better?\n> \n> \"This operation is not supported for put_relkind_here \\\"%s\\\".\"? I\n> think that it is better to provide a relation name in the error\n> message (even optionally a namespace). That's less to guess for the\n> user.\n\nBut the relation name is already in the ERROR line -- why do you care so\nmuch about also having it in the DETAIL? 
Besides, I think part of the\npoint Tom was making is that if you say \"not supported for the index\nfoo\" is that the user is left wondering whether the operation is not\nsupported for that particular index only or for any index.\n\nTom's other proposal\n\n> > DETAIL: \"ti\" is an index, and this operation is not supported for that kind of relation.\n\naddresses that problem, but seems excessively verbose.\n\nAlso, elsewhere Peter said[1] that we should not try to list the things\nthat would be allowed, so it's pointless to try to list the relkinds for\nwhich the operation is permissible.\n\nSo I +1 this idea:\n\n ERROR: cannot define statistics for relation \"ti\"\n DETAIL: This operation is not supported for indexes.\n\n[1] https://www.postgresql.org/message-id/1293803569.19789.6.camel%40fsopti579.F-Secure.com\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Tue, 14 Apr 2020 18:36:25 -0400", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: wrong relkind error messages" }, { "msg_contents": "On 2020-Apr-13, Robert Haas wrote:\n\n> + ereport(ERROR,\n> + (errcode(ERRCODE_WRONG_OBJECT_TYPE),\n> + errmsg(\"action cannot be performed on relation \\\"%s\\\"\",\n> + RelationGetRelationName(rel)),\n> \n> Super-vague.\n\nMaybe, but note that the patch proposed to replace this current error\nmessage:\n ERROR: foo is not an index or foreign table\nwith \n ERROR: action cannot be performed on \"foo\"\n DETAIL: \"foo\" is a materialized view.\n\nor, if we're to adopt Tom's proposed wording,\n\n ERROR: cannot perform action on relation \"ti\"\n DETAIL: This operation is not supported for materialized views.\n\nso it's not like this is making things any worse; the error was already\nsuper-vague. \n\nThis could be improved if we had stringification of ALTER TABLE\nsubcommand types:\n\n ERROR: ALTER TABLE ... 
ADD COLUMN cannot be performed on \"foo\"\n DETAIL: \"foo\" is a gummy bear.\nor\n ERROR: ALTER TABLE ... ADD COLUMN cannot be performed on foo\n DETAIL: This action cannot be performed on gummy bears.\n\nbut that seems material for a different patch.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Tue, 14 Apr 2020 19:02:08 -0400", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: wrong relkind error messages" }, { "msg_contents": "Alvaro Herrera <alvherre@2ndquadrant.com> writes:\n> On 2020-Apr-13, Robert Haas wrote:\n>> + ereport(ERROR,\n>> + (errcode(ERRCODE_WRONG_OBJECT_TYPE),\n>> + errmsg(\"action cannot be performed on relation \\\"%s\\\"\",\n>> + RelationGetRelationName(rel)),\n>> \n>> Super-vague.\n\n> Maybe, but note that the patch proposed to replace this current error\n> message:\n> ERROR: foo is not an index or foreign table\n> ...\n> so it's not like this is making things any worse; the error was already\n> super-vague. \n\nYeah. I share Robert's feeling that \"action\" is not really desirable\nhere, but I have to concur that this is an improvement on the existing\ntext, which also fails to mention what command is being rejected.\n\n> This could be improved if we had stringification of ALTER TABLE\n> subcommand types:\n> ERROR: ALTER TABLE ... ADD COLUMN cannot be performed on \"foo\"\n\nIn the meantime could we at least say \"ALTER TABLE action cannot\nbe performed\"? 
The worst aspect of the existing text is that if\nan error comes out of a script with a lot of different commands,\nit doesn't give you any hint at all about which command failed.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 14 Apr 2020 20:15:53 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: wrong relkind error messages" }, { "msg_contents": "On Tue, Apr 14, 2020 at 7:02 PM Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\n> On 2020-Apr-13, Robert Haas wrote:\n> > + ereport(ERROR,\n> > + (errcode(ERRCODE_WRONG_OBJECT_TYPE),\n> > + errmsg(\"action cannot be performed on relation \\\"%s\\\"\",\n> > + RelationGetRelationName(rel)),\n> >\n> > Super-vague.\n>\n> Maybe, but note that the patch proposed to replace this current error\n> message:\n> ERROR: foo is not an index or foreign table\n> with\n> ERROR: action cannot be performed on \"foo\"\n> DETAIL: \"foo\" is a materialized view.\n\nSure, but the point is that this case is not improved nearly as much\nas most of the others. In a whole bunch of cases, he made the error\nmessage describe the attempted operation, but here he didn't. I'm not\nsaying that makes it worse than what we had before, just that it would\nbe better if we could make this look like the other cases the patch\nalso changes.\n\n> This could be improved if we had stringification of ALTER TABLE\n> subcommand types:\n>\n> ERROR: ALTER TABLE ... ADD COLUMN cannot be performed on \"foo\"\n> DETAIL: \"foo\" is a gummy bear.\n> or\n> ERROR: ALTER TABLE ... ADD COLUMN cannot be performed on foo\n> DETAIL: This action cannot be performed on gummy bears.\n>\n> but that seems material for a different patch.\n\nEven without that, you could at least say \"this form of ALTER TABLE is\nnot supported for foo\" or something like that.\n\nI'm not trying to block the patch. I think it's a good patch. 
I was\njust making an observation about some parts of it where it seems like\nwe could try slightly harder to do better.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Wed, 15 Apr 2020 10:38:24 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: wrong relkind error messages" }, { "msg_contents": "On 2020-Apr-15, Robert Haas wrote:\n\n> [good arguments]\n\nI don't disagree with anything you said, and I don't have anything to\nadd for now.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Wed, 15 Apr 2020 11:07:06 -0400", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: wrong relkind error messages" }, { "msg_contents": "On 2020-04-15 02:15, Tom Lane wrote:\n> In the meantime could we at least say \"ALTER TABLE action cannot\n> be performed\"?\n\nWe don't know whether ALTER TABLE was the command. For example, in one \nof the affected regression test cases, the command is ALTER VIEW.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Fri, 17 Apr 2020 14:45:34 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: wrong relkind error messages" }, { "msg_contents": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com> writes:\n> On 2020-04-15 02:15, Tom Lane wrote:\n>> In the meantime could we at least say \"ALTER TABLE action cannot\n>> be performed\"?\n\n> We don't know whether ALTER TABLE was the command. For example, in one \n> of the affected regression test cases, the command is ALTER VIEW.\n\nMaybe just \"ALTER action cannot be performed\"? 
I share Robert's\ndislike of being so vague as to just say \"action\".\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 17 Apr 2020 09:09:03 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: wrong relkind error messages" }, { "msg_contents": "On 13.04.20 15:54, Peter Eisentraut wrote:\n> I'm not a fan of error messages like\n> \n>     relation \"%s\" is not a table, foreign table, or materialized view\n> \n> It doesn't tell me what's wrong, it only tells me what else could have \n> worked.  It's also tedious to maintain and the number of combinations \n> grows over time.\n\nAnother go at this. I believe in the attached patch I have addressed \nall the feedback during this thread last year. In particular, I have \nrephrased the detail message per discussion, and I have improved the \nmessages produced by ATSimplePermissions() with more details. Examples:\n\n CREATE STATISTICS tststats.s2 ON a, b FROM tststats.ti;\n-ERROR: relation \"ti\" is not a table, foreign table, or materialized view\n+ERROR: cannot define statistics for relation \"ti\"\n+DETAIL: This operation is not supported for indexes.\n\n ALTER FOREIGN TABLE ft1 ALTER CONSTRAINT ft1_c9_check DEFERRABLE; -- ERROR\n-ERROR: \"ft1\" is not a table\n+ERROR: ALTER action ALTER CONSTRAINT cannot be performed on relation \"ft1\"\n+DETAIL: This operation is not supported for foreign tables.\n\nThere might be room for some wordsmithing in a few places, but generally \nI think this is complete.", "msg_date": "Thu, 24 Jun 2021 10:12:49 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: wrong relkind error messages" }, { "msg_contents": "On Thu, Jun 24, 2021 at 10:12:49AM +0200, Peter Eisentraut wrote:\n> There might be room for some wordsmithing in a few places, but generally I\n> think this is complete.\n\nI have been looking at that, and it seems to me that you nailed it.\nThat's a nice improvement 
compared to the existing error handling with\nmultiple relkinds.\n\n+ ereport(ERROR,\n+ (errcode(ERRCODE_WRONG_OBJECT_TYPE),\n+ errmsg(\"ALTER action %s cannot be performed on relation \\\"%s\\\"\",\n+ action_str, RelationGetRelationName(rel)),\n+ errdetail_relkind_not_supported(rel->rd_rel->relkind)));\nPerhaps the result of alter_table_type_to_string() is worth a note for\ntranslators?\n\n+ case AT_DetachPartitionFinalize:\n+ return \"DETACH PARTITION FINALIZE\";\nTo be exact, I think that this one should be \"DETACH PARTITION\n... FINALIZE\".\n\n+ if (relkind == RELKIND_INDEX || relkind == RELKIND_PARTITIONED_INDEX)\n+ ereport(ERROR,\n+ (errcode(ERRCODE_WRONG_OBJECT_TYPE),\n+ errmsg(\"cannot change schema of index \\\"%s\\\"\",\n+ rv->relname),\n+ errhint(\"Change the schema of the table instead.\")));\n+ else if (relkind == RELKIND_COMPOSITE_TYPE)\n+ ereport(ERROR,\n+ (errcode(ERRCODE_WRONG_OBJECT_TYPE),\n+ errmsg(\"cannot change schema of composite type\n\\\"%s\\\"\",\n+ rv->relname),\n+ errhint(\"Use ALTER TYPE instead.\")));\nI would simplify this part by removing the errhint(), and use \"cannot\nchange schema of relation ..\" as error string, with a dose of\nerrdetail_relkind_not_supported().\n\n+ errmsg(\"relation \\\"%s\\\" cannot have triggers\",\n+ RelationGetRelationName(rel)),\nBetter as \"cannot create/rename/remove triggers on relation \\\"%s\\\"\"\nfor the three code paths of trigger.c?\n\n+ errmsg(\"relation \\\"%s\\\" cannot have rules\",\n[...]\n+ errmsg(\"relation \\\"%s\\\" cannot have rules\",\nFor rewriteDefine.c, this could be \"cannot create/rename rules on\nrelation\".\n--\nMichael", "msg_date": "Fri, 2 Jul 2021 15:25:54 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: wrong relkind error messages" }, { "msg_contents": "On 02.07.21 08:25, Michael Paquier wrote:\n> + ereport(ERROR,\n> + (errcode(ERRCODE_WRONG_OBJECT_TYPE),\n> + errmsg(\"ALTER action %s cannot be performed on relation 
\\\"%s\\\"\",\n> + action_str, RelationGetRelationName(rel)),\n> + errdetail_relkind_not_supported(rel->rd_rel->relkind)));\n> Perhaps the result of alter_table_type_to_string() is worth a note for\n> translators?\n\nok\n\n> + case AT_DetachPartitionFinalize:\n> + return \"DETACH PARTITION FINALIZE\";\n> To be exact, I think that this one should be \"DETACH PARTITION\n> ... FINALIZE\".\n\nok\n\n> + if (relkind == RELKIND_INDEX || relkind == RELKIND_PARTITIONED_INDEX)\n> + ereport(ERROR,\n> + (errcode(ERRCODE_WRONG_OBJECT_TYPE),\n> + errmsg(\"cannot change schema of index \\\"%s\\\"\",\n> + rv->relname),\n> + errhint(\"Change the schema of the table instead.\")));\n> + else if (relkind == RELKIND_COMPOSITE_TYPE)\n> + ereport(ERROR,\n> + (errcode(ERRCODE_WRONG_OBJECT_TYPE),\n> + errmsg(\"cannot change schema of composite type\n> \\\"%s\\\"\",\n> + rv->relname),\n> + errhint(\"Use ALTER TYPE instead.\")));\n> I would simplify this part by removing the errhint(), and use \"cannot\n> change schema of relation ..\" as error string, with a dose of\n> errdetail_relkind_not_supported().\n\nI aimed for parity with the error reporting in ATExecChangeOwner() here.\n\n> + errmsg(\"relation \\\"%s\\\" cannot have triggers\",\n> + RelationGetRelationName(rel)),\n> Better as \"cannot create/rename/remove triggers on relation \\\"%s\\\"\"\n> for the three code paths of trigger.c?\n> \n> + errmsg(\"relation \\\"%s\\\" cannot have rules\",\n> [...]\n> + errmsg(\"relation \\\"%s\\\" cannot have rules\",\n> For rewriteDefine.c, this could be \"cannot create/rename rules on\n> relation\".\n\nI had it like that, but in previous reviews some people liked it better \nthis way. ;-) I tend to agree with that, since the error condition \nisn't that you can't create a rule/etc. 
(like, due to incorrect \nprerequisite state) but that there cannot be one ever.\n\n\n", "msg_date": "Fri, 2 Jul 2021 12:53:08 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: wrong relkind error messages" }, { "msg_contents": "On 2021-Jun-24, Peter Eisentraut wrote:\n\n> There might be room for some wordsmithing in a few places, but generally I\n> think this is complete.\n\nThis looks good to me. I am +0.1 on your proposal of \"cannot have\ntriggers\" vs Michael's \"cannot create triggers\", but really I could go\nwith either. Michael's idea has the disadvantage that if the user fails\nto see the trailing \"s\" in \"triggers\" they could get the idea that it's\npossible to create some other trigger; that seems impossible to miss\nwith your wording. But it's not that bad either.\n\nIt seemed odd to me at first that errdetail_relkind_not_supported()\nreturns int, but I realized that it's a trick to let you write \"return\nerrdetail()\" so you don't have to have \"break\" which would require one\nextra line. Looks fine.\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\n\n", "msg_date": "Fri, 2 Jul 2021 12:10:29 -0400", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: wrong relkind error messages" }, { "msg_contents": "On 02.07.21 18:10, Alvaro Herrera wrote:\n> On 2021-Jun-24, Peter Eisentraut wrote:\n> \n>> There might be room for some wordsmithing in a few places, but generally I\n>> think this is complete.\n> \n> This looks good to me. I am +0.1 on your proposal of \"cannot have\n> triggers\" vs Michael's \"cannot create triggers\", but really I could go\n> with either. Michael's idea has the disadvantage that if the user fails\n> to see the trailing \"s\" in \"triggers\" they could get the idea that it's\n> possible to create some other trigger; that seems impossible to miss\n> with your wording. 
But it's not that bad either.\n> \n> It seemed odd to me at first that errdetail_relkind_not_supported()\n> returns int, but I realized that it's a trick to let you write \"return\n> errdetail()\" so you don't have to have \"break\" which would require one\n> extra line. Looks fine.\n\nThanks, committed.\n\n\n", "msg_date": "Thu, 8 Jul 2021 09:54:58 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: wrong relkind error messages" }, { "msg_contents": "While reviewing the logical decoding of sequences patch, I found a few \nmore places that could be updated in the new style introduced by this \nthread. See attached patch.", "msg_date": "Tue, 20 Jul 2021 17:08:53 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: wrong relkind error messages" }, { "msg_contents": "On Tue, Jul 20, 2021 at 05:08:53PM +0200, Peter Eisentraut wrote:\n> While reviewing the logical decoding of sequences patch, I found a few more\n> places that could be updated in the new style introduced by this thread.\n> See attached patch.\n\nThose changes look fine. I am spotting one instance in\ninit_sequence() that looks worth aligning with the others?\n\nDid you consider changing RangeVarCallbackForAlterRelation() or\nExecGrant_Relation() when it came to this thread? Just noticing that,\nwhile going through the code.\n--\nMichael", "msg_date": "Wed, 21 Jul 2021 11:21:23 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: wrong relkind error messages" }, { "msg_contents": "On 21.07.21 04:21, Michael Paquier wrote:\n> On Tue, Jul 20, 2021 at 05:08:53PM +0200, Peter Eisentraut wrote:\n>> While reviewing the logical decoding of sequences patch, I found a few more\n>> places that could be updated in the new style introduced by this thread.\n>> See attached patch.\n> \n> Those changes look fine. 
I am spotting one instance in\n> init_sequence() that looks worth aligning with the others?\n\nI think if you write \"ALTER SEQUENCE foo\", then \"foo is not a sequence\" \nwould be an appropriate error message, so this doesn't need changing.\n\n> Did you consider changing RangeVarCallbackForAlterRelation() or\n> ExecGrant_Relation() when it came to this thread? Just noticing that,\n> while going through the code.\n\nThese might be worth another look, but I'd need to investigate more in \nwhat situations they happen.\n\n\n", "msg_date": "Wed, 21 Jul 2021 08:03:29 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: wrong relkind error messages" } ]
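The message style settled on in the thread above pairs a generic errmsg ("cannot ... on relation ...") with a relkind-specific errdetail ("This operation is not supported for ..."). A standalone Python sketch of that mapping follows — this is an illustrative mock, not PostgreSQL's actual C implementation (which is `errdetail_relkind_not_supported()`); the single-letter keys are the real `pg_class.relkind` values, and the two example DETAIL sentences are quoted verbatim from the patch's regression output above.

```python
# Sketch of the relkind -> DETAIL mapping discussed in the thread above.
# In PostgreSQL itself this is the C function errdetail_relkind_not_supported();
# here it is mocked as a plain dictionary for illustration.

RELKIND_NAMES = {
    "r": "tables",
    "i": "indexes",
    "S": "sequences",
    "v": "views",
    "m": "materialized views",
    "c": "composite types",
    "f": "foreign tables",
    "p": "partitioned tables",
    "I": "partitioned indexes",
}

def relkind_not_supported_detail(relkind: str) -> str:
    """Return the DETAIL sentence for an unsupported relation kind."""
    name = RELKIND_NAMES.get(relkind, "this relation type")
    return f"This operation is not supported for {name}."

if __name__ == "__main__":
    # The two DETAIL lines shown in the quoted regression diffs
    print(relkind_not_supported_detail("i"))
    print(relkind_not_supported_detail("f"))
```

One point the sketch makes concrete is why a single helper beats enumerating relkinds at every call site: each error report only has to name the operation, and the per-relkind wording stays consistent everywhere.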
[ { "msg_contents": "As discussed in the thread at [1], I've been working on redesigning\nthe tables we use to present SQL functions and operators. The\nfirst installment of that is now up; see tables 9.30 and 9.31 at\n\nhttps://www.postgresql.org/docs/devel/functions-datetime.html\n\nand table 9.33 at\n\nhttps://www.postgresql.org/docs/devel/functions-enum.html\n\nBefore I spend more time on this, I want to make sure that people\nare happy with this line of attack. Comparing these tables to\nthe way they look in v12, they clearly take more vertical space;\nbut at least to my eye they're less cluttered and more readable.\nThey definitely scale a lot better for cases where a long function\ndescription is needed, or where we'd like to have more than one\nexample. Does anyone prefer the old way, or have a better idea?\n\nI know that the table headings are a bit weirdly laid out; hopefully\nthat can be resolved [2].\n\n\t\t\tregards, tom lane\n\n[1] https://www.postgresql.org/message-id/flat/9326.1581457869%40sss.pgh.pa.us\n[2] https://www.postgresql.org/message-id/6169.1586794603%40sss.pgh.pa.us\n\n\n", "msg_date": "Mon, 13 Apr 2020 13:13:23 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Poll: are people okay with function/operator table redesign?" }, { "msg_contents": "On Mon, Apr 13, 2020 at 1:13 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> As discussed in the thread at [1], I've been working on redesigning\n> the tables we use to present SQL functions and operators. The\n> first installment of that is now up; see tables 9.30 and 9.31 at\n>\n> https://www.postgresql.org/docs/devel/functions-datetime.html\n>\n> and table 9.33 at\n>\n> https://www.postgresql.org/docs/devel/functions-enum.html\n>\n> Before I spend more time on this, I want to make sure that people\n> are happy with this line of attack. 
Comparing these tables to\n> the way they look in v12, they clearly take more vertical space;\n> but at least to my eye they're less cluttered and more readable.\n> They definitely scale a lot better for cases where a long function\n> description is needed, or where we'd like to have more than one\n> example. Does anyone prefer the old way, or have a better idea?\n\nI find the new way quite hard to read. I prefer the old way.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Mon, 13 Apr 2020 13:37:30 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Poll: are people okay with function/operator table redesign?" }, { "msg_contents": "On 2020-04-13 19:13, Tom Lane wrote:\n> As discussed in the thread at [1], I've been working on redesigning\n> the tables we use to present SQL functions and operators. The\n> first installment of that is now up; see tables 9.30 and 9.31 at\n> \n> https://www.postgresql.org/docs/devel/functions-datetime.html\n> \n> and table 9.33 at\n> \n> https://www.postgresql.org/docs/devel/functions-enum.html\n> \n> Before I spend more time on this, I want to make sure that people\n> are happy with this line of attack. Comparing these tables to\n> the way they look in v12, they clearly take more vertical space;\n> but at least to my eye they're less cluttered and more readable.\n> They definitely scale a lot better for cases where a long function\n> description is needed, or where we'd like to have more than one\n> example. Does anyone prefer the old way, or have a better idea?\n> \n\n+1\n\nIn the pdf it is a big improvement; and the html is better too.\n\n\n\n", "msg_date": "Mon, 13 Apr 2020 19:52:25 +0200", "msg_from": "Erik Rijkers <er@xs4all.nl>", "msg_from_op": false, "msg_subject": "Re: Poll: are people okay with function/operator table redesign?" 
}, { "msg_contents": "On Mon, 13 Apr 2020 at 13:13, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> As discussed in the thread at [1], I've been working on redesigning\n> the tables we use to present SQL functions and operators. The\n> first installment of that is now up; see tables 9.30 and 9.31 at\n>\n> https://www.postgresql.org/docs/devel/functions-datetime.html\n>\n> and table 9.33 at\n>\n> https://www.postgresql.org/docs/devel/functions-enum.html\n>\n> Before I spend more time on this, I want to make sure that people\n> are happy with this line of attack. Comparing these tables to\n> the way they look in v12, they clearly take more vertical space;\n> but at least to my eye they're less cluttered and more readable.\n> They definitely scale a lot better for cases where a long function\n> description is needed, or where we'd like to have more than one\n> example. Does anyone prefer the old way, or have a better idea?\n>\n\nI honestly don’t know. My initial reaction is a combination of “that’s\nweird” and “that’s cool”. So a few comments, which shouldn’t be taken as\nindicating a definite preference:\n\n- showing the signature like this is interesting. For a moment I was\nwondering why it doesn’t say, for example, \"interval → interval → interval”\nthen I remembered this is Postgres, not Haskell. On the one hand, I like\nputting the signature like this; on the other, I don’t like that the return\ntype is in a different place in each one. Could it be split into the same\ntwo columns as the example(s); first column inputs, second column results?\n\n- another possibility for the parameters: list each one on a separate line,\ntogether with default (if applicable). 
Maybe that would be excessively\ntall, but it would sure make completely clear just exactly how many\nparameters there are and never wrap (well, maybe on a phone, but we can\nonly do so much).\n\n- for the various current-time-related functions (age, current_time, etc.),\nrather than saying “variable”, could it be the actual result with “now”\nbeing taken to be a specific fixed time within the year in which the\ndocumentation was generated? This would be really helpful for example with\nbeing clear that current_time is only the time of day with no date.\n\n- the specific fixed time should be something like (current year)-06-30\n18:45:54. I’ve deliberately chosen all values to be outside of the range of\nvalues with smaller ranges. For example, the hour is >12, the limit of the\nmonth field.\n\n- I think there should be much more distinctive lines between the different\nfunctions. As it is the fact that the table is groups of 3 lines doesn’t\njump out at the eye.\n", "msg_date": "Mon, 13 Apr 2020 13:57:03 -0400", "msg_from": "Isaac Morland <isaac.morland@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Poll: are people okay with function/operator table redesign?" }, { "msg_contents": "On 2020-Apr-13, Tom Lane wrote:\n\n> As discussed in the thread at [1], I've been working on redesigning\n> the tables we use to present SQL functions and operators. 
The\n> first installment of that is now up; see tables 9.30 and 9.31 at\n> \n> https://www.postgresql.org/docs/devel/functions-datetime.html\n> \n> and table 9.33 at\n> \n> https://www.postgresql.org/docs/devel/functions-enum.html\n> \n> Before I spend more time on this, I want to make sure that people\n> are happy with this line of attack. Comparing these tables to\n> the way they look in v12, they clearly take more vertical space;\n> but at least to my eye they're less cluttered and more readable.\n> They definitely scale a lot better for cases where a long function\n> description is needed, or where we'd like to have more than one\n> example.\n\nI am torn. On the one side, I think this new format is so much better\nthan the old one that we should definitely use it for all tables. On\nthe other side, I also think this format is slightly more complicated to\nread, so perhaps it would be sensible to keep using the old format for\nthe simplest tables.\n\nOne argument for the first of those positions is that if this new table\nlayout is everywhere, it'll take less total time to get used to it.\n\n\nOne improvement (that I don't know is possible in docbook) would be to\nhave the inter-logical-row line be slightly thicker than the\nintra-logical-row one. That'd make each entry visually more obvious.\n\nI think you already mentioned the PDF issue that these multi-row entries\nare sometimes split across pages. I cannot believe docbook is so stupid\nnot to have a solution to that problem, but I don't know what that\nsolution would be.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Mon, 13 Apr 2020 14:07:58 -0400", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Poll: are people okay with function/operator table redesign?" 
}, { "msg_contents": "Isaac Morland <isaac.morland@gmail.com> writes:\n> - showing the signature like this is interesting. For a moment I was\n> wondering why it doesn’t say, for example, \"interval → interval → interval”\n> then I remembered this is Postgres, not Haskell. On the one hand, I like\n> putting the signature like this; on the other, I don’t like that the return\n> type is in a different place in each one. Could it be split into the same\n> two columns as the example(s); first column inputs, second column results?\n\nWe tried that in an earlier iteration (see the referenced thread). It\ndoesn't work very well because you end up having to allocate the max\namount of space for any result type or example result on every line.\nGiving up the separate cell for return type is a lot of what makes this\nworkable.\n\n> - another possibility for the parameters: list each one on a separate line,\n> together with default (if applicable). Maybe that would be excessively\n> tall, but it would sure make completely clear just exactly how many\n> parameters there are and never wrap (well, maybe on a phone, but we can\n> only do so much).\n\nSince so few built-in functions have default parameters, that's going to\nwaste an awful lot of space in most cases. I actually ended up removing\nthe explicit \"default\" clauses from make_interval (which is the only\nfunction with defaults that I dealt with so far) and instead explained\nthat they all default to zero in the text description, because that took\nway less space.\n\n> - for the various current-time-related functions (age, current_time, etc.),\n> rather than saying “variable”, could it be the actual result with “now”\n> being taken to be a specific fixed time within the year in which the\n> documentation was generated? This would be really helpful for example with\n> being clear that current_time is only the time of day with no date.\n\nYeah, I've been waffling about that. 
On the one hand, we regularly get\ndocs complaints from people who say \"I tried this example and I didn't\nget the claimed result\". On the other hand you could figure that\neverybody should understand that current_timestamp won't work like that\n... but the first such example in the table is age() for which that\nautomatic understanding might not apply.\n\nThe examples down in 9.9.4 use a specific time, which is looking pretty\nlong in the tooth right now, and no one has complained --- but that's\nin a context where it's absolutely plain that every mentioned function\nis going to have a time-varying result.\n\nOn the whole I'm kind of leaning to going back to using a specific time.\nBut that's a detail that's not very relevant to the bigger picture here.\n(No, I'm not going to try to make it update every year; too much work\nfor too little reward.)\n\n> - I think there should be much more distinctive lines between the different\n> functions. As it is the fact that the table is groups of 3 lines doesn’t\n> jump out at the eye.\n\nI don't know any easy way to do that. We do already have the grouping\nvisible in the first column...\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 13 Apr 2020 14:27:03 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Poll: are people okay with function/operator table redesign?" }, { "msg_contents": "Alvaro Herrera <alvherre@2ndquadrant.com> writes:\n> One improvement (that I don't know is possible in docbook) would be to\n> have the inter-logical-row line be slightly thicker than the\n> intra-logical-row one. That'd make each entry visually more obvious.\n\nYeah, I don't see any way to do that :-(. 
We could suppress the row\nlines entirely between the members of the logical group, but that'd\nalmost surely look worse.\n\n(I tried to implement this to see, and couldn't get rowsep=\"0\" in\na <spanspec> to render the way I expected, so there may be toolchain\nbugs in the way of it anyway.)\n\nWe could leave an entirely empty row between logical groups, but\nthat would be really wasteful of vertical space.\n\nAnother possibility, which'd only help in HTML, would be to render\nsome of the cells with a slightly different background color.\nThat's beyond my docbook/css skills, but it might be possible.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 13 Apr 2020 14:47:27 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Poll: are people okay with function/operator table redesign?" }, { "msg_contents": "On Mon, Apr 13, 2020 at 2:47 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Another possibility, which'd only help in HTML, would be to render\n> some of the cells with a slightly different background color.\n> That's beyond my docbook/css skills, but it might be possible.\n\nI think some visual distinction would be really helpful, if we can get it.\n\nI just wonder if there's too much clutter here. Like, line 1:\n\ndate - interval → timestamp\n\nOK, gotcha. Line 2:\n\nSubtract an interval from a date\n\nWell, is that really adding anything non-obvious?\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Mon, 13 Apr 2020 16:18:46 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Poll: are people okay with function/operator table redesign?" }, { "msg_contents": "\nOn 4/13/20 1:13 PM, Tom Lane wrote:\n> As discussed in the thread at [1], I've been working on redesigning\n> the tables we use to present SQL functions and operators. 
The\n> first installment of that is now up; see tables 9.30 and 9.31 at\n>\n> https://www.postgresql.org/docs/devel/functions-datetime.html\n>\n> and table 9.33 at\n>\n> https://www.postgresql.org/docs/devel/functions-enum.html\n>\n> Before I spend more time on this, I want to make sure that people\n> are happy with this line of attack. Comparing these tables to\n> the way they look in v12, they clearly take more vertical space;\n> but at least to my eye they're less cluttered and more readable.\n> They definitely scale a lot better for cases where a long function\n> description is needed, or where we'd like to have more than one\n> example. Does anyone prefer the old way, or have a better idea?\n>\n> I know that the table headings are a bit weirdly laid out; hopefully\n> that can be resolved [2].\n>\n> \t\t\tregards, tom lane\n>\n> [1] https://www.postgresql.org/message-id/flat/9326.1581457869%40sss.pgh.pa.us\n> [2] https://www.postgresql.org/message-id/6169.1586794603%40sss.pgh.pa.us\n>\n\nGotta say I'm not a huge fan. I appreciate the effort, and I get the\nproblem, but I'm not sure we have a net improvement here.\n\n\nOne thing that did occur to me is that the function/operator name is\nessentially redundant, as it's in the signature anyway. Not sure if that\nhelps us any though.\n\n\nMaybe we're just trying to shoehorn too much information into a single\ntable.\n\n\ncheers\n\n\nandrew\n\n\n-- \nAndrew Dunstan https://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n\n", "msg_date": "Mon, 13 Apr 2020 16:20:38 -0400", "msg_from": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Poll: are people okay with function/operator table redesign?" }, { "msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> I just wonder if there's too much clutter here. Like, line 1:\n\n> date - interval → timestamp\n\n> OK, gotcha. 
Line 2:\n\n> Subtract an interval from a date\n\n> Well, is that really adding anything non-obvious?\n\nYeah, back in the other thread I said\n\n>>> I decided to try converting the date/time operators table too, to\n>>> see how well this works for that. It's bulkier than before, but\n>>> also (I think) more precise. I realized that this table actually\n>>> had three examples already for float8 * interval, but it wasn't\n>>> at all obvious that they were the same operator. So that aspect\n>>> is a lot nicer here. On the other hand, it seems like the text\n>>> descriptions are only marginally useful here. I can imagine that\n>>> they would be useful in some other operator tables, such as\n>>> geometric operators, but I'm a bit tempted to leave them out\n>>> in this particular table. The format would adapt to that easily.\n\nI wouldn't be averse to dropping the text descriptions for operators\nin places where they seem obvious ... but who decides what is obvious?\n\nIndeed, we've gotten more than one complaint in the past that some of the\ngeometric and JSON operators require a longer explanation than they've\ngot. So one of the points here was to have a format that could adapt to\nthat. But in this particular table I agree they're marginal.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 13 Apr 2020 16:29:07 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Poll: are people okay with function/operator table redesign?" }, { "msg_contents": "On Mon, Apr 13, 2020 at 11:27 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Isaac Morland <isaac.morland@gmail.com> writes:\n>\n> > - I think there should be much more distinctive lines between the\n> different\n> > functions. As it is the fact that the table is groups of 3 lines doesn’t\n> > jump out at the eye.\n>\n> I don't know any easy way to do that. 
We do already have the grouping\n> visible in the first column...\n>\n\nCan we lightly background color every other rowgroup (i.e., \"greenbar\")?\n\nI don't think having a separate Result column helps. The additional\nhorizontal whitespace distances all relevant context information (at least\non a wide monitor). Having the example rows mirror the Signature row seems\nlike an easier to consume choice.\n\ne.g.,\n\nenum_first(null::rainbow) → red\n\ndate '2001-09-28' + 7 → 2001-10-05\n\nIts also removes the left alignment in a fixed width column which draws\nunwanted visual attention.\n\nDavid J.\n\nOn Mon, Apr 13, 2020 at 11:27 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:Isaac Morland <isaac.morland@gmail.com> writes:\n> - I think there should be much more distinctive lines between the different\n> functions. As it is the fact that the table is groups of 3 lines doesn’t\n> jump out at the eye.\n\nI don't know any easy way to do that.  We do already have the grouping\nvisible in the first column...Can we lightly background color every other rowgroup (i.e., \"greenbar\")?I don't think having a separate Result column helps.  The additional horizontal whitespace distances all relevant context information (at least on a wide monitor).  Having the example rows mirror the Signature row seems like an easier to consume choice.e.g., enum_first(null::rainbow)\t→ reddate '2001-09-28' + 7\t\n\n→ 2001-10-05Its also removes the left alignment in a fixed width column which draws unwanted visual attention.David J.", "msg_date": "Mon, 13 Apr 2020 13:31:51 -0700", "msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Poll: are people okay with function/operator table redesign?" }, { "msg_contents": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com> writes:\n> One thing that did occur to me is that the function/operator name is\n> essentially redundant, as it's in the signature anyway. 
Not sure if that\n> helps us any though.\n\nHm, you have a point there. However, if we drop the lefthand column\nthen there really isn't any visual distinction between the row(s)\nassociated with one function and those of the next. Unless we can\nfind another fix for that aspect (as already discussed in this thread)\nI doubt it'd be an improvement.\n\n> Maybe we're just trying to shoehorn too much information into a single\n> table.\n\nYeah, back at the beginning of this exercise, Alvaro wondered aloud\nif we should go to something other than tables altogether. I dunno\nwhat that'd look like though.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 13 Apr 2020 16:33:52 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Poll: are people okay with function/operator table redesign?" }, { "msg_contents": "\"David G. Johnston\" <david.g.johnston@gmail.com> writes:\n> Can we lightly background color every other rowgroup (i.e., \"greenbar\")?\n\nIf you know how to do that at all, let alone in a maintainable way (ie\none where inserting a new function doesn't require touching the entries\nfor the ones after), let's see it. I agree it'd be a nice solution,\nif we could make it work, but I don't see how. I'd been imagining\ninstead that we could give a different background color to the first\nline of each group; which I don't know how to do but it at least seems\nplausible that a style could be attached to a <spanspec>.\n\n> I don't think having a separate Result column helps. The additional\n> horizontal whitespace distances all relevant context information (at least\n> on a wide monitor). Having the example rows mirror the Signature row seems\n> like an easier to consume choice.\n\nInteresting idea. I'm afraid that it would not look so great in cases\nwhere the example-plus-result overflows one line, which would inevitably\nhappen in PDF format. Still, maybe that would be rare enough to not be\na huge problem. 
In most places it'd be a win to not have to separately\nallocate example and result space.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 13 Apr 2020 16:41:51 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Poll: are people okay with function/operator table redesign?" }, { "msg_contents": "I wrote:\n> \"David G. Johnston\" <david.g.johnston@gmail.com> writes:\n>> I don't think having a separate Result column helps. The additional\n>> horizontal whitespace distances all relevant context information (at least\n>> on a wide monitor). Having the example rows mirror the Signature row seems\n>> like an easier to consume choice.\n\n> Interesting idea. I'm afraid that it would not look so great in cases\n> where the example-plus-result overflows one line, which would inevitably\n> happen in PDF format. Still, maybe that would be rare enough to not be\n> a huge problem. In most places it'd be a win to not have to separately\n> allocate example and result space.\n\nActually ... if we did it like that, then it would be possible to treat\nthe signature + description + example(s) as one big table cell with line\nbreaks rather than row-separator bars. That would help address the\ninadequate-visual-separation-between-groups issue, but on the other hand\nmaybe we'd end up with too little visual separation between the elements\nof a function description.\n\nA quick google search turned up this suggestion about how to force\nline breaks in docbook table cells:\n\nhttp://www.sagehill.net/docbookxsl/LineBreaks.html\n\nwhich seems pretty hacky but it should work. Anyone know a better\nway?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 13 Apr 2020 16:57:19 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Poll: are people okay with function/operator table redesign?" }, { "msg_contents": "On Mon, Apr 13, 2020 at 1:41 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> \"David G. 
Johnston\" <david.g.johnston@gmail.com> writes:\n> > Can we lightly background color every other rowgroup (i.e., \"greenbar\")?\n>\n> If you know how to do that at all, let alone in a maintainable way (ie\n> one where inserting a new function doesn't require touching the entries\n> for the ones after), let's see it.\n>\n\nThe nth-child({odd|even}) CSS Selector should provide the desired\nfunctionality, at least for HTML, but the structure will need to modified\nso that there is some single element that represents a single rowgroup.  I\ntried (not too hard) to key off of the presence of the \"rowspan\" attribute\nbut that does not seem possible.\n\nhttps://www.w3schools.com/cssref/sel_nth-child.asp\n\nDavid J.\n", "msg_date": "Mon, 13 Apr 2020 14:20:40 -0700", "msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Poll: are people okay with function/operator table redesign?" }, { "msg_contents": "On Mon, Apr 13, 2020 at 1:57 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Actually ... 
if we did it like that, then it would be possible to treat\n> the signature + description + example(s) as one big table cell with line\n> breaks rather than row-separator bars.\n\n\n\n> That would help address the\n> inadequate-visual-separation-between-groups issue, but on the other hand\n> maybe we'd end up with too little visual separation between the elements\n> of a function description.\n>\n\nSpeaking in terms of HTML if we use <hr /> instead of <br /> we would get\nthe best of both worlds.\n\nDavid J.\n", "msg_date": "Mon, 13 Apr 2020 14:26:10 -0700", "msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Poll: are people okay with function/operator table redesign?" }, { "msg_contents": "On 4/13/20 1:13 PM, Tom Lane wrote:\n> As discussed in the thread at [1], I've been working on redesigning\n> the tables we use to present SQL functions and operators. 
Comparing these tables to\n> the way they look in v12, they clearly take more vertical space;\n> but at least to my eye they're less cluttered and more readable.\n> They definitely scale a lot better for cases where a long function\n> description is needed, or where we'd like to have more than one\n> example. Does anyone prefer the old way, or have a better idea?\n> \n> I know that the table headings are a bit weirdly laid out; hopefully\n> that can be resolved [2].\n\n> [2] https://www.postgresql.org/message-id/6169.1586794603%40sss.pgh.pa.us\n\nWhen evaluating [2], I will admit at first I was very confused about the\nlayout and wasn't exactly sure what you were saying was incorrect in\nthat note. After fixing [2] on my local copy, I started to look at it again.\n\nFor positives, I do think it's an improvement for readability on mobile.\nFlow/content aside, it was easier to read and follow what was going on\nand there was less side scrolling.\n\nI think one thing that was throwing me off was having the function\nsignature before the description. I would recommend flipping them: have\nthe function description first, followed by signature, followed be\nexamples. I think that follows the natural flow more of what one is\ndoing when they look up the function.\n\nI think that would also benefit larger tables too: instead of having to\nscroll up to understand how things are laid out, it'd follow said flow.\n\nThere are probably some things we can do with shading on the pgweb side\nto make items more distinguishable, I don't think that would be too\nterrible to add.\n\nThinking out loud, it'd also be great if we could add in some anchors as\nwell, so perhaps in the future on the pgweb side we could add in some\ndiscoverable links that other documentation has -- which in turn people\ncould click / link to others directly to the function name.\n\nAnyway, change is hard. 
I'm warming up to it.\n\nJonathan", "msg_date": "Mon, 13 Apr 2020 18:33:58 -0400", "msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>", "msg_from_op": false, "msg_subject": "Re: Poll: are people okay with function/operator table redesign?" }, { "msg_contents": ">\n> Thinking out loud, it'd also be great if we could add in some anchors as\n> well, so perhaps in the future on the pgweb side we could add in some\n> discoverable links that other documentation has -- which in turn people\n> could click / link to others directly to the function name.\n>\n\n+1", "msg_date": "Mon, 13 Apr 2020 18:38:23 -0400", "msg_from": "Corey Huinker <corey.huinker@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Poll: are people okay with function/operator table redesign?" }, { "msg_contents": "\"David G. Johnston\" <david.g.johnston@gmail.com> writes:\n> On Mon, Apr 13, 2020 at 1:57 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> Actually ... if we did it like that, then it would be possible to treat\n>> the signature + description + example(s) as one big table cell with line\n>> breaks rather than row-separator bars.\n>> That would help address the\n>> inadequate-visual-separation-between-groups issue, but on the other hand\n>> maybe we'd end up with too little visual separation between the elements\n>> of a function description.\n\n> Speaking in terms of HTML if we use <hr /> instead of <br /> we would get\n> the best of both worlds.\n\nHm. I quickly hacked up table 9.33 to use this approach. 
Attached\nare a patch for that, as well as screenshots of HTML and PDF output.\n(To get the equivalent of HTML-hr.png, use <hr/> not <br/> in the\nstylesheet.)\n\nI don't think I like the <hr/> version better than <br/> --- it adds\nquite a bit of vertical space, more than I was expecting really. The\ndocumentation I could find with Google suggests that <hr/> can be\nrendered with quite a bit of variation by different agents, so other\npeople might get different results. (This is with Safari.) It seems\nlike the font differentiation between the description and the other\nparts is almost, but perhaps not quite, enough separation already.\n\nI don't know how to get the equivalent of <hr/> in PDF output, so\nthat version just does line breaks. It seems like the vertical\nspacing in the examples is a bit wonky, but otherwise it's not awful.\n\nNote that the PDF rendering shows the header and function name\nalignment as I intended them; the HTML renderings are wrong due to\nwebsite stylesheet issues.\n\n\t\t\tregards, tom lane", "msg_date": "Mon, 13 Apr 2020 18:44:36 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Poll: are people okay with function/operator table redesign?" }, { "msg_contents": ">\n> Yeah, back at the beginning of this exercise, Alvaro wondered aloud\n> if we should go to something other than tables altogether. I dunno\n> what that'd look like though.\n>\n\nIt would probably look like our acronyms and glossary pages.\n\nMaybe the return example and return values get replaced with a\nprogramlisting?\n", "msg_date": "Mon, 13 Apr 2020 18:48:46 -0400", "msg_from": "Corey Huinker <corey.huinker@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Poll: are people okay with function/operator table redesign?" }, { "msg_contents": "\"Jonathan S. Katz\" <jkatz@postgresql.org> writes:\n> I think one thing that was throwing me off was having the function\n> signature before the description. I would recommend flipping them: have\n> the function description first, followed by signature, followed be\n> examples. I think that follows the natural flow more of what one is\n> doing when they look up the function.\n\nThe trouble with that is it doesn't work very well when we have\nmultiple similarly-named functions with different signatures.\nConsider what the two enum_range() entries in 9.33 will look like,\nfor example. I think we need the signature to establish which function\nwe're talking about.\n\n> There are probably some things we can do with shading on the pgweb side\n> to make items more distinguishable, I don't think that would be too\n> terrible to add.\n\nPer David's earlier comment, it seems like alternating backgrounds might\nbe feasible if we can get it down to one <row> per function, as the\nversion I just posted has.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 13 Apr 2020 18:51:33 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Poll: are people okay with function/operator table redesign?" }, { "msg_contents": "On 4/13/20 6:51 PM, Tom Lane wrote:\n> \"Jonathan S. Katz\" <jkatz@postgresql.org> writes:\n>> I think one thing that was throwing me off was having the function\n>> signature before the description. I would recommend flipping them: have\n>> the function description first, followed by signature, followed be\n>> examples. 
I think that follows the natural flow more of what one is\n>> doing when they look up the function.\n> \n> The trouble with that is it doesn't work very well when we have\n> multiple similarly-named functions with different signatures.\n> Consider what the two enum_range() entries in 9.33 will look like,\n> for example. I think we need the signature to establish which function\n> we're talking about.\n\nI get that, I just find I'm doing too much thinking looking at it.\n\nPerhaps a counterproposal: We eliminate the content in the leftmost\n\"function column, but leave that there to allow the function name /\nsignature to span the full 3 columns. Then the rest of the info goes\nbelow. This will also compress the table height down a bit.\n\n>> There are probably some things we can do with shading on the pgweb side\n>> to make items more distinguishable, I don't think that would be too\n>> terrible to add.\n> \n> Per David's earlier comment, it seems like alternating backgrounds might\n> be feasible if we can get it down to one <row> per function, as the\n> version I just posted has.\n\nor a classname on the \"<tr>\" when a new function starts or the like.\nEasy enough to get the CSS to work off of that :)\n\nJonathan", "msg_date": "Mon, 13 Apr 2020 19:02:57 -0400", "msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>", "msg_from_op": false, "msg_subject": "Re: Poll: are people okay with function/operator table redesign?" }, { "msg_contents": "On 4/13/20 7:02 PM, Jonathan S. Katz wrote:\n> On 4/13/20 6:51 PM, Tom Lane wrote:\n>> \"Jonathan S. Katz\" <jkatz@postgresql.org> writes:\n>>> I think one thing that was throwing me off was having the function\n>>> signature before the description. I would recommend flipping them: have\n>>> the function description first, followed by signature, followed be\n>>> examples. 
I think that follows the natural flow more of what one is\n>>> doing when they look up the function.\n>>\n>> The trouble with that is it doesn't work very well when we have\n>> multiple similarly-named functions with different signatures.\n>> Consider what the two enum_range() entries in 9.33 will look like,\n>> for example. I think we need the signature to establish which function\n>> we're talking about.\n> \n> I get that, I just find I'm doing too much thinking looking at it.\n> \n> Perhaps a counterproposal: We eliminate the content in the leftmost\n> \"function column, but leave that there to allow the function name /\n> signature to span the full 3 columns. Then the rest of the info goes\n> below. This will also compress the table height down a bit.\n\nAn attempt at a \"POC\" of what I'm describing (attached image).\n\nI'm not sure if I 100% like it, but it does reduce the amount of\ninformation we're displaying but conveys all the details (and matches\nwhat we have in the previous version).\n\nThe alignment could be adjusted if need be, too.\n\nJonathan", "msg_date": "Mon, 13 Apr 2020 19:38:15 -0400", "msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>", "msg_from_op": false, "msg_subject": "Re: Poll: are people okay with function/operator table redesign?" }, { "msg_contents": "I wrote:\n> I don't think I like the <hr/> version better than <br/> --- it adds\n> quite a bit of vertical space, more than I was expecting really.\n\nActually, after staring more at HTML-hr.png, what's *really* bothering\nme about that rendering is that the lines made by <hr/> are actually\nwider than the inter-table-cell lines. Surely we want the opposite\nrelationship. 
Presumably that could be fixed with some css-level\nadjustments; and maybe the spacing could be tightened up a bit too?\nI do like having that visual separation, it just needs to be toned\ndown compared to the table cell separators.\n\nReproducing the effect in the PDF build remains an issue, too.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 13 Apr 2020 19:48:35 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Poll: are people okay with function/operator table redesign?" }, { "msg_contents": "On 2020-Apr-13, Jonathan S. Katz wrote:\n\n> On 4/13/20 7:02 PM, Jonathan S. Katz wrote:\n\n> > Perhaps a counterproposal: We eliminate the content in the leftmost\n> > \"function column, but leave that there to allow the function name /\n> > signature to span the full 3 columns. Then the rest of the info goes\n> > below. This will also compress the table height down a bit.\n> \n> An attempt at a \"POC\" of what I'm describing (attached image).\n> \n> I'm not sure if I 100% like it, but it does reduce the amount of\n> information we're displaying but conveys all the details (and matches\n> what we have in the previous version).\n\nOoh, this seems a nice idea -- the indentation seems to be sufficient to\ntell apart entries from each other. Your point about information\nreduction refers to the fact that we no longer keep the unadorned name\nbut only the signature, right? That seems an improvement to me now that\nI look at it.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Mon, 13 Apr 2020 19:50:52 -0400", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Poll: are people okay with function/operator table redesign?" }, { "msg_contents": "\"Jonathan S. Katz\" <jkatz@postgresql.org> writes:\n> On 4/13/20 7:02 PM, Jonathan S. 
Katz wrote:\n>> Perhaps a counterproposal: We eliminate the content in the leftmost\n>> \"function column, but leave that there to allow the function name /\n>> signature to span the full 3 columns. Then the rest of the info goes\n>> below. This will also compress the table height down a bit.\n\n> An attempt at a \"POC\" of what I'm describing (attached image).\n\nHmm ... what is determining the width of the left-hand column?\nIt doesn't seem to have any content, since the function entries\nare being spanned across the whole table.\n\nI think the main practical problem though is that it wouldn't\nwork nicely for operators, since the key \"name\" you'd be looking\nfor would not be at the left of the signature line. I suppose we\ndon't necessarily have to have the same layout for operators as\nfor functions, but it feels like it'd be jarringly inconsistent.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 13 Apr 2020 19:55:00 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Poll: are people okay with function/operator table redesign?" }, { "msg_contents": "Hello Tom,\n\n> Before I spend more time on this, I want to make sure that people\n> are happy with this line of attack.\n\n+1\n\nI like it this way, because the structure is quite readable, which is the \npoint.\n\nMy 0.02€:\n\nMaybe column header \"Example Result\" should be simply \"Result\", because \nit is already on the same line as \"Example\" on its left, and \"Example | \nExample Result\" looks redundant.\n\nMaybe the signature and description lines could be exchanged: I'm more \ninterested in the description first, and the signature just above the \nexample would make sense.\n\nI'm wondering whether the function/operator name should be vertically \ncentered in its cell? 
I'd leave it left justified.\n\n-- \nFabien.", "msg_date": "Tue, 14 Apr 2020 07:23:37 +0200 (CEST)", "msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>", "msg_from_op": false, "msg_subject": "Re: Poll: are people okay with function/operator table redesign?" }, { "msg_contents": "\nOn 4/13/20 7:55 PM, Tom Lane wrote:\n> \"Jonathan S. Katz\" <jkatz@postgresql.org> writes:\n>> On 4/13/20 7:02 PM, Jonathan S. Katz wrote:\n>>> Perhaps a counterproposal: We eliminate the content in the leftmost\n>>> \"function column, but leave that there to allow the function name /\n>>> signature to span the full 3 columns. Then the rest of the info goes\n>>> below. This will also compress the table height down a bit.\n>> An attempt at a \"POC\" of what I'm describing (attached image).\n> Hmm ... what is determining the width of the left-hand column?\n> It doesn't seem to have any content, since the function entries\n> are being spanned across the whole table.\n>\n> I think the main practical problem though is that it wouldn't\n> work nicely for operators, since the key \"name\" you'd be looking\n> for would not be at the left of the signature line. I suppose we\n> don't necessarily have to have the same layout for operators as\n> for functions, but it feels like it'd be jarringly inconsistent.\n>\n> \t\t\t\n\n\n\nMaybe highlight the item by bolding or colour?\n\n\ncheers\n\n\nandrew\n\n-- \nAndrew Dunstan https://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n\n", "msg_date": "Tue, 14 Apr 2020 09:01:00 -0400", "msg_from": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Poll: are people okay with function/operator table redesign?" }, { "msg_contents": "On 4/13/20 7:13 PM, Tom Lane wrote:\n> As discussed in the thread at [1], I've been working on redesigning\n> the tables we use to present SQL functions and operators. 
The\n> first installment of that is now up; see tables 9.30 and 9.31 at\n> \n> https://www.postgresql.org/docs/devel/functions-datetime.html\n> \n> and table 9.33 at\n> \n> https://www.postgresql.org/docs/devel/functions-enum.html\n> \n> Before I spend more time on this, I want to make sure that people\n> are happy with this line of attack. Comparing these tables to\n> the way they look in v12, they clearly take more vertical space;\n> but at least to my eye they're less cluttered and more readable.\n> They definitely scale a lot better for cases where a long function\n> description is needed, or where we'd like to have more than one\n> example. Does anyone prefer the old way, or have a better idea?\n> \n> I know that the table headings are a bit weirdly laid out; hopefully\n> that can be resolved [2].\n\nI prefer the old way since I find it very hard to see which fields \nbelong to which function in the new way. I think what confuses my eyes \nis how some rows are split in half while others are not, especially for \nthose functions where there is only one example output. I do not have \nany issue reading those with many example outputs.\n\nFor the old tables I can at least just make the browser window \nridiculously wide ro read them.\n\nAndreas\n\n\n\n", "msg_date": "Tue, 14 Apr 2020 16:16:07 +0200", "msg_from": "Andreas Karlsson <andreas@proxel.se>", "msg_from_op": false, "msg_subject": "Re: Poll: are people okay with function/operator table redesign?" 
}, { "msg_contents": "Andreas Karlsson <andreas@proxel.se> writes:\n> For the old tables I can at least just make the browser window \n> ridiculously wide ro read them.\n\nA large part of the point here is to make the tables usable\nwhen you don't have that option, as for example in PDF output.\n\nEven with a wide window, though, some of our function tables are\nmonstrously ugly.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 14 Apr 2020 10:29:04 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Poll: are people okay with function/operator table redesign?" }, { "msg_contents": "On 4/14/20 4:29 PM, Tom Lane wrote:\n> Andreas Karlsson <andreas@proxel.se> writes:\n>> For the old tables I can at least just make the browser window\n>> ridiculously wide ro read them.\n> \n> A large part of the point here is to make the tables usable\n> when you don't have that option, as for example in PDF output.\n> \n> Even with a wide window, though, some of our function tables are\n> monstrously ugly.\n\nSure, but I wager the number of people using the HTML version of our \ndocumentation on laptops and desktop computers are the biggest group of \nusers.\n\nThat said, I agree with that quite many of our tables right now are \nugly, but I prefer ugly to hard to read. For me the mix of having every \nthird row split into two fields makes the tables very hard to read. I \nhave a hard time seeing which rows belong to which function.\n\nAndreas\n\n\n", "msg_date": "Tue, 14 Apr 2020 16:39:44 +0200", "msg_from": "Andreas Karlsson <andreas@proxel.se>", "msg_from_op": false, "msg_subject": "Re: Poll: are people okay with function/operator table redesign?" }, { "msg_contents": "Andreas Karlsson <andreas@proxel.se> writes:\n> That said, I agree with that quite many of our tables right now are \n> ugly, but I prefer ugly to hard to read. For me the mix of having every \n> third row split into two fields makes the tables very hard to read. 
I \n> have a hard time seeing which rows belong to which function.\n\nDid you look at the variants without that discussed downthread?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 14 Apr 2020 10:52:59 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Poll: are people okay with function/operator table redesign?" }, { "msg_contents": "On Mon, Apr 13, 2020 at 4:29 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> I wouldn't be averse to dropping the text descriptions for operators\n> in places where they seem obvious ... but who decides what is obvious?\n\nWell, we do. We're smart, right? I don't think it's a good idea to add\nclutter to table A just because table B needs more details. What\nmatters is whether table A needs more details.\n\nThe v12 version of the \"Table 9.30. Date/Time Operators\" is not that\nwide, and is really quite clear. The new version takes 3 lines per\noperator where the old one took one. That's because you've added (1) a\ndescription of the fact that + does addition and - does subtraction,\nrepeated for each operator, and (2) explicit information about the\ninput and result types. I don't think either add much, in this case.\nThe former doesn't really need to be explained, and the latter was\nclear enough from the way the examples were presented - everything had\nexplicit types.\n\nFor more complicated cases, one thing we could do is ditch the table\nand use a <variablelist> with a separate <varlistentry> for each\noperator. 
So you could have something like:\n\n<varlistentry>\n<term><literal>date + date &arrow; timestamp</literal></term>\n<listentry>\nLengthy elocution, including an example.\n</listentry>\n</varlistentry>\n\nBut I would only advocate for this style in cases where there is\nsubstantial explaining to be done.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Tue, 14 Apr 2020 10:59:53 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Poll: are people okay with function/operator table redesign?" }, { "msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> The v12 version of the \"Table 9.30. Date/Time Operators\" is not that\n> wide, and is really quite clear.\n\nWell, no it isn't. The main nit I would pick in that claim is that\nit's far from obvious that the three examples of float8 * interval\nare all talking about the same operator; in fact, a reader would\nbe very likely to draw the false conclusion that there is an\ninteger * interval operator.\n\nThis is an aspect of the general problem that we don't have a nice\nway to deal with multiple examples in the tables. Somebody kluged\ntheir way around it here in this particular way, but I'd really like\na clearer way, because we need more examples.\n\nI would also point out that this table is quite out of step with\nthe rest of the docs in its practice of showing the results as\nthough they were typed literals. Most places that show results\njust show what you'd expect to see in a psql output column, making\nit necessary to show the result data type somewhere else.\n\n> The new version takes 3 lines per\n> operator where the old one took one. That's because you've added (1) a\n> description of the fact that + does addition and - does subtraction,\n> repeated for each operator, and (2) explicit information about the\n> input and result types. 
I don't think either add much, in this case.\n\nAs I already said, I agree about the text descriptions being of marginal\nvalue in this case. I disagree about the explicit datatypes, because the\nfloat8 * interval cases already show a hole in that argument, and surely\nwe don't want to require every example to use explicitly-typed literals\nand nothing but. Besides, what will you do for operators that take\nanyarray or the like?\n\n> For more complicated cases, one thing we could do is ditch the table\n> and use a <variablelist> with a separate <varlistentry> for each\n> operator. So you could have something like:\n> ...\n> But I would only advocate for this style in cases where there is\n> substantial explaining to be done.\n\nI'd like to have more consistency, not less. I do not think it helps\nreaders to have each page in Chapter 9 have its own idiosyncratic way of\npresenting operators/functions. The operator tables are actually almost\nthat bad, right now --- compare section 9.1 (hasn't even bothered with\na formal <table>) with tables 9.1, 9.4, 9.9, 9.12, 9.14, 9.30, 9.34,\n9.37, 9.41, 9.44. The variation in level of detail and precision is\nstriking, and not in a good way.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 14 Apr 2020 11:26:19 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Poll: are people okay with function/operator table redesign?" }, { "msg_contents": "On Tue, Apr 14, 2020 at 11:26 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Well, no it isn't. The main nit I would pick in that claim is that\n> it's far from obvious that the three examples of float8 * interval\n> are all talking about the same operator; in fact, a reader would\n> be very likely to draw the false conclusion that there is an\n> integer * interval operator.\n\nI agree that's not great. 
I think that could possibly be fixed by\nshowing all three examples in the same cell, and maybe by revising the\nchoice of examples.\n\n> I'd like to have more consistency, not less. I do not think it helps\n> readers to have each page in Chapter 9 have its own idiosyncratic way of\n> presenting operators/functions. The operator tables are actually almost\n> that bad, right now --- compare section 9.1 (hasn't even bothered with\n> a formal <table>) with tables 9.1, 9.4, 9.9, 9.12, 9.14, 9.30, 9.34,\n> 9.37, 9.41, 9.44. The variation in level of detail and precision is\n> striking, and not in a good way.\n\nWell, I don't know. Having two or even three formats is not the same\nas having infinitely many formats, and may be justified if the needs\nare sufficiently different from each other.\n\nAt any rate, if the price of more clarity and more examples is that\nthe tables become three times as long and harder to read, I am\nsomewhat inclined to think that the cure is worse than the disease. I\ncan readily see how something like table 9.10 (Other String Functions)\nmight be a mess on a narrow screen or in PDF format, but it's an\nextremely useful table on a normal-size screen in HTML format, and\npart of what makes it useful is that it's compact. Almost anything we\ndo is going to remove some of that compactness to save horizontal\nspace. Maybe that's OK, but it's sure not great. It's nice to be able\nto see more on one screen.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Tue, 14 Apr 2020 11:47:12 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Poll: are people okay with function/operator table redesign?" 
}, { "msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> At any rate, if the price of more clarity and more examples is that\n> the tables become three times as long and harder to read, I am\n> somewhat inclined to think that the cure is worse than the disease. I\n> can readily see how something like table 9.10 (Other String Functions)\n> might be a mess on a narrow screen or in PDF format, but it's an\n> extremely useful table on a normal-size screen in HTML format, and\n> part of what makes it useful is that it's compact. Almost anything we\n> do is going to remove some of that compactness to save horizontal\n> space. Maybe that's OK, but it's sure not great. It's nice to be able\n> to see more on one screen.\n\nI dunno, it doesn't look to me like 9.10 is some paragon of efficient\nuse of screen space, even with a wide window. (And my goodness it\nlooks bad if I try a window about half my usual web-browsing width.)\nMaybe I should go convert that one to see what it looks like in one of\nthe other layouts being discussed.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 14 Apr 2020 12:03:00 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Poll: are people okay with function/operator table redesign?" }, { "msg_contents": "On Mon, Apr 13, 2020 at 10:13 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> As discussed in the thread at [1], I've been working on redesigning\n> the tables we use to present SQL functions and operators. The\n> first installment of that is now up; see tables 9.30 and 9.31 at\n>\n> https://www.postgresql.org/docs/devel/functions-datetime.html\n>\n> and table 9.33 at\n>\n> https://www.postgresql.org/docs/devel/functions-enum.html\n>\n>\nAs I write this the enum headers are centered horizontally while the\ndatetime ones are left aligned. The centering doesn't do it for me. 
Too\nmuch gap and the data itself is not centered so there is a large\ndisconnect between the header and the value.\n\nThe run-on aspect of the left-aligned setup is of some concern but maybe\njust adding some left padding to the second column - and right padding to\nthe first - can provide the desired negative space without adding so much\nas to break usability.\n\n(gonna use embedded images here...)\n\n[image: image.png]\n\n[image: image.png]\nDavid J.", "msg_date": "Tue, 14 Apr 2020 15:28:13 -0700", "msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Poll: are people okay with function/operator table redesign?" }, { "msg_contents": "Hi all,\n\nSorry, I'm very new to this discussion. A colleague of mine told me I\ncould probably give my opinion on this thread.\n\nI'm sorry in advance if I'm off topic. I just wanted to mention that\nfrom Tom's proposal I played a bit with the generated HTML in order to\ntry to make things easier to read without thinking about technical\nissues for now.\n\nThe first big issue (that may have already been mentioned) in my opinion\nis that different elements are difficult to distinguish. It's difficult\nfor example to know what is the return type, what is the description, etc.\n\nI think that if the idea is to get rid of the columns, you need to make\nsure that it's easy to know which is which. Within a very short amount of\ntime, the user should be able to find what he's looking for.\n\nThe best way to achieve this is to use some styling (font style and color).\n\nAttached you will find two different options I worked on very quickly.\n\nI would be happy to give more hints on how I did this of course and why\nI chose some options. Please let me know.\n\nKind regards,\n\n\nLe 13/04/2020 à 19:13, Tom Lane a écrit :\n> As discussed in the thread at [1], I've been working on redesigning\n> the tables we use to present SQL functions and operators.
The\n> first installment of that is now up; see tables 9.30 and 9.31 at\n> \n> https://www.postgresql.org/docs/devel/functions-datetime.html\n> \n> and table 9.33 at\n> \n> https://www.postgresql.org/docs/devel/functions-enum.html\n> \n> Before I spend more time on this, I want to make sure that people\n> are happy with this line of attack. Comparing these tables to\n> the way they look in v12, they clearly take more vertical space;\n> but at least to my eye they're less cluttered and more readable.\n> They definitely scale a lot better for cases where a long function\n> description is needed, or where we'd like to have more than one\n> example. Does anyone prefer the old way, or have a better idea?\n> \n> I know that the table headings are a bit weirdly laid out; hopefully\n> that can be resolved [2].\n> \n> \t\t\tregards, tom lane\n> \n> [1] https://www.postgresql.org/message-id/flat/9326.1581457869%40sss.pgh.pa.us\n> [2] https://www.postgresql.org/message-id/6169.1586794603%40sss.pgh.pa.us\n> \n>", "msg_date": "Wed, 15 Apr 2020 17:25:58 +0200", "msg_from": "Pierre Giraud <pierre.giraud@dalibo.com>", "msg_from_op": false, "msg_subject": "Re: Poll: are people okay with function/operator table redesign?" }, { "msg_contents": "On Wed, 15 Apr 2020 at 11:26, Pierre Giraud <pierre.giraud@dalibo.com>\nwrote:\n\n\n> The best way to achieve this is to use some styling (font style and color).\n>\n> Attached you will find two different options I worked on very quickly.\n>\n\nI really like the first. Just a couple of suggestions I would make:\n\n- leave a space between the function name and (. 
Regardless of opinions on\nwhat source code should look like, your documentation has space between\neach parameter and the next one, and between the ) and the -> and the ->.\nand the return type so it seems crowded not to have space between the\nfunction name and the (.\n- At this point it's not really a table any more; I would get rid of the\nlines, maybe tweak the spacing, and possibly use <dl> <dt> <dd> (definition\nlist) rather than table-related HTML elements. See\nhttps://developer.mozilla.org/en-US/docs/Web/HTML/Element/dl.\n\nI think the bolding really makes stand out the crucial parts one needs to\nfind.", "msg_date": "Wed, 15 Apr 2020 11:43:24 -0400", "msg_from": "Isaac Morland <isaac.morland@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Poll: are people okay with function/operator table redesign?" }, { "msg_contents": "st 15. 4.
2020 v 17:43 odesílatel Isaac Morland <isaac.morland@gmail.com>\nnapsal:\n\n> On Wed, 15 Apr 2020 at 11:26, Pierre Giraud <pierre.giraud@dalibo.com>\n> wrote:\n>\n>\n>> The best way to achieve this is to use some styling (font style and\n>> color).\n>>\n>> Attached you will find two different options I worked on very quickly.\n>>\n>\n> I really like the first. Just a couple of suggestions I would make:\n>\n\nyes, it is very well readable\n\nPavel\n\n\n> - leave a space between the function name and (. Regardless of opinions on\n> what source code should look like, your documentation has space between\n> each parameter and the next one, and between the ) and the -> and the ->.\n> and the return type so it seems crowded not to have space between the\n> function name and the (.\n> - At this point it's not really a table any more; I would get rid of the\n> lines, maybe tweak the spacing, and possibly use <dl> <dt> <dd> (definition\n> list) rather than table-related HTML elements. See\n> https://developer.mozilla.org/en-US/docs/Web/HTML/Element/dl.\n>\n> I think the bolding really makes stand out the crucial parts one needs to\n> find.\n>\n>", "msg_date": "Wed, 15 Apr 2020 17:53:54 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Poll: are people okay with function/operator table redesign?" }, { "msg_contents": "On Wed, Apr 15, 2020 at 11:54 AM Pavel Stehule <pavel.stehule@gmail.com> wrote:\n> st 15. 4. 2020 v 17:43 odesílatel Isaac Morland <isaac.morland@gmail.com> napsal:\n>> On Wed, 15 Apr 2020 at 11:26, Pierre Giraud <pierre.giraud@dalibo.com> wrote:\n>>> The best way to achieve this is to use some styling (font style and color).\n>>>\n>>> Attached you will find two different options I worked on very quickly.\n>>\n>> I really like the first. Just a couple of suggestions I would make:\n>\n> yes, it is very well readable\n\n+1.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Wed, 15 Apr 2020 12:04:34 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Poll: are people okay with function/operator table redesign?" }, { "msg_contents": "Is there a way to get a heavier line between each function? It would be\nhelpful to have a clearer demarcation of what belongs to each function.\n\nOn Wed, Apr 15, 2020 at 9:04 AM Robert Haas <robertmhaas@gmail.com> wrote:\n\n> On Wed, Apr 15, 2020 at 11:54 AM Pavel Stehule <pavel.stehule@gmail.com>\n> wrote:\n> > st 15. 4.
2020 v 17:43 odesílatel Isaac Morland <isaac.morland@gmail.com>\n> napsal:\n> >> On Wed, 15 Apr 2020 at 11:26, Pierre Giraud <pierre.giraud@dalibo.com>\n> wrote:\n> >>> The best way to achieve this is to use some styling (font style and\n> color).\n> >>>\n> >>> Attached you will find two different options I worked on very quickly.\n> >>\n> >> I really like the first. Just a couple of suggestions I would make:\n> >\n> > yes, it is very well readable\n>\n> +1.\n>\n> --\n> Robert Haas\n> EnterpriseDB: http://www.enterprisedb.com\n> The Enterprise PostgreSQL Company\n>\n>\n>", "msg_date": "Wed, 15 Apr 2020 10:10:59 -0700", "msg_from": "Steven Pousty <steve.pousty@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Poll: are people okay with function/operator table redesign?" }, { "msg_contents": "Steven Pousty <steve.pousty@gmail.com> writes:\n> Is there a way to get a heavier line between each function?
It would be\n> helpful to have a clearer demarcation of what belongs to each function.\n\nThe first alternative I posted at\n\nhttps://www.postgresql.org/message-id/31833.1586817876%40sss.pgh.pa.us\n\nseems like it would accomplish that pretty well, by having lines\n*only* between functions. The last couple of things that have been\nposted seem way more cluttered than that one.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 15 Apr 2020 13:19:07 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Poll: are people okay with function/operator table redesign?" }, { "msg_contents": "On Mon, Apr 13, 2020 at 10:13 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> As discussed in the thread at [1], I've been working on redesigning\n> the tables we use to present SQL functions and operators. The\n> first installment of that is now up; see tables 9.30 and 9.31 at\n>\n> https://www.postgresql.org/docs/devel/functions-datetime.html\n>\n> and table 9.33 at\n>\n> https://www.postgresql.org/docs/devel/functions-enum.html\n>\n>\nThe centering of the headers doesn't do it for me. Too much gap and the\ndata itself is not centered so there is a large disconnect between the\nheaders and the values.\n\nThe run-on aspect of the left-aligned setup is of some concern but maybe\njust adding some left padding to the second column - and right padding to\nthe first - can provide the desired negative space without adding so much\nas to break usability.\n\nDavid J.", "msg_date": "Wed, 15 Apr 2020 11:56:31 -0700", "msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Poll: are people okay with function/operator table redesign?" }, { "msg_contents": "As I threatened to do earlier, I made a pass at converting table 9.10\nto a couple of the styles under discussion.
This actually works really well,\nand would work even better (IMO) if we could get rid of the inter-row\nand inter-column rules within a function entry. I failed to\naccomplish that with rowsep/colsep annotations, but from remarks\nupthread I suppose there might be a CSS way to accomplish it. (But\nthe rowsep/colsep annotations *do* work in PDF output, so I kept them;\nthat means we only need a CSS fix and not some kind of flow-object\nmagic for PDF.)\n\nTo allow direct comparison of these 9.10 images against the situation\nin HEAD, I've also attached an extract of 9.10 as rendered by my\nbrowser with \"STYLE=website\". As you can see this is *not* quite\nidentical to how it renders on postgresql.org, so there is still some\nunexplained differential in font or margins or something. But if you\nlook at those three PNGs you can see that either v1 or v2 has a pretty\nsubstantial advantage over HEAD in terms of the amount of space\nneeded. v2 would be even further ahead if we could eliminate some of\nthe vertical space around the intra-function row split, which again\nmight be doable with CSS magic.\n\nThe main disadvantage I can see to the v2 design is that we're back\nto having two <rows> per function, which is inevitably going to result\nin PDF builds putting page breaks between those rows. But you can't\nhave everything ... and maybe we could find a way to discourage such\nbreaks if we tried.\n\nAnother issue is that v2 won't adapt real well to operator tables;\nthe operator name won't be at the left. I don't have a lot of faith\nin the proposal to fix that with font tricks. Maybe we could stick\nto something close to the layout that table 9.30 has in HEAD (ie\nrepeating the operator name in column 1), since we won't have long\noperator names messing up the format. 
Again, CSS'ing our way\nout of the internal lines and extra vertical space within a single\nlogical table cell would make that layout look nicer.\n\nOn balance I quite like the v2 layout and would prefer to move forward\nwith that, assuming we can solve the remaining issues via CSS or style\nsheets.\n\nIn addition to screenshots, I've attached patches against HEAD that\nconvert both tables 9.10 and 9.33 into v1 and v2 styles.\n\n\t\t\tregards, tom lane", "msg_date": "Wed, 15 Apr 2020 18:18:54 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Poll: are people okay with function/operator table redesign?" }, { "msg_contents": "At Wed, 15 Apr 2020 12:04:34 -0400, Robert Haas <robertmhaas@gmail.com> wrote in \n> On Wed, Apr 15, 2020 at 11:54 AM Pavel Stehule <pavel.stehule@gmail.com> wrote:\n> > st 15. 4. 2020 v 17:43 odesílatel Isaac Morland <isaac.morland@gmail.com> napsal:\n> >> On Wed, 15 Apr 2020 at 11:26, Pierre Giraud <pierre.giraud@dalibo.com> wrote:\n> >>> The best way to achieve this is to use some styling (font style and color).\n> >>>\n> >>> Attached you will find two different options I worked on very quickly.\n> >>\n> >> I really like the first. Just a couple of suggestions I would make:\n> >\n> > yes, it is very well readable\n> \n> +1.\n\n+1.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Thu, 16 Apr 2020 14:18:51 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Poll: are people okay with function/operator table redesign?" }, { "msg_contents": "Le 16/04/2020 � 00:18, Tom Lane a �crit�:\n> As I threatened to do earlier, I made a pass at converting table 9.10\n> to a couple of the styles under discussion. 
(This is just a\n> draft-quality patch, so it might have some minor bugs --- the point\n> is just to see what these styles look like.)\n> \n> I've concluded after looking around that the ideas involving not having\n> a <table> at all, but just a <variablelist> or the like, are not very\n> well-advised. That would eliminate, or at least greatly degrade, the\n> visual distinction between the per-function material and the surrounding\n> commentary. Which does not seem like a winner to me; for example it\n> would make it quite hard to skip over the detailed material when you're\n> just trying to skim the docs.\n> \n> We did have a number of people suggesting that just reordering things as\n> \"description, signature, examples\" might be a good idea, so I gave that\n> a try; attached is a rendition of a portion of 9.10 in that style (the\n> \"v1\" image). It's not bad, but there's still going to be a lot of\n> wasted whitespace in tables that include even one long function name.\n> (9.10's longest is \"regexp_split_to_array\", so it's showing this problem\n> significantly.)\n> \n> I also experimented with Jonathan's idea of dropping the separate\n> function name and allowing the function signature to span left into\n> that column -- see \"v2\" images. This actually works really well,\n> and would work even better (IMO) if we could get rid of the inter-row\n> and inter-column rules within a function entry. I failed to\n> accomplish that with rowsep/colsep annotations, but from remarks\n> upthread I suppose there might be a CSS way to accomplish it. (But\n> the rowsep/colsep annotations *do* work in PDF output, so I kept them;\n> that means we only need a CSS fix and not some kind of flow-object\n> magic for PDF.)\n> \n> To allow direct comparison of these 9.10 images against the situation\n> in HEAD, I've also attached an extract of 9.10 as rendered by my\n> browser with \"STYLE=website\". 
As you can see this is *not* quite\n> identical to how it renders on postgresql.org, so there is still some\n> unexplained differential in font or margins or something. But if you\n> look at those three PNGs you can see that either v1 or v2 has a pretty\n> substantial advantage over HEAD in terms of the amount of space\n> needed. v2 would be even further ahead if we could eliminate some of\n> the vertical space around the intra-function row split, which again\n> might be doable with CSS magic.\n> \n> The main disadvantage I can see to the v2 design is that we're back\n> to having two <rows> per function, which is inevitably going to result\n> in PDF builds putting page breaks between those rows. But you can't\n> have everything ... and maybe we could find a way to discourage such\n> breaks if we tried.\n\nWhat about putting everything into one <table row> and use a block with\nsome left padding/margin for description + example.\nThis would solve the PDF page break issue as well as the column\nseparation border one.\n\nThe screenshot attached uses a <dl> tag for the descrition/example block.\n\n> \n> Another issue is that v2 won't adapt real well to operator tables;\n> the operator name won't be at the left. I don't have a lot of faith\n> in the proposal to fix that with font tricks. Maybe we could stick\n> to something close to the layout that table 9.30 has in HEAD (ie\n> repeating the operator name in column 1), since we won't have long\n> operator names messing up the format. 
Again, CSS'ing our way\n> out of the internal lines and extra vertical space within a single\n> logical table cell would make that layout look nicer.\n> \n> On balance I quite like the v2 layout and would prefer to move forward\n> with that, assuming we can solve the remaining issues via CSS or style\n> sheets.\n> \n> In addition to screenshots, I've attached patches against HEAD that\n> convert both tables 9.10 and 9.33 into v1 and v2 styles.\n> \n> \t\t\tregards, tom lane\n>", "msg_date": "Thu, 16 Apr 2020 08:26:54 +0200", "msg_from": "Pierre Giraud <pierre.giraud@dalibo.com>", "msg_from_op": false, "msg_subject": "Re: Poll: are people okay with function/operator table redesign?" }, { "msg_contents": "Pierre Giraud <pierre.giraud@dalibo.com> writes:\n> Le 16/04/2020 à 00:18, Tom Lane a écrit :\n>> The main disadvantage I can see to the v2 design is that we're back\n>> to having two <rows> per function, which is inevitably going to result\n>> in PDF builds putting page breaks between those rows. But you can't\n>> have everything ... and maybe we could find a way to discourage such\n>> breaks if we tried.\n\nFurther experimentation shows that the PDF toolchain is perfectly willing\nto put a page break *within* a multi-line <row>; if there is any\npreference to break between rows instead, it's pretty weak. 
So that\nargument is a red herring and we shouldn't waste time chasing it.\nHowever, there'd still be some advantage in not being dependent on CSS\nhackery to make it look nice in HTML.\n\nWhat we're down to wanting, at this point, is basically a para with\nhanging indent.\n\n> What about putting everything into one <table row> and use a block with\n> some left padding/margin for description + example.\n> This would solve the PDF page break issue as well as the column\n> separation border one.\n> The screenshot attached uses a <dl> tag for the descrition/example block.\n\nThat looks about right, perhaps, but could you be a little clearer about\nhow you accomplished that?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 16 Apr 2020 10:43:13 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Poll: are people okay with function/operator table redesign?" }, { "msg_contents": "Le 16/04/2020 à 16:43, Tom Lane a écrit :\n> Pierre Giraud <pierre.giraud@dalibo.com> writes:\n>> Le 16/04/2020 à 00:18, Tom Lane a écrit :\n>>> The main disadvantage I can see to the v2 design is that we're back\n>>> to having two <rows> per function, which is inevitably going to result\n>>> in PDF builds putting page breaks between those rows. But you can't\n>>> have everything ... and maybe we could find a way to discourage such\n>>> breaks if we tried.\n> \n> Further experimentation shows that the PDF toolchain is perfectly willing\n> to put a page break *within* a multi-line <row>; if there is any\n> preference to break between rows instead, it's pretty weak. 
So that\n> argument is a red herring and we shouldn't waste time chasing it.\n> However, there'd still be some advantage in not being dependent on CSS\n> hackery to make it look nice in HTML.\n> \n> What we're down to wanting, at this point, is basically a para with\n> hanging indent.\n> \n>> What about putting everything into one <table row> and use a block with\n>> some left padding/margin for description + example.\n>> This would solve the PDF page break issue as well as the column\n>> separation border one.\n>> The screenshot attached uses a <dl> tag for the descrition/example block.\n> \n> That looks about right, perhaps, but could you be a little clearer about\n> how you accomplished that?\n\nAttached you will find the HTML structure with associated styles.\nSorry I haven't tried to do this from the DocBook sources.\nI hope this helps though.\n\nRegards", "msg_date": "Thu, 16 Apr 2020 17:12:28 +0200", "msg_from": "Pierre Giraud <pierre.giraud@dalibo.com>", "msg_from_op": false, "msg_subject": "Re: Poll: are people okay with function/operator table redesign?" }, { "msg_contents": "Pierre Giraud <pierre.giraud@dalibo.com> writes:\n> Le 16/04/2020 à 16:43, Tom Lane a écrit :\n>> Pierre Giraud <pierre.giraud@dalibo.com> writes:\n>>> The screenshot attached uses a <dl> tag for the descrition/example block.\n\n>> That looks about right, perhaps, but could you be a little clearer about\n>> how you accomplished that?\n\n> Attached you will find the HTML structure with associated styles.\n> Sorry I haven't tried to do this from the DocBook sources.\n> I hope this helps though.\n\nAfter a bit of poking at it, I couldn't find another way to do that\nthan using a <variablelist> structure. Which is an annoying amount\nof markup to be adding to each table cell, but I guess we could live\nwith it. 
A bigger problem is that docbook applies styles to the\n<dl> structure that, at least by default, add a LOT of vertical space.\nDoesn't seem real workable unless we can undo that.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 16 Apr 2020 13:03:36 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Poll: are people okay with function/operator table redesign?" }, { "msg_contents": "On 4/14/20 4:52 PM, Tom Lane wrote:\n> Andreas Karlsson <andreas@proxel.se> writes:\n>> That said, I agree with that quite many of our tables right now are\n>> ugly, but I prefer ugly to hard to read. For me the mix of having every\n>> third row split into two fields makes the tables very hard to read. I\n>> have a hard time seeing which rows belong to which function.\n> \n> Did you look at the variants without that discussed downthread?\n\nYeah, I did some of them are quite readable, for example your latest two \nscreenshots of table 9.10.\n\nAndreas\n\n\n", "msg_date": "Fri, 17 Apr 2020 00:05:33 +0200", "msg_from": "Andreas Karlsson <andreas@proxel.se>", "msg_from_op": false, "msg_subject": "Re: Poll: are people okay with function/operator table redesign?" }, { "msg_contents": "v1 is good.\n\nI like your v2 even better. If it becomes possible to remove or soften\nthe \"inter-row\" horizontal line with CSS tricks afterwards, that would\nbe swell, but even without that, I cast my vote to using this table\nformat.\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Thu, 16 Apr 2020 19:03:04 -0400", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Poll: are people okay with function/operator table redesign?" }, { "msg_contents": "Alvaro Herrera <alvherre@2ndquadrant.com> writes:\n> I like your v2 even better. 
If it becomes possible to remove or soften\n> the \"inter-row\" horizontal line with CSS tricks afterwards, that would\n> be swell, but even without that, I cast my vote to using this table\n> format.\n\nI eventually figured out that the approved way to do per-table-entry\ncustomization is to attach \"role\" properties to the DocBook elements,\nand then key off the role names in applying formatting changes in\nthe customization layer. So attached is a v3 that handles the desired\nformatting changes by applying a hanging indent to table <entry>\ncontents if the entry is marked with role=\"functableentry\". It may\nwell be possible to do this in a cleaner fashion, but this seems\ngood enough for discussion.\n\nI changed table 9.30 (Date/Time Operators) to this style, doing it\nexactly the same way as functions, just to see what it'd look like.\nI'm not sure if this is OK or if we want a separate column with\njust the operator name at the left --- it seems a little bit hard\nto spot the operator you want, but not impossible. Thoughts?\n\nAttached are screenshots of the same segment of table 9.10 as before\nand of the initial portion of 9.30, the patch against HEAD to produce\nthese, and a hacky patch on the website's main.css to get it to go\nalong. Without the last you just get all the subsidiary stuff\nleft-justified if you build with STYLE=website, which isn't impossibly\nunreadable but it's not the desired presentation.\n\nI didn't include any screenshots of the PDF rendering, but it looks\nfine.\n\n\t\t\tregards, tom lane", "msg_date": "Thu, 16 Apr 2020 20:25:09 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Poll: are people okay with function/operator table redesign?" 
}, { "msg_contents": "On Thu, Apr 16, 2020 at 8:25 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Attached are screenshots of the same segment of table 9.10 as before\n> and of the initial portion of 9.30, the patch against HEAD to produce\n> these, and a hacky patch on the website's main.css to get it to go\n> along. Without the last you just get all the subsidiary stuff\n> left-justified if you build with STYLE=website, which isn't impossibly\n> unreadable but it's not the desired presentation.\n\nThese seem very nice, and way more readable than the version with\nwhich you started the thread.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Fri, 17 Apr 2020 14:26:37 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Poll: are people okay with function/operator table redesign?" }, { "msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> On Thu, Apr 16, 2020 at 8:25 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> Attached are screenshots of the same segment of table 9.10 as before\n>> and of the initial portion of 9.30, the patch against HEAD to produce\n>> these, and a hacky patch on the website's main.css to get it to go\n>> along. Without the last you just get all the subsidiary stuff\n>> left-justified if you build with STYLE=website, which isn't impossibly\n>> unreadable but it's not the desired presentation.\n\n> These seem very nice, and way more readable than the version with\n> which you started the thread.\n\nGlad you like 'em ;-). Do you have an opinion about what to do\nwith the operator tables --- ie do we need a column with the operator\nname at the left?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 17 Apr 2020 14:38:15 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Poll: are people okay with function/operator table redesign?" 
}, { "msg_contents": "On Fri, Apr 17, 2020 at 2:38 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Glad you like 'em ;-). Do you have an opinion about what to do\n> with the operator tables --- ie do we need a column with the operator\n> name at the left?\n\nWell, if the first row says date + date -> date, then I don't think we\nalso need another column to say that we're talking about +\n\nSeems redundant.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Fri, 17 Apr 2020 15:16:46 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Poll: are people okay with function/operator table redesign?" }, { "msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> On Fri, Apr 17, 2020 at 2:38 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> Glad you like 'em ;-). Do you have an opinion about what to do\n>> with the operator tables --- ie do we need a column with the operator\n>> name at the left?\n\n> Well, if the first row says date + date -> date, then I don't think we\n> also need another column to say that we're talking about +\n\n> Seems redundant.\n\nWell, sure it's redundant, the same way an index is redundant.\nQuestion is whether it makes it easier to find what you're after.\n\nComparing this to what is in table 9.30 as of HEAD [1], it does\nseem like the operator column in the latter is a bit busy/redundant.\nPerhaps it'd be less so if we used the morerows trick to have only\none occurrence of each operator name in the first column. 
But that\nwould be a little bit of a pain to maintain, so I'm not sure it's\nworth the trouble.\n\nAnother advantage of handling functions and operators in exactly\nthe same format is that we won't need to do something weird for\ntables 9.9 and 9.11, which include both.\n\nFor the moment I'll press on without including that column; we can\nadd it later without a huge amount of pain if we decide we want it.\n\nOn the other point of dispute about the operator tables: for the\nmoment I'm leaning towards keeping the text descriptions. Surveying\nthe existing tables, the *only* two that lack text descriptions now\nare this one and the as-yet-unnumbered table in 9.1 for AND/OR/NOT.\n(Actually, that one calls itself a truth table not an operator\ndefinition table, so maybe we should leave it alone.) While there\nis a reasonable argument that 9.1 Comparison Operators' descriptions\nare all obvious, it's hard to make that argument for any other tables.\nSo I think the fact that 9.30 lacked such up to now is an aberration\nnot a good principle to follow. Even in 9.30, the fact that, say,\ndate + integer interprets the integer as so-many-days isn't really\nso blindingly obvious that it doesn't need documented. In another\nuniverse we might've made that count as seconds and had the result\ntype be timestamp, the way it works for date + interval.\n\n\t\t\tregards, tom lane\n\n[1] https://www.postgresql.org/docs/devel/functions-datetime.html\n\n\n", "msg_date": "Fri, 17 Apr 2020 15:58:04 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Poll: are people okay with function/operator table redesign?" }, { "msg_contents": "On Fri, Apr 17, 2020 at 3:58 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> On the other point of dispute about the operator tables: for the\n> moment I'm leaning towards keeping the text descriptions.\n\nI mostly suggested nuking them just to try to make the table more\nreadable. 
But since you've found another (and better) solution to that\nproblem, I withdraw that suggestion.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Fri, 17 Apr 2020 16:14:04 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Poll: are people okay with function/operator table redesign?" }, { "msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> On Fri, Apr 17, 2020 at 3:58 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> On the other point of dispute about the operator tables: for the\n>> moment I'm leaning towards keeping the text descriptions.\n\n> I mostly suggested nuking them just to try to make the table more\n> readable. But since you've found another (and better) solution to that\n> problem, I withdraw that suggestion.\n\nCool, then we're all on the same page. I shall press forward.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 17 Apr 2020 16:22:26 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Poll: are people okay with function/operator table redesign?" }, { "msg_contents": "On Fri, Apr 17, 2020 at 11:38 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Robert Haas <robertmhaas@gmail.com> writes:\n> > On Thu, Apr 16, 2020 at 8:25 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >> Attached are screenshots of the same segment of table 9.10 as before\n> >> and of the initial portion of 9.30, the patch against HEAD to produce\n> >> these, and a hacky patch on the website's main.css to get it to go\n> >> along. Without the last you just get all the subsidiary stuff\n> >> left-justified if you build with STYLE=website, which isn't impossibly\n> >> unreadable but it's not the desired presentation.\n>\n> > These seem very nice, and way more readable than the version with\n> > which you started the thread.\n>\n>\nI too like the layout result.\n\n> Glad you like 'em ;-). 
Do you have an opinion about what to do\n> with the operator tables --- ie do we need a column with the operator\n> name at the left?\n>\n>\nI feel like writing them as:\n\n+ (date, integer) -> date\n\nmakes more sense as they are mainly sorted on the operator symbol as\nopposed to the left operand.\n\nI think the description line is beneficial, and easy enough to skim over\nfor the trained eye just looking for a refresher on the example syntax.\n\nDavid J.\n", "msg_date": "Fri, 17 Apr 2020 15:30:33 -0700", "msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Poll: are people okay with function/operator table redesign?" }, { "msg_contents": "\"David G. 
Johnston\" <david.g.johnston@gmail.com> writes:\n> I feel like writing them as:\n> + (date, integer) -> date\n> makes more sense as they are mainly sorted on the operator symbol as\n> opposed to the left operand.\n\nHmm ... we do use that syntax in some fairly-obscure places like\nALTER OPERATOR, but I'm afraid that novice users would just be\ncompletely befuddled. Maybe the examples would be enough to clarify,\nbut I'm not convinced. Especially not for unary operators, where\nALTER OPERATOR would have us write \"- (NONE, integer)\".\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 17 Apr 2020 19:04:51 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Poll: are people okay with function/operator table redesign?" }, { "msg_contents": "On Fri, Apr 17, 2020 at 4:04 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Especially not for unary operators, where\n> ALTER OPERATOR would have us write \"- (NONE, integer)\".\n>\n\nI'd drop the parens for unary and just write \"- integer\"\n\nIt is a bit geeky but then again SQL writers are not typically computer\nlanguage novices so operators should be comfortable for them and this isn't\nthat off-the-wall. But I agree with the concern.\n\nDavid J.\n", "msg_date": "Fri, 17 Apr 2020 16:08:19 -0700", "msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Poll: are people okay with function/operator table redesign?" }, { "msg_contents": "\"David G. 
Johnston\" <david.g.johnston@gmail.com> writes:\n> On Fri, Apr 17, 2020 at 4:04 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> Especially not for unary operators, where\n>> ALTER OPERATOR would have us write \"- (NONE, integer)\".\n\n> I'd drop the parens for unary and just write \"- integer\"\n\nWe do have some postfix operators still ... although it looks like\nthere's only one in core. In any case, the signature line is *the*\nthing that is supposed to specify what the syntax is, so I'm not\ntoo pleased with using an ambiguous notation for it.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 17 Apr 2020 19:16:59 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Poll: are people okay with function/operator table redesign?" }, { "msg_contents": "On Fri, Apr 17, 2020 at 4:17 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> \"David G. Johnston\" <david.g.johnston@gmail.com> writes:\n> > On Fri, Apr 17, 2020 at 4:04 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >> Especially not for unary operators, where\n> >> ALTER OPERATOR would have us write \"- (NONE, integer)\".\n>\n> > I'd drop the parens for unary and just write \"- integer\"\n>\n> We do have some postfix operators still ... although it looks like\n> there's only one in core. In any case, the signature line is *the*\n> thing that is supposed to specify what the syntax is, so I'm not\n> too pleased with using an ambiguous notation for it.\n>\n\nNeither:\n\n- (NONE, integer)\n\nnor\n\n! (integer, NONE)\n\nseem bad, and do make very obvious how they are different.\n\nThe left margin scanning ability for the symbol (hey, I have an expression\nhere that uses @>, what does that do?) seems worth the bit of novelty\nrequired.\n\nDavid J.\n", "msg_date": "Fri, 17 Apr 2020 16:40:25 -0700", "msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Poll: are people okay with function/operator table redesign?" }, { "msg_contents": "\"David G. Johnston\" <david.g.johnston@gmail.com> writes:\n> On Fri, Apr 17, 2020 at 4:17 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> We do have some postfix operators still ... although it looks like\n>> there's only one in core. In any case, the signature line is *the*\n>> thing that is supposed to specify what the syntax is, so I'm not\n>> too pleased with using an ambiguous notation for it.\n\n> Neither:\n> - (NONE, integer)\n> nor\n> ! (integer, NONE)\n> seem bad, and do make very obvious how they are different.\n\n> The left margin scanning ability for the symbol (hey, I have an expression\n> here that uses @>, what does that do?) seems worth the bit of novelty\n> required.\n\nMeh. 
If we're worried about that, personally I'd much rather put\nback the separate left-hand column with just the operator name.\n\nWe could also experiment with bold-facing the operator names,\nas somebody suggested upthread.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 17 Apr 2020 20:27:25 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Poll: are people okay with function/operator table redesign?" }, { "msg_contents": "On Fri, Apr 17, 2020 at 6:30 PM David G. Johnston\n<david.g.johnston@gmail.com> wrote:\n> I feel like writing them as:\n>\n> + (date, integer) -> date\n>\n> makes more sense as they are mainly sorted on the operator symbol as opposed to the left operand.\n\nI thought about that, too, but I think the way Tom did it is better.\nIt's much more natural to see it using the syntax with which it will\nactually be invoked.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Sat, 18 Apr 2020 08:27:12 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Poll: are people okay with function/operator table redesign?" }, { "msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> On Fri, Apr 17, 2020 at 6:30 PM David G. Johnston\n> <david.g.johnston@gmail.com> wrote:\n>> I feel like writing them as:\n>> + (date, integer) -> date\n>> makes more sense as they are mainly sorted on the operator symbol as opposed to the left operand.\n\n> I thought about that, too, but I think the way Tom did it is better.\n> It's much more natural to see it using the syntax with which it will\n> actually be invoked.\n\nJust for the record, I experimented with putting back an \"operator name\"\ncolumn, as attached. 
I think it could be argued either way whether this\nis an improvement or not.\n\nSome notes:\n\n* The column seems annoyingly wide, but the only way to make it narrower\nis to narrow or eliminate the column title, which could be confusing.\nAlso, if there's not a fair amount of whitespace, it looks as if the\ninitial name is part of the signature, which is *really* confusing,\ncf second screenshot. (I'm not sure why the vertical rule is rendered\nso much more weakly in this case, but it is.)\n\n* I also tried it with valign=\"middle\" to center the operator name among\nits entries. This was *not* an improvement, it largely breaks the\nability to see which entries belong to the name.\n\n\t\t\tregards, tom lane", "msg_date": "Sat, 18 Apr 2020 16:36:17 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Poll: are people okay with function/operator table redesign?" }, { "msg_contents": "On Sat, 18 Apr 2020 at 22:36, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Robert Haas <robertmhaas@gmail.com> writes:\n> > On Fri, Apr 17, 2020 at 6:30 PM David G. Johnston\n> > <david.g.johnston@gmail.com> wrote:\n> >> I feel like writing them as:\n> >> + (date, integer) -> date\n> >> makes more sense as they are mainly sorted on the operator symbol as\n> opposed to the left operand.\n>\n> > I thought about that, too, but I think the way Tom did it is better.\n> > It's much more natural to see it using the syntax with which it will\n> > actually be invoked.\n>\n> Just for the record, I experimented with putting back an \"operator name\"\n> column, as attached. 
I think it could be argued either way whether this\n> is an improvement or not.\n>\n> Some notes:\n>\n> * The column seems annoyingly wide, but the only way to make it narrower\n> is to narrow or eliminate the column title, which could be confusing.\n> Also, if there's not a fair amount of whitespace, it looks as if the\n> initial name is part of the signature, which is *really* confusing,\n> cf second screenshot. (I'm not sure why the vertical rule is rendered\n> so much more weakly in this case, but it is.)\n>\n> * I also tried it with valign=\"middle\" to center the operator name among\n> its entries. This was *not* an improvement, it largely breaks the\n> ability to see which entries belong to the name.\n>\n\nfirst variant looks better, because column with operator is wider.\n\nMaybe it can look better if a content will be places to mid point. In left\nupper corner it is less readable.\n\nRegards\n\nPavel\n\n\n> regards, tom lane\n>\n>\n", "msg_date": "Sun, 19 Apr 2020 05:40:01 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Poll: are people okay with function/operator table redesign?" }, { "msg_contents": "On 2020-04-13 22:33, Tom Lane wrote:\n>> Maybe we're just trying to shoehorn too much information into a single\n>> table.\n> Yeah, back at the beginning of this exercise, Alvaro wondered aloud\n> if we should go to something other than tables altogether. I dunno\n> what that'd look like though.\n\nYeah, after reading all this, my conclusion is also, probably tables are \nnot the right solution.\n\nA variablelist/definition list would be the next thing to try in my mind.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Sun, 19 Apr 2020 12:29:30 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Poll: are people okay with function/operator table redesign?" 
}, { "msg_contents": "On 2020-04-16 08:26, Pierre Giraud wrote:\n> The screenshot attached uses a <dl> tag for the descrition/example block.\n\nI like this better, but then you don't really need the table because you \ncan just make the whole thing a definition list.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Sun, 19 Apr 2020 12:36:17 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Poll: are people okay with function/operator table redesign?" }, { "msg_contents": "On 2020-04-17 02:25, Tom Lane wrote:\n> I eventually figured out that the approved way to do per-table-entry\n> customization is to attach \"role\" properties to the DocBook elements,\n> and then key off the role names in applying formatting changes in\n> the customization layer. So attached is a v3 that handles the desired\n> formatting changes by applying a hanging indent to table <entry>\n> contents if the entry is marked with role=\"functableentry\". It may\n> well be possible to do this in a cleaner fashion, but this seems\n> good enough for discussion.\n\nThis scares me in terms of maintainability of both the toolchain and the \nmarkup. Table formatting is already incredibly fragile, and here we \njust keep poking it until it looks a certain way instead of thinking \nabout semantic markup.\n\nA good old definition list of the kind\n\nsynopsis\n\n explanation\n\n example or two\n\nwould be much easier to maintain on all fronts. 
And we could for \nexample link directly to a function, which is currently not really possible.\n\nIf we want to draw a box around this and change the spacing, we can do \nthat with CSS.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Sun, 19 Apr 2020 12:46:32 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Poll: are people okay with function/operator table redesign?" }, { "msg_contents": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com> writes:\n> This scares me in terms of maintainability of both the toolchain and the \n> markup. Table formatting is already incredibly fragile, and here we \n> just keep poking it until it looks a certain way instead of thinking \n> about semantic markup.\n\nThat's a fair criticism, but ...\n\n> A good old definition list of the kind\n> synopsis\n> explanation\n> example or two\n> would be much easier to maintain on all fronts. And we could for \n> example link directly to a function, which is currently not really possible.\n> If we want to draw a box around this and change the spacing, we can do \n> that with CSS.\n\n... \"we can fix it with CSS\" is just as much reliance on toolchain.\n\nIn any case, I reject the idea that we should just drop the table\nmarkup altogether and use inline variablelists. In most of these\nsections there is a very clear separation between the table contents\n(with per-function or per-operator details) and the surrounding\ncommentary, which deals with more general concerns. That's a useful\nseparation for both readers and authors, so we need to preserve it\nin some form, but the standard rendering of variablelists won't.\n(Our existing major use of variablelists, in the GUC chapter, works\naround this basically by not having any \"surrounding commentary\"\n... 
but that solution doesn't work here.)\n\nThere is also value in being able to say things like \"see Table m.n\nfor the available operators for type foo\".\n\nIf somebody's got an idea how to obtain this painfully-agreed-to\nvisual appearance from more robust markup, I'm all ears. This\nstuff is a bit outside my skill set, so I don't claim to have\nfound the best possible implementation.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sun, 19 Apr 2020 09:23:01 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Poll: are people okay with function/operator table redesign?" }, { "msg_contents": "On Sun, 19 Apr 2020 at 09:23, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n\n> In any case, I reject the idea that we should just drop the table\n> markup altogether and use inline variablelists. In most of these\n> sections there is a very clear separation between the table contents\n> (with per-function or per-operator details) and the surrounding\n> commentary, which deals with more general concerns. That's a useful\n> separation for both readers and authors, so we need to preserve it\n> in some form, but the standard rendering of variablelists won't.\n> (Our existing major use of variablelists, in the GUC chapter, works\n> around this basically by not having any \"surrounding commentary\"\n> ... but that solution doesn't work here.)\n>\n> There is also value in being able to say things like \"see Table m.n\n> for the available operators for type foo\".\n>\n\nThe HTML definition list under discussion looks like this:\n\n<dl>\n <dt> term 1 </dt>\n <dd> description 1 </dd>\n <dt> term 2 </dt>\n <dd> description 2a </dd>\n <dd> description 2b </dd>\n</dl>\n\nSo the enclosing <dl> element has the same role in the overall document as\nthe <table>, and could be styled to set it apart from the main text and\nmake it clear that it is a single unit (and at least in principle could be\nincluded in the \"table\" numbering). 
In the function/operator listing use\ncase, there would be one <dd> for the description and a <dd> for each\nexample. See:\n\nhttps://developer.mozilla.org/en-US/docs/Web/HTML/Element/dl\n\nIf we were only concerned with HTML output then based on the desired\nsemantics and appearance I would recommend <dl> without hesitation. Because\nof the need to produce PDF as well and my lack of knowledge of the Postgres\ndocumentation build process, I can't be so certain but I still suspect <dl>\nto be the best approach.\n\nOn Sun, 19 Apr 2020 at 09:23, Tom Lane <tgl@sss.pgh.pa.us> wrote: \nIn any case, I reject the idea that we should just drop the table\nmarkup altogether and use inline variablelists.  In most of these\nsections there is a very clear separation between the table contents\n(with per-function or per-operator details) and the surrounding\ncommentary, which deals with more general concerns.  That's a useful\nseparation for both readers and authors, so we need to preserve it\nin some form, but the standard rendering of variablelists won't.\n(Our existing major use of variablelists, in the GUC chapter, works\naround this basically by not having any \"surrounding commentary\"\n... but that solution doesn't work here.)\n\nThere is also value in being able to say things like \"see Table m.n\nfor the available operators for type foo\".The HTML definition list under discussion looks like this:<dl>    <dt> term 1 </dt>    <dd> description 1 </dd>    <dt> term 2 </dt>    <dd> description 2a </dd>    <dd> description 2b </dd></dl>So the enclosing <dl> element has the same role in the overall document as the <table>, and could be styled to set it apart from the main text and make it clear that it is a single unit (and at least in principle could be included in the \"table\" numbering). In the function/operator listing use case, there would be one <dd> for the description and a <dd> for each example. 
See:https://developer.mozilla.org/en-US/docs/Web/HTML/Element/dlIf we were only concerned with HTML output then based on the desired semantics and appearance I would recommend <dl> without hesitation. Because of the need to produce PDF as well and my lack of knowledge of the Postgres documentation build process, I can't be so certain but I still suspect <dl> to be the best approach.", "msg_date": "Sun, 19 Apr 2020 12:39:37 -0400", "msg_from": "Isaac Morland <isaac.morland@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Poll: are people okay with function/operator table redesign?" }, { "msg_contents": "Isaac Morland <isaac.morland@gmail.com> writes:\n> If we were only concerned with HTML output then based on the desired\n> semantics and appearance I would recommend <dl> without hesitation. Because\n> of the need to produce PDF as well and my lack of knowledge of the Postgres\n> documentation build process, I can't be so certain but I still suspect <dl>\n> to be the best approach.\n\nYeah ... so a part of this problem is to persuade DocBook to generate\nthat.\n\nAs I mentioned upthread, I did experiment with putting a single-item\n<variablelist> in each table cell. That works out to an annoying amount\nof markup overhead, since variablelist is a rather overengineered\nconstruct, but I imagine we could live with it. The real problem was\nthe amount of whitespace it wanted to add. We could probably hack our\nway out of that with CSS for HTML output, but it was quite unclear whether\nthe PDF toolchain could be made to render it reasonably.\n\nA desirable solution, perhaps, would be a <variablelist> corresponding to\nthe entire table with rendering customization that produces table-like\ndividing lines around <varlistentry>s. I'm not volunteering to figure\nout how to do that though, especially not for PDF.\n\nIn the meantime I plan to push forward with the markup approach we've\ngot. 
The editorial content should still work if we find a better\nmarkup answer, and I'm willing to do the work of replacing the markup\nas long as somebody else figures out what it should be.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sun, 19 Apr 2020 12:59:52 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Poll: are people okay with function/operator table redesign?" }, { "msg_contents": "On Sun, 19 Apr 2020 at 20:00, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> In the meantime I plan to push forward with the markup approach we've\n> got. The editorial content should still work if we find a better\n> markup answer, and I'm willing to do the work of replacing the markup\n> as long as somebody else figures out what it should be.\n>\n\nI am following this thread as a frequent documentation user.\n\nWhile table 9.5 with functions looks quite nice, I quite dislike 9.4 with\noperators.\nPreviously, I could lookup operator in the leftmost column and read on.\nRight now I have to look through the whole table (well, not really, but\nstill) to find the operator.\n\n-- \nVictor Yegorov\n", "msg_date": "Mon, 20 Apr 2020 13:38:46 +0300", "msg_from": "Victor Yegorov <vyegorov@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Poll: are people okay with function/operator table redesign?" 
}, { "msg_contents": "Victor Yegorov <vyegorov@gmail.com> writes:\n> While table 9.5 with functions looks quite nice, I quite dislike 9.4 with\n> operators.\n> Previously, I could lookup operator in the leftmost column and read on.\n> Right now I have to look through the whole table (well, not really, but\n> still) to find the operator.\n\nAside from the alternatives already discussed, the only other idea\nthat's come to my mind is to write operator entries in a style like\n\n\t|| as in: text || text → text\n\t\tConcatenates the two strings.\n\t\t'Post' || 'greSQL' → PostgreSQL\n\nNot sure that that's any better, but it is another alternative.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 20 Apr 2020 10:21:14 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Poll: are people okay with function/operator table redesign?" }, { "msg_contents": "Victor Yegorov <vyegorov@gmail.com> writes:\n> While table 9.5 with functions looks quite nice, I quite dislike 9.4 with\n> operators.\n\nBTW, I think a big part of the problem with table 9.4 as it's being\nrendered in the web style right now is that the type placeholders\n(numeric_type etc) are being rendered in a ridiculously overemphasized\nfashion, causing them to overwhelm all else. Do we really want\n<replaceable> to be rendered that way? I'd think plain italic,\ncomparable to the rendering of <parameter>, would be more appropriate.\n\nI could make this page use <parameter> for that purpose of course,\nbut it seems like semantically the wrong thing.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 20 Apr 2020 12:49:20 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Poll: are people okay with function/operator table redesign?" 
}, { "msg_contents": "On 2020-Apr-20, Tom Lane wrote:\n\n> Victor Yegorov <vyegorov@gmail.com> writes:\n> > While table 9.5 with functions looks quite nice, I quite dislike 9.4 with\n> > operators.\n> > Previously, I could lookup operator in the leftmost column and read on.\n> > Right now I have to look through the whole table (well, not really, but\n> > still) to find the operator.\n> \n> Aside from the alternatives already discussed,\n\nThere's one with a separate column for the operator, without types, at\nthe left (the \"with names\" example at\nhttps://postgr.es/m/14380.1587242177@sss.pgh.pa.us ). That seemed\npretty promising -- not sure why it was discarded.\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Mon, 20 Apr 2020 17:31:41 -0400", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Poll: are people okay with function/operator table redesign?" }, { "msg_contents": "Alvaro Herrera <alvherre@2ndquadrant.com> writes:\n> There's one with a separate column for the operator, without types, at\n> the left (the \"with names\" example at\n> https://postgr.es/m/14380.1587242177@sss.pgh.pa.us ). That seemed\n> pretty promising -- not sure why it was discarded.\n\nWell, I wouldn't say it was discarded --- but there sure wasn't\na groundswell of support.\n\nLooking at it again, I'd be inclined not to bother with the\nmorerows trick but just to have an operator name entry in each row.\nThis table is a bit of an outlier anyway, I'm finding --- very few\nof the operator tables have multiple entries per operator name.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 20 Apr 2020 17:50:19 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Poll: are people okay with function/operator table redesign?" 
}, { "msg_contents": "On 2020-Apr-20, Tom Lane wrote:\n\n> Alvaro Herrera <alvherre@2ndquadrant.com> writes:\n> > There's one with a separate column for the operator, without types, at\n> > the left (the \"with names\" example at\n> > https://postgr.es/m/14380.1587242177@sss.pgh.pa.us ). That seemed\n> > pretty promising -- not sure why it was discarded.\n> \n> Well, I wouldn't say it was discarded --- but there sure wasn't\n> a groundswell of support.\n\nAh.\n\n> Looking at it again, I'd be inclined not to bother with the\n> morerows trick but just to have an operator name entry in each row.\n> This table is a bit of an outlier anyway, I'm finding --- very few\n> of the operator tables have multiple entries per operator name.\n\nNo disagreement here. 'morerows' attribs are always a messy business.\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Mon, 20 Apr 2020 18:14:33 -0400", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Poll: are people okay with function/operator table redesign?" }, { "msg_contents": "On 2020-04-19 15:23, Tom Lane wrote:\n> If somebody's got an idea how to obtain this painfully-agreed-to\n> visual appearance from more robust markup, I'm all ears. This\n> stuff is a bit outside my skill set, so I don't claim to have\n> found the best possible implementation.\n\nI've played with this a bit, and there are certainly a lot of \ninteresting things that you can do with CSS nowadays that would preserve \nsome semblance of semantic markup on both the DocBook side and the HTML \nside. We haven't even considered what this new markup would do to \nnon-visual consumers.\n\nBut my conclusion is that this new direction is bad and the old way was \nmuch better. 
My vote is to keep what we had in PG12.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Thu, 23 Apr 2020 12:02:55 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Poll: are people okay with function/operator table redesign?" }, { "msg_contents": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com> writes:\n> I've played with this a bit, and there are certainly a lot of \n> interesting things that you can do with CSS nowadays that would preserve \n> some semblance of semantic markup on both the DocBook side and the HTML \n> side.\n\nAs I said, I'm happy to do the legwork of improving the markup if someone\nwill point me in the right direction. But I know next to zip about CSS,\nso it would not be productive for me to do the basic design there ---\nit would take too long and there would probably still be lots to criticize\nin whatever I came up with.\n\n(I note ruefully that my original design in e894c6183 *was* pretty decent\nsemantic markup, especially if you're willing to accept spanspec\nidentifiers as semantic annotation. But people didn't like the visual\nresult, so now we have better visuals and uglier markup.)\n\n> But my conclusion is that this new direction is bad and the old way was \n> much better. My vote is to keep what we had in PG12.\n\nI'm not willing to accept that conclusion. Why are we even bothering\nto support PDF output, if lots of critical information is going to be\nillegible? (And even if you figure PDFs should go the way of the dodo,\nalmost any narrow-window presentation has got problems with these tables.)\nAlso, as I've been going through this, I've realized that there are many\nplaces in chapter 9 where the documentation is well south of adequate, if\nnot flat-out wrong. 
Some of it is just that nobody's gone through this\nmaterial in decades, and some of it is that the existing table layout is\nso unfriendly to writing more than a couple words of explanation per item.\nBut I'm not willing to abandon the work I've done so far and just hope\nthat in another twenty years somebody will be brave or foolish enough to\ntry again.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 23 Apr 2020 12:04:01 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Poll: are people okay with function/operator table redesign?" }, { "msg_contents": "On Thu, Apr 23, 2020 at 12:04:01PM -0400, Tom Lane wrote:\n> Peter Eisentraut <peter.eisentraut@2ndquadrant.com> writes:\n> > I've played with this a bit, and there are certainly a lot of \n> > interesting things that you can do with CSS nowadays that would preserve \n> > some semblance of semantic markup on both the DocBook side and the HTML \n> > side.\n> \n> As I said, I'm happy to do the legwork of improving the markup if someone\n> will point me in the right direction. But I know next to zip about CSS,\n> so it would not be productive for me to do the basic design there ---\n> it would take too long and there would probably still be lots to criticize\n> in whatever I came up with.\n\nI can do the CSS if you tell me what you want.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n+ As you are, so once was I. As I am, so you will be. +\n+ Ancient Roman grave inscription +\n\n\n", "msg_date": "Thu, 23 Apr 2020 12:23:02 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: Poll: are people okay with function/operator table redesign?" 
}, { "msg_contents": "Bruce Momjian <bruce@momjian.us> writes:\n> On Thu, Apr 23, 2020 at 12:04:01PM -0400, Tom Lane wrote:\n>> As I said, I'm happy to do the legwork of improving the markup if someone\n>> will point me in the right direction. But I know next to zip about CSS,\n>> so it would not be productive for me to do the basic design there ---\n>> it would take too long and there would probably still be lots to criticize\n>> in whatever I came up with.\n\n> I can do the CSS if you tell me what you want.\n\nI think the existing visual appearance is more or less agreed to, so\nwhat we want is to reproduce that as closely as possible from some\nsaner markup. The first problem is to agree on what \"saner markup\"\nis exactly.\n\nWe could possibly use margin and vertical-space CSS adjustments starting\nfrom just using several <para>s within each table cell (one <para> for\nsignature, one for description, one for each example). I'm not sure\nwhether that meets Peter's desire for \"semantic\" markup though. It's not\nany worse than the old way with otherwise-unlabeled <entry>s, but it's not\nbetter either. Do we want, say, to distinguish descriptions from examples\nin the markup? If so, will paras with a role attribute do, or does it\nneed to be something else?\n\nI'm also not sure whether or not Peter is objecting to the way I used\n<returnvalue>. That seems reasonably semantically-based to me, but since\nhe hasn't stated what his criteria are, I don't know if he thinks so.\n(I'll admit that it's a bit of an abuse to use that for both function\nreturn types and example results.) If that's out then we need some other\ndesign for getting the right arrows into place.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 23 Apr 2020 12:43:27 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Poll: are people okay with function/operator table redesign?" 
}, { "msg_contents": "If we're doing nicer markup+CSS for this, then it might make sense to\nfind a better solution for this kind of entry with multiple signatures\n(which was already an issue in the previous version):\n\ntext || anynonarray or anynonarray || text → text\n\tConverts the non-string input to text, then concatenates the two\n\tstrings. (The non-string input cannot be of an array type, because that\n\twould create ambiguity with the array || operators. If you want to\n\tconcatenate an array's text equivalent, cast it to text explicitly.)\n\t'Value: ' || 42 → Value: 42\n\nI think it would make sense to split the first line to put each of the\ntwo signatures on their own line. So it would look like this:\n\ntext || anynonarray\nanynonarray || text → text\n\tConverts the non-string input to text, then concatenates the two\n\tstrings. (The non-string input cannot be of an array type, because that\n\twould create ambiguity with the array || operators. If you want to\n\tconcatenate an array's text equivalent, cast it to text explicitly.)\n\t'Value: ' || 42 → Value: 42\n\n\nAnother example:\n\nto_ascii ( string text [, encoding name or integer ] ) → text\n\nshould be (I think):\n\nto_ascii ( string text [, encoding name ] ) → text\nto_ascii ( string text [, integer ] ) → text\n\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Thu, 23 Apr 2020 13:43:43 -0400", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Poll: are people okay with function/operator table redesign?" }, { "msg_contents": "Alvaro Herrera <alvherre@2ndquadrant.com> writes:\n> If we're doing nicer markup+CSS for this, then it might make sense to\n> find a better solution for this kind of entry with multiple signatures\n> (which was already an issue in the previous version):\n\nYeah, agreed. 
I would like to be able to have multiple signature blocks\nin one table cell, which the current hack can't handle. There aren't\nquite enough cases to make this mandatory, but it would be nicer.\n\nIt seems do-able if we explicitly mark signature blocks with their\nown role, say\n\n <entry role=\"functableentry\">\n <para role=\"funcsignature\">\n text || anynonarray → text\n </para>\n <para role=\"funcsignature\">\n anynonarray || text → text\n </para>\n <para>\n description ...\n\nThen the CSS can key off of the role to decide what indentation to apply\nto the para. While I mostly see how that would work, I'm not very sure\nabout whether we can make it work in the PDF chain too.\n\nNot sure whether it'd be worth inventing additional roles to apply to\ndescription and example paras, or whether that's just inducing carpal\ntunnel syndrome to no purpose. We'd want to keep the role label on the\n<entry>s anyway I think, and that context should be enough as long as\nwe don't need different formatting for descriptions and examples.\nBut maybe Peter's notion of \"semantic markup\" requires it anyway.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 23 Apr 2020 14:10:07 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Poll: are people okay with function/operator table redesign?" }, { "msg_contents": "I wrote:\n> Alvaro Herrera <alvherre@2ndquadrant.com> writes:\n>> If we're doing nicer markup+CSS for this, then it might make sense to\n>> find a better solution for this kind of entry with multiple signatures\n>> (which was already an issue in the previous version):\n\n> Yeah, agreed. I would like to be able to have multiple signature blocks\n> in one table cell, which the current hack can't handle. 
There aren't\n> quite enough cases to make this mandatory, but it would be nicer.\n> It seems do-able if we explicitly mark signature blocks with their\n> own role, say ...\n\nHearing no comments, I went ahead and experimented with that.\nAttached is a POC patch that changes just the header and first few\nentries in table 9.9, just to see what it'd look like. This does\nnicely reproduce the existing visual appearance. (With the margin\nparameters I used, there is a teensy bit more vertical space, but\nI think it looks better this way. That could be adjusted either\nway of course.)\n\nThere is a small problem with getting this to work in the webstyle\nHTML: somebody decided it would be a great idea to have a global\noverride on paragraph margin-bottom settings. For the purposes of\nthis test I just deleted that from main.css, but I suppose we want\nsome more-nuanced solution in reality.\n\n<digression>\n\nOne thing I couldn't help noticing while fooling with this is what\nseems to be a bug in the PDF toolchain: any place you try to put\nan <indexterm>, you get extra whitespace. Not a lot, but once you\nsee it, you can't un-see it. It's particularly obvious if one of\ntwo adjacent lines has the extra indentation and the other doesn't.\nIn the attached, I added an <indexterm> to one of the signature\nentries for \"text || anynonarray\", and you can see what I'm unhappy\nabout in the PDF screenshot. The problem already exists in our\nprevious markup, at least in places where people put indexterms\ninside function-table entries, but it'll be more obvious anyplace\nwe choose to have two signature entries in one table cell.\n\nI tried putting the <indexterm>s outside the <para> elements, but\nthat makes it worse not better: instead of a little bit of extra\nhorizontal whitespace, you get a lot of extra vertical whitespace.\n\nThe only \"fix\" I've found is to place the <indexterm> at the end\nof the signature <para> instead of the beginning. 
That's not included\nin the attached but it does hide the existence of the extra space\nquite effectively. I'm not sure though whether it might result in\nodd behavior of cross-reference links to the function entry.\nIn any case it feels like a hack.\n\n</digression>\n\nIt seems to me that this way is better than the markup I've been\nusing --- one thing I've observed is that Emacs' sgml mode is a\nbit confused by the <?br?> hacks, and it's happier with this.\nBut it's not clear to me whether this is sufficient to resolve\nPeter's unhappiness.\n\n\t\t\tregards, tom lane", "msg_date": "Sun, 26 Apr 2020 13:40:54 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Poll: are people okay with function/operator table redesign?" }, { "msg_contents": "On 4/26/20 1:40 PM, Tom Lane wrote:\n> I wrote:\n>> Alvaro Herrera <alvherre@2ndquadrant.com> writes:\n\n> There is a small problem with getting this to work in the webstyle\n> HTML: somebody decided it would be a great idea to have a global\n> override on paragraph margin-bottom settings. For the purposes of\n> this test I just deleted that from main.css, but I suppose we want\n> some more-nuanced solution in reality.\n\nI have to see why that is. I traced it back to the original \"bring doc\nstyles up to modern website\" patch (66798351) and there is missing\ncontext. Anyway, I'd like to test it before a wholesale removal (there\nis often a strong correlation between \"!important\" and \"hack\", so I'll\nwant to further dive into it).\n\nI'll have some time to play around with the CSS tonight.\n\nJonathan", "msg_date": "Sun, 26 Apr 2020 15:21:38 -0400", "msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>", "msg_from_op": false, "msg_subject": "Re: Poll: are people okay with function/operator table redesign?" }, { "msg_contents": "On 4/26/20 3:21 PM, Jonathan S. 
Katz wrote:\n> On 4/26/20 1:40 PM, Tom Lane wrote:\n>> I wrote:\n>>> Alvaro Herrera <alvherre@2ndquadrant.com> writes:\n> \n>> There is a small problem with getting this to work in the webstyle\n>> HTML: somebody decided it would be a great idea to have a global\n>> override on paragraph margin-bottom settings. For the purposes of\n>> this test I just deleted that from main.css, but I suppose we want\n>> some more-nuanced solution in reality.\n> \n> I have to see why that is. I traced it back to the original \"bring doc\n> styles up to modern website\" patch (66798351) and there is missing\n> context. Anyway, I'd like to test it before a wholesale removal (there\n> is often a strong correlation between \"!important\" and \"hack\", so I'll\n> want to further dive into it).\n> \n> I'll have some time to play around with the CSS tonight.\n\nCan you try\n\n #docContent p {\n- margin-bottom: 1rem !important;\n+ margin-bottom: 1rem;\n }\n\nand see how it looks?\n\nJonathan", "msg_date": "Sun, 26 Apr 2020 21:23:54 -0400", "msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>", "msg_from_op": false, "msg_subject": "Re: Poll: are people okay with function/operator table redesign?" }, { "msg_contents": "\"Jonathan S. Katz\" <jkatz@postgresql.org> writes:\n> Can you try\n\n> #docContent p {\n> - margin-bottom: 1rem !important;\n> + margin-bottom: 1rem;\n> }\n\n> and see how it looks?\n\nIn some desultory looking around, I couldn't find anyplace in the\nexisting text that that changes at all. And it does make the\nrevised table markup render the way I want ... so +1.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sun, 26 Apr 2020 21:44:02 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Poll: are people okay with function/operator table redesign?" }, { "msg_contents": "On 4/26/20 9:44 PM, Tom Lane wrote:\n> \"Jonathan S. 
Katz\" <jkatz@postgresql.org> writes:\n>> Can you try\n> \n>> #docContent p {\n>> - margin-bottom: 1rem !important;\n>> + margin-bottom: 1rem;\n>> }\n> \n>> and see how it looks?\n> \n> In some desultory looking around, I couldn't find anyplace in the\n> existing text that that changes at all. And it does make the\n> revised table markup render the way I want ... so +1.\n\nGreat. I do want to do a bit more desultory testing in the older\nversions of the docs, but it can be committed whenever the -docs side is\nready.\n\nThanks,\n\nJonathan", "msg_date": "Mon, 27 Apr 2020 08:34:20 -0400", "msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>", "msg_from_op": false, "msg_subject": "Re: Poll: are people okay with function/operator table redesign?" }, { "msg_contents": "\"Jonathan S. Katz\" <jkatz@postgresql.org> writes:\n> Great. I do want to do a bit more desultory testing in the older\n> versions of the docs, but it can be committed whenever the -docs side is\n> ready.\n\nOther than that point, the main.css patch as I presented it just adds\nsome rules that aren't used yet, so it could be pushed as soon as you're\nsatisfied about the !important change. It'd probably make sense to\npush it in advance of making the markup changes, so we don't have an\ninterval of near-unreadable devel docs.\n\nStill waiting to hear whether this markup approach satisfies\nPeter's concerns, though.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 27 Apr 2020 08:49:41 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Poll: are people okay with function/operator table redesign?" }, { "msg_contents": "On 4/27/20 8:49 AM, Tom Lane wrote:\n> \"Jonathan S. Katz\" <jkatz@postgresql.org> writes:\n>> Great. 
I do want to do a bit more desultory testing in the older\n>> versions of the docs, but it can be committed whenever the -docs side is\n>> ready.\n> \n> Other than that point, the main.css patch as I presented it just adds\n> some rules that aren't used yet, so it could be pushed as soon as you're\n> satisfied about the !important change. It'd probably make sense to\n> push it in advance of making the markup changes, so we don't have an\n> interval of near-unreadable devel docs.\n\n*nods* I'll ensure to test again and hopefully commit later today.\n\nI forget what I was looking at, but I did see a similar pattern in some\nother modern software docs, so it seems like this is trending in the\nright direction. Looking forward to the rollout!\n\nJonathan", "msg_date": "Mon, 27 Apr 2020 09:17:23 -0400", "msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>", "msg_from_op": false, "msg_subject": "Re: Poll: are people okay with function/operator table redesign?" }, { "msg_contents": "\"Jonathan S. Katz\" <jkatz@postgresql.org> writes:\n> On 4/27/20 8:49 AM, Tom Lane wrote:\n>> Other than that point, the main.css patch as I presented it just adds\n>> some rules that aren't used yet, so it could be pushed as soon as you're\n>> satisfied about the !important change. It'd probably make sense to\n>> push it in advance of making the markup changes, so we don't have an\n>> interval of near-unreadable devel docs.\n\n> *nods* I'll ensure to test again and hopefully commit later today.\n\nAfter looking at the JSON function tables, I've concluded that the\nability to have more than one function signature per table cell is\nreally rather essential not optional. So I'm going to go ahead and\nconvert all the existing markup to the <para>-based style I proposed\non Sunday. 
Please push the main.css change when you can.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 28 Apr 2020 11:19:30 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Poll: are people okay with function/operator table redesign?" }, { "msg_contents": "I wrote:\n> One thing I couldn't help noticing while fooling with this is what\n> seems to be a bug in the PDF toolchain: any place you try to put\n> an <indexterm>, you get extra whitespace. Not a lot, but once you\n> see it, you can't un-see it. It's particularly obvious if one of\n> two adjacent lines has the extra indentation and the other doesn't.\n> ...\n> The only \"fix\" I've found is to place the <indexterm> at the end\n> of the signature <para> instead of the beginning.\n\nI spent some more time experimenting with this today, and determined\nthat there's no way to fix it by messing with FO layout attributes.\nThe basic problem seems to be that if you write\n\n <entry role=\"func_table_entry\"><para role=\"func_signature\">\n <indexterm>\n <primary>ceiling</primary>\n </indexterm>\n <function>ceiling</function> ( <type>numeric</type> )\n\nthen what you get in the .fo file is\n\n <fo:table-cell padding-start=\"2pt\" padding-end=\"2pt\" padding-top=\"2pt\" padding-bottom=\"2pt\" border-bottom-width=\"0.5pt\" border-bottom-style=\"solid\" border-bottom-color=\"black\"><fo:block><fo:block margin-left=\"4em\" text-align=\"left\" text-indent=\"-3.5em\">\n <fo:wrapper id=\"id-1.5.8.9.6.2.2.4.1.1.1\"><!--ceiling--></fo:wrapper>\n <fo:inline font-family=\"monospace\">ceiling</fo:inline> ( <fo:inline font-family=\"monospace\">numeric</fo:inline> )\n\nwhere the <fo:wrapper> apparently is used as a cross-reference anchor.\nThe trouble with this is that the rules for collapsing adjacent whitespace\ndon't work across the <fo:wrapper>, so no matter what you do you will end\nup with two spaces not one before the visible text \"ceiling\". 
The only\nway to hide the effects of that with layout attributes is to set\nwhitespace to be ignored altogether within the block, which is quite\nundesirable.\n\nThe fix I'm currently considering is to eliminate the extra whitespace\nrun(s) by formatting <indexterm>s within tables this way:\n\n <row>\n <entry role=\"func_table_entry\"><para role=\"func_signature\"><indexterm>\n <primary>char_length</primary>\n </indexterm><indexterm>\n <primary>character string</primary>\n <secondary>length</secondary>\n </indexterm><indexterm>\n <primary>length</primary>\n <secondary sortas=\"character string\">of a character string</secondary>\n <see>character string, length</see>\n </indexterm>\n <function>char_length</function> ( <type>text</type> )\n <returnvalue>integer</returnvalue>\n </para>\n\nPerhaps it's only worth being anal about this in table cells with multiple\nfunction signatures and/or multiple <indexterm>s; in other places the\nwhitespace variation just isn't that noticeable. On the other hand,\nthere's something to be said for having uniform layout of the XML source,\nwhich'd suggest having a uniform rule \"no whitespace before an <indexterm>\nwithin a table cell\".\n\nOr we could put the <indexterm>s at the end. Or just ignore it, reasoning\nthat the PDF output is never going to look all that great anyway.\n\nThoughts?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 28 Apr 2020 16:34:36 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Poll: are people okay with function/operator table redesign?" }, { "msg_contents": "After further fooling with this issue, I've determined that\n\n(1) I need to be able to use <programlisting> environments within the\nfunc_table_entry cells and have them render more-or-less normally.\nThere doesn't seem to be any other good way to render multiline\nexample results for set-returning functions ... 
but marking such\nenvironments up to the extent that the website style normally does\nis very distracting.\n\n(2) I found that adding !important to the func_table_entry rules\nis enough to override less-general !important rules. So it'd be\npossible to leave all the existing CSS rules alone, if that makes\nyou feel more comfortable.\n\nThe attached updated patch reflects both of these conclusions.\nWe could take out some of the !important annotations here if\nyou're willing to delete !important annotations in more-global\nrules for <p> and/or <pre>, but maybe that's something to fool\nwith later. I'd like to get this done sooner ...\n\n\t\t\tregards, tom lane", "msg_date": "Wed, 29 Apr 2020 19:29:43 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Poll: are people okay with function/operator table redesign?" }, { "msg_contents": "On 4/29/20 7:29 PM, Tom Lane wrote:\n> After further fooling with this issue, I've determined that\n> \n> (1) I need to be able to use <programlisting> environments within the\n> func_table_entry cells and have them render more-or-less normally.\n> There doesn't seem to be any other good way to render multiline\n> example results for set-returning functions ... but marking such\n> environments up to the extent that the website style normally does\n> is very distracting.\n> \n> (2) I found that adding !important to the func_table_entry rules\n> is enough to override less-general !important rules. So it'd be\n> possible to leave all the existing CSS rules alone, if that makes\n> you feel more comfortable.\n> \n> The attached updated patch reflects both of these conclusions.\n> We could take out some of the !important annotations here if\n> you're willing to delete !important annotations in more-global\n> rules for <p> and/or <pre>, but maybe that's something to fool\n> with later. 
I'd like to get this done sooner ...\n\nMy preference would be to figure out the CSS rules that are causing you\nto rely on !important at the table level and just fix that up, rather\nthan hacking in too many !important.\n\nI'll compromise on the temporary importants, but first I want to see\nwhat's causing the need for it. Do you have a suggestion on a page to test?\n\nJonathan", "msg_date": "Wed, 29 Apr 2020 19:40:25 -0400", "msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>", "msg_from_op": false, "msg_subject": "Re: Poll: are people okay with function/operator table redesign?" }, { "msg_contents": "On 4/29/20 7:40 PM, Jonathan S. Katz wrote:\n> On 4/29/20 7:29 PM, Tom Lane wrote:\n>> After further fooling with this issue, I've determined that\n>>\n>> (1) I need to be able to use <programlisting> environments within the\n>> func_table_entry cells and have them render more-or-less normally.\n>> There doesn't seem to be any other good way to render multiline\n>> example results for set-returning functions ... but marking such\n>> environments up to the extent that the website style normally does\n>> is very distracting.\n>>\n>> (2) I found that adding !important to the func_table_entry rules\n>> is enough to override less-general !important rules. So it'd be\n>> possible to leave all the existing CSS rules alone, if that makes\n>> you feel more comfortable.\n>>\n>> The attached updated patch reflects both of these conclusions.\n>> We could take out some of the !important annotations here if\n>> you're willing to delete !important annotations in more-global\n>> rules for <p> and/or <pre>, but maybe that's something to fool\n>> with later. 
I'd like to get this done sooner ...\n> \n> My preference would be to figure out the CSS rules that are causing you\n> to rely on !important at the table level and just fix that up, rather\n> than hacking in too many !important.\n> \n> I'll compromise on the temporary importants, but first I want to see\n> what's causing the need for it. Do you have a suggestion on a page to test?\n\nFrom real quick I got it to here. With the latest copy of the doc builds\nit appears to still work as expected, but I need a section with the new\n\"pre\" block to test.\n\nI think the \"background-color: inherit !important\" is a bit odd, and\nwould like to trace that one down a bit more, but I did not see anything\nobvious on my glance through it.\n\nHow does it look on your end?\n\nJonathan", "msg_date": "Wed, 29 Apr 2020 19:55:16 -0400", "msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>", "msg_from_op": false, "msg_subject": "Re: Poll: are people okay with function/operator table redesign?" }, { "msg_contents": "\"Jonathan S. Katz\" <jkatz@postgresql.org> writes:\n> On 4/29/20 7:40 PM, Jonathan S. Katz wrote:\n>> I'll compromise on the temporary importants, but first I want to see\n>> what's causing the need for it. Do you have a suggestion on a page to test?\n\nI haven't yet pushed anything dependent on the new markup, but\nattached is a draft revision for the JSON section; if you look at\nthe SRFs such as json_array_elements you'll see the issue.\n\n> From real quick I got it to here. 
With the latest copy of the doc builds\n> it appears to still work as expected, but I need a section with the new\n> \"pre\" block to test.\n\nYeah, I see you found the same <p> and <pre> settings I did.\n\n> I think the \"background-color: inherit !important\" is a bit odd, and\n> would like to trace that one down a bit more, but I did not see anything\n> obvious on my glance through it.\n\nI think it's coming from this bit at about main.css:660:\n\npre,\ncode,\n#docContent kbd,\n#docContent tt.LITERAL,\n#docContent tt.REPLACEABLE {\n font-size: 0.9rem !important;\n color: inherit !important;\n background-color: #f8f9fa !important;\n border-radius: .25rem;\n margin: .6rem 0;\n font-weight: 300;\n}\n\nI had to override most of that.\n\n\t\t\tregards, tom lane", "msg_date": "Wed, 29 Apr 2020 20:15:27 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Poll: are people okay with function/operator table redesign?" }, { "msg_contents": "On 4/29/20 8:15 PM, Tom Lane wrote:\n> \"Jonathan S. Katz\" <jkatz@postgresql.org> writes:\n>> On 4/29/20 7:40 PM, Jonathan S. Katz wrote:\n>>> I'll compromise on the temporary importants, but first I want to see\n>>> what's causing the need for it. Do you have a suggestion on a page to test?\n> \n> I haven't yet pushed anything dependent on the new markup, but\n> attached is a draft revision for the JSON section; if you look at\n> the SRFs such as json_array_elements you'll see the issue.\n> \n>> From real quick I got it to here. 
With the latest copy of the doc builds\n>> it appears to still work as expected, but I need a section with the new\n>> \"pre\" block to test.\n> \n> Yeah, I see you found the same <p> and <pre> settings I did.\n> \n>> I think the \"background-color: inherit !important\" is a bit odd, and\n>> would like to trace that one down a bit more, but I did not see anything\n>> obvious on my glance through it.\n> \n> I think it's coming from this bit at about main.css:660:\n> \n> pre,\n> code,\n> #docContent kbd,\n> #docContent tt.LITERAL,\n> #docContent tt.REPLACEABLE {\n> font-size: 0.9rem !important;\n> color: inherit !important;\n> background-color: #f8f9fa !important;\n> border-radius: .25rem;\n> margin: .6rem 0;\n> font-weight: 300;\n> }\n> \n> I had to override most of that.\n\nYeah, I had started toying with that and saw no differences, but I would\nneed to test against anything in particular. I'm pretty confident we can\nremove those importants, based on my desultory testing.\n\nI'll try and get the patch built + docs loaded, and see if we can safely\nremove those.\n\nJonathan", "msg_date": "Wed, 29 Apr 2020 21:22:36 -0400", "msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>", "msg_from_op": false, "msg_subject": "Re: Poll: are people okay with function/operator table redesign?" }, { "msg_contents": "On 4/29/20 9:22 PM, Jonathan S. Katz wrote:\n> On 4/29/20 8:15 PM, Tom Lane wrote:\n>> \"Jonathan S. Katz\" <jkatz@postgresql.org> writes:\n>>> On 4/29/20 7:40 PM, Jonathan S. Katz wrote:\n>>>> I'll compromise on the temporary importants, but first I want to see\n>>>> what's causing the need for it. Do you have a suggestion on a page to test?\n>>\n>> I haven't yet pushed anything dependent on the new markup, but\n>> attached is a draft revision for the JSON section; if you look at\n>> the SRFs such as json_array_elements you'll see the issue.\n\n^ This was super helpful. Built locally, and made it really easy to\ntest. 
Thanks!\n\n>>> From real quick I got it to here. With the latest copy of the doc builds\n>>> it appears to still work as expected, but I need a section with the new\n>>> \"pre\" block to test.\n>>\n>> Yeah, I see you found the same <p> and <pre> settings I did.\n>>\n>>> I think the \"background-color: inherit !important\" is a bit odd, and\n>>> would like to trace that one down a bit more, but I did not see anything\n>>> obvious on my glance through it.\n>>\n>> I think it's coming from this bit at about main.css:660:\n>>\n>> pre,\n>> code,\n>> #docContent kbd,\n>> #docContent tt.LITERAL,\n>> #docContent tt.REPLACEABLE {\n>> font-size: 0.9rem !important;\n>> color: inherit !important;\n>> background-color: #f8f9fa !important;\n>> border-radius: .25rem;\n>> margin: .6rem 0;\n>> font-weight: 300;\n>> }\n>>\n>> I had to override most of that.\n> \n> Yeah, I had started toying with that and saw no differences, but I would\n> need to test against anything in particular. I'm pretty confident we can\n> remove those importants, based on my desultory testing.\n> \n> I'll try and get the patch built + docs loaded, and see if we can safely\n> remove those.\n\nPlease see latest attached. I've eliminated the !important, condensed\nthe CSS, and the desultory (yes, my word of the week) testing did not\nfind issues in devel or earlier versions.\n\nPlease let me know if this works for you. If it does, I'll push it up to\npgweb.\n\nJonathan", "msg_date": "Wed, 29 Apr 2020 22:04:39 -0400", "msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>", "msg_from_op": false, "msg_subject": "Re: Poll: are people okay with function/operator table redesign?" }, { "msg_contents": "\"Jonathan S. Katz\" <jkatz@postgresql.org> writes:\n> Please see latest attached. I've eliminated the !important, condensed\n> the CSS, and the desultory (yes, my word of the week) testing did not\n> find issues in devel or earlier versions.\n\n> Please let me know if this works for you. 
If it does, I'll push it up to\n> pgweb.\n\nNAK ... that does *not* work for me.\n\nIt looks to me like you are expecting that \"margin\" with four parameters\nwill override an outer-level setting of margin-bottom, but that is not\nhow my browser is responding. ISTM you need to explicitly set the very\nsame parameters in the more-specific rule as in the less-specific rule\nthat you want to override.\n\nI get reasonable results with these settings, but not with\nanything more abbreviated:\n\n#docContent table.table th.func_table_entry p,\n#docContent table.table td.func_table_entry p {\n margin-top: 0.1em;\n margin-bottom: 0.1em;\n padding-left: 4em;\n text-align: left;\n}\n\n#docContent table.table p.func_signature {\n text-indent: -3.5em;\n}\n\n#docContent table.table td.func_table_entry pre.programlisting {\n background-color: inherit;\n border: 0;\n margin-top: 0.1em;\n margin-bottom: 0.1em;\n padding: 0;\n padding-left: 4em;\n}\n\nIn particular, it might look like the multiple padding settings\nin the pre.programlisting rule are redundant ... but they are not, at\nleast not with Safari.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 29 Apr 2020 22:38:19 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Poll: are people okay with function/operator table redesign?" }, { "msg_contents": "On 4/29/20 10:38 PM, Tom Lane wrote:\n> \"Jonathan S. Katz\" <jkatz@postgresql.org> writes:\n>> Please see latest attached. I've eliminated the !important, condensed\n>> the CSS, and the desultory (yes, my word of the week) testing did not\n>> find issues in devel or earlier versions.\n> \n>> Please let me know if this works for you. If it does, I'll push it up to\n>> pgweb.\n> \n> NAK ... that does *not* work for me.\n\nLearned a new acronym...\n\n> It looks to me like you are expecting that \"margin\" with four parameters\n> will override an outer-level setting of margin-bottom, but that is not\n> how my browser is responding. 
ISTM you need to explicitly set the very\n> same parameters in the more-specific rule as in the less-specific rule\n> that you want to override.\n> \n> I get reasonable results with these settings, but not with\n> anything more abbreviated:\n\n> In particular, it might look like the multiple padding settings\n> in the pre.programlisting rule are redundant ... but they are not, at\n> least not with Safari.\n\nClearly I was caught doing a single browser test (Chrome).\n\nReverted back to the verbose way sans !important, attached, which\nappears to be the consensus. If you can ACK this, I'll commit.\n\nThanks,\n\nJonathan", "msg_date": "Wed, 29 Apr 2020 22:45:16 -0400", "msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>", "msg_from_op": false, "msg_subject": "Re: Poll: are people okay with function/operator table redesign?" }, { "msg_contents": "\"Jonathan S. Katz\" <jkatz@postgresql.org> writes:\n> Clearly I was caught doing a single browser test (Chrome).\n\nWell, I've not tested anything but Safari, either ...\n\n> Reverted back to the verbose way sans !important, attached, which\n> appears to be the consensus. If you can ACK this, I'll commit.\n\nThis one works for me.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 29 Apr 2020 23:26:20 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Poll: are people okay with function/operator table redesign?" }, { "msg_contents": "On 4/29/20 11:26 PM, Tom Lane wrote:\n> \"Jonathan S. Katz\" <jkatz@postgresql.org> writes:\n>> Clearly I was caught doing a single browser test (Chrome).\n> \n> Well, I've not tested anything but Safari, either ...\n> \n>> Reverted back to the verbose way sans !important, attached, which\n>> appears to be the consensus. If you can ACK this, I'll commit.\n> \n> This one works for me.\n\nPushed. Thanks!\n\nJonathan", "msg_date": "Thu, 30 Apr 2020 00:12:48 -0400", "msg_from": "\"Jonathan S. 
Katz\" <jkatz@postgresql.org>", "msg_from_op": false, "msg_subject": "Re: Poll: are people okay with function/operator table redesign?" }, { "msg_contents": "I've now completed updating chapter 9 for the new layout,\nand the results are visible at \nhttps://www.postgresql.org/docs/devel/functions.html\nThere is more to do --- for instance, various contrib modules\nhave function/operator tables that should be synced with this\ndesign. But this seemed like a good place to pause and reflect.\n\nAfter working through the whole chapter, the only aspect of the\nnew markup that really doesn't seem to work so well is the use\nof <returnvalue> for function result types and example results.\nWhile I don't think that that's broken in concept, DocBook has\nrestrictions on the contents of <returnvalue> that are problematic:\n\n* It won't let you put any verbatim-layout environment, such\nas <programlisting>, inside <returnvalue>. This is an issue for\nexamples for set-returning functions in particular. I've done\nthose like this:\n\n <para>\n <literal>regexp_matches('foobarbequebaz', 'ba.', 'g')</literal>\n <returnvalue></returnvalue>\n<programlisting>\n {bar}\n {baz}\n</programlisting>\n (2 rows in result)\n </para>\n\nwhere the empty <returnvalue> environment is just serving to generate a\nright arrow. It looks all right, but it's hardly semantically-based\nmarkup.\n\n* <returnvalue> is also quite sticky about inserting other sorts\nof font-changing environments inside it. As an example, it'll let\nyou include <replaceable> but not <type>, which seems pretty weird\nto me. 
This is problematic in some places where it's desirable to\nhave text rather than just a type name, for example\n\n <function>stddev</function> ( <replaceable>numeric_type</replaceable> )\n <returnvalue></returnvalue> <type>double precision</type>\n for <type>real</type> or <type>double precision</type>,\n otherwise <type>numeric</type>\n\nNow I could have done this example by spelling out all six varieties of\nstddev() separately, and maybe I should've, but it seemed overly bulky\nthat way. So again <returnvalue> is just generating the right arrow.\n\n* After experimenting with a few different ways to handle functions with\nmultiple OUT parameters, I settled on doing it like this:\n\n <function>pg_partition_tree</function> ( <type>regclass</type> )\n <returnvalue>setof record</returnvalue>\n ( <parameter>relid</parameter> <type>regclass</type>,\n <parameter>parentrelid</parameter> <type>regclass</type>,\n <parameter>isleaf</parameter> <type>boolean</type>,\n <parameter>level</parameter> <type>integer</type> )\n\nThis looks nice and I think it's much more intelligible than other\nthings I tried --- in particular, including the OUT parameters in\nthe function signature seems to me to be mostly confusing. But,\nonce again, it's abusing the concept that <returnvalue> contains\nthe result type. 
Ideally the output-column list would be inside\nthe <returnvalue> environment, but DocBook won't allow that\nbecause of the <type> tags.\n\nSo at this point I'm tempted to abandon <returnvalue> and go back\nto using a custom entity to generate the right arrow, so that\nthe markup would just look like, say,\n\n <function>stddev</function> ( <replaceable>numeric_type</replaceable> )\n &returns; <type>double precision</type>\n for <type>real</type> or <type>double precision</type>,\n otherwise <type>numeric</type>\n\nDoes anyone have a preference on that, or a better alternative?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 04 May 2020 17:22:01 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Poll: are people okay with function/operator table redesign?" }, { "msg_contents": "On 5/4/20 5:22 PM, Tom Lane wrote:\n> I've now completed updating chapter 9 for the new layout,\n> and the results are visible at \n> https://www.postgresql.org/docs/devel/functions.html\n> There is more to do --- for instance, various contrib modules\n> have function/operator tables that should be synced with this\n> design. But this seemed like a good place to pause and reflect.\n\nThis is already much better. I've skimmed through a few of the pages, I\ncan say that the aggregates page[1] is WAY easier to read. Yay!\n\n> \n> After working through the whole chapter, the only aspect of the\n> new markup that really doesn't seem to work so well is the use\n> of <returnvalue> for function result types and example results.\n> While I don't think that that's broken in concept, DocBook has\n> restrictions on the contents of <returnvalue> that are problematic:\n> \n> * It won't let you put any verbatim-layout environment, such\n> as <programlisting>, inside <returnvalue>. This is an issue for\n> examples for set-returning functions in particular. 
I've done\n> those like this:\n> \n> <para>\n> <literal>regexp_matches('foobarbequebaz', 'ba.', 'g')</literal>\n> <returnvalue></returnvalue>\n> <programlisting>\n> {bar}\n> {baz}\n> </programlisting>\n> (2 rows in result)\n> </para>\n> \n> where the empty <returnvalue> environment is just serving to generate a\n> right arrow. It looks all right, but it's hardly semantically-based\n> markup.\n\nWe could apply some CSS on the pgweb front perhaps to help distinguish\nat least the results? For the above example, it would be great to\ncapture the program listing + \"2 rows in result\" output and format them\nsimilarly, though it appears the \"(2 rows in result)\" is in its own block.\n\nAnyway, likely not that hard to apply some CSS and make it appear a bit\nmore distinguished, if that's the general idea.\n\n> * <returnvalue> is also quite sticky about inserting other sorts\n> of font-changing environments inside it. As an example, it'll let\n> you include <replaceable> but not <type>, which seems pretty weird\n> to me. This is problematic in some places where it's desirable to\n> have text rather than just a type name, for example\n> \n> <function>stddev</function> ( <replaceable>numeric_type</replaceable> )\n> <returnvalue></returnvalue> <type>double precision</type>\n> for <type>real</type> or <type>double precision</type>,\n> otherwise <type>numeric</type>\n> \n> Now I could have done this example by spelling out all six varieties of\n> stddev() separately, and maybe I should've, but it seemed overly bulky\n> that way. 
So again <returnvalue> is just generating the right arrow.\n> \n> * After experimenting with a few different ways to handle functions with\n> multiple OUT parameters, I settled on doing it like this:\n> \n> <function>pg_partition_tree</function> ( <type>regclass</type> )\n> <returnvalue>setof record</returnvalue>\n> ( <parameter>relid</parameter> <type>regclass</type>,\n> <parameter>parentrelid</parameter> <type>regclass</type>,\n> <parameter>isleaf</parameter> <type>boolean</type>,\n> <parameter>level</parameter> <type>integer</type> )\n> \n> This looks nice and I think it's much more intelligible than other\n> things I tried --- in particular, including the OUT parameters in\n> the function signature seems to me to be mostly confusing. But,\n> once again, it's abusing the concept that <returnvalue> contains\n> the result type. Ideally the output-column list would be inside\n> the <returnvalue> environment, but DocBook won't allow that\n> because of the <type> tags.\n\nIt does look better, but things look a bit smushed together on the pgweb\nfront. It seems like there's enough structure where one can make some\nnot-too-zany CSS rules to put a bit more space between elements, but\nperhaps wait to hear the decision on the rest of the structural questions.\n\n> So at this point I'm tempted to abandon <returnvalue> and go back\n> to using a custom entity to generate the right arrow, so that\n> the markup would just look like, say,\n> \n> <function>stddev</function> ( <replaceable>numeric_type</replaceable> )\n> &returns; <type>double precision</type>\n> for <type>real</type> or <type>double precision</type>,\n> otherwise <type>numeric</type>\n> \n> Does anyone have a preference on that, or a better alternative?\n\nAs long as we can properly style without zany CSS rules, I'm +0 :)\n\nJonathan\n\n[1] https://www.postgresql.org/docs/devel/functions-aggregate.html", "msg_date": "Mon, 4 May 2020 17:38:55 -0400", "msg_from": "\"Jonathan S. 
Katz\" <jkatz@postgresql.org>", "msg_from_op": false, "msg_subject": "Re: Poll: are people okay with function/operator table redesign?" }, { "msg_contents": "\"Jonathan S. Katz\" <jkatz@postgresql.org> writes:\n> On 5/4/20 5:22 PM, Tom Lane wrote:\n>> I've now completed updating chapter 9 for the new layout,\n>> and the results are visible at \n>> https://www.postgresql.org/docs/devel/functions.html\n\n> This is already much better. I've skimmed through a few of the pages, I\n> can say that the aggregates page[1] is WAY easier to read. Yay!\n\nThanks!\n\n>> * After experimenting with a few different ways to handle functions with\n>> multiple OUT parameters, I settled on doing it like this:\n>> <function>pg_partition_tree</function> ( <type>regclass</type> )\n>> <returnvalue>setof record</returnvalue>\n>> ( <parameter>relid</parameter> <type>regclass</type>,\n>> <parameter>parentrelid</parameter> <type>regclass</type>,\n>> <parameter>isleaf</parameter> <type>boolean</type>,\n>> <parameter>level</parameter> <type>integer</type> )\n>> \n>> This looks nice and I think it's much more intelligible than other\n>> things I tried --- in particular, including the OUT parameters in\n>> the function signature seems to me to be mostly confusing. But,\n>> once again, it's abusing the concept that <returnvalue> contains\n>> the result type. Ideally the output-column list would be inside\n>> the <returnvalue> environment, but DocBook won't allow that\n>> because of the <type> tags.\n\n> It does look better, but things look a bit smushed together on the pgweb\n> front.\n\nYeah. There's less smushing of function signatures when building the\ndocs without STYLE=website, so there's something specific to the\nwebsite style. I think you'd mentioned that we were intentionally\ncrimping the space and/or font size within tables? Maybe that could\nget un-done now. 
I hadn't bothered to worry about such details until\nwe had a reasonable sample of cases to look at, but now would be a\ngood time.\n\n\nAnother rendering oddity that I'd not bothered to chase down is\nthe appearance of <itemizedlist> environments within table cells.\nWe have a few of those now as a result of migration of material\nthat had been out-of-line into the table cells; one example is\nin json_populate_record, about halfway down this page:\n\nhttps://www.postgresql.org/docs/devel/functions-json.html\n\nThe text of the list items seems to be getting indented to the\nsame extent as a not-in-a-table <itemizedlist> list does ---\nbut the bullets aren't indented nearly as much, making for\nweird spacing. (There's a short <itemizedlist> at the top of\nthe same page that you can compare to.)\n\nThe same weird spacing is visible in a non STYLE=website build,\nso I think this might be less a CSS issue and more a DocBook\nissue. On the other hand, it looks fine in the PDF build.\nSo I'm not sure where to look for the cause.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 04 May 2020 18:39:30 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Poll: are people okay with function/operator table redesign?" }, { "msg_contents": "On 5/4/20 6:39 PM, Tom Lane wrote:\n> \"Jonathan S. Katz\" <jkatz@postgresql.org> writes:\n>> On 5/4/20 5:22 PM, Tom Lane wrote:\n\n>> It does look better, but things look a bit smushed together on the pgweb\n>> front.\n> \n> Yeah. There's less smushing of function signatures when building the\n> docs without STYLE=website, so there's something specific to the\n> website style. I think you'd mentioned that we were intentionally\n> crimping the space and/or font size within tables? Maybe that could\n> get un-done now. 
I hadn't bothered to worry about such details until\n> we had a reasonable sample of cases to look at, but now would be a\n> good time.\n\nIIRC this was the monospace issue[1], but there are some other things\nI'm seeing (e.g. the italics) that may be pushing things closer together\nthan not. Now that round 1 of commits are in, I can take a whack at\ntightening it up this week.\n\n> Another rendering oddity that I'd not bothered to chase down is\n> the appearance of <itemizedlist> environments within table cells.\n> We have a few of those now as a result of migration of material\n> that had been out-of-line into the table cells; one example is\n> in json_populate_record, about halfway down this page:\n> \n> https://www.postgresql.org/docs/devel/functions-json.html\n> \n> The text of the list items seems to be getting indented to the\n> same extent as a not-in-a-table <itemizedlist> list does ---\n> but the bullets aren't indented nearly as much, making for\n> weird spacing. (There's a short <itemizedlist> at the top of\n> the same page that you can compare to.)\n\nLooking at the code, I believe this is a pretty straightforward\nadjustment. I can include it with the aforementioned changes.\n\nJonathan\n\n[1]\nhttps://www.postgresql.org/message-id/3f8560a6-9044-bdb8-6b3b-68842570db18@postgresql.org", "msg_date": "Mon, 4 May 2020 22:18:03 -0400", "msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>", "msg_from_op": false, "msg_subject": "Re: Poll: are people okay with function/operator table redesign?" }, { "msg_contents": "On Mon, 4 May 2020 at 22:22, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> * <returnvalue> is also quite sticky about inserting other sorts\n> of font-changing environments inside it. As an example, it'll let\n> you include <replaceable> but not <type>, which seems pretty weird\n> to me. 
This is problematic in some places where it's desirable to\n> have text rather than just a type name, for example\n>\n> <function>stddev</function> ( <replaceable>numeric_type</replaceable> )\n> <returnvalue></returnvalue> <type>double precision</type>\n> for <type>real</type> or <type>double precision</type>,\n> otherwise <type>numeric</type>\n>\n> Now I could have done this example by spelling out all six varieties of\n> stddev() separately, and maybe I should've, but it seemed overly bulky\n> that way.\n\nFWIW, I prefer having each variety spelled out separately. For\nexample, I really like the new way that aggregates like sum() and\navg() are displayed. To me, having all the types listed like that is\nmuch more readable than having to read and interpret a piece of free\ntext.\n\nSimilarly, for other functions like gcd(), lcm() and mod(). I think it\nwould be better to get rid of numeric_type, and just list all the\nvariants of each function.\n\nRegards,\nDean\n\n\n", "msg_date": "Tue, 5 May 2020 10:33:55 +0100", "msg_from": "Dean Rasheed <dean.a.rasheed@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Poll: are people okay with function/operator table redesign?" }, { "msg_contents": "On Mon, May 4, 2020 at 11:22 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> I've now completed updating chapter 9 for the new layout,\n> and the results are visible at\n> https://www.postgresql.org/docs/devel/functions.html\n> There is more to do --- for instance, various contrib modules\n> have function/operator tables that should be synced with this\n> design. 
But this seemed like a good place to pause and reflect.\n>\n\nWould it be premature to complain about the not-that-great look of Table\n9.1 now?\n\nCompare the two attached images: the screenshot from\nhttps://www.postgresql.org/docs/devel/functions-comparison.html\nvs the GIMP-assisted pipe dream of mine to align it to the right edge of\nthe table cell.\n\nI don't have the faintest idea how to achieve that using SGML at the\nmoment, but it just looks so much nicer to me. ;-)\n\nRegards,\n--\nAlex", "msg_date": "Tue, 5 May 2020 13:39:46 +0200", "msg_from": "Oleksandr Shulgin <oleksandr.shulgin@zalando.de>", "msg_from_op": false, "msg_subject": "Re: Poll: are people okay with function/operator table redesign?" }, { "msg_contents": "Dean Rasheed <dean.a.rasheed@gmail.com> writes:\n> On Mon, 4 May 2020 at 22:22, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> Now I could have done this example by spelling out all six varieties of\n>> stddev() separately, and maybe I should've, but it seemed overly bulky\n>> that way.\n\n> FWIW, I prefer having each variety spelled out separately. For\n> example, I really like the new way that aggregates like sum() and\n> avg() are displayed. To me, having all the types listed like that is\n> much more readable than having to read and interpret a piece of free\n> text.\n\n> Similarly, for other functions like gcd(), lcm() and mod(). I think it\n> would be better to get rid of numeric_type, and just list all the\n> variants of each function.\n\nI had had the same idea to start with, but it didn't survive first contact\nwith table 9.4 (Mathematical Operators). It's not really reasonable to\nspell out all the variants of + ... 
especially not if you want to be\nprecise, because then you'd have to list the cross-type variants too.\nIf I counted correctly, there are fourteen variants of binary + that\nwould have to be listed in that table, never mind the other common\noperators.\n\nmax() and min() have a similar sort of problem --- the list of variants\nis just dauntingly long, and it's not that interesting either.\nI wrote out sum() and avg() the way I did because they have a somewhat\nirregular mapping from input to output types, so it seemed better to\njust list the alternatives explicitly.\n\nI don't object too much to spelling out the variants of stddev()\nand variance(), if there's a consensus for that. But getting rid\nof \"numeric_type\" entirely seems impractical.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 05 May 2020 10:07:03 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Poll: are people okay with function/operator table redesign?" }, { "msg_contents": "Oleksandr Shulgin <oleksandr.shulgin@zalando.de> writes:\n> Would it be premature to complain about the not-that-great look of Table\n> 9.1 now?\n> Compare the two attached images: the screenshot from\n> https://www.postgresql.org/docs/devel/functions-comparison.html\n> vs the GIMP-assisted pipe dream of mine to align it to the right edge of\n> the table cell.\n\nHmph. I experimented with the attached patch, but at least in my browser\nit only reduces the spacing inconsistency, it doesn't eliminate it.\nAnd from a semantic standpoint, this is not nice markup.\n\nDoing better would require substantial foolery with sub-columns and I'm\nnot even sure that it's possible to fix that way. 
(We don't have huge\ncontrol over inter-column spacing, I don't think.)\n\nOn the whole, if this is our worst table problem, I'm happy ;-)\n\n\t\t\tregards, tom lane", "msg_date": "Tue, 05 May 2020 11:17:24 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Poll: are people okay with function/operator table redesign?" } ]
[ { "msg_contents": "PGSQL Communities,\n\n\nWe migrated Oracle 11.x Database to PostgreSQL 12.x Database on a RH Linux\n7.x server.\nOn a different RH Linux 7.x Server, I have Oracle Client installed. Since\nwe have many scripts developed in Oracle SQL, is it possible for the\nPostgreSQL 12.x DB to process the Oracle Scripts? Are there utilities or\ndrivers that could be installed on the PostgreSQL 12.x Database or server\nfor processing the Oracle SQL client commands? We are trying to avoid\nupdating our Oracle Client scripts on remote servers.\n\nThanks\nFred\n", "msg_date": "Mon, 13 Apr 2020 15:48:37 -0400", "msg_from": "Fred Richard <frichard@gmail.com>", "msg_from_op": true, "msg_subject": "Using Oracle SQL Client commands with PSQL 12.2 DB" }, { "msg_contents": "Hi Fred,\n\nOn Mon, 13 Apr 2020 at 21:49, Fred Richard <frichard@gmail.com> wrote:\n\n> PGSQL Communities,\n>\n>\n> We migrated Oracle 11.x Database to PostgreSQL 12.x Database on a RH Linux\n> 7.x server.\n> On a different RH Linux 7.x Server, I have Oracle Client installed. Since\n> we have many scripts developed in Oracle SQL, is it possible for the\n> PostgreSQL 12.x DB to process the Oracle Scripts? Are there utilities or\n> drivers that could be installed on the PostgreSQL 12.x Database or server\n> for processing the Oracle SQL client commands? 
We are trying to avoid\n> updating our Oracle Client scripts on remote servers.\n>\n> Thanks\n> Fred\n>\n\nI removed the hackers' list as this is a standard question. The question is\nreally how far from standard SQL. If you stuck to the standard, you\nshouldn't have a big effort. If you didn't, the case should be evaluated\nmore carefully and you may have a bigger challenge.\n\nA first evaluation would be to fire the requests and see how much breaks.\n\nYou can also look at\nhttp://www.enterprisedb.com/enterprise-postgres/database-compatibility-oracle\nas it seems to be a (possibly commercial) valid answer.\n\nOrafce could be another one https://github.com/orafce/orafce\n\nI haven't tried these so I can't confirm how mature or applicable they are\nto your problem.\n\nHope it helps\nOlivier\n", "msg_date": "Mon, 13 Apr 2020 22:15:03 +0200", "msg_from": "Olivier Gautherot <ogautherot@gautherot.net>", "msg_from_op": false, "msg_subject": "Re: Using Oracle SQL Client commands with PSQL 12.2 DB" } ]
[ { "msg_contents": "Hi,\n \n I find that most of the code does not check the return value of close() when opening a file for reading (O_RDONLY).\n\n But I find that it checks the return value of close() in code \"src/bin/pg_rewind/copy_fetch.c\" when opening a file for reading (O_RDONLY).\n And it will call pg_fatal to cause premature exit. \n\n I think that when closing a read-only file fails, it should not exit the program early. It should ensure that the program execution is completed.\n Like below:\n\n・src/bin/pg_rewind/copy_fetch.c\n\nbefore\n--------------------------\nrewind_copy_file_range\n{\n...\nif (close(srcfd) != 0)\n\t\tpg_fatal(\"could not close file \\\"%s\\\": %m\", srcpath);\n}\n--------------------------\n\nafter\n--------------------------\nrewind_copy_file_range\n{\n...\n\tclose(srcfd);\n}\n-------------------------- \n \nRegards,\n--\nLin\n\n\n\n\n\n", "msg_date": "Tue, 14 Apr 2020 02:32:40 +0000", "msg_from": "\"Lin, Cuiping\" <lincuiping@cn.fujitsu.com>", "msg_from_op": true, "msg_subject": "Should program exit, When close() failed for O_RDONLY mode" }, { "msg_contents": "On Tue, Apr 14, 2020 at 02:32:40AM +0000, Lin, Cuiping wrote:\n> I find that most of the code does not check the return value of close() when opening a file for reading (O_RDONLY).\n> \n> But I find that it checks the return value of close() in code \"src/bin/pg_rewind/copy_fetch.c\" when opening a file for reading (O_RDONLY).\n\nI think ignoring the return value is a superior style. It is less code, and\nfailure \"can't happen.\"\n\n> And it will call pg_fatal to cause premature exit. \n> \n> I think that when closing a read-only file fails, it should not exit the program early. It should ensure that the program execution is completed.\n\nI would not say that. If close() does fail, something is badly wrong in the\nprogram or the system running it. 
Though I opt not to check the return value,\nif one does check it, exiting is a suitable response.\n\n\n", "msg_date": "Sun, 3 May 2020 10:18:27 -0700", "msg_from": "Noah Misch <noah@leadboat.com>", "msg_from_op": false, "msg_subject": "Re: Should program exit, When close() failed for O_RDONLY mode" }, { "msg_contents": "On Sun, May 03, 2020 at 10:18:27AM -0700, Noah Misch wrote:\n> I would not say that. If close() does fail, something is badly wrong in the\n> program or the system running it. Though I opt not to check the return value,\n> if one does check it, exiting is a suitable response.\n\nFWIW, it seems to me that we have an argument for copy_fetch.c that it\ncan be an advantage to know if something wrong is going on\nbeforehand: let's remember that after running pg_rewind, the target\nwill be started to replay up to its consistent point.\n--\nMichael", "msg_date": "Mon, 4 May 2020 09:20:07 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Should program exit, When close() failed for O_RDONLY mode" } ]
[ { "msg_contents": "Hi,\n\nI have observed row_number() is giving different results when query\nexecuted in parallel. is this expected w.r.t parallel execution.\n\nCREATE TABLE tbl1 (c1 INT) partition by list (c1);\nCREATE TABLE tbl1_p1 partition of tbl1 FOR VALUES IN (10);\nCREATE TABLE tbl1_p2 partition of tbl1 FOR VALUES IN (20);\nCREATE TABLE tbl1_p3 partition of tbl1 FOR VALUES IN (30);\n\nCREATE TABLE tbl2 (c1 INT, c2 INT,c3 INT) partition by list (c1);\nCREATE TABLE tbl2_p1 partition of tbl2 FOR VALUES IN (1);\nCREATE TABLE tbl2_p2 partition of tbl2 FOR VALUES IN (2);\nCREATE TABLE tbl2_p3 partition of tbl2 FOR VALUES IN (3);\nCREATE TABLE tbl2_p4 partition of tbl2 FOR VALUES IN (4);\nCREATE TABLE tbl2_p5 partition of tbl2 FOR VALUES IN (5);\n\nINSERT INTO tbl1 VALUES (10),(20),(30);\n\nINSERT INTO tbl2 VALUES\n(1,100,20),(2,200,10),(3,100,20),(4,100,30),(5,100,10);\n\npostgres=# explain select e.c2, row_number() over () from tbl1 d, tbl2 e\nwhere d.c1=e.c3;\n QUERY PLAN\n\n---------------------------------------------------------------------------------------\n WindowAgg (cost=1520.35..12287.73 rows=390150 width=12)\n -> Merge Join (cost=1520.35..7410.85 rows=390150 width=4)\n Merge Cond: (d.c1 = e.c3)\n -> Sort (cost=638.22..657.35 rows=7650 width=4)\n Sort Key: d.c1\n -> Append (cost=0.00..144.75 rows=7650 width=4)\n -> Seq Scan on tbl1_p1 d_1 (cost=0.00..35.50\nrows=2550 width=4)\n -> Seq Scan on tbl1_p2 d_2 (cost=0.00..35.50\nrows=2550 width=4)\n -> Seq Scan on tbl1_p3 d_3 (cost=0.00..35.50\nrows=2550 width=4)\n -> Sort (cost=882.13..907.63 rows=10200 width=8)\n Sort Key: e.c3\n -> Append (cost=0.00..203.00 rows=10200 width=8)\n -> Seq Scan on tbl2_p1 e_1 (cost=0.00..30.40\nrows=2040 width=8)\n -> Seq Scan on tbl2_p2 e_2 (cost=0.00..30.40\nrows=2040 width=8)\n -> Seq Scan on tbl2_p3 e_3 (cost=0.00..30.40\nrows=2040 width=8)\n -> Seq Scan on tbl2_p4 e_4 (cost=0.00..30.40\nrows=2040 width=8)\n -> Seq Scan on tbl2_p5 e_5 (cost=0.00..30.40\nrows=2040 
width=8)\n(17 rows)\n\npostgres=# select e.c2, row_number() over () from tbl1 d, tbl2 e where\nd.c1=e.c3;\n c2 | row_number\n-----+------------\n *200 | 1*\n 100 | 2\n 100 | 3\n 100 | 4\n 100 | 5\n(5 rows)\n\npostgres=#\npostgres=# set parallel_setup_cost = 0;\nSET\npostgres=# set parallel_tuple_cost = 0;\nSET\npostgres=#\npostgres=# explain select e.c2, row_number() over () from tbl1 d, tbl2 e\nwhere d.c1=e.c3;\n QUERY PLAN\n\n------------------------------------------------------------------------------------------------------\n WindowAgg (cost=130.75..7521.21 rows=390150 width=12)\n -> Gather (cost=130.75..2644.34 rows=390150 width=4)\n Workers Planned: 2\n -> Parallel Hash Join (cost=130.75..2644.34 rows=162562 width=4)\n Hash Cond: (e.c3 = d.c1)\n -> Parallel Append (cost=0.00..131.25 rows=4250 width=8)\n -> Parallel Seq Scan on tbl2_p1 e_1\n (cost=0.00..22.00 rows=1200 width=8)\n -> Parallel Seq Scan on tbl2_p2 e_2\n (cost=0.00..22.00 rows=1200 width=8)\n -> Parallel Seq Scan on tbl2_p3 e_3\n (cost=0.00..22.00 rows=1200 width=8)\n -> Parallel Seq Scan on tbl2_p4 e_4\n (cost=0.00..22.00 rows=1200 width=8)\n -> Parallel Seq Scan on tbl2_p5 e_5\n (cost=0.00..22.00 rows=1200 width=8)\n -> Parallel Hash (cost=90.93..90.93 rows=3186 width=4)\n -> Parallel Append (cost=0.00..90.93 rows=3186\nwidth=4)\n -> Parallel Seq Scan on tbl1_p1 d_1\n (cost=0.00..25.00 rows=1500 width=4)\n -> Parallel Seq Scan on tbl1_p2 d_2\n (cost=0.00..25.00 rows=1500 width=4)\n -> Parallel Seq Scan on tbl1_p3 d_3\n (cost=0.00..25.00 rows=1500 width=4)\n(16 rows)\n\npostgres=# select e.c2, row_number() over () from tbl1 d, tbl2 e where\nd.c1=e.c3;\n c2 | row_number\n-----+------------\n 100 | 1\n 100 | 2\n 100 | 3\n *200 | 4*\n 100 | 5\n(5 rows)\n\nThanks & Regards,\nRajkumar Raghuwanshi\n", "msg_date": "Tue, 14 Apr 2020 09:28:50 +0530", "msg_from": "Rajkumar Raghuwanshi <rajkumar.raghuwanshi@enterprisedb.com>", "msg_from_op": true, "msg_subject": "variation of row_number with parallel" }, { "msg_contents": "út 14. 4. 2020 v 5:59 odesílatel Rajkumar Raghuwanshi <\nrajkumar.raghuwanshi@enterprisedb.com> napsal:\n\n> Hi,\n>\n> I have observed row_number() is giving different results when query\n> executed in parallel. is this expected w.r.t parallel execution.\n>\n> CREATE TABLE tbl1 (c1 INT) partition by list (c1);\n> CREATE TABLE tbl1_p1 partition of tbl1 FOR VALUES IN (10);\n> CREATE TABLE tbl1_p2 partition of tbl1 FOR VALUES IN (20);\n> CREATE TABLE tbl1_p3 partition of tbl1 FOR VALUES IN (30);\n>\n> CREATE TABLE tbl2 (c1 INT, c2 INT,c3 INT) partition by list (c1);\n> CREATE TABLE tbl2_p1 partition of tbl2 FOR VALUES IN (1);\n> CREATE TABLE tbl2_p2 partition of tbl2 FOR VALUES IN (2);\n> CREATE TABLE tbl2_p3 partition of tbl2 FOR VALUES IN (3);\n> CREATE TABLE tbl2_p4 partition of tbl2 FOR VALUES IN (4);\n> CREATE TABLE tbl2_p5 partition of tbl2 FOR VALUES IN (5);\n>\n> INSERT INTO tbl1 VALUES (10),(20),(30);\n>\n> INSERT INTO tbl2 VALUES\n> (1,100,20),(2,200,10),(3,100,20),(4,100,30),(5,100,10);\n>\n> postgres=# explain select e.c2, row_number() over () from tbl1 d, tbl2 e\n> where d.c1=e.c3;\n> QUERY PLAN\n>\n>\n> ---------------------------------------------------------------------------------------\n> WindowAgg (cost=1520.35..12287.73 rows=390150 width=12)\n> -> Merge Join (cost=1520.35..7410.85 rows=390150 width=4)\n> Merge Cond: (d.c1 = e.c3)\n> -> Sort (cost=638.22..657.35 rows=7650 width=4)\n> Sort Key: d.c1\n> -> Append (cost=0.00..144.75 rows=7650 width=4)\n> -> Seq Scan on tbl1_p1 d_1 (cost=0.00..35.50\n> rows=2550 width=4)\n> -> Seq Scan on tbl1_p2 d_2 (cost=0.00..35.50\n> rows=2550 width=4)\n> -> Seq Scan on tbl1_p3 d_3 (cost=0.00..35.50\n> rows=2550 width=4)\n> -> Sort (cost=882.13..907.63 rows=10200 width=8)\n> Sort Key: e.c3\n> -> Append (cost=0.00..203.00 rows=10200 width=8)\n> -> Seq Scan on tbl2_p1 e_1 (cost=0.00..30.40\n> rows=2040 width=8)\n> -> Seq Scan on tbl2_p2 e_2 (cost=0.00..30.40\n> rows=2040 width=8)\n> -> Seq Scan on tbl2_p3 e_3 (cost=0.00..30.40\n> rows=2040 width=8)\n> -> Seq Scan on tbl2_p4 e_4 (cost=0.00..30.40\n> rows=2040 width=8)\n> -> Seq Scan on tbl2_p5 e_5 (cost=0.00..30.40\n> rows=2040 width=8)\n> (17 rows)\n>\n> postgres=# select e.c2, row_number() over () from tbl1 d, tbl2 e where\n> d.c1=e.c3;\n> c2 | row_number\n> -----+------------\n> *200 | 1*\n> 100 | 2\n> 100 | 3\n> 100 | 4\n> 100 | 5\n> (5 rows)\n>\n> postgres=#\n> postgres=# set parallel_setup_cost = 0;\n> SET\n> postgres=# set parallel_tuple_cost = 0;\n> SET\n> postgres=#\n> postgres=# explain select e.c2, row_number() over () from tbl1 d, tbl2 e\n> where d.c1=e.c3;\n> QUERY PLAN\n>\n>\n> ------------------------------------------------------------------------------------------------------\n> WindowAgg (cost=130.75..7521.21 rows=390150 width=12)\n> -> Gather (cost=130.75..2644.34 rows=390150 width=4)\n> Workers Planned: 2\n> -> Parallel Hash Join (cost=130.75..2644.34 rows=162562\n> width=4)\n> Hash Cond: (e.c3 = d.c1)\n> -> Parallel Append (cost=0.00..131.25 rows=4250 width=8)\n> -> Parallel Seq Scan on tbl2_p1 e_1\n> (cost=0.00..22.00 rows=1200 width=8)\n> -> Parallel Seq Scan on tbl2_p2 e_2\n> (cost=0.00..22.00 rows=1200 width=8)\n> -> Parallel Seq Scan on tbl2_p3 e_3\n> (cost=0.00..22.00 rows=1200 width=8)\n> -> Parallel Seq Scan on tbl2_p4 e_4\n> (cost=0.00..22.00 rows=1200 width=8)\n> -> Parallel Seq Scan on tbl2_p5 e_5\n> (cost=0.00..22.00 rows=1200 width=8)\n> -> Parallel Hash (cost=90.93..90.93 rows=3186 width=4)\n> -> Parallel Append (cost=0.00..90.93 rows=3186\n> width=4)\n> -> Parallel Seq Scan on tbl1_p1 d_1\n> (cost=0.00..25.00 rows=1500 width=4)\n> -> Parallel Seq Scan on tbl1_p2 d_2\n> (cost=0.00..25.00 rows=1500 width=4)\n> -> Parallel Seq Scan on tbl1_p3 d_3\n> (cost=0.00..25.00 rows=1500 width=4)\n> (16 rows)\n>\n> postgres=# select e.c2, row_number() over () from tbl1 d, tbl2 e where\n> d.c1=e.c3;\n> c2 | row_number\n> -----+------------\n> 100 | 1\n> 100 | 2\n> 100 | 3\n> *200 | 4*\n> 100 | 5\n> (5 rows)\n>\n\nthere are not ORDER BY clause, so order is not defined - paralel hash join\nsurely doesn't ensure a order.\n\nI think so this behave is expected.\n\nRegards\n\nPavel\n\n\n> Thanks & Regards,\n> Rajkumar Raghuwanshi\n>\n", "msg_date": "Tue, 14 Apr 2020 06:08:29 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": false, "msg_subject": "Re: variation of row_number with parallel" }, { "msg_contents": "On Tue, Apr 14, 2020 at 9:39 AM Pavel Stehule <pavel.stehule@gmail.com>\nwrote:\n\n>\n>\n> út 14. 4. 2020 v 5:59 odesílatel Rajkumar Raghuwanshi <\n> rajkumar.raghuwanshi@enterprisedb.com> napsal:\n>\n>> Hi,\n>>\n>> I have observed row_number() is giving different results when query\n>> executed in parallel. 
is this expected w.r.t parallel execution.\n>>\n>> CREATE TABLE tbl1 (c1 INT) partition by list (c1);\n>> CREATE TABLE tbl1_p1 partition of tbl1 FOR VALUES IN (10);\n>> CREATE TABLE tbl1_p2 partition of tbl1 FOR VALUES IN (20);\n>> CREATE TABLE tbl1_p3 partition of tbl1 FOR VALUES IN (30);\n>>\n>> CREATE TABLE tbl2 (c1 INT, c2 INT,c3 INT) partition by list (c1);\n>> CREATE TABLE tbl2_p1 partition of tbl2 FOR VALUES IN (1);\n>> CREATE TABLE tbl2_p2 partition of tbl2 FOR VALUES IN (2);\n>> CREATE TABLE tbl2_p3 partition of tbl2 FOR VALUES IN (3);\n>> CREATE TABLE tbl2_p4 partition of tbl2 FOR VALUES IN (4);\n>> CREATE TABLE tbl2_p5 partition of tbl2 FOR VALUES IN (5);\n>>\n>> INSERT INTO tbl1 VALUES (10),(20),(30);\n>>\n>> INSERT INTO tbl2 VALUES\n>> (1,100,20),(2,200,10),(3,100,20),(4,100,30),(5,100,10);\n>>\n>> postgres=# explain select e.c2, row_number() over () from tbl1 d, tbl2 e\n>> where d.c1=e.c3;\n>> QUERY PLAN\n>>\n>>\n>> ---------------------------------------------------------------------------------------\n>> WindowAgg (cost=1520.35..12287.73 rows=390150 width=12)\n>> -> Merge Join (cost=1520.35..7410.85 rows=390150 width=4)\n>> Merge Cond: (d.c1 = e.c3)\n>> -> Sort (cost=638.22..657.35 rows=7650 width=4)\n>> Sort Key: d.c1\n>> -> Append (cost=0.00..144.75 rows=7650 width=4)\n>> -> Seq Scan on tbl1_p1 d_1 (cost=0.00..35.50\n>> rows=2550 width=4)\n>> -> Seq Scan on tbl1_p2 d_2 (cost=0.00..35.50\n>> rows=2550 width=4)\n>> -> Seq Scan on tbl1_p3 d_3 (cost=0.00..35.50\n>> rows=2550 width=4)\n>> -> Sort (cost=882.13..907.63 rows=10200 width=8)\n>> Sort Key: e.c3\n>> -> Append (cost=0.00..203.00 rows=10200 width=8)\n>> -> Seq Scan on tbl2_p1 e_1 (cost=0.00..30.40\n>> rows=2040 width=8)\n>> -> Seq Scan on tbl2_p2 e_2 (cost=0.00..30.40\n>> rows=2040 width=8)\n>> -> Seq Scan on tbl2_p3 e_3 (cost=0.00..30.40\n>> rows=2040 width=8)\n>> -> Seq Scan on tbl2_p4 e_4 (cost=0.00..30.40\n>> rows=2040 width=8)\n>> -> Seq Scan on tbl2_p5 e_5 (cost=0.00..30.40\n>> rows=2040 
width=8)\n>> (17 rows)\n>>\n>> postgres=# select e.c2, row_number() over () from tbl1 d, tbl2 e where\n>> d.c1=e.c3;\n>> c2 | row_number\n>> -----+------------\n>> *200 | 1*\n>> 100 | 2\n>> 100 | 3\n>> 100 | 4\n>> 100 | 5\n>> (5 rows)\n>>\n>> postgres=#\n>> postgres=# set parallel_setup_cost = 0;\n>> SET\n>> postgres=# set parallel_tuple_cost = 0;\n>> SET\n>> postgres=#\n>> postgres=# explain select e.c2, row_number() over () from tbl1 d, tbl2 e\n>> where d.c1=e.c3;\n>> QUERY PLAN\n>>\n>>\n>> ------------------------------------------------------------------------------------------------------\n>> WindowAgg (cost=130.75..7521.21 rows=390150 width=12)\n>> -> Gather (cost=130.75..2644.34 rows=390150 width=4)\n>> Workers Planned: 2\n>> -> Parallel Hash Join (cost=130.75..2644.34 rows=162562\n>> width=4)\n>> Hash Cond: (e.c3 = d.c1)\n>> -> Parallel Append (cost=0.00..131.25 rows=4250 width=8)\n>> -> Parallel Seq Scan on tbl2_p1 e_1\n>> (cost=0.00..22.00 rows=1200 width=8)\n>> -> Parallel Seq Scan on tbl2_p2 e_2\n>> (cost=0.00..22.00 rows=1200 width=8)\n>> -> Parallel Seq Scan on tbl2_p3 e_3\n>> (cost=0.00..22.00 rows=1200 width=8)\n>> -> Parallel Seq Scan on tbl2_p4 e_4\n>> (cost=0.00..22.00 rows=1200 width=8)\n>> -> Parallel Seq Scan on tbl2_p5 e_5\n>> (cost=0.00..22.00 rows=1200 width=8)\n>> -> Parallel Hash (cost=90.93..90.93 rows=3186 width=4)\n>> -> Parallel Append (cost=0.00..90.93 rows=3186\n>> width=4)\n>> -> Parallel Seq Scan on tbl1_p1 d_1\n>> (cost=0.00..25.00 rows=1500 width=4)\n>> -> Parallel Seq Scan on tbl1_p2 d_2\n>> (cost=0.00..25.00 rows=1500 width=4)\n>> -> Parallel Seq Scan on tbl1_p3 d_3\n>> (cost=0.00..25.00 rows=1500 width=4)\n>> (16 rows)\n>>\n>> postgres=# select e.c2, row_number() over () from tbl1 d, tbl2 e where\n>> d.c1=e.c3;\n>> c2 | row_number\n>> -----+------------\n>> 100 | 1\n>> 100 | 2\n>> 100 | 3\n>> *200 | 4*\n>> 100 | 5\n>> (5 rows)\n>>\n>\n> there are not ORDER BY clause, so order is not defined - paralel hash join\n> surely 
doesn't ensure a order.\n> I think so this behave is expected.\n>\n thanks.\n\n>\n> Regards\n>\n> Pavel\n>\n>\n>> Thanks & Regards,\n>> Rajkumar Raghuwanshi\n>>\n>\n", "msg_date": "Tue, 14 Apr 2020 09:48:42 +0530", "msg_from": "Rajkumar Raghuwanshi <rajkumar.raghuwanshi@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: variation of row_number with parallel" } ]
[ { "msg_contents": "Hi,\n\nMaybe I am missing something obvious, but is it intentional that\nenable_indexscan is checked by cost_index(), that is, *after* creating\nan index path? I was expecting that if enable_indexscan is off, then\nno index paths would be generated to begin with, because I thought\nthey are optional.\n\n-- \n\nAmit Langote\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 14 Apr 2020 15:43:36 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": true, "msg_subject": "index paths and enable_indexscan" }, { "msg_contents": "On Tue, Apr 14, 2020 at 2:44 PM Amit Langote <amitlangote09@gmail.com>\nwrote:\n\n> Hi,\n>\n> Maybe I am missing something obvious, but is it intentional that\n> enable_indexscan is checked by cost_index(), that is, *after* creating\n> an index path? I was expecting that if enable_indexscan is off, then\n> no index paths would be generated to begin with, because I thought\n> they are optional.\n>\n\nI think the cost estimate of index paths is the same as other paths on\nthat setting enable_xxx to off only adds a penalty factor (disable_cost)\nto the path's cost. The path would be still generated and compete with\nother paths in add_path().\n\nThanks\nRichard\n", "msg_date": "Tue, 14 Apr 2020 15:12:56 +0800", "msg_from": "Richard Guo <guofenglinux@gmail.com>", "msg_from_op": false, "msg_subject": "Re: index paths and enable_indexscan" }, { "msg_contents": "On Tue, Apr 14, 2020 at 4:13 PM Richard Guo <guofenglinux@gmail.com> wrote:\n> On Tue, Apr 14, 2020 at 2:44 PM Amit Langote <amitlangote09@gmail.com> wrote:\n>> Maybe I am missing something obvious, but is it intentional that\n>> enable_indexscan is checked by cost_index(), that is, *after* creating\n>> an index path? I was expecting that if enable_indexscan is off, then\n>> no index paths would be generated to begin with, because I thought\n>> they are optional.\n>\n>\n> I think the cost estimate of index paths is the same as other paths on\n> that setting enable_xxx to off only adds a penalty factor (disable_cost)\n> to the path's cost. The path would be still generated and compete with\n> other paths in add_path().\n\nYeah, but I am asking why build the path to begin with, as there will\nalways be seq scan path for base rels. 
Turning enable_hashjoin off,\nfor example, means that no hash join paths will be built at all.\n\nLooking into the archives, I see that the idea of \"not generating\ndisabled paths to begin with\" was discussed quite recently:\nhttps://www.postgresql.org/message-id/29821.1572706653%40sss.pgh.pa.us\n\n-- \n\nAmit Langote\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 14 Apr 2020 16:39:46 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": true, "msg_subject": "Re: index paths and enable_indexscan" }, { "msg_contents": "On Tue, Apr 14, 2020 at 3:40 PM Amit Langote <amitlangote09@gmail.com>\nwrote:\n\n> On Tue, Apr 14, 2020 at 4:13 PM Richard Guo <guofenglinux@gmail.com>\n> wrote:\n> > On Tue, Apr 14, 2020 at 2:44 PM Amit Langote <amitlangote09@gmail.com>\n> wrote:\n> >> Maybe I am missing something obvious, but is it intentional that\n> >> enable_indexscan is checked by cost_index(), that is, *after* creating\n> >> an index path? I was expecting that if enable_indexscan is off, then\n> >> no index paths would be generated to begin with, because I thought\n> >> they are optional.\n> >\n> >\n> > I think the cost estimate of index paths is the same as other paths on\n> > that setting enable_xxx to off only adds a penalty factor (disable_cost)\n> > to the path's cost. The path would be still generated and compete with\n> > other paths in add_path().\n>\n> Yeah, but I am asking why build the path to begin with, as there will\n> always be seq scan path for base rels.\n\n\nI guess that is because user may disable seqscan as well. If so, we\nstill need formula to decide with one to use, which requires index path\nhas to be calculated. 
but since disabling the two at the same time is rare,\nwe can ignore the index path build  if user allow seqscan\n\n\n> Turning enable_hashjoin off,\n> for example, means that no hash join paths will be built at all.\n>\n>\nAs for join, the difference is even user allows a join method by setting,\nbut the planner may still not able to use it. so the disabled path still\nneed\nto be used. Consider query \"select * from t1, t2 where f(t1.a, t2.a) = 3\",\nand user setting is enable_nestloop = off, enable_hashjoin = on.\nBut I think it is still possible to ignore the path generating after\nsome extra checking.\n\n\n> Looking into the archives, I see that the idea of \"not generating\n> disabled paths to begin with\" was discussed quite recently:\n> https://www.postgresql.org/message-id/29821.1572706653%40sss.pgh.pa.us\n>\n> --\n>\n> Amit Langote\n> EnterpriseDB: http://www.enterprisedb.com\n>\n>\n>", "msg_date": "Tue, 14 Apr 2020 16:29:16 +0800", "msg_from": "Andy Fan <zhihui.fan1213@gmail.com>", "msg_from_op": false, "msg_subject": "Re: index paths and enable_indexscan" }, { "msg_contents": "On Tue, Apr 14, 2020 at 5:29 PM Andy Fan <zhihui.fan1213@gmail.com> wrote:\n> On Tue, Apr 14, 2020 at 3:40 PM Amit Langote <amitlangote09@gmail.com> wrote:\n>> On Tue, Apr 14, 2020 at 4:13 PM Richard Guo <guofenglinux@gmail.com> wrote:\n>> > On Tue, Apr 14, 2020 at 2:44 PM Amit Langote <amitlangote09@gmail.com> wrote:\n>> >> Maybe I am missing something obvious, but is it intentional that\n>> >> enable_indexscan is checked by cost_index(), that is, *after* creating\n>> >> an index path?  I was expecting that if enable_indexscan is off, then\n>> >> no index paths would be generated to begin with, because I thought\n>> >> they are optional.\n>> >\n>> > I think the cost estimate of index paths is the same as other paths on\n>> > that setting enable_xxx to off only adds a penalty factor (disable_cost)\n>> > to the path's cost.
The path would be still generated and compete with\n>> > other paths in add_path().\n>>\n>> Yeah, but I am asking why build the path to begin with, as there will\n>> always be seq scan path for base rels.\n>\n> I guess that is because user may disable seqscan as well. If so, we\n> still need formula to decide with one to use, which requires index path\n> has to be calculated. but since disabling the two at the same time is rare,\n> we can ignore the index path build if user allow seqscan\n\nI am saying that instead of building index path with disabled cost,\njust don't build it at all. A base rel will always have a sequetial\npath, even though with disabled cost if enable_seqscan = off.\n\n--\n\nAmit Langote\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 14 Apr 2020 17:58:36 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": true, "msg_subject": "Re: index paths and enable_indexscan" }, { "msg_contents": "On Tue, Apr 14, 2020 at 4:58 PM Amit Langote <amitlangote09@gmail.com>\nwrote:\n\n> On Tue, Apr 14, 2020 at 5:29 PM Andy Fan <zhihui.fan1213@gmail.com> wrote:\n> > On Tue, Apr 14, 2020 at 3:40 PM Amit Langote <amitlangote09@gmail.com>\n> wrote:\n> >> On Tue, Apr 14, 2020 at 4:13 PM Richard Guo <guofenglinux@gmail.com>\n> wrote:\n> >> > On Tue, Apr 14, 2020 at 2:44 PM Amit Langote <amitlangote09@gmail.com>\n> wrote:\n> >> >> Maybe I am missing something obvious, but is it intentional that\n> >> >> enable_indexscan is checked by cost_index(), that is, *after*\n> creating\n> >> >> an index path? I was expecting that if enable_indexscan is off, then\n> >> >> no index paths would be generated to begin with, because I thought\n> >> >> they are optional.\n> >> >\n> >> > I think the cost estimate of index paths is the same as other paths on\n> >> > that setting enable_xxx to off only adds a penalty factor\n> (disable_cost)\n> >> > to the path's cost. 
The path would be still generated and compete with\n> >> > other paths in add_path().\n> >>\n> >> Yeah, but I am asking why build the path to begin with, as there will\n> >> always be seq scan path for base rels.\n> >\n> > I guess that is because user may disable seqscan as well. If so, we\n> > still need formula to decide with one to use, which requires index path\n> > has to be calculated. but since disabling the two at the same time is\n> rare,\n> > we can ignore the index path build if user allow seqscan\n>\n> I am saying that instead of building index path with disabled cost,\n> just don't build it at all. A base rel will always have a sequetial\n> path, even though with disabled cost if enable_seqscan = off.\n>\n\nLet's say user set enable_seqscan=off and set enable_indexscan=off;\nwill you expect user to get seqscan at last? If so, why is seqscan\n(rather than index scan) since both are disabled by user equally?\n\n\n> Amit Langote\n> EnterpriseDB: http://www.enterprisedb.com\n>\n\nOn Tue, Apr 14, 2020 at 4:58 PM Amit Langote <amitlangote09@gmail.com> wrote:On Tue, Apr 14, 2020 at 5:29 PM Andy Fan <zhihui.fan1213@gmail.com> wrote:\n> On Tue, Apr 14, 2020 at 3:40 PM Amit Langote <amitlangote09@gmail.com> wrote:\n>> On Tue, Apr 14, 2020 at 4:13 PM Richard Guo <guofenglinux@gmail.com> wrote:\n>> > On Tue, Apr 14, 2020 at 2:44 PM Amit Langote <amitlangote09@gmail.com> wrote:\n>> >> Maybe I am missing something obvious, but is it intentional that\n>> >> enable_indexscan is checked by cost_index(), that is, *after* creating\n>> >> an index path?  I was expecting that if enable_indexscan is off, then\n>> >> no index paths would be generated to begin with, because I thought\n>> >> they are optional.\n>> >\n>> > I think the cost estimate of index paths is the same as other paths on\n>> > that setting enable_xxx to off only adds a penalty factor (disable_cost)\n>> > to the path's cost. 
If so, why is seqscan\n(rather than index scan) since both are disabled by user equally?\n\nAmit Langote\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Tue, 14 Apr 2020 17:12:22 +0800", "msg_from": "Andy Fan <zhihui.fan1213@gmail.com>", "msg_from_op": false, "msg_subject": "Re: index paths and enable_indexscan" }, { "msg_contents": "On Tue, Apr 14, 2020 at 5:12 PM Andy Fan <zhihui.fan1213@gmail.com> wrote:\n\n>\n>\n> On Tue, Apr 14, 2020 at 4:58 PM Amit Langote <amitlangote09@gmail.com>\n> wrote:\n>
I was expecting that if enable_indexscan is off,\n>> then\n>> >> >> no index paths would be generated to begin with, because I thought\n>> >> >> they are optional.\n>> >> >\n>> >> > I think the cost estimate of index paths is the same as other paths\n>> on\n>> >> > that setting enable_xxx to off only adds a penalty factor\n>> (disable_cost)\n>> >> > to the path's cost. The path would be still generated and compete\n>> with\n>> >> > other paths in add_path().\n>> >>\n>> >> Yeah, but I am asking why build the path to begin with, as there will\n>> >> always be seq scan path for base rels.\n>> >\n>> > I guess that is because user may disable seqscan as well. If so, we\n>> > still need formula to decide with one to use, which requires index path\n>> > has to be calculated. but since disabling the two at the same time is\n>> rare,\n>> > we can ignore the index path build if user allow seqscan\n>>\n>> I am saying that instead of building index path with disabled cost,\n>> just don't build it at all. A base rel will always have a sequetial\n>> path, even though with disabled cost if enable_seqscan = off.\n>>\n>\n> Let's say user set enable_seqscan=off and set enable_indexscan=off;\n> will you expect user to get seqscan at last? 
If so, why is seqscan\n> (rather than index scan) since both are disabled by user equally?\n>\n>\nThe following test should demonstrate what I think.\n\ndemo=# create table t(a int);\nCREATE TABLE\ndemo=# insert into t select generate_series(1, 10000000);\nINSERT 0 10000000\ndemo=# create index t_a on t(a);\nCREATE INDEX\ndemo=# analyze t;\nANALYZE\ndemo=# set enable_seqscan to off;\nSET\ndemo=# set enable_indexscan to off;\nSET\ndemo=# set enable_bitmapscan to off;\nSET\ndemo=# set enable_indexonlyscan to off;\nSET\ndemo=# explain select * from t where a = 1;\n QUERY PLAN\n---------------------------------------------------------------------------------\n Index Scan using t_a on t (cost=10000000000.43..10000000008.45 rows=1\nwidth=4)\n Index Cond: (a = 1)\n(2 rows)\n\nIf we just disable index path, we will get seqscan at last.\n\nRegards\nAndy Fan\n\n\n>> Amit Langote\n>> EnterpriseDB: http://www.enterprisedb.com\n>>\n>
", "msg_date": "Tue, 14 Apr 2020 17:20:32 +0800", "msg_from": "Andy Fan <zhihui.fan1213@gmail.com>", "msg_from_op": false, "msg_subject": "Re: index paths and enable_indexscan" }, { "msg_contents": "On Tue, Apr 14, 2020 at 6:12 PM Andy Fan <zhihui.fan1213@gmail.com> wrote:\n> On Tue, Apr 14, 2020 at 4:58 PM Amit Langote <amitlangote09@gmail.com> wrote:\n>> I am saying that instead of building index path with disabled cost,\n>> just don't build it at all. A base rel will always have a sequetial\n>> path, even though with disabled cost if enable_seqscan = off.\n>\n> Let's say user set  enable_seqscan=off and set enable_indexscan=off;\n> will you expect user to get seqscan at last?  If so, why is seqscan\n> (rather than index scan) since both are disabled by user equally?\n\nI was really thinking of this in terms of planner effort, which for\ncreating an index path is more than creating sequential path, although\nsure the payoff can be great. That is, I want the planner to avoid\ncreating index paths *to save cycles*, but see no way of making that\nhappen. I was thinking disabling enable_indexscan would do the trick.\n\n--\n\n\nAmit Langote\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 14 Apr 2020 23:16:27 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": true, "msg_subject": "Re: index paths and enable_indexscan" }, { "msg_contents": "Amit Langote <amitlangote09@gmail.com> writes:\n> I am saying that instead of building index path with disabled cost,\n> just don't build it at all. A base rel will always have a sequetial\n> path, even though with disabled cost if enable_seqscan = off.\n\nAwhile back I'd looked into getting rid of disable_cost altogether\nby dint of not generating disabled paths. It's harder than it\nsounds.
We could perhaps change this particular case, but it's\nnot clear that there's any real benefit of making this one change\nin isolation.\n\nNote that you can't just put a big OFF switch at the start of indxpath.c,\nbecause enable_indexscan and enable_bitmapscan are distinct switches,\nbut the code to generate those path types is inextricably intertwined.\nSkipping individual paths further down on the basis of the appropriate\nswitch would be fairly subtle and perhaps bug-prone. The existing\nimplementation of those switches has the advantages of being trivially\nsimple and clearly correct (for some value of \"correct\").\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 14 Apr 2020 10:20:50 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: index paths and enable_indexscan" }, { "msg_contents": "Amit Langote <amitlangote09@gmail.com> writes:\n> I was really thinking of this in terms of planner effort, which for\n> creating an index path is more than creating sequential path, although\n> sure the payoff can be great. That is, I want the planner to avoid\n> creating index paths *to save cycles*, but see no way of making that\n> happen. I was thinking disabling enable_indexscan would do the trick.\n\nI think that's completely misguided, because in point of fact nobody\nis going to care about the planner's performance with enable_indexscan\nturned off. It's not an interesting production case.\n\nAll of these enable_xxx switches exist just for debug purposes, and so\nthe right way to think about them is \"what's the simplest, least\nbug-prone, lowest-maintenance way to get the effect?\".\n\nLikewise, I don't actually much care what results you get if you turn\noff *all* of them. 
It's not a useful case to spend our effort on.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 14 Apr 2020 10:34:10 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: index paths and enable_indexscan" }, { "msg_contents": "On Tue, Apr 14, 2020 at 10:21 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Awhile back I'd looked into getting rid of disable_cost altogether\n> by dint of not generating disabled paths. It's harder than it\n> sounds. We could perhaps change this particular case, but it's\n> not clear that there's any real benefit of making this one change\n> in isolation.\n\nI like the idea and have had the same thought before. I wondered\nwhether we could arrange to generate paths for a rel and then if we\nend up with no paths, do it again ignoring the disable flags. It\ndidn't seem entirely easy to rearrange things to work that way,\nthough.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Tue, 14 Apr 2020 14:43:35 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: index paths and enable_indexscan" } ]
[ { "msg_contents": "Hi,\n\nWhen initializing an incremental sort node, we have the following as\nof ExecInitIncrementalSort():\n /*\n * Incremental sort can't be used with either EXEC_FLAG_REWIND,\n * EXEC_FLAG_BACKWARD or EXEC_FLAG_MARK, because we only one of many sort\n * batches in the current sort state.\n */\n Assert((eflags & (EXEC_FLAG_BACKWARD |\n EXEC_FLAG_MARK)) == 0);\nWhile I don't quite follow why EXEC_FLAG_REWIND should be allowed here\nto begin with (because incremental sorts don't support rescans without\nparameter changes, right?), the comment and the assertion are telling\na different story. And I can see that child nodes of an\nIncrementalSort one use a set of eflags where these three are removed:\n /*\n * Initialize child nodes.\n *\n * We shield the child node from the need to support REWIND, BACKWARD, or\n * MARK/RESTORE.\n */\n eflags &= ~(EXEC_FLAG_REWIND | EXEC_FLAG_BACKWARD | EXEC_FLAG_MARK);\n\nI can also spot one case in the regression tests where we actually pass\ndown a REWIND flag (see incremental_sort.sql) when initializing an\nIncrementalSort node:\n-- We force the planner to choose a plan with incremental sort on the right side\n-- of a nested loop join node. 
That way we trigger the rescan code path.\nset local enable_hashjoin = off;\nset local enable_mergejoin = off;\nset local enable_material = off;\nset local enable_sort = off;\nexplain (costs off) select * from t left join (select * from (select *\nfrom t order by a) v order by a, b) s on s.a = t.a where t.a in (1,\n2);\nselect * from t left join (select * from (select * from t order by a)\nv order by a, b) s on s.a = t.a where t.a in (1, 2);\n\nAlexander, Tomas, any thoughts?\n--\nMichael", "msg_date": "Tue, 14 Apr 2020 15:53:36 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": true, "msg_subject": "Incremental sorts and EXEC_FLAG_REWIND" }, { "msg_contents": "On Tue, Apr 14, 2020 at 2:53 AM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> Hi,\n>\n> When initializing an incremental sort node, we have the following as\n> of ExecInitIncrementalSort():\n> /*\n> * Incremental sort can't be used with either EXEC_FLAG_REWIND,\n> * EXEC_FLAG_BACKWARD or EXEC_FLAG_MARK, because we only one of many sort\n> * batches in the current sort state.\n> */\n> Assert((eflags & (EXEC_FLAG_BACKWARD |\n> EXEC_FLAG_MARK)) == 0);\n> While I don't quite follow why EXEC_FLAG_REWIND should be allowed here\n> to begin with (because incremental sorts don't support rescans without\n> parameter changes, right?), the comment and the assertion are telling\n> a different story.\n\nI remember changing this assertion in response to an issue I'd found\nwhich led to rewriting the rescan implementation, but I must have\nmissed updating the comment.\n\n> And I can see that child nodes of an\n> IncrementalSort one use a set of eflags where these three are removed:\n> /*\n> * Initialize child nodes.\n> *\n> * We shield the child node from the need to support REWIND, BACKWARD, or\n> * MARK/RESTORE.\n> */\n> eflags &= ~(EXEC_FLAG_REWIND | EXEC_FLAG_BACKWARD | EXEC_FLAG_MARK);\n>\n> I can also spot one case in the regression tests where we actually pass\n> down a REWIND flag (see 
incremental_sort.sql) when initializing an\n> IncrementalSort node:\n> -- We force the planner to choose a plan with incremental sort on the right side\n> -- of a nested loop join node. That way we trigger the rescan code path.\n> set local enable_hashjoin = off;\n> set local enable_mergejoin = off;\n> set local enable_material = off;\n> set local enable_sort = off;\n> explain (costs off) select * from t left join (select * from (select *\n> from t order by a) v order by a, b) s on s.a = t.a where t.a in (1,\n> 2);\n> select * from t left join (select * from (select * from t order by a)\n> v order by a, b) s on s.a = t.a where t.a in (1, 2);\n>\n> Alexander, Tomas, any thoughts?\n\nI'll try to respond more fully later today when I can dig up the\nspecific change.\n\nIn the meantime, your question is primarily about making sure the\ncode/comments/etc. are consistent and not a behavioral problem or\nfailure you've seen in testing?\n\nJames\n\n\n", "msg_date": "Wed, 15 Apr 2020 11:02:41 -0400", "msg_from": "James Coleman <jtc331@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Incremental sorts and EXEC_FLAG_REWIND" }, { "msg_contents": "On Wed, Apr 15, 2020 at 11:02 AM James Coleman <jtc331@gmail.com> wrote:\n>\n> On Tue, Apr 14, 2020 at 2:53 AM Michael Paquier <michael@paquier.xyz> wrote:\n> >\n> > Hi,\n> >\n> > When initializing an incremental sort node, we have the following as\n> > of ExecInitIncrementalSort():\n> > /*\n> > * Incremental sort can't be used with either EXEC_FLAG_REWIND,\n> > * EXEC_FLAG_BACKWARD or EXEC_FLAG_MARK, because we only one of many sort\n> > * batches in the current sort state.\n> > */\n> > Assert((eflags & (EXEC_FLAG_BACKWARD |\n> > EXEC_FLAG_MARK)) == 0);\n> > While I don't quite follow why EXEC_FLAG_REWIND should be allowed here\n> > to begin with (because incremental sorts don't support rescans without\n> > parameter changes, right?), the comment and the assertion are telling\n> > a different story.\n>\n> I remember changing 
this assertion in response to an issue I'd found\n> which led to rewriting the rescan implementation, but I must have\n> missed updating the comment.\n\nAll right, here are the most relevant messages:\n\n[1]: Here I'd said:\n----------\nWhile working on finding a test case to show rescan isn't implemented\nproperly yet, I came across a bug. At the top of\nExecInitIncrementalSort, we assert that eflags does not contain\nEXEC_FLAG_REWIND. But the following query (with merge and hash joins\ndisabled) breaks that assertion:\n\nselect * from t join (select * from t order by a, b) s on s.a = t.a\nwhere t.a in (1,2);\n\nThe comments about this flag in src/include/executor/executor.h say:\n\n* REWIND indicates that the plan node should try to efficiently support\n* rescans without parameter changes. (Nodes must support ExecReScan calls\n* in any case, but if this flag was not given, they are at liberty to do it\n* through complete recalculation. Note that a parameter change forces a\n* full recalculation in any case.)\n\nNow we know that except in rare cases (as just discussed recently up\nthread) we can't implement rescan efficiently.\n\nSo is this a planner bug (i.e., should we try not to generate\nincremental sort plans that require efficient rewind)? Or can we just\nremove that part of the assertion and know that we'll implement the\nrescan, albeit inefficiently? We already explicitly declare that we\ndon't support backwards scanning, but I don't see a way to declare the\nsame for rewind.\n----------\n\nSo it seems to me that we can't disallow REWIND, and we have to\nsupport rescan, but, we could try to mitigate the effects (without a\nparam change) with a materialize node, as noted below.\n\n[2]: Here, in response to my questioning above if this was a planner\nbug, I'd said:\n----------\nOther nodes seem to get a materialization node placed above them to\nsupport this case \"better\". 
Is that something we should be doing?\n----------\n\nI never got any reply on this point; if we _did_ introduce a\nmaterialize node here, then it would mean we could start disallowing\nREWIND again. See the email for full details of a specific plan that I\nencountered that reproduced this.\n\nThoughts?\n\n> In the meantime, your question is primarily about making sure the\n> code/comments/etc. are consistent and not a behavioral problem or\n> failure you've seen in testing?\n\nStill want to confirm this is the case.\n\nJames\n\n[1]: https://www.postgresql.org/message-id/CAAaqYe9%2Bap2SbU_E2WaC4F9ZMF4oa%3DpJZ1NBwaKDMP6GFUA77g%40mail.gmail.com\n[2]: https://www.postgresql.org/message-id/CAAaqYe-sOp2o%3DL7nvGZDJ6GsL9%3Db_ztrGE1rhyi%2BF82p3my2bQ%40mail.gmail.com\n\n\n", "msg_date": "Wed, 15 Apr 2020 14:04:23 -0400", "msg_from": "James Coleman <jtc331@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Incremental sorts and EXEC_FLAG_REWIND" }, { "msg_contents": "On Wed, Apr 15, 2020 at 2:04 PM James Coleman <jtc331@gmail.com> wrote:\n>\n> On Wed, Apr 15, 2020 at 11:02 AM James Coleman <jtc331@gmail.com> wrote:\n> >\n> > On Tue, Apr 14, 2020 at 2:53 AM Michael Paquier <michael@paquier.xyz> wrote:\n> > >\n> > > Hi,\n> > >\n> > > When initializing an incremental sort node, we have the following as\n> > > of ExecInitIncrementalSort():\n> > > /*\n> > > * Incremental sort can't be used with either EXEC_FLAG_REWIND,\n> > > * EXEC_FLAG_BACKWARD or EXEC_FLAG_MARK, because we only one of many sort\n> > > * batches in the current sort state.\n> > > */\n> > > Assert((eflags & (EXEC_FLAG_BACKWARD |\n> > > EXEC_FLAG_MARK)) == 0);\n> > > While I don't quite follow why EXEC_FLAG_REWIND should be allowed here\n> > > to begin with (because incremental sorts don't support rescans without\n> > > parameter changes, right?), the comment and the assertion are telling\n> > > a different story.\n> >\n> > I remember changing this assertion in response to an issue I'd found\n> > which led to 
rewriting the rescan implementation, but I must have\n> > missed updating the comment.\n>\n> All right, here are the most relevant messages:\n>\n> [1]: Here I'd said:\n> ----------\n> While working on finding a test case to show rescan isn't implemented\n> properly yet, I came across a bug. At the top of\n> ExecInitIncrementalSort, we assert that eflags does not contain\n> EXEC_FLAG_REWIND. But the following query (with merge and hash joins\n> disabled) breaks that assertion:\n>\n> select * from t join (select * from t order by a, b) s on s.a = t.a\n> where t.a in (1,2);\n>\n> The comments about this flag in src/include/executor/executor.h say:\n>\n> * REWIND indicates that the plan node should try to efficiently support\n> * rescans without parameter changes. (Nodes must support ExecReScan calls\n> * in any case, but if this flag was not given, they are at liberty to do it\n> * through complete recalculation. Note that a parameter change forces a\n> * full recalculation in any case.)\n>\n> Now we know that except in rare cases (as just discussed recently up\n> thread) we can't implement rescan efficiently.\n>\n> So is this a planner bug (i.e., should we try not to generate\n> incremental sort plans that require efficient rewind)? Or can we just\n> remove that part of the assertion and know that we'll implement the\n> rescan, albeit inefficiently? We already explicitly declare that we\n> don't support backwards scanning, but I don't see a way to declare the\n> same for rewind.\n> ----------\n>\n> So it seems to me that we can't disallow REWIND, and we have to\n> support rescan, but, we could try to mitigate the effects (without a\n> param change) with a materialize node, as noted below.\n>\n> [2]: Here, in response to my questioning above if this was a planner\n> bug, I'd said:\n> ----------\n> Other nodes seem to get a materialization node placed above them to\n> support this case \"better\". 
Is that something we should be doing?\n> ----------\n>\n> I never got any reply on this point; if we _did_ introduce a\n> materialize node here, then it would mean we could start disallowing\n> REWIND again. See the email for full details of a specific plan that I\n> encountered that reproduced this.\n>\n> Thoughts?\n>\n> > In the meantime, your question is primarily about making sure the\n> > code/comments/etc. are consistent and not a behavioral problem or\n> > failure you've seen in testing?\n>\n> Still want to confirm this is the case.\n>\n> James\n>\n> [1]: https://www.postgresql.org/message-id/CAAaqYe9%2Bap2SbU_E2WaC4F9ZMF4oa%3DpJZ1NBwaKDMP6GFUA77g%40mail.gmail.com\n> [2]: https://www.postgresql.org/message-id/CAAaqYe-sOp2o%3DL7nvGZDJ6GsL9%3Db_ztrGE1rhyi%2BF82p3my2bQ%40mail.gmail.com\n\nLooking at this more, I think this is definitely suspect. The current\ncode shields lower nodes from EXEC_FLAG_BACKWARD and EXEC_FLAG_MARK --\nthe former is definitely fine because we declare that we don't support\nbackwards scans. The latter seems like the same reasoning would apply,\nbut unfortunately we didn't add it to ExecSupportsMarkRestore, so I've\nattached a patch to do that.\n\nThe EXEC_FLAG_REWIND situation though I'm still not clear on -- given\nthe comments/docs seem to suggest it's a hint for efficiency rather\nthan something that has to work or be declared as not implemented, so\nit seems like one of the following should be the outcome:\n\n1. \"Disallow\" it by only generating materialize nodes above the\nincremental sort node if REWIND will be required. I'm not sure if this\nwould mean that incremental sort just wouldn't be useful in that case?\n2. Keep the existing implementation where we basically ignore REWIND\nand use our more inefficient implementation. 
In this case, I believe\nwe need to stop shielding child nodes from REWIND, though, since we\naren't actually storing the full result set and will instead be\nre-executing the child nodes.\n\nI've attached a patch to take course (2), since it's the easiest to\nimplement. But I'd still like feedback on what we should do here,\nbecause I don't feel like I actually know what the semantics expected\nof the executor/planner are on this point. If we do go with this\napproach, someone should verify my comment additions about\nmaterialize nodes are correct.\n\nJames", "msg_date": "Sun, 19 Apr 2020 12:14:39 -0400", "msg_from": "James Coleman <jtc331@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Incremental sorts and EXEC_FLAG_REWIND" }, { "msg_contents": "On Sun, Apr 19, 2020 at 12:14 PM James Coleman <jtc331@gmail.com> wrote:\n>\n> On Wed, Apr 15, 2020 at 2:04 PM James Coleman <jtc331@gmail.com> wrote:\n> >\n> > On Wed, Apr 15, 2020 at 11:02 AM James Coleman <jtc331@gmail.com> wrote:\n> > >\n> > > On Tue, Apr 14, 2020 at 2:53 AM Michael Paquier <michael@paquier.xyz> wrote:\n> > > >\n> > > > Hi,\n> > > >\n> > > > When initializing an incremental sort node, we have the following as\n> > > > of ExecInitIncrementalSort():\n> > > > /*\n> > > > * Incremental sort can't be used with either EXEC_FLAG_REWIND,\n> > > > * EXEC_FLAG_BACKWARD or EXEC_FLAG_MARK, because we only one of many sort\n> > > > * batches in the current sort state.\n> > > > */\n> > > > Assert((eflags & (EXEC_FLAG_BACKWARD |\n> > > > EXEC_FLAG_MARK)) == 0);\n> > > > While I don't quite follow why EXEC_FLAG_REWIND should be allowed here\n> > > > to begin with (because incremental sorts don't support rescans without\n> > > > parameter changes, right?), the comment and the assertion are telling\n> > > > a different story.\n> > >\n> > > I remember changing this assertion in response to an issue I'd found\n> > > which led to rewriting the rescan implementation, but I must have\n> > > missed updating the
comment.\n> >\n> > All right, here are the most relevant messages:\n> >\n> > [1]: Here I'd said:\n> > ----------\n> > While working on finding a test case to show rescan isn't implemented\n> > properly yet, I came across a bug. At the top of\n> > ExecInitIncrementalSort, we assert that eflags does not contain\n> > EXEC_FLAG_REWIND. But the following query (with merge and hash joins\n> > disabled) breaks that assertion:\n> >\n> > select * from t join (select * from t order by a, b) s on s.a = t.a\n> > where t.a in (1,2);\n> >\n> > The comments about this flag in src/include/executor/executor.h say:\n> >\n> > * REWIND indicates that the plan node should try to efficiently support\n> > * rescans without parameter changes. (Nodes must support ExecReScan calls\n> > * in any case, but if this flag was not given, they are at liberty to do it\n> > * through complete recalculation. Note that a parameter change forces a\n> > * full recalculation in any case.)\n> >\n> > Now we know that except in rare cases (as just discussed recently up\n> > thread) we can't implement rescan efficiently.\n> >\n> > So is this a planner bug (i.e., should we try not to generate\n> > incremental sort plans that require efficient rewind)? Or can we just\n> > remove that part of the assertion and know that we'll implement the\n> > rescan, albeit inefficiently? We already explicitly declare that we\n> > don't support backwards scanning, but I don't see a way to declare the\n> > same for rewind.\n> > ----------\n> >\n> > So it seems to me that we can't disallow REWIND, and we have to\n> > support rescan, but, we could try to mitigate the effects (without a\n> > param change) with a materialize node, as noted below.\n> >\n> > [2]: Here, in response to my questioning above if this was a planner\n> > bug, I'd said:\n> > ----------\n> > Other nodes seem to get a materialization node placed above them to\n> > support this case \"better\". 
Is that something we should be doing?\n> > ----------\n> >\n> > I never got any reply on this point; if we _did_ introduce a\n> > materialize node here, then it would mean we could start disallowing\n> > REWIND again. See the email for full details of a specific plan that I\n> > encountered that reproduced this.\n> >\n> > Thoughts?\n> >\n> > > In the meantime, your question is primarily about making sure the\n> > > code/comments/etc. are consistent and not a behavioral problem or\n> > > failure you've seen in testing?\n> >\n> > Still want to confirm this is the case.\n> >\n> > James\n> >\n> > [1]: https://www.postgresql.org/message-id/CAAaqYe9%2Bap2SbU_E2WaC4F9ZMF4oa%3DpJZ1NBwaKDMP6GFUA77g%40mail.gmail.com\n> > [2]: https://www.postgresql.org/message-id/CAAaqYe-sOp2o%3DL7nvGZDJ6GsL9%3Db_ztrGE1rhyi%2BF82p3my2bQ%40mail.gmail.com\n>\n> Looking at this more, I think this is definitely suspect. The current\n> code shields lower nodes from EXEC_FLAG_BACKWARD and EXEC_FLAG_MARK --\n> the former is definitely fine because we declare that we don't support\n> backwards scans. The latter seems like the same reasoning would apply,\n> but unfortunately we didn't add it to ExecSupportsMarkRestore, so I've\n> attached a patch to do that.\n>\n> The EXEC_FLAG_REWIND situation though I'm still not clear on -- given\n> the comments/docs seem to suggest it's a hint for efficiency rather\n> than something that has to work or be declared as not implemented, so\n> it seems like one of the following should be the outcome:\n>\n> 1. \"Disallow\" it by only generating materialize nodes above the\n> incremental sort node if REWIND will be required. I'm not sure if this\n> would mean that incremental sort just wouldn't be useful in that case?\n> 2. Keep the existing implementation where we basically ignore REWIND\n> and use our more inefficient implementation. 
In this case, I believe\n> we need to stop shielding child nodes from REWIND, though, since we we\n> aren't actually storing the full result set and will instead be\n> re-executing the child nodes.\n>\n> I've attached a patch to take course (2), since it's the easiest to\n> implement. But I'd still like feedback on what we should do here,\n> because I don't feel like I actually know what the semantics expected\n> of the executor/planner are on this point. If we do go with this\n> approach, someone should verify my comments additions about\n> materialize nodes is correct.\n\nI also happened to notice that in rescan we are always setting\nnode->bounded = false. I was under the impression that\nExecSetTupleBound would be called *after* ExecReScanIncrementalSort,\nbut looking at both ExecSetTupleBound and ExecReScanSort, it seems\nthat the inverse is true. Therefore if we set this to false each time,\nthen we lose any possibility of using the bounded optimization for all\nrescans.\n\nI've added a tiny patch (minus one line) to the earlier patch series\nto fix that.\n\n\nJames", "msg_date": "Fri, 24 Apr 2020 16:35:02 -0400", "msg_from": "James Coleman <jtc331@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Incremental sorts and EXEC_FLAG_REWIND" }, { "msg_contents": "On Fri, Apr 24, 2020 at 04:35:02PM -0400, James Coleman wrote:\n>On Sun, Apr 19, 2020 at 12:14 PM James Coleman <jtc331@gmail.com> wrote:\n>>\n>> On Wed, Apr 15, 2020 at 2:04 PM James Coleman <jtc331@gmail.com> wrote:\n>> >\n>> > On Wed, Apr 15, 2020 at 11:02 AM James Coleman <jtc331@gmail.com> wrote:\n>> > >\n>> > > On Tue, Apr 14, 2020 at 2:53 AM Michael Paquier <michael@paquier.xyz> wrote:\n>> > > >\n>> > > > Hi,\n>> > > >\n>> > > > When initializing an incremental sort node, we have the following as\n>> > > > of ExecInitIncrementalSort():\n>> > > > /*\n>> > > > * Incremental sort can't be used with either EXEC_FLAG_REWIND,\n>> > > > * EXEC_FLAG_BACKWARD or EXEC_FLAG_MARK, because we only one
of many sort\n>> > > > * batches in the current sort state.\n>> > > > */\n>> > > > Assert((eflags & (EXEC_FLAG_BACKWARD |\n>> > > > EXEC_FLAG_MARK)) == 0);\n>> > > > While I don't quite follow why EXEC_FLAG_REWIND should be allowed here\n>> > > > to begin with (because incremental sorts don't support rescans without\n>> > > > parameter changes, right?), the comment and the assertion are telling\n>> > > > a different story.\n>> > >\n>> > > I remember changing this assertion in response to an issue I'd found\n>> > > which led to rewriting the rescan implementation, but I must have\n>> > > missed updating the comment.\n>> >\n>> > All right, here are the most relevant messages:\n>> >\n>> > [1]: Here I'd said:\n>> > ----------\n>> > While working on finding a test case to show rescan isn't implemented\n>> > properly yet, I came across a bug. At the top of\n>> > ExecInitIncrementalSort, we assert that eflags does not contain\n>> > EXEC_FLAG_REWIND. But the following query (with merge and hash joins\n>> > disabled) breaks that assertion:\n>> >\n>> > select * from t join (select * from t order by a, b) s on s.a = t.a\n>> > where t.a in (1,2);\n>> >\n>> > The comments about this flag in src/include/executor/executor.h say:\n>> >\n>> > * REWIND indicates that the plan node should try to efficiently support\n>> > * rescans without parameter changes. (Nodes must support ExecReScan calls\n>> > * in any case, but if this flag was not given, they are at liberty to do it\n>> > * through complete recalculation. Note that a parameter change forces a\n>> > * full recalculation in any case.)\n>> >\n>> > Now we know that except in rare cases (as just discussed recently up\n>> > thread) we can't implement rescan efficiently.\n>> >\n>> > So is this a planner bug (i.e., should we try not to generate\n>> > incremental sort plans that require efficient rewind)? Or can we just\n>> > remove that part of the assertion and know that we'll implement the\n>> > rescan, albeit inefficiently? 
We already explicitly declare that we\n>> > don't support backwards scanning, but I don't see a way to declare the\n>> > same for rewind.\n>> > ----------\n>> >\n>> > So it seems to me that we can't disallow REWIND, and we have to\n>> > support rescan, but, we could try to mitigate the effects (without a\n>> > param change) with a materialize node, as noted below.\n>> >\n>> > [2]: Here, in response to my questioning above if this was a planner\n>> > bug, I'd said:\n>> > ----------\n>> > Other nodes seem to get a materialization node placed above them to\n>> > support this case \"better\". Is that something we should be doing?\n>> > ----------\n>> >\n>> > I never got any reply on this point; if we _did_ introduce a\n>> > materialize node here, then it would mean we could start disallowing\n>> > REWIND again. See the email for full details of a specific plan that I\n>> > encountered that reproduced this.\n>> >\n>> > Thoughts?\n>> >\n>> > > In the meantime, your question is primarily about making sure the\n>> > > code/comments/etc. are consistent and not a behavioral problem or\n>> > > failure you've seen in testing?\n>> >\n>> > Still want to confirm this is the case.\n>> >\n>> > James\n>> >\n>> > [1]: https://www.postgresql.org/message-id/CAAaqYe9%2Bap2SbU_E2WaC4F9ZMF4oa%3DpJZ1NBwaKDMP6GFUA77g%40mail.gmail.com\n>> > [2]: https://www.postgresql.org/message-id/CAAaqYe-sOp2o%3DL7nvGZDJ6GsL9%3Db_ztrGE1rhyi%2BF82p3my2bQ%40mail.gmail.com\n>>\n>> Looking at this more, I think this is definitely suspect. The current\n>> code shields lower nodes from EXEC_FLAG_BACKWARD and EXEC_FLAG_MARK --\n>> the former is definitely fine because we declare that we don't support\n>> backwards scans. 
The latter seems like the same reasoning would apply,\n>> but unfortunately we didn't add it to ExecSupportsMarkRestore, so I've\n>> attached a patch to do that.\n>>\n>> The EXEC_FLAG_REWIND situation though I'm still not clear on -- given\n>> the comments/docs seem to suggest it's a hint for efficiency rather\n>> than something that has to work or be declared as not implemented, so\n>> it seems like one of the following should be the outcome:\n>>\n>> 1. \"Disallow\" it by only generating materialize nodes above the\n>> incremental sort node if REWIND will be required. I'm not sure if this\n>> would mean that incremental sort just wouldn't be useful in that case?\n>> 2. Keep the existing implementation where we basically ignore REWIND\n>> and use our more inefficient implementation. In this case, I believe\n>> we need to stop shielding child nodes from REWIND, though, since we we\n>> aren't actually storing the full result set and will instead be\n>> re-executing the child nodes.\n>>\n>> I've attached a patch to take course (2), since it's the easiest to\n>> implement. But I'd still like feedback on what we should do here,\n>> because I don't feel like I actually know what the semantics expected\n>> of the executor/planner are on this point. If we do go with this\n>> approach, someone should verify my comments additions about\n>> materialize nodes is correct.\n>\n>I also happened to noticed that in rescan we are always setting\n>node->bounded = false. I was under the impression that\n>ExecSetTupleBound would be called *after* ExecReScanIncrementalSort,\n>but looking at both ExecSetTupleBound and ExecReScanSort, but it seems\n>that the inverse is true. Therefore if we set this to false each time,\n>then we lose any possibility of using the bounded optimization for all\n>rescans.\n>\n>I've added a tiny patch (minus one line) to the earlier patch series\n>to fix that.\n>\n\nThanks. 
I'll take a look at the issue and fixes.\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Sat, 25 Apr 2020 00:57:09 +0200", "msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Incremental sorts and EXEC_FLAG_REWIND" }, { "msg_contents": "On 4/24/20 6:57 PM, Tomas Vondra wrote:\n> On Fri, Apr 24, 2020 at 04:35:02PM -0400, James Coleman wrote:\n>> On Sun, Apr 19, 2020 at 12:14 PM James Coleman <jtc331@gmail.com> wrote:\n>>>\n>>> On Wed, Apr 15, 2020 at 2:04 PM James Coleman <jtc331@gmail.com> wrote:\n>>> >\n>>> > On Wed, Apr 15, 2020 at 11:02 AM James Coleman <jtc331@gmail.com>\n>>> wrote:\n>>> > >\n>>> > > On Tue, Apr 14, 2020 at 2:53 AM Michael Paquier\n>>> <michael@paquier.xyz> wrote:\n>>> > > >\n>>> > > > Hi,\n>>> > > >\n>>> > > > When initializing an incremental sort node, we have the\n>>> following as\n>>> > > > of ExecInitIncrementalSort():\n>>> > > >     /*\n>>> > > >      * Incremental sort can't be used with either\n>>> EXEC_FLAG_REWIND,\n>>> > > >      * EXEC_FLAG_BACKWARD or EXEC_FLAG_MARK, because we only\n>>> one of many sort\n>>> > > >      * batches in the current sort state.\n>>> > > >      */\n>>> > > >      Assert((eflags & (EXEC_FLAG_BACKWARD |\n>>> > > >                        EXEC_FLAG_MARK)) == 0);\n>>> > > > While I don't quite follow why EXEC_FLAG_REWIND should be\n>>> allowed here\n>>> > > > to begin with (because incremental sorts don't support rescans\n>>> without\n>>> > > > parameter changes, right?), the comment and the assertion are\n>>> telling\n>>> > > > a different story.\n>>> > >\n>>> > > I remember changing this assertion in response to an issue I'd found\n>>> > > which led to rewriting the rescan implementation, but I must have\n>>> > > missed updating the comment.\n>>> >\n>>> > All right, here are the most relevant messages:\n>>> >\n>>> > [1]: Here I'd said:\n>>> > ----------\n>>> 
> While working on finding a test case to show rescan isn't implemented\n>>> > properly yet, I came across a bug. At the top of\n>>> > ExecInitIncrementalSort, we assert that eflags does not contain\n>>> > EXEC_FLAG_REWIND. But the following query (with merge and hash joins\n>>> > disabled) breaks that assertion:\n>>> >\n>>> > select * from t join (select * from t order by a, b) s on s.a = t.a\n>>> > where t.a in (1,2);\n>>> >\n>>> > The comments about this flag in src/include/executor/executor.h say:\n>>> >\n>>> > * REWIND indicates that the plan node should try to efficiently\n>>> support\n>>> > * rescans without parameter changes. (Nodes must support ExecReScan\n>>> calls\n>>> > * in any case, but if this flag was not given, they are at liberty\n>>> to do it\n>>> > * through complete recalculation. Note that a parameter change\n>>> forces a\n>>> > * full recalculation in any case.)\n>>> >\n>>> > Now we know that except in rare cases (as just discussed recently up\n>>> > thread) we can't implement rescan efficiently.\n>>> >\n>>> > So is this a planner bug (i.e., should we try not to generate\n>>> > incremental sort plans that require efficient rewind)? Or can we just\n>>> > remove that part of the assertion and know that we'll implement the\n>>> > rescan, albeit inefficiently? We already explicitly declare that we\n>>> > don't support backwards scanning, but I don't see a way to declare the\n>>> > same for rewind.\n>>> > ----------\n>>> >\n>>> > So it seems to me that we can't disallow REWIND, and we have to\n>>> > support rescan, but, we could try to mitigate the effects (without a\n>>> > param change) with a materialize node, as noted below.\n>>> >\n>>> > [2]: Here, in response to my questioning above if this was a planner\n>>> > bug, I'd said:\n>>> > ----------\n>>> > Other nodes seem to get a materialization node placed above them to\n>>> > support this case \"better\". 
Is that something we should be doing?\n>>> > ----------\n>>> >\n>>> > I never got any reply on this point; if we _did_ introduce a\n>>> > materialize node here, then it would mean we could start disallowing\n>>> > REWIND again. See the email for full details of a specific plan that I\n>>> > encountered that reproduced this.\n>>> >\n>>> > Thoughts?\n>>> >\n>>> > > In the meantime, your question is primarily about making sure the\n>>> > > code/comments/etc. are consistent and not a behavioral problem or\n>>> > > failure you've seen in testing?\n>>> >\n>>> > Still want to confirm this is the case.\n>>> >\n>>> > James\n>>> >\n>>> > [1]:\n>>> https://www.postgresql.org/message-id/CAAaqYe9%2Bap2SbU_E2WaC4F9ZMF4oa%3DpJZ1NBwaKDMP6GFUA77g%40mail.gmail.com\n>>>\n>>> > [2]:\n>>> https://www.postgresql.org/message-id/CAAaqYe-sOp2o%3DL7nvGZDJ6GsL9%3Db_ztrGE1rhyi%2BF82p3my2bQ%40mail.gmail.com\n>>>\n>>>\n>>> Looking at this more, I think this is definitely suspect. The current\n>>> code shields lower nodes from EXEC_FLAG_BACKWARD and EXEC_FLAG_MARK --\n>>> the former is definitely fine because we declare that we don't support\n>>> backwards scans. The latter seems like the same reasoning would apply,\n>>> but unfortunately we didn't add it to ExecSupportsMarkRestore, so I've\n>>> attached a patch to do that.\n>>>\n>>> The EXEC_FLAG_REWIND situation though I'm still not clear on -- given\n>>> the comments/docs seem to suggest it's a hint for efficiency rather\n>>> than something that has to work or be declared as not implemented, so\n>>> it seems like one of the following should be the outcome:\n>>>\n>>> 1. \"Disallow\" it by only generating materialize nodes above the\n>>> incremental sort node if REWIND will be required. I'm not sure if this\n>>> would mean that incremental sort just wouldn't be useful in that case?\n>>> 2. Keep the existing implementation where we basically ignore REWIND\n>>> and use our more inefficient implementation. 
In this case, I believe\n>>> we need to stop shielding child nodes from REWIND, though, since we we\n>>> aren't actually storing the full result set and will instead be\n>>> re-executing the child nodes.\n>>>\n>>> I've attached a patch to take course (2), since it's the easiest to\n>>> implement. But I'd still like feedback on what we should do here,\n>>> because I don't feel like I actually know what the semantics expected\n>>> of the executor/planner are on this point. If we do go with this\n>>> approach, someone should verify my comments additions about\n>>> materialize nodes is correct.\n>>\n>> I also happened to noticed that in rescan we are always setting\n>> node->bounded = false. I was under the impression that\n>> ExecSetTupleBound would be called *after* ExecReScanIncrementalSort,\n>> but looking at both ExecSetTupleBound and ExecReScanSort, but it seems\n>> that the inverse is true. Therefore if we set this to false each time,\n>> then we lose any possibility of using the bounded optimization for all\n>> rescans.\n>>\n>> I've added a tiny patch (minus one line) to the earlier patch series\n>> to fix that.\n>>\n>\n> Thanks. I'll take a look at the issue and fixes.\n\nWith Beta 1 just around the corner[1], I wanted to check in to see if\nthis was closer to being committed so we could close off the open\nitem[2] prior to the beta release.\n\nThanks!\n\nJonathan\n\n[1]\nhttps://www.postgresql.org/message-id/b782a4ec-5e8e-21a7-f628-624be683e6d6@postgresql.org\n[2] https://wiki.postgresql.org/wiki/PostgreSQL_13_Open_Items", "msg_date": "Thu, 7 May 2020 14:56:50 -0400", "msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>", "msg_from_op": false, "msg_subject": "Re: Incremental sorts and EXEC_FLAG_REWIND" }, { "msg_contents": "On Thu, May 7, 2020 at 2:57 PM Jonathan S.
Katz <jkatz@postgresql.org> wrote:\n>\n> On 4/24/20 6:57 PM, Tomas Vondra wrote:\n> > On Fri, Apr 24, 2020 at 04:35:02PM -0400, James Coleman wrote:\n> >> On Sun, Apr 19, 2020 at 12:14 PM James Coleman <jtc331@gmail.com> wrote:\n> >>>\n> >>> On Wed, Apr 15, 2020 at 2:04 PM James Coleman <jtc331@gmail.com> wrote:\n> >>> >\n> >>> > On Wed, Apr 15, 2020 at 11:02 AM James Coleman <jtc331@gmail.com>\n> >>> wrote:\n> >>> > >\n> >>> > > On Tue, Apr 14, 2020 at 2:53 AM Michael Paquier\n> >>> <michael@paquier.xyz> wrote:\n> >>> > > >\n> >>> > > > Hi,\n> >>> > > >\n> >>> > > > When initializing an incremental sort node, we have the\n> >>> following as\n> >>> > > > of ExecInitIncrementalSort():\n> >>> > > > /*\n> >>> > > > * Incremental sort can't be used with either\n> >>> EXEC_FLAG_REWIND,\n> >>> > > > * EXEC_FLAG_BACKWARD or EXEC_FLAG_MARK, because we only\n> >>> one of many sort\n> >>> > > > * batches in the current sort state.\n> >>> > > > */\n> >>> > > > Assert((eflags & (EXEC_FLAG_BACKWARD |\n> >>> > > > EXEC_FLAG_MARK)) == 0);\n> >>> > > > While I don't quite follow why EXEC_FLAG_REWIND should be\n> >>> allowed here\n> >>> > > > to begin with (because incremental sorts don't support rescans\n> >>> without\n> >>> > > > parameter changes, right?), the comment and the assertion are\n> >>> telling\n> >>> > > > a different story.\n> >>> > >\n> >>> > > I remember changing this assertion in response to an issue I'd found\n> >>> > > which led to rewriting the rescan implementation, but I must have\n> >>> > > missed updating the comment.\n> >>> >\n> >>> > All right, here are the most relevant messages:\n> >>> >\n> >>> > [1]: Here I'd said:\n> >>> > ----------\n> >>> > While working on finding a test case to show rescan isn't implemented\n> >>> > properly yet, I came across a bug. At the top of\n> >>> > ExecInitIncrementalSort, we assert that eflags does not contain\n> >>> > EXEC_FLAG_REWIND. 
But the following query (with merge and hash joins\n> >>> > disabled) breaks that assertion:\n> >>> >\n> >>> > select * from t join (select * from t order by a, b) s on s.a = t.a\n> >>> > where t.a in (1,2);\n> >>> >\n> >>> > The comments about this flag in src/include/executor/executor.h say:\n> >>> >\n> >>> > * REWIND indicates that the plan node should try to efficiently\n> >>> support\n> >>> > * rescans without parameter changes. (Nodes must support ExecReScan\n> >>> calls\n> >>> > * in any case, but if this flag was not given, they are at liberty\n> >>> to do it\n> >>> > * through complete recalculation. Note that a parameter change\n> >>> forces a\n> >>> > * full recalculation in any case.)\n> >>> >\n> >>> > Now we know that except in rare cases (as just discussed recently up\n> >>> > thread) we can't implement rescan efficiently.\n> >>> >\n> >>> > So is this a planner bug (i.e., should we try not to generate\n> >>> > incremental sort plans that require efficient rewind)? Or can we just\n> >>> > remove that part of the assertion and know that we'll implement the\n> >>> > rescan, albeit inefficiently? We already explicitly declare that we\n> >>> > don't support backwards scanning, but I don't see a way to declare the\n> >>> > same for rewind.\n> >>> > ----------\n> >>> >\n> >>> > So it seems to me that we can't disallow REWIND, and we have to\n> >>> > support rescan, but, we could try to mitigate the effects (without a\n> >>> > param change) with a materialize node, as noted below.\n> >>> >\n> >>> > [2]: Here, in response to my questioning above if this was a planner\n> >>> > bug, I'd said:\n> >>> > ----------\n> >>> > Other nodes seem to get a materialization node placed above them to\n> >>> > support this case \"better\". Is that something we should be doing?\n> >>> > ----------\n> >>> >\n> >>> > I never got any reply on this point; if we _did_ introduce a\n> >>> > materialize node here, then it would mean we could start disallowing\n> >>> > REWIND again. 
See the email for full details of a specific plan that I\n> >>> > encountered that reproduced this.\n> >>> >\n> >>> > Thoughts?\n> >>> >\n> >>> > > In the meantime, your question is primarily about making sure the\n> >>> > > code/comments/etc. are consistent and not a behavioral problem or\n> >>> > > failure you've seen in testing?\n> >>> >\n> >>> > Still want to confirm this is the case.\n> >>> >\n> >>> > James\n> >>> >\n> >>> > [1]:\n> >>> https://www.postgresql.org/message-id/CAAaqYe9%2Bap2SbU_E2WaC4F9ZMF4oa%3DpJZ1NBwaKDMP6GFUA77g%40mail.gmail.com\n> >>>\n> >>> > [2]:\n> >>> https://www.postgresql.org/message-id/CAAaqYe-sOp2o%3DL7nvGZDJ6GsL9%3Db_ztrGE1rhyi%2BF82p3my2bQ%40mail.gmail.com\n> >>>\n> >>>\n> >>> Looking at this more, I think this is definitely suspect. The current\n> >>> code shields lower nodes from EXEC_FLAG_BACKWARD and EXEC_FLAG_MARK --\n> >>> the former is definitely fine because we declare that we don't support\n> >>> backwards scans. The latter seems like the same reasoning would apply,\n> >>> but unfortunately we didn't add it to ExecSupportsMarkRestore, so I've\n> >>> attached a patch to do that.\n> >>>\n> >>> The EXEC_FLAG_REWIND situation though I'm still not clear on -- given\n> >>> the comments/docs seem to suggest it's a hint for efficiency rather\n> >>> than something that has to work or be declared as not implemented, so\n> >>> it seems like one of the following should be the outcome:\n> >>>\n> >>> 1. \"Disallow\" it by only generating materialize nodes above the\n> >>> incremental sort node if REWIND will be required. I'm not sure if this\n> >>> would mean that incremental sort just wouldn't be useful in that case?\n> >>> 2. Keep the existing implementation where we basically ignore REWIND\n> >>> and use our more inefficient implementation. 
In this case, I believe\n> >>> we need to stop shielding child nodes from REWIND, though, since we we\n> >>> aren't actually storing the full result set and will instead be\n> >>> re-executing the child nodes.\n> >>>\n> >>> I've attached a patch to take course (2), since it's the easiest to\n> >>> implement. But I'd still like feedback on what we should do here,\n> >>> because I don't feel like I actually know what the semantics expected\n> >>> of the executor/planner are on this point. If we do go with this\n> >>> approach, someone should verify my comments additions about\n> >>> materialize nodes is correct.\n> >>\n> >> I also happened to noticed that in rescan we are always setting\n> >> node->bounded = false. I was under the impression that\n> >> ExecSetTupleBound would be called *after* ExecReScanIncrementalSort,\n> >> but looking at both ExecSetTupleBound and ExecReScanSort, but it seems\n> >> that the inverse is true. Therefore if we set this to false each time,\n> >> then we lose any possibility of using the bounded optimization for all\n> >> rescans.\n> >>\n> >> I've added a tiny patch (minus one line) to the earlier patch series\n> >> to fix that.\n> >>\n> >\n> > Thanks. 
I'll take a look at the issue and fixes.\n>\n> With Beta 1 just around the corner[1], I wanted to check in to see if\n> this was closer to being committed so we could close off the open\n> item[2] prior the beta release.\n>\n> Thanks!\n>\n> Jonathan\n>\n> [1]\n> https://www.postgresql.org/message-id/b782a4ec-5e8e-21a7-f628-624be683e6d6@postgresql.org\n> [2] https://wiki.postgresql.org/wiki/PostgreSQL_13_Open_Items\n\n\nTangential, I know, but I think we should consider [1] (includes a\nvery minor patch) part of the open items on incremental sort also:\n\nJames\n\n[1]: https://www.postgresql.org/message-id/CAEudQAqxf+mbirkO7pAdL61Qw8U8cF_QnEaL101L0tbBUocoQg@mail.gmail.com\n\n\n", "msg_date": "Thu, 7 May 2020 15:58:47 -0400", "msg_from": "James Coleman <jtc331@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Incremental sorts and EXEC_FLAG_REWIND" }, { "msg_contents": "On Thu, May 07, 2020 at 03:58:47PM -0400, James Coleman wrote:\n>On Thu, May 7, 2020 at 2:57 PM Jonathan S. Katz <jkatz@postgresql.org> wrote:\n>>\n>> ...\n>>\n>> With Beta 1 just around the corner[1], I wanted to check in to see if\n>> this was closer to being committed so we could close off the open\n>> item[2] prior the beta release.\n>>\n>> Thanks!\n>>\n>> Jonathan\n>>\n>> [1]\n>> https://www.postgresql.org/message-id/b782a4ec-5e8e-21a7-f628-624be683e6d6@postgresql.org\n>> [2] https://wiki.postgresql.org/wiki/PostgreSQL_13_Open_Items\n>\n>\n>Tangential, I know, but I think we should consider [1] (includes a\n>very minor patch) part of the open items on incremental sort also:\n>\n\nYes, I do plan to get both those issues fixed in the next couple of days\n(certainly in time for beta 1).\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Thu, 7 May 2020 23:07:11 +0200", "msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Incremental sorts and
EXEC_FLAG_REWIND" }, { "msg_contents": "On 5/7/20 5:07 PM, Tomas Vondra wrote:\n> On Thu, May 07, 2020 at 03:58:47PM -0400, James Coleman wrote:\n>> On Thu, May 7, 2020 at 2:57 PM Jonathan S. Katz <jkatz@postgresql.org>\n>> wrote:\n>>>\n>>> ...\n>>>\n>>> With Beta 1 just around the corner[1], I wanted to check in to see if\n>>> this was closer to being committed so we could close off the open\n>>> item[2] prior the beta release.\n>>>\n>>> Thanks!\n>>>\n>>> Jonathan\n>>>\n>>> [1]\n>>> https://www.postgresql.org/message-id/b782a4ec-5e8e-21a7-f628-624be683e6d6@postgresql.org\n>>>\n>>> [2] https://wiki.postgresql.org/wiki/PostgreSQL_13_Open_Items\n>>\n>>\n>> Tangential, I know, but I think we should consider [1] (includes a\n>> very minor patch) part of the open items on incremental sort also:\n>>\n> \n> Yes, I do plan to get both those issues fixed in the next coupld of days\n> (certainly in time for beta 1).\n\nExcellent - thank you!\n\nJonathan", "msg_date": "Thu, 7 May 2020 18:54:17 -0400", "msg_from": "\"Jonathan S. 
Katz\" <jkatz@postgresql.org>", "msg_from_op": false, "msg_subject": "Re: Incremental sorts and EXEC_FLAG_REWIND" }, { "msg_contents": "On Fri, Apr 24, 2020 at 04:35:02PM -0400, James Coleman wrote:\n>On Sun, Apr 19, 2020 at 12:14 PM James Coleman <jtc331@gmail.com> wrote:\n>>\n>> On Wed, Apr 15, 2020 at 2:04 PM James Coleman <jtc331@gmail.com> wrote:\n>> >\n>> > On Wed, Apr 15, 2020 at 11:02 AM James Coleman <jtc331@gmail.com> wrote:\n>> > >\n>> > > On Tue, Apr 14, 2020 at 2:53 AM Michael Paquier <michael@paquier.xyz> wrote:\n>> > > >\n>> > > > Hi,\n>> > > >\n>> > > > When initializing an incremental sort node, we have the following as\n>> > > > of ExecInitIncrementalSort():\n>> > > > /*\n>> > > > * Incremental sort can't be used with either EXEC_FLAG_REWIND,\n>> > > > * EXEC_FLAG_BACKWARD or EXEC_FLAG_MARK, because we only one of many sort\n>> > > > * batches in the current sort state.\n>> > > > */\n>> > > > Assert((eflags & (EXEC_FLAG_BACKWARD |\n>> > > > EXEC_FLAG_MARK)) == 0);\n>> > > > While I don't quite follow why EXEC_FLAG_REWIND should be allowed here\n>> > > > to begin with (because incremental sorts don't support rescans without\n>> > > > parameter changes, right?), the comment and the assertion are telling\n>> > > > a different story.\n>> > >\n>> > > I remember changing this assertion in response to an issue I'd found\n>> > > which led to rewriting the rescan implementation, but I must have\n>> > > missed updating the comment.\n>> >\n>> > All right, here are the most relevant messages:\n>> >\n>> > [1]: Here I'd said:\n>> > ----------\n>> > While working on finding a test case to show rescan isn't implemented\n>> > properly yet, I came across a bug. At the top of\n>> > ExecInitIncrementalSort, we assert that eflags does not contain\n>> > EXEC_FLAG_REWIND. 
But the following query (with merge and hash joins\n>> > disabled) breaks that assertion:\n>> >\n>> > select * from t join (select * from t order by a, b) s on s.a = t.a\n>> > where t.a in (1,2);\n>> >\n>> > The comments about this flag in src/include/executor/executor.h say:\n>> >\n>> > * REWIND indicates that the plan node should try to efficiently support\n>> > * rescans without parameter changes. (Nodes must support ExecReScan calls\n>> > * in any case, but if this flag was not given, they are at liberty to do it\n>> > * through complete recalculation. Note that a parameter change forces a\n>> > * full recalculation in any case.)\n>> >\n>> > Now we know that except in rare cases (as just discussed recently up\n>> > thread) we can't implement rescan efficiently.\n>> >\n>> > So is this a planner bug (i.e., should we try not to generate\n>> > incremental sort plans that require efficient rewind)? Or can we just\n>> > remove that part of the assertion and know that we'll implement the\n>> > rescan, albeit inefficiently? We already explicitly declare that we\n>> > don't support backwards scanning, but I don't see a way to declare the\n>> > same for rewind.\n>> > ----------\n>> >\n>> > So it seems to me that we can't disallow REWIND, and we have to\n>> > support rescan, but, we could try to mitigate the effects (without a\n>> > param change) with a materialize node, as noted below.\n>> >\n>> > [2]: Here, in response to my questioning above if this was a planner\n>> > bug, I'd said:\n>> > ----------\n>> > Other nodes seem to get a materialization node placed above them to\n>> > support this case \"better\". Is that something we should be doing?\n>> > ----------\n>> >\n>> > I never got any reply on this point; if we _did_ introduce a\n>> > materialize node here, then it would mean we could start disallowing\n>> > REWIND again. 
See the email for full details of a specific plan that I\n>> > encountered that reproduced this.\n>> >\n>> > Thoughts?\n>> >\n>> > > In the meantime, your question is primarily about making sure the\n>> > > code/comments/etc. are consistent and not a behavioral problem or\n>> > > failure you've seen in testing?\n>> >\n>> > Still want to confirm this is the case.\n>> >\n>> > James\n>> >\n>> > [1]: https://www.postgresql.org/message-id/CAAaqYe9%2Bap2SbU_E2WaC4F9ZMF4oa%3DpJZ1NBwaKDMP6GFUA77g%40mail.gmail.com\n>> > [2]: https://www.postgresql.org/message-id/CAAaqYe-sOp2o%3DL7nvGZDJ6GsL9%3Db_ztrGE1rhyi%2BF82p3my2bQ%40mail.gmail.com\n>>\n>> Looking at this more, I think this is definitely suspect. The current\n>> code shields lower nodes from EXEC_FLAG_BACKWARD and EXEC_FLAG_MARK --\n>> the former is definitely fine because we declare that we don't support\n>> backwards scans. The latter seems like the same reasoning would apply,\n>> but unfortunately we didn't add it to ExecSupportsMarkRestore, so I've\n>> attached a patch to do that.\n>>\n>> The EXEC_FLAG_REWIND situation though I'm still not clear on -- given\n>> the comments/docs seem to suggest it's a hint for efficiency rather\n>> than something that has to work or be declared as not implemented, so\n>> it seems like one of the following should be the outcome:\n>>\n>> 1. \"Disallow\" it by only generating materialize nodes above the\n>> incremental sort node if REWIND will be required. I'm not sure if this\n>> would mean that incremental sort just wouldn't be useful in that case?\n>> 2. Keep the existing implementation where we basically ignore REWIND\n>> and use our more inefficient implementation. In this case, I believe\n>> we need to stop shielding child nodes from REWIND, though, since we we\n>> aren't actually storing the full result set and will instead be\n>> re-executing the child nodes.\n>>\n>> I've attached a patch to take course (2), since it's the easiest to\n>> implement. 
But I'd still like feedback on what we should do here,\n>> because I don't feel like I actually know what the semantics expected\n>> of the executor/planner are on this point. If we do go with this\n>> approach, someone should verify my comments additions about\n>> materialize nodes is correct.\n>\n\nIMO this is more a comment issue than a code issue, i.e. the comment in\nExecInitIncrementalSort should not mention the REWIND flag at all, as\nit's merely a suggestion that cheaper rescans would be nice, but it's\njust that - AFAICS it's entirely legal to just ignore the flag and do\nfull recalc. That's exactly what various other nodes do, I think.\n\nThe BACKWARD/MARK flags are different, because we explicitly check if a\nnode supports that (ExecSupportsBackwardScan/ExecSupportsMarkRestore)\nand we say 'no' in both cases for incremental sort.\n\nSo I think it's OK to leave the assert as it is and just remove the\nREWIND flag from the comment.\n\nRegarding child nodes, I think it's perfectly fine to continue passing\nthe REWIND flag to them, even if incremental sort has to start from\nscratch - we'll still have to read all the input from scratch, but if\nthe child node can make that cheaper, why not?\n\nI plan to apply something along the lines of v2-0002, with some comment\ntweaks (e.g. the comment still says we're shielding child nodes from\nREWIND, but that's no longer the case).\n\n>I also happened to noticed that in rescan we are always setting\n>node->bounded = false. I was under the impression that\n>ExecSetTupleBound would be called *after* ExecReScanIncrementalSort,\n>but looking at both ExecSetTupleBound and ExecReScanSort, but it seems\n>that the inverse is true. 
Therefore if we set this to false each time,\n>then we lose any possibility of using the bounded optimization for all\n>rescans.\n>\n>I've added a tiny patch (minus one line) to the earlier patch series\n>to fix that.\n>\n\nYeah, that seems like a legit bug.\n\n\nAs for v2-0001, I don't quite understand why we needs this? AFAICS the\nExecSupportsMarkRestore function already returns \"false\" for incremental\nsort, and we only explicitly list nodes that may return \"true\" in some\ncases.\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Sat, 9 May 2020 01:14:17 +0200", "msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Incremental sorts and EXEC_FLAG_REWIND" }, { "msg_contents": "On Fri, May 8, 2020 at 7:14 PM Tomas Vondra\n<tomas.vondra@2ndquadrant.com> wrote:\n>\n> On Fri, Apr 24, 2020 at 04:35:02PM -0400, James Coleman wrote:\n> >On Sun, Apr 19, 2020 at 12:14 PM James Coleman <jtc331@gmail.com> wrote:\n> >>\n> >> On Wed, Apr 15, 2020 at 2:04 PM James Coleman <jtc331@gmail.com> wrote:\n> >> >\n> >> > On Wed, Apr 15, 2020 at 11:02 AM James Coleman <jtc331@gmail.com> wrote:\n> >> > >\n> >> > > On Tue, Apr 14, 2020 at 2:53 AM Michael Paquier <michael@paquier.xyz> wrote:\n> >> > > >\n> >> > > > Hi,\n> >> > > >\n> >> > > > When initializing an incremental sort node, we have the following as\n> >> > > > of ExecInitIncrementalSort():\n> >> > > > /*\n> >> > > > * Incremental sort can't be used with either EXEC_FLAG_REWIND,\n> >> > > > * EXEC_FLAG_BACKWARD or EXEC_FLAG_MARK, because we only one of many sort\n> >> > > > * batches in the current sort state.\n> >> > > > */\n> >> > > > Assert((eflags & (EXEC_FLAG_BACKWARD |\n> >> > > > EXEC_FLAG_MARK)) == 0);\n> >> > > > While I don't quite follow why EXEC_FLAG_REWIND should be allowed here\n> >> > > > to begin with (because incremental sorts don't support rescans without\n> >> 
> > > parameter changes, right?), the comment and the assertion are telling\n> >> > > > a different story.\n> >> > >\n> >> > > I remember changing this assertion in response to an issue I'd found\n> >> > > which led to rewriting the rescan implementation, but I must have\n> >> > > missed updating the comment.\n> >> >\n> >> > All right, here are the most relevant messages:\n> >> >\n> >> > [1]: Here I'd said:\n> >> > ----------\n> >> > While working on finding a test case to show rescan isn't implemented\n> >> > properly yet, I came across a bug. At the top of\n> >> > ExecInitIncrementalSort, we assert that eflags does not contain\n> >> > EXEC_FLAG_REWIND. But the following query (with merge and hash joins\n> >> > disabled) breaks that assertion:\n> >> >\n> >> > select * from t join (select * from t order by a, b) s on s.a = t.a\n> >> > where t.a in (1,2);\n> >> >\n> >> > The comments about this flag in src/include/executor/executor.h say:\n> >> >\n> >> > * REWIND indicates that the plan node should try to efficiently support\n> >> > * rescans without parameter changes. (Nodes must support ExecReScan calls\n> >> > * in any case, but if this flag was not given, they are at liberty to do it\n> >> > * through complete recalculation. Note that a parameter change forces a\n> >> > * full recalculation in any case.)\n> >> >\n> >> > Now we know that except in rare cases (as just discussed recently up\n> >> > thread) we can't implement rescan efficiently.\n> >> >\n> >> > So is this a planner bug (i.e., should we try not to generate\n> >> > incremental sort plans that require efficient rewind)? Or can we just\n> >> > remove that part of the assertion and know that we'll implement the\n> >> > rescan, albeit inefficiently? 
We already explicitly declare that we\n> >> > don't support backwards scanning, but I don't see a way to declare the\n> >> > same for rewind.\n> >> > ----------\n> >> >\n> >> > So it seems to me that we can't disallow REWIND, and we have to\n> >> > support rescan, but, we could try to mitigate the effects (without a\n> >> > param change) with a materialize node, as noted below.\n> >> >\n> >> > [2]: Here, in response to my questioning above if this was a planner\n> >> > bug, I'd said:\n> >> > ----------\n> >> > Other nodes seem to get a materialization node placed above them to\n> >> > support this case \"better\". Is that something we should be doing?\n> >> > ----------\n> >> >\n> >> > I never got any reply on this point; if we _did_ introduce a\n> >> > materialize node here, then it would mean we could start disallowing\n> >> > REWIND again. See the email for full details of a specific plan that I\n> >> > encountered that reproduced this.\n> >> >\n> >> > Thoughts?\n> >> >\n> >> > > In the meantime, your question is primarily about making sure the\n> >> > > code/comments/etc. are consistent and not a behavioral problem or\n> >> > > failure you've seen in testing?\n> >> >\n> >> > Still want to confirm this is the case.\n> >> >\n> >> > James\n> >> >\n> >> > [1]: https://www.postgresql.org/message-id/CAAaqYe9%2Bap2SbU_E2WaC4F9ZMF4oa%3DpJZ1NBwaKDMP6GFUA77g%40mail.gmail.com\n> >> > [2]: https://www.postgresql.org/message-id/CAAaqYe-sOp2o%3DL7nvGZDJ6GsL9%3Db_ztrGE1rhyi%2BF82p3my2bQ%40mail.gmail.com\n> >>\n> >> Looking at this more, I think this is definitely suspect. The current\n> >> code shields lower nodes from EXEC_FLAG_BACKWARD and EXEC_FLAG_MARK --\n> >> the former is definitely fine because we declare that we don't support\n> >> backwards scans. 
The latter seems like the same reasoning would apply,\n> >> but unfortunately we didn't add it to ExecSupportsMarkRestore, so I've\n> >> attached a patch to do that.\n> >>\n> >> The EXEC_FLAG_REWIND situation though I'm still not clear on -- given\n> >> the comments/docs seem to suggest it's a hint for efficiency rather\n> >> than something that has to work or be declared as not implemented, so\n> >> it seems like one of the following should be the outcome:\n> >>\n> >> 1. \"Disallow\" it by only generating materialize nodes above the\n> >> incremental sort node if REWIND will be required. I'm not sure if this\n> >> would mean that incremental sort just wouldn't be useful in that case?\n> >> 2. Keep the existing implementation where we basically ignore REWIND\n> >> and use our more inefficient implementation. In this case, I believe\n> >> we need to stop shielding child nodes from REWIND, though, since we we\n> >> aren't actually storing the full result set and will instead be\n> >> re-executing the child nodes.\n> >>\n> >> I've attached a patch to take course (2), since it's the easiest to\n> >> implement. But I'd still like feedback on what we should do here,\n> >> because I don't feel like I actually know what the semantics expected\n> >> of the executor/planner are on this point. If we do go with this\n> >> approach, someone should verify my comments additions about\n> >> materialize nodes is correct.\n> >\n>\n> IMO this is more a comment issue than a code issue, i.e. the comment in\n> ExecInitIncrementalSort should not mention the REWIND flag at all, as\n> it's merely a suggestion that cheaper rescans would be nice, but it's\n> just that - AFAICS it's entirely legal to just ignore the flag and do\n> full recalc. 
That's exactly what various other nodes do, I think.\n>\n> The BACKWARD/MARK flags are different, because we explicitly check if a\n> node supports that (ExecSupportsBackwardScan/ExecSupportsMarkRestore)\n> and we say 'no' in both cases for incremental sort.\n>\n> So I think it's OK to leave the assert as it is and just remove the\n> REWIND flag from the comment.\n>\n> Regarding child nodes, I think it's perfectly fine to continue passing\n> the REWIND flag to them, even if incremental sort has to start from\n> scratch - we'll still have to read all the input from scratch, but if\n> the child node can make that cheaper, why not?\n>\n> I plan to apply something along the lines of v2-0002, with some comment\n> tweaks (e.g. the comment still says we're shielding child nodes from\n> REWIND, but that's no longer the case).\n\nThanks.\n\n> >I also happened to noticed that in rescan we are always setting\n> >node->bounded = false. I was under the impression that\n> >ExecSetTupleBound would be called *after* ExecReScanIncrementalSort,\n> >but looking at both ExecSetTupleBound and ExecReScanSort, but it seems\n> >that the inverse is true. Therefore if we set this to false each time,\n> >then we lose any possibility of using the bounded optimization for all\n> >rescans.\n> >\n> >I've added a tiny patch (minus one line) to the earlier patch series\n> >to fix that.\n> >\n>\n> Yeah, that seems like a legit bug.\n\nYep.\n\n> As for v2-0001, I don't quite understand why we needs this? 
AFAICS the\n> ExecSupportsMarkRestore function already returns \"false\" for incremental\n> sort, and we only explicitly list nodes that may return \"true\" in some\n> cases.\n\nAh, yes, the default is already false, so we don't need to explicitly do that.\n\nThanks,\nJames\n\n\n", "msg_date": "Fri, 8 May 2020 19:36:38 -0400", "msg_from": "James Coleman <jtc331@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Incremental sorts and EXEC_FLAG_REWIND" }, { "msg_contents": "On Fri, May 08, 2020 at 07:36:38PM -0400, James Coleman wrote:\n>On Fri, May 8, 2020 at 7:14 PM Tomas Vondra\n><tomas.vondra@2ndquadrant.com> wrote:\n>>\n>> On Fri, Apr 24, 2020 at 04:35:02PM -0400, James Coleman wrote:\n>> >On Sun, Apr 19, 2020 at 12:14 PM James Coleman <jtc331@gmail.com> wrote:\n>> >>\n>> >> On Wed, Apr 15, 2020 at 2:04 PM James Coleman <jtc331@gmail.com> wrote:\n>> >> >\n>> >> > On Wed, Apr 15, 2020 at 11:02 AM James Coleman <jtc331@gmail.com> wrote:\n>> >> > >\n>> >> > > On Tue, Apr 14, 2020 at 2:53 AM Michael Paquier <michael@paquier.xyz> wrote:\n>> >> > > >\n>> >> > > > Hi,\n>> >> > > >\n>> >> > > > When initializing an incremental sort node, we have the following as\n>> >> > > > of ExecInitIncrementalSort():\n>> >> > > > /*\n>> >> > > > * Incremental sort can't be used with either EXEC_FLAG_REWIND,\n>> >> > > > * EXEC_FLAG_BACKWARD or EXEC_FLAG_MARK, because we only one of many sort\n>> >> > > > * batches in the current sort state.\n>> >> > > > */\n>> >> > > > Assert((eflags & (EXEC_FLAG_BACKWARD |\n>> >> > > > EXEC_FLAG_MARK)) == 0);\n>> >> > > > While I don't quite follow why EXEC_FLAG_REWIND should be allowed here\n>> >> > > > to begin with (because incremental sorts don't support rescans without\n>> >> > > > parameter changes, right?), the comment and the assertion are telling\n>> >> > > > a different story.\n>> >> > >\n>> >> > > I remember changing this assertion in response to an issue I'd found\n>> >> > > which led to rewriting the rescan implementation, but I 
must have\n>> >> > > missed updating the comment.\n>> >> >\n>> >> > All right, here are the most relevant messages:\n>> >> >\n>> >> > [1]: Here I'd said:\n>> >> > ----------\n>> >> > While working on finding a test case to show rescan isn't implemented\n>> >> > properly yet, I came across a bug. At the top of\n>> >> > ExecInitIncrementalSort, we assert that eflags does not contain\n>> >> > EXEC_FLAG_REWIND. But the following query (with merge and hash joins\n>> >> > disabled) breaks that assertion:\n>> >> >\n>> >> > select * from t join (select * from t order by a, b) s on s.a = t.a\n>> >> > where t.a in (1,2);\n>> >> >\n>> >> > The comments about this flag in src/include/executor/executor.h say:\n>> >> >\n>> >> > * REWIND indicates that the plan node should try to efficiently support\n>> >> > * rescans without parameter changes. (Nodes must support ExecReScan calls\n>> >> > * in any case, but if this flag was not given, they are at liberty to do it\n>> >> > * through complete recalculation. Note that a parameter change forces a\n>> >> > * full recalculation in any case.)\n>> >> >\n>> >> > Now we know that except in rare cases (as just discussed recently up\n>> >> > thread) we can't implement rescan efficiently.\n>> >> >\n>> >> > So is this a planner bug (i.e., should we try not to generate\n>> >> > incremental sort plans that require efficient rewind)? Or can we just\n>> >> > remove that part of the assertion and know that we'll implement the\n>> >> > rescan, albeit inefficiently? 
We already explicitly declare that we\n>> >> > don't support backwards scanning, but I don't see a way to declare the\n>> >> > same for rewind.\n>> >> > ----------\n>> >> >\n>> >> > So it seems to me that we can't disallow REWIND, and we have to\n>> >> > support rescan, but, we could try to mitigate the effects (without a\n>> >> > param change) with a materialize node, as noted below.\n>> >> >\n>> >> > [2]: Here, in response to my questioning above if this was a planner\n>> >> > bug, I'd said:\n>> >> > ----------\n>> >> > Other nodes seem to get a materialization node placed above them to\n>> >> > support this case \"better\". Is that something we should be doing?\n>> >> > ----------\n>> >> >\n>> >> > I never got any reply on this point; if we _did_ introduce a\n>> >> > materialize node here, then it would mean we could start disallowing\n>> >> > REWIND again. See the email for full details of a specific plan that I\n>> >> > encountered that reproduced this.\n>> >> >\n>> >> > Thoughts?\n>> >> >\n>> >> > > In the meantime, your question is primarily about making sure the\n>> >> > > code/comments/etc. are consistent and not a behavioral problem or\n>> >> > > failure you've seen in testing?\n>> >> >\n>> >> > Still want to confirm this is the case.\n>> >> >\n>> >> > James\n>> >> >\n>> >> > [1]: https://www.postgresql.org/message-id/CAAaqYe9%2Bap2SbU_E2WaC4F9ZMF4oa%3DpJZ1NBwaKDMP6GFUA77g%40mail.gmail.com\n>> >> > [2]: https://www.postgresql.org/message-id/CAAaqYe-sOp2o%3DL7nvGZDJ6GsL9%3Db_ztrGE1rhyi%2BF82p3my2bQ%40mail.gmail.com\n>> >>\n>> >> Looking at this more, I think this is definitely suspect. The current\n>> >> code shields lower nodes from EXEC_FLAG_BACKWARD and EXEC_FLAG_MARK --\n>> >> the former is definitely fine because we declare that we don't support\n>> >> backwards scans. 
The latter seems like the same reasoning would apply,\n>> >> but unfortunately we didn't add it to ExecSupportsMarkRestore, so I've\n>> >> attached a patch to do that.\n>> >>\n>> >> The EXEC_FLAG_REWIND situation though I'm still not clear on -- given\n>> >> the comments/docs seem to suggest it's a hint for efficiency rather\n>> >> than something that has to work or be declared as not implemented, so\n>> >> it seems like one of the following should be the outcome:\n>> >>\n>> >> 1. \"Disallow\" it by only generating materialize nodes above the\n>> >> incremental sort node if REWIND will be required. I'm not sure if this\n>> >> would mean that incremental sort just wouldn't be useful in that case?\n>> >> 2. Keep the existing implementation where we basically ignore REWIND\n>> >> and use our more inefficient implementation. In this case, I believe\n>> >> we need to stop shielding child nodes from REWIND, though, since we we\n>> >> aren't actually storing the full result set and will instead be\n>> >> re-executing the child nodes.\n>> >>\n>> >> I've attached a patch to take course (2), since it's the easiest to\n>> >> implement. But I'd still like feedback on what we should do here,\n>> >> because I don't feel like I actually know what the semantics expected\n>> >> of the executor/planner are on this point. If we do go with this\n>> >> approach, someone should verify my comments additions about\n>> >> materialize nodes is correct.\n>> >\n>>\n>> IMO this is more a comment issue than a code issue, i.e. the comment in\n>> ExecInitIncrementalSort should not mention the REWIND flag at all, as\n>> it's merely a suggestion that cheaper rescans would be nice, but it's\n>> just that - AFAICS it's entirely legal to just ignore the flag and do\n>> full recalc. 
That's exactly what various other nodes do, I think.\n>>\n>> The BACKWARD/MARK flags are different, because we explicitly check if a\n>> node supports that (ExecSupportsBackwardScan/ExecSupportsMarkRestore)\n>> and we say 'no' in both cases for incremental sort.\n>>\n>> So I think it's OK to leave the assert as it is and just remove the\n>> REWIND flag from the comment.\n>>\n>> Regarding child nodes, I think it's perfectly fine to continue passing\n>> the REWIND flag to them, even if incremental sort has to start from\n>> scratch - we'll still have to read all the input from scratch, but if\n>> the child node can make that cheaper, why not?\n>>\n>> I plan to apply something along the lines of v2-0002, with some comment\n>> tweaks (e.g. the comment still says we're shielding child nodes from\n>> REWIND, but that's no longer the case).\n>\n>Thanks.\n>\n>> >I also happened to noticed that in rescan we are always setting\n>> >node->bounded = false. I was under the impression that\n>> >ExecSetTupleBound would be called *after* ExecReScanIncrementalSort,\n>> >but looking at both ExecSetTupleBound and ExecReScanSort, but it seems\n>> >that the inverse is true. Therefore if we set this to false each time,\n>> >then we lose any possibility of using the bounded optimization for all\n>> >rescans.\n>> >\n>> >I've added a tiny patch (minus one line) to the earlier patch series\n>> >to fix that.\n>> >\n>>\n>> Yeah, that seems like a legit bug.\n>\n>Yep.\n>\n>> As for v2-0001, I don't quite understand why we needs this? AFAICS the\n>> ExecSupportsMarkRestore function already returns \"false\" for incremental\n>> sort, and we only explicitly list nodes that may return \"true\" in some\n>> cases.\n>\n>Ah, yes, the default is already false, so we don't need to explicitly do that.\n>\n\nI've pushed the fixes, with some minor tweaks. 
Most importantly I think\nwe don't need to worry about removing the flags before initializing\nchild nodes, because (a) we want to pass REWIND and (b) we should not\nsee the other flags in incremental sort. \n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Sat, 9 May 2020 19:46:37 +0200", "msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Incremental sorts and EXEC_FLAG_REWIND" } ]
[ { "msg_contents": "Hi\r\n\r\nI am testing some features from Postgres 13, and I am not sure if I\r\nunderstand well to behave of EXPLAIN(ANALYZE, BUFFERS)\r\n\r\nWhen I run following statement first time in session I get\r\n\r\npostgres=# EXPLAIN (BUFFERS, ANALYZE) SELECT * FROM obce WHERE okres_id =\r\n'CZ0201';\r\n┌──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┐\r\n│ QUERY PLAN\r\n │\r\n╞══════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════╡\r\n│ Index Scan using obce_okres_id_idx on obce (cost=0.28..14.49 rows=114\r\nwidth=41) (actual time=0.072..0.168 rows=114 loops=1) │\r\n│ Index Cond: ((okres_id)::text = 'CZ0201'::text)\r\n │\r\n│ Buffers: shared hit=4\r\n │\r\n│ Planning Time: 0.539 ms\r\n │\r\n│ Buffers: shared hit=13\r\n │\r\n│ Execution Time: 0.287 ms\r\n │\r\n└──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┘\r\n(6 rows)\r\n\r\nAnd I see share hit 13 in planning time.\r\n\r\nFor second run I get\r\n\r\npostgres=# EXPLAIN (BUFFERS, ANALYZE) SELECT * FROM obce WHERE okres_id =\r\n'CZ0201';\r\n┌──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┐\r\n│ QUERY PLAN\r\n │\r\n╞══════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════╡\r\n│ Index Scan using obce_okres_id_idx on obce (cost=0.28..14.49 rows=114\r\nwidth=41) (actual time=0.044..0.101 rows=114 loops=1) │\r\n│ Index Cond: ((okres_id)::text = 'CZ0201'::text)\r\n │\r\n│ Buffers: shared hit=4\r\n │\r\n│ Planning Time: 0.159 ms\r\n │\r\n│ Execution Time: 0.155 ms\r\n │\r\n└──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┘\r\n(5 rows)\r\n\r\nNow, there is not 
any touch in planning time. Does it mean so this all\r\nthese data are cached somewhere in session memory?\r\n\r\nRegards\r\n\r\nPavel\r\n", "msg_date": "Tue, 14 Apr 2020 10:17:35 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Display of buffers for planning time show nothing for second run" }, { "msg_contents": "Hi,\r\n\r\nOn Tue, Apr 14, 2020 at 10:18 AM Pavel Stehule <pavel.stehule@gmail.com> wrote:\r\n>\r\n> Hi\r\n>\r\n> I am testing some features from Postgres 13, and I am not sure if I understand well to behave of EXPLAIN(ANALYZE, BUFFERS)\r\n>\r\n> When I run following statement first time in session I get\r\n>\r\n> postgres=# EXPLAIN (BUFFERS, ANALYZE) SELECT * FROM obce WHERE okres_id = 'CZ0201';\r\n> ┌──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┐\r\n> │ QUERY PLAN │\r\n> ╞══════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════╡\r\n> │ Index Scan using obce_okres_id_idx on obce (cost=0.28..14.49 rows=114 width=41) (actual time=0.072..0.168 rows=114 loops=1) │\r\n> │ Index Cond: ((okres_id)::text = 'CZ0201'::text) │\r\n> │ Buffers: shared hit=4 │\r\n> │ Planning Time: 0.539 ms │\r\n> │ Buffers: shared hit=13 │\r\n> │ Execution Time: 0.287 ms │\r\n> └──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┘\r\n> (6 rows)\r\n>\r\n> And I see share hit 13 in planning time.\r\n>\r\n> For second run I get\r\n>\r\n> postgres=# EXPLAIN (BUFFERS, ANALYZE) SELECT * FROM obce WHERE okres_id = 'CZ0201';\r\n> ┌──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┐\r\n> │ QUERY PLAN │\r\n> ╞══════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════╡\r\n> │ Index Scan using obce_okres_id_idx on obce 
(cost=0.28..14.49 rows=114 width=41) (actual time=0.044..0.101 rows=114 loops=1) │\r\n> │ Index Cond: ((okres_id)::text = 'CZ0201'::text) │\r\n> │ Buffers: shared hit=4 │\r\n> │ Planning Time: 0.159 ms │\r\n> │ Execution Time: 0.155 ms │\r\n> └──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┘\r\n> (5 rows)\r\n>\r\n> Now, there is not any touch in planning time. Does it mean so this all these data are cached somewhere in session memory?\r\n\r\nThe planning time is definitely shorter the 2nd time. And yes, what\r\nyou see are all the catcache accesses that are initially performed on\r\na fresh new backend.\r\n", "msg_date": "Tue, 14 Apr 2020 10:27:41 +0200", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Display of buffers for planning time show nothing for second run" }, { "msg_contents": "út 14. 4. 2020 v 10:27 odesílatel Julien Rouhaud <rjuju123@gmail.com>\r\nnapsal:\r\n\r\n> Hi,\r\n>\r\n> On Tue, Apr 14, 2020 at 10:18 AM Pavel Stehule <pavel.stehule@gmail.com>\r\n> wrote:\r\n> >\r\n> > Hi\r\n> >\r\n> > I am testing some features from Postgres 13, and I am not sure if I\r\n> understand well to behave of EXPLAIN(ANALYZE, BUFFERS)\r\n> >\r\n> > When I run following statement first time in session I get\r\n> >\r\n> > postgres=# EXPLAIN (BUFFERS, ANALYZE) SELECT * FROM obce WHERE okres_id\r\n> = 'CZ0201';\r\n> >\r\n> ┌──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┐\r\n> > │ QUERY PLAN\r\n> │\r\n> >\r\n> ╞══════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════╡\r\n> > │ Index Scan using obce_okres_id_idx on obce (cost=0.28..14.49 rows=114\r\n> width=41) (actual time=0.072..0.168 rows=114 loops=1) │\r\n> > │ Index Cond: ((okres_id)::text = 'CZ0201'::text)\r\n> │\r\n> > │ Buffers: shared hit=4\r\n> 
│\r\n> > │ Planning Time: 0.539 ms\r\n> │\r\n> > │   Buffers: shared hit=13\r\n> │\r\n> > │ Execution Time: 0.287 ms\r\n> │\r\n> >\r\n> └──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┘\r\n> > (6 rows)\r\n> >\r\n> > And I see share hit 13 in planning time.\r\n> >\r\n> > For second run I get\r\n> >\r\n> > postgres=# EXPLAIN (BUFFERS, ANALYZE) SELECT * FROM obce WHERE okres_id\r\n> = 'CZ0201';\r\n> >\r\n> ┌──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┐\r\n> > │ QUERY PLAN\r\n> │\r\n> >\r\n> ╞══════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════╡\r\n> > │ Index Scan using obce_okres_id_idx on obce  (cost=0.28..14.49 rows=114\r\n> width=41) (actual time=0.044..0.101 rows=114 loops=1) │\r\n> > │   Index Cond: ((okres_id)::text = 'CZ0201'::text)\r\n> │\r\n> > │   Buffers: shared hit=4\r\n> │\r\n> > │ Planning Time: 0.159 ms\r\n> │\r\n> > │ Execution Time: 0.155 ms\r\n> │\r\n> >\r\n> └──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┘\r\n> > (5 rows)\r\n> >\r\n> > Now, there is not any touch in planning time. Does it mean so this all\r\n> these data are cached somewhere in session memory?\r\n>\r\n> The planning time is definitely shorter the 2nd time.  And yes, what\r\n> you see are all the catcache accesses that are initially performed on\r\n> a fresh new backend.\r\n>\r\n\r\nOne time Tom Lane mentioned using index in planning time for getting\r\nminimum and maximum. I expected so these values are not cached. But I\r\ncannot to reproduce it, and then I am little bit surprised so I don't see\r\nany hit in second, and other executions.\r\n\r\nRegards\r\n\r\nPavel\r\n", "msg_date": "Tue, 14 Apr 2020 10:35:46 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Display of buffers for planning time show nothing for second run" }, { "msg_contents": "On Tue, Apr 14, 2020 at 5:27 PM Julien Rouhaud <rjuju123@gmail.com> wrote:\r\n> On Tue, Apr 14, 2020 at 10:18 AM Pavel Stehule <pavel.stehule@gmail.com> wrote:\r\n> > For second run I get\r\n> >\r\n> > postgres=# EXPLAIN (BUFFERS, ANALYZE) SELECT * FROM obce WHERE okres_id = 'CZ0201';\r\n> > ┌──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┐\r\n> > │                                                          QUERY PLAN                                                          │\r\n> > ╞══════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════╡\r\n> > │ Index Scan using obce_okres_id_idx on obce  (cost=0.28..14.49 rows=114 width=41) (actual time=0.044..0.101 rows=114 loops=1) │\r\n> > │   Index Cond: ((okres_id)::text = 'CZ0201'::text)                                                                            │\r\n> > │   Buffers: shared hit=4                                                                                                      │\r\n> > │ Planning Time: 0.159 ms                                                                                                      │\r\n> > │ Execution Time: 0.155 ms                                                                                                     │\r\n> > └──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┘\r\n> > (5 rows)\r\n> >\r\n> > Now, there is not any touch in planning time. Does it mean so this all these data are cached somewhere in session memory?\r\n>\r\n> The planning time is definitely shorter the 2nd time.  And yes, what\r\n> you see are all the catcache accesses that are initially performed on\r\n> a fresh new backend.\r\n>\r\n> By the way, even with all catcaches served from local memory, one may\r\n> still see shared buffers being hit during planning. 
For example:\r\n\r\nexplain (buffers, analyze) select * from foo where a = 1;\r\n QUERY PLAN\r\n-------------------------------------------------------------------------------------------------------------------\r\n Index Only Scan using foo_pkey on foo (cost=0.15..8.17 rows=1\r\nwidth=4) (actual time=0.010..0.011 rows=0 loops=1)\r\n Index Cond: (a = 1)\r\n Heap Fetches: 0\r\n Buffers: shared hit=2\r\n Planning Time: 0.775 ms\r\n Buffers: shared hit=72\r\n Execution Time: 0.086 ms\r\n(7 rows)\r\n\r\nTime: 2.477 ms\r\npostgres=# explain (buffers, analyze) select * from foo where a = 1;\r\n QUERY PLAN\r\n-------------------------------------------------------------------------------------------------------------------\r\n Index Only Scan using foo_pkey on foo (cost=0.15..8.17 rows=1\r\nwidth=4) (actual time=0.012..0.012 rows=0 loops=1)\r\n Index Cond: (a = 1)\r\n Heap Fetches: 0\r\n Buffers: shared hit=2\r\n Planning Time: 0.102 ms\r\n Buffers: shared hit=1\r\n Execution Time: 0.047 ms\r\n(7 rows)\r\n\r\nIt seems that 1 Buffer hit comes from get_relation_info() doing\r\n_bt_getrootheight() for that index on foo.\r\n\r\n-- \r\n\r\nAmit Langote\r\nEnterpriseDB: http://www.enterprisedb.com\r\n", "msg_date": "Tue, 14 Apr 2020 17:40:03 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Display of buffers for planning time show nothing for second run" }, { "msg_contents": "On Tue, Apr 14, 2020 at 10:36 AM Pavel Stehule <pavel.stehule@gmail.com> wrote:\r\n>\r\n> út 14. 4. 
2020 v 10:27 odesílatel Julien Rouhaud <rjuju123@gmail.com> napsal:\r\n>>\r\n>> Hi,\r\n>>\r\n>> On Tue, Apr 14, 2020 at 10:18 AM Pavel Stehule <pavel.stehule@gmail.com> wrote:\r\n>> >\r\n>> > Hi\r\n>> >\r\n>> > I am testing some features from Postgres 13, and I am not sure if I understand well to behave of EXPLAIN(ANALYZE, BUFFERS)\r\n>> >\r\n>> > When I run following statement first time in session I get\r\n>> >\r\n>> > postgres=# EXPLAIN (BUFFERS, ANALYZE) SELECT * FROM obce WHERE okres_id = 'CZ0201';\r\n>> > ┌──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┐\r\n>> > │ QUERY PLAN │\r\n>> > ╞══════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════╡\r\n>> > │ Index Scan using obce_okres_id_idx on obce (cost=0.28..14.49 rows=114 width=41) (actual time=0.072..0.168 rows=114 loops=1) │\r\n>> > │ Index Cond: ((okres_id)::text = 'CZ0201'::text) │\r\n>> > │ Buffers: shared hit=4 │\r\n>> > │ Planning Time: 0.539 ms │\r\n>> > │ Buffers: shared hit=13 │\r\n>> > │ Execution Time: 0.287 ms │\r\n>> > └──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┘\r\n>> > (6 rows)\r\n>> >\r\n>> > And I see share hit 13 in planning time.\r\n>> >\r\n>> > For second run I get\r\n>> >\r\n>> > postgres=# EXPLAIN (BUFFERS, ANALYZE) SELECT * FROM obce WHERE okres_id = 'CZ0201';\r\n>> > ┌──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┐\r\n>> > │ QUERY PLAN │\r\n>> > ╞══════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════╡\r\n>> > │ Index Scan using obce_okres_id_idx on obce (cost=0.28..14.49 rows=114 width=41) (actual time=0.044..0.101 rows=114 loops=1) │\r\n>> > │ Index Cond: ((okres_id)::text = 'CZ0201'::text) │\r\n>> > │ 
Buffers: shared hit=4 │\r\n>> > │ Planning Time: 0.159 ms │\r\n>> > │ Execution Time: 0.155 ms │\r\n>> > └──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┘\r\n>> > (5 rows)\r\n>> >\r\n>> > Now, there is not any touch in planning time. Does it mean so this all these data are cached somewhere in session memory?\r\n>>\r\n>> The planning time is definitely shorter the 2nd time. And yes, what\r\n>> you see are all the catcache accesses that are initially performed on\r\n>> a fresh new backend.\r\n>\r\n>\r\n> One time Tom Lane mentioned using index in planning time for getting minimum and maximum. I expected so these values are not cached. But I cannot to reproduce it, and then I am little bit surprised so I don't see any hit in second, and other executions.\r\n\r\nIsn't that get_actual_variable_range() purpose? If you use a plan\r\nthat hit this function you'll definitely see consistent buffer usage\r\nduring planning:\r\n\r\nrjuju=# explain (buffers, analyze) select * from pg_class c join\r\npg_attribute a on a.attrelid = c.oid;\r\n QUERY PLAN\r\n-----------------------------------------------------------------------------------------------------------------------\r\n Hash Join (cost=21.68..110.91 rows=2863 width=504) (actual\r\ntime=0.393..5.989 rows=2863 loops=1)\r\n Hash Cond: (a.attrelid = c.oid)\r\n Buffers: shared hit=40 read=29\r\n -> Seq Scan on pg_attribute a (cost=0.00..81.63 rows=2863\r\nwidth=239) (actual time=0.010..0.773 rows=2863 loops=1)\r\n Buffers: shared hit=28 read=25\r\n -> Hash (cost=16.86..16.86 rows=386 width=265) (actual\r\ntime=0.333..0.334 rows=386 loops=1)\r\n Buckets: 1024 Batches: 1 Memory Usage: 85kB\r\n Buffers: shared hit=9 read=4\r\n -> Seq Scan on pg_class c (cost=0.00..16.86 rows=386\r\nwidth=265) (actual time=0.004..0.123 rows=386 loops=1)\r\n Buffers: shared hit=9 read=4\r\n Planning Time: 2.709 ms\r\n Buffers: shared hit=225 read=33\r\n Execution Time: 
6.529 ms\r\n(13 rows)\r\n\r\nrjuju=# explain (buffers, analyze) select * from pg_class c join\r\npg_attribute a on a.attrelid = c.oid;\r\n QUERY PLAN\r\n-----------------------------------------------------------------------------------------------------------------------\r\n Hash Join (cost=21.68..110.91 rows=2863 width=504) (actual\r\ntime=0.385..5.613 rows=2863 loops=1)\r\n Hash Cond: (a.attrelid = c.oid)\r\n Buffers: shared hit=66\r\n -> Seq Scan on pg_attribute a (cost=0.00..81.63 rows=2863\r\nwidth=239) (actual time=0.012..0.541 rows=2863 loops=1)\r\n Buffers: shared hit=53\r\n -> Hash (cost=16.86..16.86 rows=386 width=265) (actual\r\ntime=0.352..0.352 rows=386 loops=1)\r\n Buckets: 1024 Batches: 1 Memory Usage: 85kB\r\n Buffers: shared hit=13\r\n -> Seq Scan on pg_class c (cost=0.00..16.86 rows=386\r\nwidth=265) (actual time=0.003..0.092 rows=386 loops=1)\r\n Buffers: shared hit=13\r\n Planning Time: 0.575 ms\r\n Buffers: shared hit=12\r\n Execution Time: 5.985 ms\r\n(13 rows)\r\n\r\nrjuju=# explain (buffers, analyze) select * from pg_class c join\r\npg_attribute a on a.attrelid = c.oid;\r\n QUERY PLAN\r\n-----------------------------------------------------------------------------------------------------------------------\r\n Hash Join (cost=21.68..110.91 rows=2863 width=504) (actual\r\ntime=0.287..5.612 rows=2863 loops=1)\r\n Hash Cond: (a.attrelid = c.oid)\r\n Buffers: shared hit=66\r\n -> Seq Scan on pg_attribute a (cost=0.00..81.63 rows=2863\r\nwidth=239) (actual time=0.008..0.553 rows=2863 loops=1)\r\n Buffers: shared hit=53\r\n -> Hash (cost=16.86..16.86 rows=386 width=265) (actual\r\ntime=0.261..0.262 rows=386 loops=1)\r\n Buckets: 1024 Batches: 1 Memory Usage: 85kB\r\n Buffers: shared hit=13\r\n -> Seq Scan on pg_class c (cost=0.00..16.86 rows=386\r\nwidth=265) (actual time=0.003..0.075 rows=386 loops=1)\r\n Buffers: shared hit=13\r\n Planning Time: 0.483 ms\r\n Buffers: shared hit=12\r\n Execution Time: 5.971 ms\r\n(13 rows)\r\n", "msg_date": "Tue, 
14 Apr 2020 10:49:36 +0200", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Display of buffers for planning time show nothing for second run" }, { "msg_contents": "On Tue, Apr 14, 2020 at 10:40 AM Amit Langote <amitlangote09@gmail.com> wrote:\r\n>\r\n> On Tue, Apr 14, 2020 at 5:27 PM Julien Rouhaud <rjuju123@gmail.com> wrote:\r\n> > On Tue, Apr 14, 2020 at 10:18 AM Pavel Stehule <pavel.stehule@gmail.com> wrote:\r\n> > > For second run I get\r\n> > >\r\n> > > postgres=# EXPLAIN (BUFFERS, ANALYZE) SELECT * FROM obce WHERE okres_id = 'CZ0201';\r\n> > > ┌──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┐\r\n> > > │ QUERY PLAN │\r\n> > > ╞══════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════╡\r\n> > > │ Index Scan using obce_okres_id_idx on obce (cost=0.28..14.49 rows=114 width=41) (actual time=0.044..0.101 rows=114 loops=1) │\r\n> > > │ Index Cond: ((okres_id)::text = 'CZ0201'::text) │\r\n> > > │ Buffers: shared hit=4 │\r\n> > > │ Planning Time: 0.159 ms │\r\n> > > │ Execution Time: 0.155 ms │\r\n> > > └──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┘\r\n> > > (5 rows)\r\n> > >\r\n> > > Now, there is not any touch in planning time. Does it mean so this all these data are cached somewhere in session memory?\r\n> >\r\n> > The planning time is definitely shorter the 2nd time. And yes, what\r\n> > you see are all the catcache accesses that are initially performed on\r\n> > a fresh new backend.\r\n>\r\n> By the way, even with all catcaches served from local memory, one may\r\n> still see shared buffers being hit during planning. 
For example:\r\n>\r\n> explain (buffers, analyze) select * from foo where a = 1;\r\n> QUERY PLAN\r\n> -------------------------------------------------------------------------------------------------------------------\r\n> Index Only Scan using foo_pkey on foo (cost=0.15..8.17 rows=1\r\n> width=4) (actual time=0.010..0.011 rows=0 loops=1)\r\n> Index Cond: (a = 1)\r\n> Heap Fetches: 0\r\n> Buffers: shared hit=2\r\n> Planning Time: 0.775 ms\r\n> Buffers: shared hit=72\r\n> Execution Time: 0.086 ms\r\n> (7 rows)\r\n>\r\n> Time: 2.477 ms\r\n> postgres=# explain (buffers, analyze) select * from foo where a = 1;\r\n> QUERY PLAN\r\n> -------------------------------------------------------------------------------------------------------------------\r\n> Index Only Scan using foo_pkey on foo (cost=0.15..8.17 rows=1\r\n> width=4) (actual time=0.012..0.012 rows=0 loops=1)\r\n> Index Cond: (a = 1)\r\n> Heap Fetches: 0\r\n> Buffers: shared hit=2\r\n> Planning Time: 0.102 ms\r\n> Buffers: shared hit=1\r\n> Execution Time: 0.047 ms\r\n> (7 rows)\r\n>\r\n> It seems that 1 Buffer hit comes from get_relation_info() doing\r\n> _bt_getrootheight() for that index on foo.\r\n\r\nIndeed. Having received some relcache invalidation should also lead\r\nto similar effect. Having those counters can help to quantify all of\r\nthose interactions.\r\n", "msg_date": "Tue, 14 Apr 2020 10:54:54 +0200", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Display of buffers for planning time show nothing for second run" }, { "msg_contents": "út 14. 4. 
2020 v 10:40 odesílatel Amit Langote <amitlangote09@gmail.com>\r\nnapsal:\r\n\r\n> On Tue, Apr 14, 2020 at 5:27 PM Julien Rouhaud <rjuju123@gmail.com> wrote:\r\n> > On Tue, Apr 14, 2020 at 10:18 AM Pavel Stehule <pavel.stehule@gmail.com>\r\n> wrote:\r\n> > > For second run I get\r\n> > >\r\n> > > postgres=# EXPLAIN (BUFFERS, ANALYZE) SELECT * FROM obce WHERE\r\n> okres_id = 'CZ0201';\r\n> > >\r\n> ┌──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┐\r\n> > > │ QUERY PLAN\r\n> │\r\n> > >\r\n> ╞══════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════╡\r\n> > > │ Index Scan using obce_okres_id_idx on obce (cost=0.28..14.49\r\n> rows=114 width=41) (actual time=0.044..0.101 rows=114 loops=1) │\r\n> > > │ Index Cond: ((okres_id)::text = 'CZ0201'::text)\r\n> │\r\n> > > │ Buffers: shared hit=4\r\n> │\r\n> > > │ Planning Time: 0.159 ms\r\n> │\r\n> > > │ Execution Time: 0.155 ms\r\n> │\r\n> > >\r\n> └──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┘\r\n> > > (5 rows)\r\n> > >\r\n> > > Now, there is not any touch in planning time. Does it mean so this all\r\n> these data are cached somewhere in session memory?\r\n> >\r\n> > The planning time is definitely shorter the 2nd time. And yes, what\r\n> > you see are all the catcache accesses that are initially performed on\r\n> > a fresh new backend.\r\n>\r\n> By the way, even with all catcaches served from local memory, one may\r\n> still see shared buffers being hit during planning. 
For example:\r\n>\r\n> explain (buffers, analyze) select * from foo where a = 1;\r\n>                                                     QUERY PLAN\r\n>\r\n> -------------------------------------------------------------------------------------------------------------------\r\n> Index Only Scan using foo_pkey on foo  (cost=0.15..8.17 rows=1\r\n> width=4) (actual time=0.010..0.011 rows=0 loops=1)\r\n>    Index Cond: (a = 1)\r\n>    Heap Fetches: 0\r\n>    Buffers: shared hit=2\r\n>  Planning Time: 0.775 ms\r\n>    Buffers: shared hit=72\r\n>  Execution Time: 0.086 ms\r\n> (7 rows)\r\n>\r\n> Time: 2.477 ms\r\n> postgres=# explain (buffers, analyze) select * from foo where a = 1;\r\n>                                                     QUERY PLAN\r\n>\r\n> -------------------------------------------------------------------------------------------------------------------\r\n> Index Only Scan using foo_pkey on foo  (cost=0.15..8.17 rows=1\r\n> width=4) (actual time=0.012..0.012 rows=0 loops=1)\r\n>    Index Cond: (a = 1)\r\n>    Heap Fetches: 0\r\n>    Buffers: shared hit=2\r\n>  Planning Time: 0.102 ms\r\n>    Buffers: shared hit=1\r\n>  Execution Time: 0.047 ms\r\n> (7 rows)\r\n>\r\n> It seems that 1 Buffer hit comes from get_relation_info() doing\r\n> _bt_getrootheight() for that index on foo.\r\n>\r\n\r\nunfortunatelly, I cannot to repeat it.\r\n\r\ncreate table foo(a int);\r\ncreate index on foo(a);\r\ninsert into foo values(1);\r\nanalyze foo;\r\n\r\nfor this case any second EXPLAIN is without buffer on my comp\r\n\r\n\r\n> --\r\n>\r\n> Amit Langote\r\n> EnterpriseDB: http://www.enterprisedb.com\r\n>\r\n", "msg_date": "Tue, 14 Apr 2020 11:24:27 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Display of buffers for planning time show nothing for second run" }, { "msg_contents": "út 14. 4. 
2020 v 10:49 odesílatel Julien Rouhaud <rjuju123@gmail.com>\r\nnapsal:\r\n\r\n> On Tue, Apr 14, 2020 at 10:36 AM Pavel Stehule <pavel.stehule@gmail.com>\r\n> wrote:\r\n> >\r\n> > út 14. 4. 2020 v 10:27 odesílatel Julien Rouhaud <rjuju123@gmail.com>\r\n> napsal:\r\n> >>\r\n> >> Hi,\r\n> >>\r\n> >> On Tue, Apr 14, 2020 at 10:18 AM Pavel Stehule <pavel.stehule@gmail.com>\r\n> wrote:\r\n> >> >\r\n> >> > Hi\r\n> >> >\r\n> >> > I am testing some features from Postgres 13, and I am not sure if I\r\n> understand well to behave of EXPLAIN(ANALYZE, BUFFERS)\r\n> >> >\r\n> >> > When I run following statement first time in session I get\r\n> >> >\r\n> >> > postgres=# EXPLAIN (BUFFERS, ANALYZE) SELECT * FROM obce WHERE\r\n> okres_id = 'CZ0201';\r\n> >> >\r\n> ┌──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┐\r\n> >> > │ QUERY\r\n> PLAN │\r\n> >> >\r\n> ╞══════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════╡\r\n> >> > │ Index Scan using obce_okres_id_idx on obce (cost=0.28..14.49\r\n> rows=114 width=41) (actual time=0.072..0.168 rows=114 loops=1) │\r\n> >> > │ Index Cond: ((okres_id)::text = 'CZ0201'::text)\r\n> │\r\n> >> > │ Buffers: shared hit=4\r\n> │\r\n> >> > │ Planning Time: 0.539 ms\r\n> │\r\n> >> > │ Buffers: shared hit=13\r\n> │\r\n> >> > │ Execution Time: 0.287 ms\r\n> │\r\n> >> >\r\n> └──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┘\r\n> >> > (6 rows)\r\n> >> >\r\n> >> > And I see share hit 13 in planning time.\r\n> >> >\r\n> >> > For second run I get\r\n> >> >\r\n> >> > postgres=# EXPLAIN (BUFFERS, ANALYZE) SELECT * FROM obce WHERE\r\n> okres_id = 'CZ0201';\r\n> >> >\r\n> ┌──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┐\r\n> >> > │ QUERY\r\n> PLAN │\r\n> >> 
>\r\n> ╞══════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════╡\r\n> >> > │ Index Scan using obce_okres_id_idx on obce (cost=0.28..14.49\r\n> rows=114 width=41) (actual time=0.044..0.101 rows=114 loops=1) │\r\n> >> > │ Index Cond: ((okres_id)::text = 'CZ0201'::text)\r\n> │\r\n> >> > │ Buffers: shared hit=4\r\n> │\r\n> >> > │ Planning Time: 0.159 ms\r\n> │\r\n> >> > │ Execution Time: 0.155 ms\r\n> │\r\n> >> >\r\n> └──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┘\r\n> >> > (5 rows)\r\n> >> >\r\n> >> > Now, there is not any touch in planning time. Does it mean so this\r\n> all these data are cached somewhere in session memory?\r\n> >>\r\n> >> The planning time is definitely shorter the 2nd time. And yes, what\r\n> >> you see are all the catcache accesses that are initially performed on\r\n> >> a fresh new backend.\r\n> >\r\n> >\r\n> > One time Tom Lane mentioned using index in planning time for getting\r\n> minimum and maximum. I expected so these values are not cached. But I\r\n> cannot to reproduce it, and then I am little bit surprised so I don't see\r\n> any hit in second, and other executions.\r\n>\r\n> Isn't that get_actual_variable_range() purpose? 
If you use a plan\r\n> that hit this function you'll definitely see consistent buffer usage\r\n> during planning:\r\n>\r\n> rjuju=# explain (buffers, analyze) select * from pg_class c join\r\n> pg_attribute a on a.attrelid = c.oid;\r\n> QUERY PLAN\r\n>\r\n> -----------------------------------------------------------------------------------------------------------------------\r\n> Hash Join (cost=21.68..110.91 rows=2863 width=504) (actual\r\n> time=0.393..5.989 rows=2863 loops=1)\r\n> Hash Cond: (a.attrelid = c.oid)\r\n> Buffers: shared hit=40 read=29\r\n> -> Seq Scan on pg_attribute a (cost=0.00..81.63 rows=2863\r\n> width=239) (actual time=0.010..0.773 rows=2863 loops=1)\r\n> Buffers: shared hit=28 read=25\r\n> -> Hash (cost=16.86..16.86 rows=386 width=265) (actual\r\n> time=0.333..0.334 rows=386 loops=1)\r\n> Buckets: 1024 Batches: 1 Memory Usage: 85kB\r\n> Buffers: shared hit=9 read=4\r\n> -> Seq Scan on pg_class c (cost=0.00..16.86 rows=386\r\n> width=265) (actual time=0.004..0.123 rows=386 loops=1)\r\n> Buffers: shared hit=9 read=4\r\n> Planning Time: 2.709 ms\r\n> Buffers: shared hit=225 read=33\r\n> Execution Time: 6.529 ms\r\n> (13 rows)\r\n>\r\n> rjuju=# explain (buffers, analyze) select * from pg_class c join\r\n> pg_attribute a on a.attrelid = c.oid;\r\n> QUERY PLAN\r\n>\r\n> -----------------------------------------------------------------------------------------------------------------------\r\n> Hash Join (cost=21.68..110.91 rows=2863 width=504) (actual\r\n> time=0.385..5.613 rows=2863 loops=1)\r\n> Hash Cond: (a.attrelid = c.oid)\r\n> Buffers: shared hit=66\r\n> -> Seq Scan on pg_attribute a (cost=0.00..81.63 rows=2863\r\n> width=239) (actual time=0.012..0.541 rows=2863 loops=1)\r\n> Buffers: shared hit=53\r\n> -> Hash (cost=16.86..16.86 rows=386 width=265) (actual\r\n> time=0.352..0.352 rows=386 loops=1)\r\n> Buckets: 1024 Batches: 1 Memory Usage: 85kB\r\n> Buffers: shared hit=13\r\n> -> Seq Scan on pg_class c (cost=0.00..16.86 rows=386\r\n> 
width=265) (actual time=0.003..0.092 rows=386 loops=1)\r\n> Buffers: shared hit=13\r\n> Planning Time: 0.575 ms\r\n> Buffers: shared hit=12\r\n> Execution Time: 5.985 ms\r\n> (13 rows)\r\n>\r\n> rjuju=# explain (buffers, analyze) select * from pg_class c join\r\n> pg_attribute a on a.attrelid = c.oid;\r\n> QUERY PLAN\r\n>\r\n> -----------------------------------------------------------------------------------------------------------------------\r\n> Hash Join (cost=21.68..110.91 rows=2863 width=504) (actual\r\n> time=0.287..5.612 rows=2863 loops=1)\r\n> Hash Cond: (a.attrelid = c.oid)\r\n> Buffers: shared hit=66\r\n> -> Seq Scan on pg_attribute a (cost=0.00..81.63 rows=2863\r\n> width=239) (actual time=0.008..0.553 rows=2863 loops=1)\r\n> Buffers: shared hit=53\r\n> -> Hash (cost=16.86..16.86 rows=386 width=265) (actual\r\n> time=0.261..0.262 rows=386 loops=1)\r\n> Buckets: 1024 Batches: 1 Memory Usage: 85kB\r\n> Buffers: shared hit=13\r\n> -> Seq Scan on pg_class c (cost=0.00..16.86 rows=386\r\n> width=265) (actual time=0.003..0.075 rows=386 loops=1)\r\n> Buffers: shared hit=13\r\n> Planning Time: 0.483 ms\r\n> Buffers: shared hit=12\r\n> Execution Time: 5.971 ms\r\n> (13 rows)\r\n>\r\n\r\nthis example is working on my comp\r\n\r\nThank you\r\n\r\nPavel\r\n", "msg_date": "Tue, 14 Apr 2020 11:26:37 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Display of buffers for planning time show nothing for second run" }, { "msg_contents": "On Tue, Apr 14, 2020 at 11:25 AM Pavel Stehule <pavel.stehule@gmail.com> wrote:\r\n>\r\n> út 14. 4. 
2020 v 10:40 odesílatel Amit Langote <amitlangote09@gmail.com> napsal:\r\n>>\r\n>> On Tue, Apr 14, 2020 at 5:27 PM Julien Rouhaud <rjuju123@gmail.com> wrote:\r\n>> > On Tue, Apr 14, 2020 at 10:18 AM Pavel Stehule <pavel.stehule@gmail.com> wrote:\r\n>> > > For second run I get\r\n>> > >\r\n>> > > postgres=# EXPLAIN (BUFFERS, ANALYZE) SELECT * FROM obce WHERE okres_id = 'CZ0201';\r\n>> > > ┌──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┐\r\n>> > > │ QUERY PLAN │\r\n>> > > ╞══════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════╡\r\n>> > > │ Index Scan using obce_okres_id_idx on obce (cost=0.28..14.49 rows=114 width=41) (actual time=0.044..0.101 rows=114 loops=1) │\r\n>> > > │ Index Cond: ((okres_id)::text = 'CZ0201'::text) │\r\n>> > > │ Buffers: shared hit=4 │\r\n>> > > │ Planning Time: 0.159 ms │\r\n>> > > │ Execution Time: 0.155 ms │\r\n>> > > └──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┘\r\n>> > > (5 rows)\r\n>> > >\r\n>> > > Now, there is not any touch in planning time. Does it mean so this all these data are cached somewhere in session memory?\r\n>> >\r\n>> > The planning time is definitely shorter the 2nd time. And yes, what\r\n>> > you see are all the catcache accesses that are initially performed on\r\n>> > a fresh new backend.\r\n>>\r\n>> By the way, even with all catcaches served from local memory, one may\r\n>> still see shared buffers being hit during planning. 
For example:\r\n>>\r\n>> explain (buffers, analyze) select * from foo where a = 1;\r\n>> QUERY PLAN\r\n>> -------------------------------------------------------------------------------------------------------------------\r\n>> Index Only Scan using foo_pkey on foo (cost=0.15..8.17 rows=1\r\n>> width=4) (actual time=0.010..0.011 rows=0 loops=1)\r\n>> Index Cond: (a = 1)\r\n>> Heap Fetches: 0\r\n>> Buffers: shared hit=2\r\n>> Planning Time: 0.775 ms\r\n>> Buffers: shared hit=72\r\n>> Execution Time: 0.086 ms\r\n>> (7 rows)\r\n>>\r\n>> Time: 2.477 ms\r\n>> postgres=# explain (buffers, analyze) select * from foo where a = 1;\r\n>> QUERY PLAN\r\n>> -------------------------------------------------------------------------------------------------------------------\r\n>> Index Only Scan using foo_pkey on foo (cost=0.15..8.17 rows=1\r\n>> width=4) (actual time=0.012..0.012 rows=0 loops=1)\r\n>> Index Cond: (a = 1)\r\n>> Heap Fetches: 0\r\n>> Buffers: shared hit=2\r\n>> Planning Time: 0.102 ms\r\n>> Buffers: shared hit=1\r\n>> Execution Time: 0.047 ms\r\n>> (7 rows)\r\n>>\r\n>> It seems that 1 Buffer hit comes from get_relation_info() doing\r\n>> _bt_getrootheight() for that index on foo.\r\n>\r\n>\r\n> unfortunatelly, I cannot to repeat it.\r\n>\r\n> create table foo(a int);\r\n> create index on foo(a);\r\n> insert into foo values(1);\r\n> analyze foo;\r\n>\r\n> for this case any second EXPLAIN is without buffer on my comp\r\n\r\n_bt_getrootheight() won't cache any value if the index is totally\r\nempty. Removing the INSERT in your example should lead to Amit's\r\nbehavior.\r\n", "msg_date": "Tue, 14 Apr 2020 11:34:56 +0200", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Display of buffers for planning time show nothing for second run" }, { "msg_contents": "út 14. 4. 
2020 v 11:35 odesílatel Julien Rouhaud <rjuju123@gmail.com>\r\nnapsal:\r\n\r\n> On Tue, Apr 14, 2020 at 11:25 AM Pavel Stehule <pavel.stehule@gmail.com>\r\n> wrote:\r\n> >\r\n> > út 14. 4. 2020 v 10:40 odesílatel Amit Langote <amitlangote09@gmail.com>\r\n> napsal:\r\n> >>\r\n> >> On Tue, Apr 14, 2020 at 5:27 PM Julien Rouhaud <rjuju123@gmail.com>\r\n> wrote:\r\n> >> > On Tue, Apr 14, 2020 at 10:18 AM Pavel Stehule <\r\n> pavel.stehule@gmail.com> wrote:\r\n> >> > > For second run I get\r\n> >> > >\r\n> >> > > postgres=# EXPLAIN (BUFFERS, ANALYZE) SELECT * FROM obce WHERE\r\n> okres_id = 'CZ0201';\r\n> >> > >\r\n> ┌──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┐\r\n> >> > > │ QUERY\r\n> PLAN │\r\n> >> > >\r\n> ╞══════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════╡\r\n> >> > > │ Index Scan using obce_okres_id_idx on obce (cost=0.28..14.49\r\n> rows=114 width=41) (actual time=0.044..0.101 rows=114 loops=1) │\r\n> >> > > │ Index Cond: ((okres_id)::text = 'CZ0201'::text)\r\n> │\r\n> >> > > │ Buffers: shared hit=4\r\n> │\r\n> >> > > │ Planning Time: 0.159 ms\r\n> │\r\n> >> > > │ Execution Time: 0.155 ms\r\n> │\r\n> >> > >\r\n> └──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┘\r\n> >> > > (5 rows)\r\n> >> > >\r\n> >> > > Now, there is not any touch in planning time. Does it mean so this\r\n> all these data are cached somewhere in session memory?\r\n> >> >\r\n> >> > The planning time is definitely shorter the 2nd time. And yes, what\r\n> >> > you see are all the catcache accesses that are initially performed on\r\n> >> > a fresh new backend.\r\n> >>\r\n> >> By the way, even with all catcaches served from local memory, one may\r\n> >> still see shared buffers being hit during planning. 
For example:\r\n> >> explain (buffers, analyze) select * from foo where a = 1;\r\n> >> QUERY PLAN\r\n> >>\r\n> -------------------------------------------------------------------------------------------------------------------\r\n> >> Index Only Scan using foo_pkey on foo (cost=0.15..8.17 rows=1\r\n> >> width=4) (actual time=0.010..0.011 rows=0 loops=1)\r\n> >> Index Cond: (a = 1)\r\n> >> Heap Fetches: 0\r\n> >> Buffers: shared hit=2\r\n> >> Planning Time: 0.775 ms\r\n> >> Buffers: shared hit=72\r\n> >> Execution Time: 0.086 ms\r\n> >> (7 rows)\r\n> >>\r\n> >> Time: 2.477 ms\r\n> >> postgres=# explain (buffers, analyze) select * from foo where a = 1;\r\n> >> QUERY PLAN\r\n> >>\r\n> -------------------------------------------------------------------------------------------------------------------\r\n> >> Index Only Scan using foo_pkey on foo (cost=0.15..8.17 rows=1\r\n> >> width=4) (actual time=0.012..0.012 rows=0 loops=1)\r\n> >> Index Cond: (a = 1)\r\n> >> Heap Fetches: 0\r\n> >> Buffers: shared hit=2\r\n> >> Planning Time: 0.102 ms\r\n> >> Buffers: shared hit=1\r\n> >> Execution Time: 0.047 ms\r\n> >> (7 rows)\r\n> >>\r\n> >> It seems that 1 Buffer hit comes from get_relation_info() doing\r\n> >> _bt_getrootheight() for that index on foo.\r\n> >\r\n> >\r\n> > unfortunatelly, I cannot to repeat it.\r\n> >\r\n> > create table foo(a int);\r\n> > create index on foo(a);\r\n> > insert into foo values(1);\r\n> > analyze foo;\r\n> >\r\n> > for this case any second EXPLAIN is without buffer on my comp\r\n>\r\n> _bt_getrootheight() won't cache any value if the index is totally\r\n> empty. Removing the INSERT in your example should lead to Amit's\r\n> behavior.\r\n>\r\n\r\naha. good to know it.\r\n\r\nThank you\r\n\r\nPavel\r\n", "msg_date": "Tue, 14 Apr 2020 11:45:39 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Display of buffers for planning time show nothing for second run" } ]
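[Editor's note: a related case worth trying alongside the thread above. The thread shows planning-time buffer hits vanishing once the backend's catcaches are warm; the sketch below is a hypothetical example (table name invented here, assuming the PostgreSQL 13 `EXPLAIN (BUFFERS)` planning output discussed in this thread) where planning buffers can persist across runs, because a range qual near the column's extreme can make the planner probe the index via get_actual_variable_range():]

```sql
-- Hedged sketch, not taken from the thread: with statistics in place,
-- a qual beyond the last histogram boundary can make the planner look
-- up the actual column maximum in the index during planning, so
-- "Planning: Buffers" may stay non-zero on every run in the session.
create table foo2 (a int);
create index on foo2 (a);
insert into foo2 select generate_series(1, 10000);
analyze foo2;

explain (buffers, analyze) select * from foo2 where a > 9990;
explain (buffers, analyze) select * from foo2 where a > 9990;
```

[Whether the second run still reports planning buffers depends on the server version and endpoint-caching details, so treat this purely as an experiment to run, not as guaranteed output.]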
[ { "msg_contents": "Hi ,\n\nWe have a sql file  called 'generated.sql' under src/test/regress/sql \nfolder . if we run this file on psql , take the dump and try to restore \nit on another db\nwe are getting error like -\n\npsql:/tmp/x:434: ERROR:  column \"b\" of relation \"gtest1_1\" is a \ngenerated column\npsql:/tmp/x:441: ERROR:  cannot use column reference in DEFAULT expression\n\nThese sql statements , i copied from the dump file\n\npostgres=# CREATE TABLE public.gtest30 (\npostgres(#     a integer,\npostgres(#     b integer\npostgres(# );\nCREATE TABLE\npostgres=#\npostgres=# CREATE TABLE public.gtest30_1 (\npostgres(# )\npostgres-# INHERITS (public.gtest30);\nCREATE TABLE\npostgres=# ALTER TABLE ONLY public.gtest30_1 ALTER COLUMN b SET DEFAULT \n(a * 2);\nERROR:  cannot use column reference in DEFAULT expression\npostgres=#\n\nSteps to reproduce -\n\nconnect to psql - ( ./psql postgres)\ncreate database ( create database x;)\nconnect to database x (\\c x )\nexecute generated.sql file (\\i ../../src/test/regress/sql/generated.sql)\ntake the dump of x db (./pg_dump -Fp x > /tmp/t.dump)\ncreate another database  (create database y;)\nConnect to y db (\\c y)\nexecute plain dump sql file (\\i /tmp/t.dump)\n\n-- \nregards,tushar\nEnterpriseDB https://www.enterprisedb.com/\nThe Enterprise PostgreSQL Company\n\n\n\n", "msg_date": "Tue, 14 Apr 2020 19:11:26 +0530", "msg_from": "tushar <tushar.ahuja@enterprisedb.com>", "msg_from_op": true, "msg_subject": "While restoring -getting error if dump contain sql statements\n generated from generated.sql file" }, { "msg_contents": "On Tue, 14 Apr 2020 at 22:41, tushar <tushar.ahuja@enterprisedb.com> wrote:\n>\n> Hi ,\n>\n> We have a sql file called 'generated.sql' under src/test/regress/sql\n> folder . 
if we run this file on psql , take the dump and try to restore\n> it on another db\n> we are getting error like -\n>\n> psql:/tmp/x:434: ERROR: column \"b\" of relation \"gtest1_1\" is a\n> generated column\n> psql:/tmp/x:441: ERROR: cannot use column reference in DEFAULT expression\n>\n> These sql statements , i copied from the dump file\n>\n> postgres=# CREATE TABLE public.gtest30 (\n> postgres(# a integer,\n> postgres(# b integer\n> postgres(# );\n> CREATE TABLE\n> postgres=#\n> postgres=# CREATE TABLE public.gtest30_1 (\n> postgres(# )\n> postgres-# INHERITS (public.gtest30);\n> CREATE TABLE\n> postgres=# ALTER TABLE ONLY public.gtest30_1 ALTER COLUMN b SET DEFAULT\n> (a * 2);\n> ERROR: cannot use column reference in DEFAULT expression\n> postgres=#\n>\n> Steps to reproduce -\n>\n> connect to psql - ( ./psql postgres)\n> create database ( create database x;)\n> connect to database x (\\c x )\n> execute generated.sql file (\\i ../../src/test/regress/sql/generated.sql)\n> take the dump of x db (./pg_dump -Fp x > /tmp/t.dump)\n> create another database (create database y;)\n> Connect to y db (\\c y)\n> execute plain dump sql file (\\i /tmp/t.dump)\n>\n\nGood catch. 
The minimum reproducer is to execute the following\nqueries, pg_dump and pg_restore/psql.\n\n-- test case 1\ncreate table a (a int, b int generated always as (a * 2) stored);\ncreate table a1 () inherits(a);\n\n-- test case 2\ncreate table b (a int, b int generated always as (a * 2) stored);\ncreate table b1 () inherits(b);\nalter table only b alter column b drop expression;\n\nAfter executing the above queries, pg_dump will generate the following queries:\n\n-- test case 1\nCREATE TABLE public.a (\n a integer,\n b integer GENERATED ALWAYS AS ((a * 2)) STORED\n);\nALTER TABLE public.a OWNER TO masahiko;\nCREATE TABLE public.a1 (\n)\nINHERITS (public.a);\nALTER TABLE public.a1 OWNER TO masahiko;\nALTER TABLE ONLY public.a1 ALTER COLUMN b SET DEFAULT (a * 2); -- error!\n\n-- test case 2\nCREATE TABLE public.b (\n a integer,\n b integer\n);\nALTER TABLE public.b OWNER TO masahiko;\nCREATE TABLE public.b1 (\n)\nINHERITS (public.b);\nALTER TABLE public.b1 OWNER TO masahiko;\nALTER TABLE ONLY public.b1 ALTER COLUMN b SET DEFAULT (a * 2); -- error!\n\npg_dump generates the same SQL \"ALTER TABLE ... ALTER COLUMN b SET\nDEFAULT (a * 2);\" but the errors vary.\n\ntest case 1:\nERROR: column \"b\" of relation \"a1\" is a generated column\n\ntest case 2:\nERROR: cannot use column reference in DEFAULT expression\n\nIn both cases, I think we can simply get rid of that ALTER TABLE\nqueries if we don't support changing a normal column to a generated\ncolumn using ALTER TABLE .. ALTER COLUMN.\n\nI've attached a WIP patch. 
I'll look at this closely and add regression tests.\n\nRegards,\n\n\n--\nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Fri, 17 Apr 2020 22:50:56 +0900", "msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: While restoring -getting error if dump contain sql statements\n generated from generated.sql file" }, { "msg_contents": "On Fri, 17 Apr 2020 at 22:50, Masahiko Sawada\n<masahiko.sawada@2ndquadrant.com> wrote:\n>\n> On Tue, 14 Apr 2020 at 22:41, tushar <tushar.ahuja@enterprisedb.com> wrote:\n> >\n> > Hi ,\n> >\n> > We have a sql file called 'generated.sql' under src/test/regress/sql\n> > folder . if we run this file on psql , take the dump and try to restore\n> > it on another db\n> > we are getting error like -\n> >\n> > psql:/tmp/x:434: ERROR: column \"b\" of relation \"gtest1_1\" is a\n> > generated column\n> > psql:/tmp/x:441: ERROR: cannot use column reference in DEFAULT expression\n> >\n> > These sql statements , i copied from the dump file\n> >\n> > postgres=# CREATE TABLE public.gtest30 (\n> > postgres(# a integer,\n> > postgres(# b integer\n> > postgres(# );\n> > CREATE TABLE\n> > postgres=#\n> > postgres=# CREATE TABLE public.gtest30_1 (\n> > postgres(# )\n> > postgres-# INHERITS (public.gtest30);\n> > CREATE TABLE\n> > postgres=# ALTER TABLE ONLY public.gtest30_1 ALTER COLUMN b SET DEFAULT\n> > (a * 2);\n> > ERROR: cannot use column reference in DEFAULT expression\n> > postgres=#\n> >\n> > Steps to reproduce -\n> >\n> > connect to psql - ( ./psql postgres)\n> > create database ( create database x;)\n> > connect to database x (\\c x )\n> > execute generated.sql file (\\i ../../src/test/regress/sql/generated.sql)\n> > take the dump of x db (./pg_dump -Fp x > /tmp/t.dump)\n> > create another database (create database y;)\n> > Connect to y db (\\c y)\n> > execute plain dump sql file (\\i /tmp/t.dump)\n> >\n>\n> Good 
catch. The minimum reproducer is to execute the following\n> queries, pg_dump and pg_restore/psql.\n>\n> -- test case 1\n> create table a (a int, b int generated always as (a * 2) stored);\n> create table a1 () inherits(a);\n>\n> -- test case 2\n> create table b (a int, b int generated always as (a * 2) stored);\n> create table b1 () inherits(b);\n> alter table only b alter column b drop expression;\n>\n> After executing the above queries, pg_dump will generate the following queries:\n>\n> -- test case 1\n> CREATE TABLE public.a (\n> a integer,\n> b integer GENERATED ALWAYS AS ((a * 2)) STORED\n> );\n> ALTER TABLE public.a OWNER TO masahiko;\n> CREATE TABLE public.a1 (\n> )\n> INHERITS (public.a);\n> ALTER TABLE public.a1 OWNER TO masahiko;\n> ALTER TABLE ONLY public.a1 ALTER COLUMN b SET DEFAULT (a * 2); -- error!\n>\n> -- test case 2\n> CREATE TABLE public.b (\n> a integer,\n> b integer\n> );\n> ALTER TABLE public.b OWNER TO masahiko;\n> CREATE TABLE public.b1 (\n> )\n> INHERITS (public.b);\n> ALTER TABLE public.b1 OWNER TO masahiko;\n> ALTER TABLE ONLY public.b1 ALTER COLUMN b SET DEFAULT (a * 2); -- error!\n>\n> pg_dump generates the same SQL \"ALTER TABLE ... ALTER COLUMN b SET\n> DEFAULT (a * 2);\" but the errors vary.\n>\n> test case 1:\n> ERROR: column \"b\" of relation \"a1\" is a generated column\n>\n> test case 2:\n> ERROR: cannot use column reference in DEFAULT expression\n>\n> In both cases, I think we can simply get rid of that ALTER TABLE\n> queries if we don't support changing a normal column to a generated\n> column using ALTER TABLE .. ALTER COLUMN.\n>\n> I've attached a WIP patch. I'll look at this closely and add regression tests.\n>\n\nAfter more thoughts, the approach of the previous patch doesn't seem\ncorrect. 
Instead, I think we can change dumpAttrDef so that it skips\nemitting the query setting an expression of a generated column if the\ncolumn is a generated column.\n\nCurrently, we need to emit a query setting the default in the\nfollowing three cases (ref. adinfo->separate):\n\n1. default is for column on VIEW\n2. shouldPrintColumn() returns false in the two case:\n 2-1. the column is a dropped column.\n 2-2. the column is not a local column and the table is not a partition.\n\nSince we don't support to set generated column as a default value for\na column of a view the case (1) is always false. And for the case\n(2)-1, we don't dump a dropped column. I think the case (2)-2 means a\ncolumn inherited from the parent table but these columns are printed\nin CREATE TABLE of the parent table and a child table inherits it. We\ncan have a generated column having a different expression from the\nparent one but it will need to drop the inherited one and create a new\ngenerated column. Such operation will make the column a local column,\nso these definitions will be printed in the CREATE TABLE of the\ninherited table. Therefore, IIUC there is no case where we need a\nseparate query setting an expression of a generated column.\n\nAlso, I've tried to add a regression test for this but pg_dump TAP\ntests seem not to have a test if the dumped queries are loaded without\nerrors. I think we can have such a test but the attached updated\nversion patch doesn't include tests so far.\n\nRegards,\n\n--\nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Mon, 20 Apr 2020 14:27:27 +0900", "msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: While restoring -getting error if dump contain sql statements\n generated from generated.sql file" } ]
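[Editor's note: to make the proposed fix concrete, here is a hedged sketch — not the committed patch — of what a corrected dump of Sawada-san's test case 1 should look like once pg_dump stops emitting a separate default for generated columns. The generated expression appears exactly once, in the parent's CREATE TABLE, and the child simply inherits it:]

```sql
-- Hypothetical corrected pg_dump output for:
--   create table a (a int, b int generated always as (a * 2) stored);
--   create table a1 () inherits(a);
CREATE TABLE public.a (
    a integer,
    b integer GENERATED ALWAYS AS ((a * 2)) STORED
);

CREATE TABLE public.a1 (
)
INHERITS (public.a);

-- The problematic statement is simply omitted; restoring it would fail:
--   ALTER TABLE ONLY public.a1 ALTER COLUMN b SET DEFAULT (a * 2);
--   ERROR:  column "b" of relation "a1" is a generated column
```

[Restoring such a script into a fresh database should succeed, with column b of a1 shown as generated, inherited from a.]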
[ { "msg_contents": ">>I m still working on testing this patch. If anyone has Idea please\nsuggest.\nI still see problems with this patch.\n\n1. Variable loct have redundant initialization, it would be enough to\ndeclare so: _locale_t loct;\n2. Style white space in variable rc declaration.\n3. Style variable cp_index can be reduced.\nif (tmp != NULL) {\n size_t cp_index;\n\ncp_index = (size_t)(tmp - winlocname);\nstrncpy(loc_name, winlocname, cp_index);\nloc_name[cp_index] = '\\0';\n4. Memory leak if _WIN32_WINNT >= 0x0600 is true, _free_locale(loct); is\nnot called.\n5. Why call _create_locale if _WIN32_WINNT >= 0x0600 is true and loct is\nnot used?\n\nregards,\nRanier Vilela\n", "msg_date": "Tue, 14 Apr 2020 12:41:44 -0300", "msg_from": "Ranier Vilela <ranier.vf@gmail.com>", "msg_from_op": true, "msg_subject": "PG compilation error with Visual Studio 2015/2017/2019" } ]
[ { "msg_contents": "Guys; This errors out with: \n\nERROR: could not determine which collation to use for string comparison \n HINT: Use the COLLATE clause to set the collation explicitly.\n\n\nThe database is init'ed with: \ninitdb -D $PGDATA -E utf8 --locale=nb_NO.UTF-8\n\n13-dev HEAD as of 8128b0c152a67917535f50738ac26da4f984ddd9 \n\nWorks fine in <= 12 \n\n=========================== \n\ncreate table person(\n  id serial primary key,\n  firstname varchar,\n  lastname varchar\n);\n\ninsert into person(firstname, lastname) values ('Andreas', 'Krogh');\n\nCREATE OR REPLACE FUNCTION concat_lower(varchar, varchar) RETURNS varchar AS $$\n  SELECT nullif(lower(coalesce($1, '')) || lower(coalesce($2, '')), '')\n$$ LANGUAGE SQL IMMUTABLE;\n\nselect * from person pers\nORDER BY concat_lower(pers.firstname, pers.lastname) ASC;\n\n=========================== \n\n\n--\n Andreas Joseph Krogh", "msg_date": "Tue, 14 Apr 2020 18:49:11 +0200 (CEST)", "msg_from": "Andreas Joseph Krogh <andreas@visena.com>", "msg_from_op": true, "msg_subject": "ERROR: could not determine which collation to use for string\n comparison" }, { "msg_contents": "Andreas Joseph Krogh <andreas@visena.com> writes:\n> Guys; This errors out with: \n> ERROR: could not determine which collation to use for string comparison \n> HINT: Use the COLLATE clause to set the collation explicitly.\n\nFixed, thanks for the report.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 14 Apr 2020 17:31:13 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: ERROR: could not determine which collation to use for string\n comparison" } ]
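[Editor's note: for builds where this regression is present (13-dev before the fix mentioned in the reply), the error's HINT points at a workaround — attach an explicit collation at the comparison site. A possible, untested form:]

```sql
-- Hedged sketch of the COLLATE workaround the HINT refers to; any
-- collation valid for varchar would do, e.g. "default" or "C".
select *
from person pers
ORDER BY concat_lower(pers.firstname, pers.lastname) COLLATE "default" ASC;
```

[This does not fix anything in the server — it merely forces a collation so the sort can proceed; the underlying regression was fixed in HEAD per the reply above.]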
[ { "msg_contents": "Hi,\n\nOver in [1], Tom and I had a discussion in response to some confusion\nabout why remove_useless_groupby_columns() goes to the trouble of\nrecording a dependency on the PRIMARY KEY constraint when removing\nsurplus columns from the GROUP BY clause.\n\nThe outcome was that we don't need to do this since\nremove_useless_groupby_columns() is used only as a plan-time\noptimisation, we don't need to record any dependency. Unlike\ncheck_functional_grouping(), where we must record the dependency as we\nmay end up with a VIEW with columns, e.g, in the select list which are\nfunctionally dependant on a pkey constraint. In that case, we must\nensure the view is also removed, or that the constraint removal is\nblocked. There's no such requirement for planner smarts, such as the\none in remove_useless_groupby_columns() as in that case we'll trigger\na relcache invalidation during ALTER TABLE DROP CONSTRAINT, which\ncached plans will notice when they obtain their locks just before\nexecution begins.\n\nTo prevent future confusion, I'd like to remove dependency recording\ncode from remove_useless_groupby_columns() and update the misleading\ncomment. 
Likely this should also be backpatched to 9.6.\n\nDoes anyone think differently?\n\nA patch to do this is attached.\n\n[1] https://www.postgresql.org/message-id/CAApHDvr4OW_OUd_Rxp0d1hRgz+a4mm8+8uR7QoM2VqKFX08SqA@mail.gmail.com", "msg_date": "Wed, 15 Apr 2020 13:27:02 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": true, "msg_subject": "remove_useless_groupby_columns does not need to record constraint\n dependencies" }, { "msg_contents": "David Rowley <dgrowleyml@gmail.com> writes:\n> Over in [1], Tom and I had a discussion in response to some confusion\n> about why remove_useless_groupby_columns() goes to the trouble of\n> recording a dependency on the PRIMARY KEY constraint when removing\n> surplus columns from the GROUP BY clause.\n\n> The outcome was that we don't need to do this since\n> remove_useless_groupby_columns() is used only as a plan-time\n> optimisation, we don't need to record any dependency.\n\nRight. I think it would be good for the comments to emphasize that\na relcache inval will be forced if the *index* underlying the pkey\nconstraint is dropped; the code doesn't care so much about the constraint\nas such. (This is also why it'd be safe to use a plain unique index\nfor the same optimization, assuming you can independently verify\nnon-nullness of the columns. Maybe we should trash the existing coding\nand just have it look for unique indexes + attnotnull flags.)\n\n> To prevent future confusion, I'd like to remove dependency recording\n> code from remove_useless_groupby_columns() and update the misleading\n> comment. Likely this should also be backpatched to 9.6.\n\n+1 for removing the dependency and improving the comments in HEAD.\nMinus quite a lot for back-patching: this is not a bug fix, and\nthere's a nonzero risk that we've overlooked something. 
I'd rather\nfind that out in beta testing than from bug reports against stable\nbranches.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 15 Apr 2020 11:24:35 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: remove_useless_groupby_columns does not need to record constraint\n dependencies" }, { "msg_contents": "On Thu, 16 Apr 2020 at 03:24, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> David Rowley <dgrowleyml@gmail.com> writes:\n> > Over in [1], Tom and I had a discussion in response to some confusion\n> > about why remove_useless_groupby_columns() goes to the trouble of\n> > recording a dependency on the PRIMARY KEY constraint when removing\n> > surplus columns from the GROUP BY clause.\n>\n> > The outcome was that we don't need to do this since\n> > remove_useless_groupby_columns() is used only as a plan-time\n> > optimisation, we don't need to record any dependency.\n>\n> Right. I think it would be good for the comments to emphasize that\n> a relcache inval will be forced if the *index* underlying the pkey\n> constraint is dropped; the code doesn't care so much about the constraint\n> as such. (This is also why it'd be safe to use a plain unique index\n> for the same optimization, assuming you can independently verify\n> non-nullness of the columns.\n\nI've reworded the comment in the attached version.\n\n> Maybe we should trash the existing coding\n> and just have it look for unique indexes + attnotnull flags.)\n\nI'd like to, but the timing seems off. Perhaps after we branch for PG14.\n\n> > To prevent future confusion, I'd like to remove dependency recording\n> > code from remove_useless_groupby_columns() and update the misleading\n> > comment. Likely this should also be backpatched to 9.6.\n>\n> +1 for removing the dependency and improving the comments in HEAD.\n> Minus quite a lot for back-patching: this is not a bug fix, and\n> there's a nonzero risk that we've overlooked something. 
I'd rather\n> find that out in beta testing than from bug reports against stable\n> branches.\n\nThat seems fair.\n\nDavid", "msg_date": "Thu, 16 Apr 2020 14:48:50 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": true, "msg_subject": "Re: remove_useless_groupby_columns does not need to record constraint\n dependencies" }, { "msg_contents": "David Rowley <dgrowleyml@gmail.com> writes:\n> I've reworded the comment in the attached version.\n\nLGTM.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 16 Apr 2020 10:53:17 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: remove_useless_groupby_columns does not need to record constraint\n dependencies" }, { "msg_contents": "On Fri, 17 Apr 2020 at 02:53, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> David Rowley <dgrowleyml@gmail.com> writes:\n> > I've reworded the comment in the attached version.\n>\n> LGTM.\n\nThanks for reviewing. Pushed.\n\n\n", "msg_date": "Fri, 17 Apr 2020 10:31:52 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": true, "msg_subject": "Re: remove_useless_groupby_columns does not need to record constraint\n dependencies" } ]
[ { "msg_contents": "Hi!\n\nOne of our users asked me why they cannot read details of pg_stat_progress_vacuum while they have pg_read_all_stats role.\nMaybe I'm missing something, but I think they should be able to read stats...\n\nPFA fix.\nThis affects pg_stat_progress_analyze, pg_stat_progress_basebackup, pg_stat_progress_cluster, pg_stat_progress_create_index and pg_stat_progress_vacuum.\n\nWith patch\npostgres=# set role pg_read_all_stats ;\npostgres=> select * from pg_stat_progress_vacuum ;\n pid | datid | datname | relid | phase | heap_blks_total | heap_blks_scanned | heap_blks_vacuumed | index_vacuum_count | max_dead_tuples | num_dead_tuples \n-------+-------+----------+-------+---------------+-----------------+-------------------+--------------------+--------------------+-----------------+-----------------\n 76331 | 12923 | postgres | 1247 | scanning heap | 10 | 1 | 0 | 0 | 2910 | 0\n(1 row)\n\nWithout patch\npostgres=# set role pg_read_all_stats ;\nSET\npostgres=> select * from pg_stat_progress_vacuum ;\n pid | datid | datname | relid | phase | heap_blks_total | heap_blks_scanned | heap_blks_vacuumed | index_vacuum_count | max_dead_tuples | num_dead_tuples \n-------+-------+----------+-------+-------+-----------------+-------------------+--------------------+--------------------+-----------------+-----------------\n 76331 | 12923 | postgres | | | | | | | | \n(1 row)\n\nThanks!\n\nBest regards, Andrey Borodin.", "msg_date": "Wed, 15 Apr 2020 12:13:57 +0500", "msg_from": "\"Andrey M. Borodin\" <x4mmm@yandex-team.ru>", "msg_from_op": true, "msg_subject": "Allow pg_read_all_stats to read pg_stat_progress_*" }, { "msg_contents": "On Wed, Apr 15, 2020 at 9:14 AM Andrey M. 
Borodin <x4mmm@yandex-team.ru>\nwrote:\n\n> Hi!\n>\n> One of our users asked me why they cannot read details of\n> pg_stat_progress_vacuum while they have pg_read_all_stats role.\n> Maybe I'm missing something, but I think they should be able to read\n> stats...\n>\n> PFA fix.\n> This affects pg_stat_progress_analyze, pg_stat_progress_basebackup,\n> pg_stat_progress_cluster, pg_stat_progress_create_index and\n> pg_stat_progress_vacuum.\n>\n> With patch\n> postgres=# set role pg_read_all_stats ;\n> postgres=> select * from pg_stat_progress_vacuum ;\n> pid | datid | datname | relid | phase | heap_blks_total |\n> heap_blks_scanned | heap_blks_vacuumed | index_vacuum_count |\n> max_dead_tuples | num_dead_tuples\n>\n> -------+-------+----------+-------+---------------+-----------------+-------------------+--------------------+--------------------+-----------------+-----------------\n> 76331 | 12923 | postgres | 1247 | scanning heap | 10 |\n> 1 | 0 | 0 | 2910 |\n> 0\n> (1 row)\n>\n> Without patch\n> postgres=# set role pg_read_all_stats ;\n> SET\n> postgres=> select * from pg_stat_progress_vacuum ;\n> pid | datid | datname | relid | phase | heap_blks_total |\n> heap_blks_scanned | heap_blks_vacuumed | index_vacuum_count |\n> max_dead_tuples | num_dead_tuples\n>\n> -------+-------+----------+-------+-------+-----------------+-------------------+--------------------+--------------------+-----------------+-----------------\n> 76331 | 12923 | postgres | | | |\n> | | | |\n>\n> (1 row)\n>\n\nI think that makes perfect sense. The documentation explicitly says \"can\nread all pg_stat_* views\", which is clearly wrong -- so either the code or\nthe docs should be fixed, and it looks like it's the code that should be\nfixed to me.\n\nAs for the patch, one could argue that we should just store the resulting\nboolean instead of re-running the check (e.g. 
have a \"bool\nhas_stats_privilege\" or such), but perhaps that's an unnecessary\nmicro-optimization, like the attached.\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/ <http://www.hagander.net/>\n Work: https://www.redpill-linpro.com/ <http://www.redpill-linpro.com/>", "msg_date": "Wed, 15 Apr 2020 12:25:20 +0200", "msg_from": "Magnus Hagander <magnus@hagander.net>", "msg_from_op": false, "msg_subject": "Re: Allow pg_read_all_stats to read pg_stat_progress_*" }, { "msg_contents": "\n\n> 15 апр. 2020 г., в 15:25, Magnus Hagander <magnus@hagander.net> написал(а):\n> \n> \n> I think that makes perfect sense. The documentation explicitly says \"can read all pg_stat_* views\", which is clearly wrong -- so either the code or the docs should be fixed, and it looks like it's the code that should be fixed to me.\nShould it be bug or v14 feature?\n\nAlso pgstatfuncs.c contains a lot more checks of has_privs_of_role(GetUserId(), beentry->st_userid).\nMaybe grant them all?\n\n> As for the patch, one could argue that we should just store the resulting boolean instead of re-running the check (e.g. have a \"bool has_stats_privilege\" or such), but perhaps that's an unnecessary micro-optimization, like the attached.\n\nLooks good to me.\n\nThanks!\n\n\nBest regards, Andrey Borodin.\n\n", "msg_date": "Wed, 15 Apr 2020 15:58:05 +0500", "msg_from": "\"Andrey M. Borodin\" <x4mmm@yandex-team.ru>", "msg_from_op": true, "msg_subject": "Re: Allow pg_read_all_stats to read pg_stat_progress_*" }, { "msg_contents": "At Wed, 15 Apr 2020 15:58:05 +0500, \"Andrey M. Borodin\" <x4mmm@yandex-team.ru> wrote in \n> > 15 апр. 2020 г., в 15:25, Magnus Hagander <magnus@hagander.net> написал(а):\n> > I think that makes perfect sense. 
The documentation explicitly says \"can read all pg_stat_* views\", which is clearly wrong -- so either the code or the docs should be fixed, and it looks like it's the code that should be fixed to me.\n> Should it be bug or v14 feature?\n> \n> Also pgstatfuncs.c contains a lot more checks of has_privs_of_role(GetUserId(), beentry->st_userid).\n> Maybe grant them all?\n> \n> > As for the patch, one could argue that we should just store the resulting boolean instead of re-running the check (e.g. have a \"bool has_stats_privilege\" or such), but perhaps that's an unnecessary micro-optimization, like the attached.\n> \n> Looks good to me.\n\npg_stat_get_activty checks (has_privs_of_role() ||\nis_member_of_role()) in-place for every entry. It's not necessary but\nI suppose that doing the same thing for pg_stat_progress_info might be\nbetter.\n\nIt's another issue, but pg_stat_get_backend_* functions don't consider\npg_read_all_stats. I suppose that the functions should work under the\nsame criteria to pg_stat views, and maybe explicitly documented?\n\nIf we do that, it may be better that we define \"PGSTAT_VIEW_PRIV()\" or\nsomething like and replace the all occurances of the idiomatic\ncondition with it.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Thu, 16 Apr 2020 14:05:04 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Allow pg_read_all_stats to read pg_stat_progress_*" }, { "msg_contents": "On Thu, Apr 16, 2020 at 7:05 AM Kyotaro Horiguchi <horikyota.ntt@gmail.com>\nwrote:\n\n> At Wed, 15 Apr 2020 15:58:05 +0500, \"Andrey M. Borodin\" <\n> x4mmm@yandex-team.ru> wrote in\n> > > 15 апр. 2020 г., в 15:25, Magnus Hagander <magnus@hagander.net>\n> написал(а):\n> > > I think that makes perfect sense. 
The documentation explicitly says\n> \"can read all pg_stat_* views\", which is clearly wrong -- so either the\n> code or the docs should be fixed, and it looks like it's the code that\n> should be fixed to me.\n> > Should it be bug or v14 feature?\n> >\n> > Also pgstatfuncs.c contains a lot more checks of\n> has_privs_of_role(GetUserId(), beentry->st_userid).\n> > Maybe grant them all?\n> >\n> > > As for the patch, one could argue that we should just store the\n> resulting boolean instead of re-running the check (e.g. have a \"bool\n> has_stats_privilege\" or such), but perhaps that's an unnecessary\n> micro-optimization, like the attached.\n> >\n> > Looks good to me.\n>\n> pg_stat_get_activty checks (has_privs_of_role() ||\n> is_member_of_role()) in-place for every entry. It's not necessary but\n> I suppose that doing the same thing for pg_stat_progress_info might be\n> better.\n>\n\n From a result perspective, it shouldn't make a difference though, should\nit? It's a micro-optimization, but it might not have an actual performance\neffect in reality as well, but the result should always be the same?\n\n(FWIW, pg_stat_statements has a coding pattern similar to the one I\nsuggested in the patch)\n\n\n\n>\n> It's another issue, but pg_stat_get_backend_* functions don't consider\n> pg_read_all_stats. I suppose that the functions should work under the\n> same criteria to pg_stat views, and maybe explicitly documented?\n>\n\nThat's a good question. They haven't been documented to do so, but it\ncertainly seems *weird* that the same information should be available\nthrough a view like pg_stat_activity, but not through the functions.\n\nI would guess this was simply forgotten in 25fff40798f -- I don't recall\nany discussion about it. 
The commit message specifically says\npg_database_size() and pg_tablespace_size(), but mentions nothing about\npg_stat_*.\n\n\n>\n> If we do that, it may be better that we define \"PGSTAT_VIEW_PRIV()\" or\n> something like and replace the all occurances of the idiomatic\n> condition with it.\n>\n\nYou mean something like the attached?\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/ <http://www.hagander.net/>\n Work: https://www.redpill-linpro.com/ <http://www.redpill-linpro.com/>", "msg_date": "Thu, 16 Apr 2020 14:46:00 +0200", "msg_from": "Magnus Hagander <magnus@hagander.net>", "msg_from_op": false, "msg_subject": "Re: Allow pg_read_all_stats to read pg_stat_progress_*" }, { "msg_contents": "At Thu, 16 Apr 2020 14:46:00 +0200, Magnus Hagander <magnus@hagander.net> wrote in \n> On Thu, Apr 16, 2020 at 7:05 AM Kyotaro Horiguchi <horikyota.ntt@gmail.com>\n> wrote:\n> \n> > At Wed, 15 Apr 2020 15:58:05 +0500, \"Andrey M. Borodin\" <\n> > x4mmm@yandex-team.ru> wrote in\n> > > > 15 апр. 2020 г., в 15:25, Magnus Hagander <magnus@hagander.net>\n> > написал(а):\n> > > > I think that makes perfect sense. The documentation explicitly says\n> > \"can read all pg_stat_* views\", which is clearly wrong -- so either the\n> > code or the docs should be fixed, and it looks like it's the code that\n> > should be fixed to me.\n> > > Should it be bug or v14 feature?\n> > >\n> > > Also pgstatfuncs.c contains a lot more checks of\n> > has_privs_of_role(GetUserId(), beentry->st_userid).\n> > > Maybe grant them all?\n> > >\n> > > > As for the patch, one could argue that we should just store the\n> > resulting boolean instead of re-running the check (e.g. have a \"bool\n> > has_stats_privilege\" or such), but perhaps that's an unnecessary\n> > micro-optimization, like the attached.\n> > >\n> > > Looks good to me.\n> >\n> > pg_stat_get_activty checks (has_privs_of_role() ||\n> > is_member_of_role()) in-place for every entry. 
It's not necessary but\n> > I suppose that doing the same thing for pg_stat_progress_info might be\n> > better.\n> >\n> \n> From a result perspective, it shouldn't make a difference though, should\n> it? It's a micro-optimization, but it might not have an actual performance\n> effect in reality as well, but the result should always be the same?\n> \n> (FWIW, pg_stat_statements has a coding pattern similar to the one I\n> suggested in the patch)\n\nAs a priciple, I prefer the \"optimized\" (or pg_stat_statements')\npattern because that style suggests that the privilege is (shold be)\nsame to all entries, not because that it might be a bit faster. My\nsuggestion above is just from \"same style with a nearby code\". But at\nleast the v2 code introduces the third style (mixture of in-place and\npre-evaluated) seemed a kind of ad-hoc.\n\n> > It's another issue, but pg_stat_get_backend_* functions don't consider\n> > pg_read_all_stats. I suppose that the functions should work under the\n> > same criteria to pg_stat views, and maybe explicitly documented?\n> >\n> \n> That's a good question. They haven't been documented to do so, but it\n> certainly seems *weird* that the same information should be available\n> through a view like pg_stat_activity, but not through the functions.\n> \n> I would guess this was simply forgotten in 25fff40798f -- I don't recall\n> any discussion about it. The commit message specifically says\n> pg_database_size() and pg_tablespace_size(), but mentions nothing about\n> pg_stat_*.\n\nYeah. pg_database_size() ERRORs out for insufficient privileges. On\nthe other hand pg_stat_* returns \"<insufficient privilege>\" not\nERRORing out.\n\nFor example, pg_stat_get_backend_wait_event_type is documented as\n\nhttps://www.postgresql.org/docs/current/monitoring-stats.html#MONITORING-STATS-BACKEND-FUNCS-TABLE\n\n\"Wait event type name if backend is currently waiting, otherwise\n NULL. 
See Table 27.4 for details.\"\n\nI would read this as \"If the function returns non-null value, the\nreturned value represents the wait event type mentioned in Table\n27.4\", but, \"<insufficient privilege>\" is not a wait event type. I\nthink something like \"text-returning functions may return some\nout-of-the-domain strings like \"<insufficient privilege>\" under\ncorresponding conditions\".\n\n> > If we do that, it may be better that we define \"PGSTAT_VIEW_PRIV()\" or\n> > something like and replace the all occurances of the idiomatic\n> > condition with it.\n> >\n> \n> You mean something like the attached?\n\nExactly. It looks good to me. Thanks!\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Fri, 17 Apr 2020 10:29:10 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Allow pg_read_all_stats to read pg_stat_progress_*" }, { "msg_contents": "\n\n> 16 апр. 2020 г., в 17:46, Magnus Hagander <magnus@hagander.net> написал(а):\n> \n> \n> If we do that, it may be better that we define \"PGSTAT_VIEW_PRIV()\" or\n> something like and replace the all occurances of the idiomatic\n> condition with it.\n> \n> You mean something like the attached? \n> \n> <allow_read_all_stats3.diff>\n\nIs it correct that we use DEFAULT_ROLE_READ_ALL_STATS regardless of inheritance? I'm not familiar with what is inherited and what is not, so I think it's better to ask explicitly.\n\n+#define HAS_PGSTAT_PERMISSIONS(role)\t (is_member_of_role(GetUserId(), DEFAULT_ROLE_READ_ALL_STATS) || has_privs_of_role(GetUserId(), role))\n\nBesides this, the patch looks good to me.\nThanks!\n\nBest regards, Andrey Borodin,\n\n", "msg_date": "Mon, 20 Apr 2020 15:43:22 +0500", "msg_from": "\"Andrey M. Borodin\" <x4mmm@yandex-team.ru>", "msg_from_op": true, "msg_subject": "Re: Allow pg_read_all_stats to read pg_stat_progress_*" }, { "msg_contents": "On Mon, Apr 20, 2020 at 12:43 PM Andrey M. 
Borodin <x4mmm@yandex-team.ru>\nwrote:\n\n>\n>\n> > 16 апр. 2020 г., в 17:46, Magnus Hagander <magnus@hagander.net>\n> написал(а):\n> >\n> >\n> > If we do that, it may be better that we define \"PGSTAT_VIEW_PRIV()\" or\n> > something like and replace the all occurances of the idiomatic\n> > condition with it.\n> >\n> > You mean something like the attached?\n> >\n> > <allow_read_all_stats3.diff>\n>\n> Is it correct that we use DEFAULT_ROLE_READ_ALL_STATS regardless of\n> inheritance? I'm not familiar with what is inherited and what is not, so I\n> think it's better to ask explicitly.\n>\n> +#define HAS_PGSTAT_PERMISSIONS(role) (is_member_of_role(GetUserId(),\n> DEFAULT_ROLE_READ_ALL_STATS) || has_privs_of_role(GetUserId(), role))\n>\n\n It is consistent with all the other uses of DEFAULT_ROLE_READ_ALL_STATS\nthat I can find.\n\n\nBesides this, the patch looks good to me.\n>\n\nThanks, I've pushed it now.\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/ <http://www.hagander.net/>\n Work: https://www.redpill-linpro.com/ <http://www.redpill-linpro.com/>", "msg_date": "Mon, 20 Apr 2020 13:05:35 +0200", "msg_from": "Magnus Hagander <magnus@hagander.net>", "msg_from_op": false, "msg_subject": "Re: Allow pg_read_all_stats to read pg_stat_progress_*" }, { "msg_contents": "Greetings,\n\n* Magnus Hagander (magnus@hagander.net) wrote:\n> On Mon, Apr 20, 2020 at 12:43 PM Andrey M. Borodin <x4mmm@yandex-team.ru>\n> wrote:\n> > > 16 апр. 2020 г., в 17:46, Magnus Hagander <magnus@hagander.net>\n> > написал(а):\n> > > If we do that, it may be better that we define \"PGSTAT_VIEW_PRIV()\" or\n> > > something like and replace the all occurances of the idiomatic\n> > > condition with it.\n> > >\n> > > You mean something like the attached?\n> > >\n> > > <allow_read_all_stats3.diff>\n> >\n> > Is it correct that we use DEFAULT_ROLE_READ_ALL_STATS regardless of\n> > inheritance? I'm not familiar with what is inherited and what is not, so I\n> > think it's better to ask explicitly.\n> >\n> > +#define HAS_PGSTAT_PERMISSIONS(role) (is_member_of_role(GetUserId(),\n> > DEFAULT_ROLE_READ_ALL_STATS) || has_privs_of_role(GetUserId(), role))\n> \n> It is consistent with all the other uses of DEFAULT_ROLE_READ_ALL_STATS\n> that I can find.\n\nUgh. That doesn't make it correct though.. We really should be using\nhas_privs_of_role() for these cases (and that goes for all of the\ndefault role cases- some of which are correct and others are not, it\nseems).\n\nThanks,\n\nStephen", "msg_date": "Mon, 20 Apr 2020 07:10:02 -0400", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: Allow pg_read_all_stats to read pg_stat_progress_*" }, { "msg_contents": "Stephen Frost <sfrost@snowman.net> writes:\n> Ugh. That doesn't make it correct though.. 
We really should be using\n> has_privs_of_role() for these cases (and that goes for all of the\n> default role cases- some of which are correct and others are not, it\n> seems).\n\nI have a different concern about this patch: while reading statistical\nvalues is fine, do we REALLY want pg_read_all_stats to enable\npg_stat_get_activity(), ie viewing other sessions' command strings?\nThat opens security considerations that don't seem to me to be covered\nby the description of the role.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 20 Apr 2020 10:12:11 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Allow pg_read_all_stats to read pg_stat_progress_*" }, { "msg_contents": "On Mon, Apr 20, 2020 at 4:12 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Stephen Frost <sfrost@snowman.net> writes:\n> > Ugh. That doesn't make it correct though.. We really should be using\n> > has_privs_of_role() for these cases (and that goes for all of the\n> > default role cases- some of which are correct and others are not, it\n> > seems).\n>\n> I have a different concern about this patch: while reading statistical\n> values is fine, do we REALLY want pg_read_all_stats to enable\n> pg_stat_get_activity(), ie viewing other sessions' command strings?\n> That opens security considerations that don't seem to me to be covered\n> by the description of the role.\n>\n\nIt already did allow that, and that's fully documented.\n\nThe patch only adds the ability to get at it through functions, but not\nthrough views. 
(And the pg_stat_progress_* views).\n\nThe pg_stat_activity change is only:\n@@ -669,8 +671,7 @@ pg_stat_get_activity(PG_FUNCTION_ARGS)\n nulls[16] = true;\n\n /* Values only available to role member or\npg_read_all_stats */\n- if (has_privs_of_role(GetUserId(), beentry->st_userid) ||\n- is_member_of_role(GetUserId(),\nDEFAULT_ROLE_READ_ALL_STATS))\n+ if (HAS_PGSTAT_PERMISSIONS(beentry->st_userid))\n {\n SockAddr zero_clientaddr;\n char *clipped_activity;\n\n\nWhich moves the check into the macro, but doesn't change how it works.\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/ <http://www.hagander.net/>\n Work: https://www.redpill-linpro.com/ <http://www.redpill-linpro.com/>", "msg_date": "Mon, 20 Apr 2020 16:15:10 +0200", "msg_from": "Magnus Hagander <magnus@hagander.net>", "msg_from_op": false, "msg_subject": "Re: Allow pg_read_all_stats to read pg_stat_progress_*" } ]
[ { "msg_contents": "I had a report from the wilds that run-time partition pruning was not\nworking in certain cases.\n\nAfter some investigation and obtaining the mockup of the actual case,\nI discovered that the problem was down to accumulate_append_subpath()\nhitting the case where it does not pullup a Parallel Append where the\nfirst parallel node is > 0.\n\nWhat's actually happening is that the plan is left with a nested\nAppend, and in this particular case, the top-level Append only has a\nsingle subpath, to which the code for 8edd0e794 (Suppress Append and\nMergeAppend plan nodes that have a single child) causes the nested\nAppend to be pulled up to become the main Append. This causes\nrun-time pruning to break since we only attach the pruning information\nto the top-level Append.\n\nThe most simplified test case I can find to demonstrate this issue is:\n\ncreate table list (a int, b int) partition by list(a);\ncreate table list_12 partition of list for values in(1,2) partition by list(a);\ncreate table list_12_1 partition of list_12 for values in(1);\ncreate table list_12_2 partition of list_12 for values in(2);\n\ninsert into list select 2,0 from generate_Series(1,1000000) x;\nvacuum analyze list;\n\nexplain (analyze on, costs off, timing off, summary off)\nselect * from list where a = (select 1) and b > 0;\n\n-- force the 2nd subnode of the Append to be non-parallel.\nalter table list_12_1 set (parallel_workers=0);\n\nexplain (analyze on, costs off, timing off, summary off)\nselect * from list where a = (select 1) and b > 0;\n\n\nThe results of this in master are:\n\npostgres=# explain (analyze on, costs off, timing off, summary off)\nselect * from list where a = (select 1) and b > 0;\n QUERY PLAN\n---------------------------------------------------------------------------\n Gather (actual rows=0 loops=1)\n Workers Planned: 2\n Params Evaluated: $0\n Workers Launched: 2\n InitPlan 1 (returns $0)\n -> Result (actual rows=1 loops=1)\n -> Parallel Append 
(actual rows=0 loops=3)\n -> Parallel Seq Scan on list_12_2 list_2 (never executed)\n Filter: ((b > 0) AND (a = $0))\n -> Parallel Seq Scan on list_12_1 list_1 (actual rows=0 loops=1)\n Filter: ((b > 0) AND (a = $0))\n(11 rows)\n\n\npostgres=# alter table list_12_1 set (parallel_workers=0);\nALTER TABLE\npostgres=# explain (analyze on, costs off, timing off, summary off)\nselect * from list where a = (select 1) and b > 0;\n QUERY PLAN\n---------------------------------------------------------------------------\n Gather (actual rows=0 loops=1)\n Workers Planned: 2\n Params Evaluated: $0\n Workers Launched: 2\n InitPlan 1 (returns $0)\n -> Result (actual rows=1 loops=1)\n -> Parallel Append (actual rows=0 loops=3)\n -> Seq Scan on list_12_1 list_1 (actual rows=0 loops=1)\n Filter: ((b > 0) AND (a = $0))\n -> Parallel Seq Scan on list_12_2 list_2 (actual rows=0 loops=3)\n Filter: ((b > 0) AND (a = $0))\n Rows Removed by Filter: 333333\n(12 rows)\n\nNotice that we don't get \"(never executed)\" for list_12_2 in the 2nd case.\n\nI'm a bit divided on what the correct fix is. If I blame Parallel\nAppend for not trying hard enough to pull up the lower Append in\naccumulate_append_subpath(), then clearly the parallel append code is\nto blame. However, perhaps run-time pruning should be tagging on\nPartitionPruneInfo to more than top-level Appends. Fixing the latter\ncase, code-wise is about as simple as removing the \"rel->reloptkind ==\nRELOPT_BASEREL &&\" line from create_append_plan(). 
Certainly, if the\nouter Append hadn't been a single subpath Append, then we wouldn't\nhave pulled up the lower-level Append, so perhaps we should be\nrun-time pruning lower-level ones too.\n\nWhat do other people think?\n\n(copying in Robert and Amit K due to their work on Parallel Append,\nTom as I seem to remember him complaining about\naccumulate_append_subpath() at some point and Amit L because...\npartitioning...)\n\nDavid\n\n\n", "msg_date": "Wed, 15 Apr 2020 19:18:42 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": true, "msg_subject": "Parallel Append can break run-time partition pruning" }, { "msg_contents": "On Wed, Apr 15, 2020 at 4:18 PM David Rowley <dgrowleyml@gmail.com> wrote:\n> I'm a bit divided on what the correct fix is. If I blame Parallel\n> Append for not trying hard enough to pull up the lower Append in\n> accumulate_append_subpath(), then clearly the parallel append code is\n> to blame.\n\nI spent some time trying to understand how Append parallelism works\nand I am tempted to agree with you that there might be problems with\nhow accumulate_append_subpath()'s interacts with parallelism. Maybe it\nwould be better to disregard a non-parallel-aware partial Append if it\nrequires us to fail on flattening a child Append. I have as attached\na PoC fix to show that. While a nested Append is not really a problem\nin general, it appears to me that our run-time code is not in position\nto work correctly with them, or at least not with how things stand\ntoday...\n\n> However, perhaps run-time pruning should be tagging on\n> PartitionPruneInfo to more than top-level Appends. Fixing the latter\n> case, code-wise is about as simple as removing the \"rel->reloptkind ==\n> RELOPT_BASEREL &&\" line from create_append_plan(). 
Certainly, if the\n> outer Append hadn't been a single subpath Append, then we wouldn't\n> have pulled up the lower-level Append, so perhaps we should be\n> run-time pruning lower-level ones too.\n\nWhile looking at this, I observed that the PartitionPruneInfo of the\ntop-level Append (the one that later gets thrown out) contains bogus\ninformation:\n\n {PARTITIONPRUNEINFO\n :prune_infos ((\n {PARTITIONEDRELPRUNEINFO\n :rtindex 1\n :present_parts (b 0)\n :nparts 1\n :subplan_map 0\n :subpart_map 1\n\nOne of these should be -1.\n\n {PARTITIONEDRELPRUNEINFO\n :rtindex 2\n :present_parts (b)\n :nparts 2\n :subplan_map -1 -1\n :subpart_map -1 -1\n\nsubplan_map values are not correct, because subpaths list that would\nhave been passed would not include paths of lower-level partitions as\nthe flattening didn't occur.\n\n ))\n :other_subplans (b)\n }\n\nI guess the problem is that we let an Append be nested, but don't\naccount for that in how partitioned_rels list it parent Append is\nconstructed. The top-level Append's partitioned_rels should not have\ncontained sub-partitioned table's RTI if it got its own Append. 
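For readers less familiar with these structures, here is a simplified Python model of how run-time pruning resolves matched partition indexes through subplan_map/subpart_map. This is an illustration only, not the executor's actual C code, and the function and variable names are invented for the sketch: an entry >= 0 in subplan_map points directly at an Append subplan, an entry >= 0 in subpart_map recurses into the maps of a sub-partitioned child, and -1 in both means the partition contributes nothing, which is why all-(-1) maps can never select a subplan.

```python
# Simplified model of run-time pruning's map resolution (illustration
# only; names are invented, the real logic lives in execPartition.c).
# For each partition index that survives pruning:
#   subplan_map[i] >= 0  -> that Append subplan is executed
#   subpart_map[i] >= 0  -> recurse into that sub-partitioned child's maps
#   both are -1          -> the partition contributes no subplan

def matching_subplans(prune_infos, idx, matched_parts):
    """Return the Append subplan indexes reachable from prune_infos[idx]."""
    info = prune_infos[idx]
    found = set()
    for i in matched_parts(idx):
        if info["subplan_map"][i] >= 0:
            found.add(info["subplan_map"][i])
        elif info["subpart_map"][i] >= 0:
            found |= matching_subplans(prune_infos, info["subpart_map"][i],
                                       matched_parts)
    return found

# The shape a correctly flattened plan would have: the top level owns one
# leaf subplan plus a sub-partitioned child, which owns two leaf subplans.
prune_infos = [
    {"subplan_map": [0, -1], "subpart_map": [-1, 1]},  # top-level Append rel
    {"subplan_map": [1, 2], "subpart_map": [-1, -1]},  # sub-partitioned child
]

def all_parts(idx):
    return range(len(prune_infos[idx]["subplan_map"]))

# With no pruning, all three subplans are reachable:
print(sorted(matching_subplans(prune_infos, 0, all_parts)))  # [0, 1, 2]
```

In this toy model, maps whose entries are all -1 (as in the bogus rtindex 2 info above) resolve to no subplans at all.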
Maybe\nif we want to make run-time pruning work with nested Appends, we need\nto fix how partitioned_rels are gathered.\n\n--\nAmit Langote\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Fri, 17 Apr 2020 16:07:54 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Parallel Append can break run-time partition pruning" }, { "msg_contents": "On Fri, 17 Apr 2020 at 19:08, Amit Langote <amitlangote09@gmail.com> wrote:\n> While looking at this, I observed that the PartitionPruneInfo of the\n> top-level Append (the one that later gets thrown out) contains bogus\n> information:\n>\n> {PARTITIONPRUNEINFO\n> :prune_infos ((\n> {PARTITIONEDRELPRUNEINFO\n> :rtindex 1\n> :present_parts (b 0)\n> :nparts 1\n> :subplan_map 0\n> :subpart_map 1\n>\n> One of these should be -1.\n>\n> {PARTITIONEDRELPRUNEINFO\n> :rtindex 2\n> :present_parts (b)\n> :nparts 2\n> :subplan_map -1 -1\n> :subpart_map -1 -1\n>\n> subplan_map values are not correct, because subpaths list that would\n> have been passed would not include paths of lower-level partitions as\n> the flattening didn't occur.\n\nIt's not great that we're generating that, but as far as I can see,\nit's not going to cause any misbehaviour. It'll cause a small\nslowdown in run-time pruning due to perhaps having to perform an\nadditional call to find_matching_subplans_recurse() during execution.\nIn this case, it'll never find any subnodes that match due to both\nmaps having all -1 element values.\n\nSince f2343653f5, we're not using partitioned_rels for anything else,\nso we should likely fix this so that we don't add the item to\npartitioned_rels when we don't pullup the sub-Append. 
I think we\nshould hold off on fixing that until we decide if any adjustments need\nto be made to the sub-Append pullup code.\n\nDavid\n\n\n", "msg_date": "Mon, 20 Apr 2020 11:00:04 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Parallel Append can break run-time partition pruning" }, { "msg_contents": "On Fri, 17 Apr 2020 at 19:08, Amit Langote <amitlangote09@gmail.com> wrote:\n>\n> On Wed, Apr 15, 2020 at 4:18 PM David Rowley <dgrowleyml@gmail.com> wrote:\n> > I'm a bit divided on what the correct fix is. If I blame Parallel\n> > Append for not trying hard enough to pull up the lower Append in\n> > accumulate_append_subpath(), then clearly the parallel append code is\n> > to blame.\n>\n> I spent some time trying to understand how Append parallelism works\n> and I am tempted to agree with you that there might be problems with\n> how accumulate_append_subpath()'s interacts with parallelism. Maybe it\n> would be better to disregard a non-parallel-aware partial Append if it\n> requires us to fail on flattening a child Append. I have as attached\n> a PoC fix to show that. While a nested Append is not really a problem\n> in general, it appears to me that our run-time code is not in position\n> to work correctly with them, or at least not with how things stand\n> today...\n\nThanks for taking a look at this. I've now looked at this in more\ndetail and below is my understanding of what's going on:\n\nIt seems, in this case, what's going on is, on the following line:\n\naccumulate_append_subpath(cheapest_partial_path,\n &partial_subpaths, NULL);\n\nwe don't manage to pullup the sub-Append due to passing a NULL pointer\nfor the final special_subpaths argument. This results in just taking\nthe child's Append path verbatim. i.e. 
nested Append\n\nLater, when we do:\n\nelse if (nppath == NULL ||\n(cheapest_partial_path != NULL &&\n cheapest_partial_path->total_cost < nppath->total_cost))\n{\n/* Partial path is cheaper or the only option. */\nAssert(cheapest_partial_path != NULL);\naccumulate_append_subpath(cheapest_partial_path,\n &pa_partial_subpaths,\n &pa_nonpartial_subpaths);\n\nwe do pass a non-NULL special_subpaths argument to allow the\nsub-Append to be pulled up.\n\nSo, now we have 2 paths, one with a nested Append and one with a\nflattened Append. Both paths have the same cost, but due to the fact\nthat we call add_partial_path() for the nested Append version first,\nthe logic in add_partial_path() accepts that path. However, the\nsubsequent call of add_partial_path(), the one for the non-nested\nAppend, that path is rejected due to the total cost being too similar\nto one of the existing partial path. We just end up keeping the nested\nAppend as the cheapest partial path... That path, since in the example\ncase only has a single subpath, is pulled up into the main append by\nthe logic added in 8edd0e794.\n\nI think you've realised this and that's why your PoC patch just\nrejected the first path when it's unable to do the pullup. We'll get a\nbetter path later when we allow mixed partial and non-partial paths.\n\n(We'll never fail to do a pullup when calling\naccumulate_append_subpath() for \"nppath\", since that's a non-parallel\npath and accumulate_append_subpath() will always pull Append paths up\nwhen they're not parallel aware.)\n\nI wonder if the fix should be more something along the lines of trying\nto merge things do we only generate a single partial path. 
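To make that order-dependence concrete, here is a toy Python model of the accept/reject behaviour just described. All names and numbers are invented, and the rule is deliberately simplified: the real add_partial_path() also considers row counts and keeps its list sorted by cost, but the 1% fuzz here mirrors the planner's STD_FUZZ_FACTOR.

```python
# Toy model (not the real pathnode.c code) of add_partial_path()'s
# accept/reject decision: a new partial path is thrown away if a path
# already in the list is cheaper, or within a 1% cost "fuzz factor" of
# it.  All names and numbers here are invented for illustration.

FUZZ_FACTOR = 1.01  # mirrors STD_FUZZ_FACTOR

def add_partial_path(pathlist, new_path):
    for old in pathlist:
        if old["total_cost"] <= new_path["total_cost"] * FUZZ_FACTOR:
            return pathlist          # new path rejected as "too similar"
    # Everything already in the list costs over 1% more than the new
    # path, so it is all dominated.
    return [new_path]

nested = {"name": "nested Append", "total_cost": 100.0}
flat = {"name": "flattened Parallel Append", "total_cost": 100.0}

# add_paths_to_append_rel() adds the nested-Append path first, so the
# equally-costed flattened path is rejected:
paths = add_partial_path(add_partial_path([], nested), flat)
print([p["name"] for p in paths])  # ['nested Append']

# Adding them in the opposite order keeps the flattened path instead:
paths = add_partial_path(add_partial_path([], flat), nested)
print([p["name"] for p in paths])  # ['flattened Parallel Append']
```

With costs this close, whichever path is added first wins, which is exactly why the nested-Append version survives here.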
That way\nwe wouldn't be at the mercy of the logic in add_partial_path() to\naccept or reject the path based on the order the paths are added.\n\nDavid\n\n\n", "msg_date": "Tue, 21 Apr 2020 15:03:18 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Parallel Append can break run-time partition pruning" }, { "msg_contents": "Hi David,\n\nOn Tue, Apr 21, 2020 at 12:03 PM David Rowley <dgrowleyml@gmail.com> wrote:\n> On Fri, 17 Apr 2020 at 19:08, Amit Langote <amitlangote09@gmail.com> wrote:\n> > On Wed, Apr 15, 2020 at 4:18 PM David Rowley <dgrowleyml@gmail.com> wrote:\n> > > I'm a bit divided on what the correct fix is. If I blame Parallel\n> > > Append for not trying hard enough to pull up the lower Append in\n> > > accumulate_append_subpath(), then clearly the parallel append code is\n> > > to blame.\n> >\n> > I spent some time trying to understand how Append parallelism works\n> > and I am tempted to agree with you that there might be problems with\n> > how accumulate_append_subpath()'s interacts with parallelism. Maybe it\n> > would be better to disregard a non-parallel-aware partial Append if it\n> > requires us to fail on flattening a child Append. I have as attached\n> > a PoC fix to show that. While a nested Append is not really a problem\n> > in general, it appears to me that our run-time code is not in position\n> > to work correctly with them, or at least not with how things stand\n> > today...\n>\n> Thanks for taking a look at this. I've now looked at this in more\n> detail and below is my understanding of what's going on:\n>\n> It seems, in this case, what's going on is, on the following line:\n>\n> accumulate_append_subpath(cheapest_partial_path,\n> &partial_subpaths, NULL);\n>\n> we don't manage to pullup the sub-Append due to passing a NULL pointer\n> for the final special_subpaths argument. This results in just taking\n> the child's Append path verbatim. i.e. 
nested Append\n>\n> Later, when we do:\n>\n> else if (nppath == NULL ||\n> (cheapest_partial_path != NULL &&\n> cheapest_partial_path->total_cost < nppath->total_cost))\n> {\n> /* Partial path is cheaper or the only option. */\n> Assert(cheapest_partial_path != NULL);\n> accumulate_append_subpath(cheapest_partial_path,\n> &pa_partial_subpaths,\n> &pa_nonpartial_subpaths);\n>\n> we do pass a non-NULL special_subpaths argument to allow the\n> sub-Append to be pulled up.\n>\n> So, now we have 2 paths, one with a nested Append and one with a\n> flattened Append. Both paths have the same cost, but due to the fact\n> that we call add_partial_path() for the nested Append version first,\n> the logic in add_partial_path() accepts that path. However, the\n> subsequent call of add_partial_path(), the one for the non-nested\n> Append, that path is rejected due to the total cost being too similar\n> to one of the existing partial path. We just end up keeping the nested\n> Append as the cheapest partial path... 
That path, since in the example\n> case only has a single subpath, is pulled up into the main append by\n> the logic added in 8edd0e794.\n>\n> I think you've realised this and that's why your PoC patch just\n> rejected the first path when it's unable to do the pullup.\n\nRight.\n\n> We'll get a\n> better path later when we allow mixed partial and non-partial paths.\n\nYes, but only if parallel-aware Append is allowed (pa_subpaths_valid).\nSo it's possible that the Append may not participate in any\nparallelism whatsoever if we reject partial Append on failing to fold\na child Append, which does somewhat suck.\n\n> (We'll never fail to do a pullup when calling\n> accumulate_append_subpath() for \"nppath\", since that's a non-parallel\n> path and accumulate_append_subpath() will always pull Append paths up\n> when they're not parallel aware.)\n>\n> I wonder if the fix should be more something along the lines of trying\n> to merge things do we only generate a single partial path. That way\n> we wouldn't be at the mercy of the logic in add_partial_path() to\n> accept or reject the path based on the order the paths are added.\n\nSo as things stand, parallel-aware partial Append (Parallel Append)\npath competes with non-parallel partial Append path on cost grounds.\nAs far as I can see, it's only the latter that can contain among its\nsubpaths an (nested) Append which can be problematic. 
Given that, our\nchoice between the two types of partial Append paths becomes based on\nsomething that is not cost, but is that okay?\n\n-- \nAmit Langote\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 21 Apr 2020 14:23:44 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Parallel Append can break run-time partition pruning" }, { "msg_contents": "On Tue, 21 Apr 2020 at 15:03, David Rowley <dgrowleyml@gmail.com> wrote:\n> I wonder if the fix should be more something along the lines of trying\n> to merge things do we only generate a single partial path. That way\n> we wouldn't be at the mercy of the logic in add_partial_path() to\n> accept or reject the path based on the order the paths are added.\n\nI took a shot at doing things this way.\n\nFirst, I'll recap on the problem this is trying to solve:\n\nadd_paths_to_append_rel() attempts to create two separate partial\nAppend paths. I'll describe both of these below:\n\nPath1: This path is generated regardless of if Parallel Append is\nenabled and contains all the cheapest partial paths from each child\nrelation. If parallel append is enabled this will become a Parallel\nAppend. If it's not then a non-parallel append will be created\ncontaining the list of partial subpaths. Here's an example from\nselect_parallel.out:\n\n QUERY PLAN\n--------------------------------------------------------------\n Finalize Aggregate\n -> Gather\n Workers Planned: 1\n -> Partial Aggregate\n -> Append\n -> Parallel Seq Scan on a_star a_star_1\n -> Parallel Seq Scan on b_star a_star_2\n -> Parallel Seq Scan on c_star a_star_3\n -> Parallel Seq Scan on d_star a_star_4\n -> Parallel Seq Scan on e_star a_star_5\n -> Parallel Seq Scan on f_star a_star_6\n\nPath2: We only ever consider this one when enable_parallel_append ==\ntrue and the append rel's consider_parallel == true. When this path\nis generated, it'll always be for a Parallel Append. 
This path may\ncontain a mix of partial paths for child rels and parallel_safe child\npaths, of which will only be visited by a single worker.\n\nThe problem is that path1 does not pullup child Appends when the child\nappend path contains a mix of partial and parallel safe paths (i.e a\nsub-path2, per above). Since we create path2 in addition to path1, the\ncosts come out the same even though path1 couldn't pullup the\nsub-Append paths. Unfortunately, the costs are the same so path1 is\nprefered since it's added first. add_partial_path() just rejects path2\nbased on it being too similar in cost to the existing path1.\n\nIn the attached, I'm trying to solve this by only created 1 partial\nAppend path in the first place. This path will always try to use the\ncheapest partial path, or the cheapest parallel safe path, if parallel\nappend is allowed and that path is cheaper than the cheapest partial\npath.\n\nI believe the attached gives us what we want and additionally, since\nit should now always pullup the sub-Appends, then there's no need to\nconsider adjusting partitioned_rels based on if the pull-up occurred\nor not. Those should now be right in all cases. This should also fix\nthe run-time pruning issue too since in my original test case it'll\npullup the sub-Append which means that the code added in 8edd0e794 is\nno longer going to do anything with it as the top-level Append will\nnever contain just 1 subpath.\n\nI'm reasonably certain that this is correct, but I did find it a bit\nmind-bending considering all the possible cases, so it could do with\nsome more eyes on it. I've not really done a final polish of the\ncomments yet. 
I'll do that if the patch is looking promising.\n\nThe output of the original test with the attached is as follows:\n\npostgres=# explain (analyze on, costs off, timing off, summary off)\npostgres-# select * from list where a = (select 1) and b > 0;\n QUERY PLAN\n---------------------------------------------------------------------------\n Gather (actual rows=0 loops=1)\n Workers Planned: 2\n Params Evaluated: $0\n Workers Launched: 2\n InitPlan 1 (returns $0)\n -> Result (actual rows=1 loops=1)\n -> Parallel Append (actual rows=0 loops=3)\n -> Parallel Seq Scan on list_12_2 list_2 (never executed)\n Filter: ((b > 0) AND (a = $0))\n -> Parallel Seq Scan on list_12_1 list_1 (actual rows=0 loops=1)\n Filter: ((b > 0) AND (a = $0))\n(11 rows)\n\n\npostgres=# -- force the 2nd subnode of the Append to be non-parallel.\npostgres=# alter table list_12_1 set (parallel_workers=0);\nALTER TABLE\npostgres=# explain (analyze on, costs off, timing off, summary off)\npostgres-# select * from list where a = (select 1) and b > 0;\n QUERY PLAN\n--------------------------------------------------------------------\n Gather (actual rows=0 loops=1)\n Workers Planned: 2\n Params Evaluated: $0\n Workers Launched: 2\n InitPlan 1 (returns $0)\n -> Result (actual rows=1 loops=1)\n -> Parallel Append (actual rows=0 loops=3)\n -> Seq Scan on list_12_1 list_1 (actual rows=0 loops=1)\n Filter: ((b > 0) AND (a = $0))\n -> Parallel Seq Scan on list_12_2 list_2 (never executed)\n Filter: ((b > 0) AND (a = $0))\n(11 rows)\n\nNotice we get \"(never executed)\" in both cases.\n\nDavid", "msg_date": "Wed, 22 Apr 2020 15:22:14 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Parallel Append can break run-time partition pruning" }, { "msg_contents": "On Wed, Apr 22, 2020 at 12:22 PM David Rowley <dgrowleyml@gmail.com> wrote:\n> On Tue, 21 Apr 2020 at 15:03, David Rowley <dgrowleyml@gmail.com> wrote:\n> > I wonder if the fix should be more something 
along the lines of trying\n> > to merge things do we only generate a single partial path. That way\n> > we wouldn't be at the mercy of the logic in add_partial_path() to\n> > accept or reject the path based on the order the paths are added.\n>\n> In the attached, I'm trying to solve this by only created 1 partial\n> Append path in the first place. This path will always try to use the\n> cheapest partial path, or the cheapest parallel safe path, if parallel\n> append is allowed and that path is cheaper than the cheapest partial\n> path.\n>\n> I believe the attached gives us what we want and additionally, since\n> it should now always pullup the sub-Appends, then there's no need to\n> consider adjusting partitioned_rels based on if the pull-up occurred\n> or not. Those should now be right in all cases. This should also fix\n> the run-time pruning issue too since in my original test case it'll\n> pullup the sub-Append which means that the code added in 8edd0e794 is\n> no longer going to do anything with it as the top-level Append will\n> never contain just 1 subpath.\n>\n> I'm reasonably certain that this is correct, but I did find it a bit\n> mind-bending considering all the possible cases, so it could do with\n> some more eyes on it. I've not really done a final polish of the\n> comments yet. I'll do that if the patch is looking promising.\n\nThanks for the patch.\n\nIt's good to see that unfolded sub-Appends will not occur with the new\ncode structure or one hopes. Also, I am finding it somewhat easier to\nunderstand how partial Appends get built due to smaller code footprint\nafter patching.\n\nOne thing I remain concerned about is that it appears like we are no\nlonger leaving the choice between parallel and non-parallel Append to\nthe cost machinery which is currently the case. 
AFAICS with patched,\nas long as parallel Append is enabled and allowed, it will be chosen\nover a non-parallel Append as the partial path.\n\nRegarding the patch, I had been assuming that the \"pa\" in\npa_subpaths_valid stands for \"parallel append\", so it using the\nvariable as is in the new code structure would be misleading. Maybe,\nparallel_subpaths_valid?\n\n-- \nAmit Langote\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 22 Apr 2020 23:37:25 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Parallel Append can break run-time partition pruning" }, { "msg_contents": "On Thu, 23 Apr 2020 at 02:37, Amit Langote <amitlangote09@gmail.com> wrote:\n> One thing I remain concerned about is that it appears like we are no\n> longer leaving the choice between parallel and non-parallel Append to\n> the cost machinery which is currently the case. AFAICS with patched,\n> as long as parallel Append is enabled and allowed, it will be chosen\n> over a non-parallel Append as the partial path.\n\nGiven the same set of paths, when would a non-parallel append be\ncheaper than a parallel one? I don't see anything in cost_append()\nthat could cause the costs to come out higher for the parallel\nversion. However, I might have misunderstood something. Can you give\nan example of a case that you think might change?\n\nThe cost comparison is still there for the cheapest parallel safe\nnormal path vs the cheapest partial path, so, when each of those paths\nare allowed, then we will still end up with the cheapest paths for\neach child.\n\n> Regarding the patch, I had been assuming that the \"pa\" in\n> pa_subpaths_valid stands for \"parallel append\", so it using the\n> variable as is in the new code structure would be misleading. 
Maybe,\n> parallel_subpaths_valid?\n\nYeah, I had wondered if it would be better to rename it, I'd just not\nthought too hard on what to call it yet.\n\nDavid\n\n\n", "msg_date": "Thu, 23 Apr 2020 10:58:33 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Parallel Append can break run-time partition pruning" }, { "msg_contents": "David Rowley <dgrowleyml@gmail.com> writes:\n> Given the same set of paths, when would a non-parallel append be\n> cheaper than a parallel one?\n\nWell, anytime the parallel startup cost is significant, for starters.\nBut maybe we account for that at some other point, like when building\nthe Gather?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 22 Apr 2020 19:11:05 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Parallel Append can break run-time partition pruning" }, { "msg_contents": "On Thu, 23 Apr 2020 at 11:11, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> David Rowley <dgrowleyml@gmail.com> writes:\n> > Given the same set of paths, when would a non-parallel append be\n> > cheaper than a parallel one?\n>\n> Well, anytime the parallel startup cost is significant, for starters.\n> But maybe we account for that at some other point, like when building\n> the Gather?\n\nYeah. There's no mention of parallel_setup_cost or parallel_tuple_cost\nin any of the Append costing code. Those are only applied when we cost\nGather / GatherMerge At the point Amit and I are talking about, we're\nonly comparing two Append paths. No Gather/GatherMerge in sight yet,\nso any additional costs from those is not applicable.\n\nIf there was some reason that a Parallel Append could come out more\nexpensive, then maybe we could just create a non-parallel Append using\nthe same subpath list and add_partial_path() it. I just don't quite\nsee how that would ever win though. 
I'm willing to be proven wrong\nthough.\n\nDavid\n\n\n", "msg_date": "Thu, 23 Apr 2020 11:35:55 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Parallel Append can break run-time partition pruning" }, { "msg_contents": "David Rowley <dgrowleyml@gmail.com> writes:\n> On Thu, 23 Apr 2020 at 11:11, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> Well, anytime the parallel startup cost is significant, for starters.\n>> But maybe we account for that at some other point, like when building\n>> the Gather?\n\n> Yeah. There's no mention of parallel_setup_cost or parallel_tuple_cost\n> in any of the Append costing code. Those are only applied when we cost\n> Gather / GatherMerge At the point Amit and I are talking about, we're\n> only comparing two Append paths. No Gather/GatherMerge in sight yet,\n> so any additional costs from those is not applicable.\n\nRight, so really the costs of partial and non-partial paths are not\ncommensurable, and comparing them directly is just misleading.\nI trust we're not throwing away non-partial paths on that basis?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 22 Apr 2020 19:39:09 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Parallel Append can break run-time partition pruning" }, { "msg_contents": "On Thu, 23 Apr 2020 at 11:39, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> David Rowley <dgrowleyml@gmail.com> writes:\n> > On Thu, 23 Apr 2020 at 11:11, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >> Well, anytime the parallel startup cost is significant, for starters.\n> >> But maybe we account for that at some other point, like when building\n> >> the Gather?\n>\n> > Yeah. There's no mention of parallel_setup_cost or parallel_tuple_cost\n> > in any of the Append costing code. Those are only applied when we cost\n> > Gather / GatherMerge At the point Amit and I are talking about, we're\n> > only comparing two Append paths. 
No Gather/GatherMerge in sight yet,\n> > so any additional costs from those is not applicable.\n>\n> Right, so really the costs of partial and non-partial paths are not\n> commensurable, and comparing them directly is just misleading.\n> I trust we're not throwing away non-partial paths on that basis?\n\nThere is a case in both master and in the patch where we compare the\ncost of the cheapest path in partial_pathlist. However, in this case,\nthe pathlist path will be used in an Append or Parallel Append with a\nGather below it, so those parallel_(setup|tuple)_costs will be applied\nregardless. The non-parallel Append in this case still requires a\nGather since it still is using multiple workers to execute the\nsubpaths. e.g the plan I posted in [1].\n\nThe code comparing the path costs is:\n\nelse if (nppath == NULL ||\n(cheapest_partial_path != NULL &&\n cheapest_partial_path->total_cost < nppath->total_cost))\n\n[1] https://www.postgresql.org/message-id/CAApHDvqcOD3ObPgPAeU+3qyFL_wzE5kmczw70qMAh7qJ-3wuzw@mail.gmail.com\n\n\n", "msg_date": "Thu, 23 Apr 2020 14:01:03 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Parallel Append can break run-time partition pruning" }, { "msg_contents": "On Wed, Apr 22, 2020 at 7:36 PM David Rowley <dgrowleyml@gmail.com> wrote:\n> If there was some reason that a Parallel Append could come out more\n> expensive, then maybe we could just create a non-parallel Append using\n> the same subpath list and add_partial_path() it. I just don't quite\n> see how that would ever win though. I'm willing to be proven wrong\n> though.\n\nI think you're talking about the thing that this comment is trying to explain:\n\n /*\n * Consider a parallel-aware append using a mix of partial and non-partial\n * paths. 
(This only makes sense if there's at least one child which has\n * a non-partial path that is substantially cheaper than any partial path;\n * otherwise, we should use the append path added in the previous step.)\n */\n\nLike, suppose there are two relations A and B, and we're appending\nthem. A has no indexes, so we can only choose between a Seq Scan and\nan Index Scan. B has a GIST index that is well-suited to the query,\nbut GIST indexes don't support parallel scans. So we've got three\nchoices:\n\n1. Don't use parallelism at all. Then, we can do a normal Append with\na Seq Scan on a and an Index Scan on b. (\"If we found unparameterized\npaths for all children, build an unordered, unparameterized Append\npath for the rel.\")\n\n2. Use parallelism for both relations. Then, we can do a Gather over a\nParallel Append (or a regular Append, if Parallel Append is disabled)\nwith a Parallel Seq Scan on a and a Parallel Seq Scan on b. As\ncompared with #1, this should win for a, but it might lose heavily for\nb, because switching from an index scan to a Seq Scan could be a big\nloser. (\"Consider an append of unordered, unparameterized partial\npaths. Make it parallel-aware if possible.\")\n\n3. Use parallelism for a but not for b. The only way to do this is a\nParallel Append, because there's no other way to mix partial and\nnon-partial paths at present. This lets us get the benefit of a\nParallel Seq Scan on a while still being able to do a non-parallel\nGIST Index Scan on b. This has a good chance of being better than #2,\nbut it's fundamentally a costing decision, because if the table is\nsmall enough or the query isn't very selective, #2 will actually be\nfaster, just on the raw power of more workers and less random I/O\n(\"Consider a parallel-aware append using a mix of partial and\nnon-partial paths.\")\n\nIt seems to me that all three strategies are viable. 
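As a back-of-the-envelope sketch of that costing decision (all numbers invented; the planner's real cost model in cost_append() divides work among workers and is far more involved), compare the three strategies when b's serial index scan is much cheaper than any partial scan of b:

```python
# Invented per-child costs, just to rank the three strategies above.
# 'serial' = cheapest non-partial path, 'partial' = cheapest partial
# path; real costing (cost_append()) is considerably more involved.

children = {
    "a": {"serial": 100.0, "partial": 55.0},  # Seq Scan parallelizes well
    "b": {"serial": 10.0, "partial": 60.0},   # GIST scan is serial-only; the
                                              # partial fallback is a
                                              # Parallel Seq Scan
}

# 1. Plain Append over the best serial paths (no parallelism):
plain_append = sum(c["serial"] for c in children.values())

# 2. Parallel Append (or Gather over Append) of partial paths only:
all_partial = sum(c["partial"] for c in children.values())

# 3. Parallel Append mixing partial and non-partial paths -- only a
#    Parallel Append can do this, since a non-partial subpath must run
#    in exactly one worker:
mixed = sum(min(c["serial"], c["partial"]) for c in children.values())

print(plain_append, all_partial, mixed)  # 110.0 115.0 65.0
```

In this made-up example the mixed strategy wins, but shrink the tables or the selectivity gap and strategy #2 overtakes it, which is the costing decision being described.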
The third one is\nmuch less likely to be used now that we have parallel index scans for\nbtree and parallel bitmap heap scans, but I believe it can be a winner\nif you have the right case. You want to think about cases where there\nare parallel plans available for everything in the tree, but at least\nsome of the children have much better non-parallel plans.\n\nNote that for strategy #2 we always prefer Parallel Append to\nnon-Parallel Append on the optimistic assumption that Parallel Append\nwill always be better; we only use regular Append if Parallel Append\nis disabled. But for strategy #3 there is no such choice to be made: a\nregular Append would not be valid. If placed under a Gather, it would\nexecute the non-partial paths more than once; if not placed under a\nGather, we'd have partial paths without any Gather above them, which\nis an invalid plan shape. So you can't \"just use a regular Append\" in\ncase #3.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Wed, 22 Apr 2020 22:36:57 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Parallel Append can break run-time partition pruning" }, { "msg_contents": "On Thu, 23 Apr 2020 at 14:37, Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> On Wed, Apr 22, 2020 at 7:36 PM David Rowley <dgrowleyml@gmail.com> wrote:\n> > If there was some reason that a Parallel Append could come out more\n> > expensive, then maybe we could just create a non-parallel Append using\n> > the same subpath list and add_partial_path() it. I just don't quite\n> > see how that would ever win though. I'm willing to be proven wrong\n> > though.\n>\n> I think you're talking about the thing that this comment is trying to explain:\n>\n> /*\n> * Consider a parallel-aware append using a mix of partial and non-partial\n> * paths. 
(This only makes sense if there's at least one child which has\n> * a non-partial path that is substantially cheaper than any partial path;\n> * otherwise, we should use the append path added in the previous step.)\n> */\n>\n> Like, suppose there are two relations A and B, and we're appending\n> them. A has no indexes, so we can only choose between a Seq Scan and\n> an Index Scan. B has a GIST index that is well-suited to the query,\n> but GIST indexes don't support parallel scans. So we've got three\n> choices:\n>\n> 1. Don't use parallelism at all. Then, we can do a normal Append with\n> a Seq Scan on a and an Index Scan on b. (\"If we found unparameterized\n> paths for all children, build an unordered, unparameterized Append\n> path for the rel.\")\n>\n> 2. Use parallelism for both relations. Then, we can do a Gather over a\n> Parallel Append (or a regular Append, if Parallel Append is disabled)\n> with a Parallel Seq Scan on a and a Parallel Seq Scan on b. As\n> compared with #1, this should win for a, but it might lose heavily for\n> b, because switching from an index scan to a Seq Scan could be a big\n> loser. (\"Consider an append of unordered, unparameterized partial\n> paths. Make it parallel-aware if possible.\")\n>\n> 3. Use parallelism for a but not for b. The only way to do this is a\n> Parallel Append, because there's no other way to mix partial and\n> non-partial paths at present. This lets us get the benefit of a\n> Parallel Seq Scan on a while still being able to do a non-parallel\n> GIST Index Scan on b. 
This has a good chance of being better than #2,\n> but it's fundamentally a costing decision, because if the table is\n> small enough or the query isn't very selective, #2 will actually be\n> faster, just on the raw power of more workers and less random I/O\n> (\"Consider a parallel-aware append using a mix of partial and\n> non-partial paths.\")\n\nThanks for those examples.\n\nI ran this situation through the code but used a hash index instead of\nGIST. The 3 settings which give us control over this plan are\nenable_parallel_append, enable_indexscan, enable_bitmapscan.\nenable_bitmapscan must be included since we can still get a parallel\nbitmap scan with a hash index.\n\nFor completeness, I just tried with each of the 8 combinations of the\nGUCs, but I'd detailed below which of your cases I'm testing as a\ncomment. There are 4 cases since #2 works with parallel and\nnon-parallel Append. Naturally, the aim is that the patched version\ndoes not change the behaviour.\n\n-- Test case\ncreate table listp (a int, b int) partition by list(a);\ncreate table listp1 partition of listp for values in(1);\ncreate table listp2 partition of listp for values in(2);\ninsert into listp select x,y from generate_Series(1,2) x,\ngenerate_Series(1,1000000) y;\ncreate index on listp2 using hash(b);\nvacuum analyze listp;\n\nexplain (costs off) select * from listp where b = 1;\nSET enable_indexscan = off;\nexplain (costs off) select * from listp where b = 1;\nSET enable_indexscan = on;\nSET enable_bitmapscan = off;\nexplain (costs off) select * from listp where b = 1; -- case #3, Mixed\nscan of parallel and non-parallel paths with a Parallel Append\nSET enable_indexscan = off;\nexplain (costs off) select * from listp where b = 1; -- case #2 with\nParallel Append\nSET enable_indexscan = on;\nSET enable_bitmapscan = on;\nSET enable_parallel_append = off;\nexplain (costs off) select * from listp where b = 1;\nSET enable_indexscan = off;\nexplain (costs off) select * from listp where b = 
1; -- case #2 with\nnon-Parallel Append\nSET enable_indexscan = on;\nSET enable_bitmapscan = off;\nexplain (costs off) select * from listp where b = 1; -- case #1, best\nserial plan\nSET enable_indexscan = off;\nexplain (costs off) select * from listp where b = 1;\n\nThe results, patched/unpatched, are the same.\n\nDavid\n\n\n", "msg_date": "Thu, 23 Apr 2020 16:36:48 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Parallel Append can break run-time partition pruning" }, { "msg_contents": "On Thu, 23 Apr 2020 at 02:37, Amit Langote <amitlangote09@gmail.com> wrote:\n> Regarding the patch, I had been assuming that the \"pa\" in\n> pa_subpaths_valid stands for \"parallel append\", so it using the\n> variable as is in the new code structure would be misleading. Maybe,\n> parallel_subpaths_valid?\n\nI started making another pass over this patch again. I did the\nrenaming you've mentioned plus the other two List variables for the\nsubpaths.\n\nI did notice that I had forgotten to properly disable parallel append\nwhen it was disallowed by the rel's consider_parallel flag. For us to\nhave done something wrong, we'd have needed a parallel safe path in\nthe pathlist and also have needed the rel to be consider_parallel ==\nfalse. It was easy enough to make up a case for that by sticking a\nparallel restrict function in the target list. I've fixed that issue\nin the attached and also polished up the comments a little and removed\nthe unused variable that I had forgotten about.\n\nFor now. I'd still like to get a bit more confidence that the only\nnoticeable change in the outcome here is that we're now pulling up\nsub-Appends in all cases. I've read the code a number of times and\njust can't quite see any room for anything changing. My tests per\nRobert's case all matched what the previous version did, but I'm still\nonly about 93% on this. 
Given that I'm aiming to fix a bug in master,\nv11 and v12 here, I need to get that confidence level up to about the\n100% mark.\n\nI've attached the v2 patch.\n\nDavid", "msg_date": "Thu, 23 Apr 2020 22:37:00 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Parallel Append can break run-time partition pruning" }, { "msg_contents": "On Thu, Apr 23, 2020 at 7:37 PM David Rowley <dgrowleyml@gmail.com> wrote:\n> On Thu, 23 Apr 2020 at 02:37, Amit Langote <amitlangote09@gmail.com> wrote:\n> > Regarding the patch, I had been assuming that the \"pa\" in\n> > pa_subpaths_valid stands for \"parallel append\", so it using the\n> > variable as is in the new code structure would be misleading. Maybe,\n> > parallel_subpaths_valid?\n>\n> I started making another pass over this patch again. I did the\n> renaming you've mentioned plus the other two List variables for the\n> subpaths.\n>\n> I did notice that I had forgotten to properly disable parallel append\n> when it was disallowed by the rel's consider_parallel flag. For us to\n> have done something wrong, we'd have needed a parallel safe path in\n> the pathlist and also have needed the rel to be consider_parallel ==\n> false. It was easy enough to make up a case for that by sticking a\n> parallel restrict function in the target list. I've fixed that issue\n> in the attached and also polished up the comments a little and removed\n> the unused variable that I had forgotten about.\n\nThanks for updating the patch.\n\n> For now. I'd still like to get a bit more confidence that the only\n> noticeable change in the outcome here is that we're now pulling up\n> sub-Appends in all cases.\n\nYeah, I would have liked us to have enough confidence to remove the\nfollowing text in the header comment of accumulate_append_subpath:\n\n... 
If the latter is\n * NULL, we don't flatten the path at all (unless it contains only partial\n * paths).\n\n> I've read the code a number of times and\n> just can't quite see any room for anything changing. My tests per\n> Robert's case all matched what the previous version did, but I'm still\n> only about 93% on this. Given that I'm aiming to fix a bug in master,\n> v11 and v12 here, I need to get that confidence level up to about the\n> 100% mark.\n\nI think I have managed to convince myself (still < 100% though) that\nit's not all that bad that we won't be leaving it up to\nadd_partial_path() to decide between a Parallel Append whose subpaths\nset consist only of partial paths and another whose subpaths set\nconsists of a mix of partial and non-partial paths. That's because we\nwill be building only the latter if that one looks cheaper to begin\nwith. If Parallel Append is disabled, there could only ever be one\npartial Append path for add_partial_path() to consider, one whose\nsubpaths set consists only of partial paths.\n\nOn the patch itself:\n\n+ * We also build a set of paths for each child by trying to use the\n+ * cheapest partial path, or the cheapest parallel safe normal path\n+ * either when that is cheaper, or if parallel append is not allowed.\n */\n- if (pa_subpaths_valid)\n+ if (parallel_subpaths_valid)\n {\n\nIn the comment above \", or parallel append is not allowed\" should be \"\nprovided parallel append is allowed\". 
Or how about writing it as\nfollows:\n\n /*\n * Add the child's cheapest partial path, or if parallel append is\n * allowed, its cheapest parallel safe normal path if that one is\n * cheaper, to the partial Append path we are constructing for the\n * parent.\n */\n\n--\nAmit Langote\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 24 Apr 2020 17:46:40 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Parallel Append can break run-time partition pruning" }, { "msg_contents": "On Thu, 23 Apr 2020 at 22:37, David Rowley <dgrowleyml@gmail.com> wrote:\n> For now. I'd still like to get a bit more confidence that the only\n> noticeable change in the outcome here is that we're now pulling up\n> sub-Appends in all cases. I've read the code a number of times and\n> just can't quite see any room for anything changing. My tests per\n> Robert's case all matched what the previous version did, but I'm still\n> only about 93% on this. Given that I'm aiming to fix a bug in master,\n> v11 and v12 here, I need to get that confidence level up to about the\n> 100% mark.\n\nI looked at this again and I don't think what I've got is right.\n\nThe situation that I'm concerned about is that we generate a Parallel\nAppend path for a sub-partitioned table and that path has a mix of\npartial and parallel safe paths. When we perform\nadd_paths_to_append_rel() for the top-level partition, if for some\nreason we didn't do a Parallel Append, then we could pullup the mixed\nset of partial and parallel safe paths from the child-Append. 
We can't\nexecute a parallel_safe subpath under a normal Append that's below a\nGather as multiple workers could execute the same parallel safe scan,\nwhich won't lead to anything good, probably wrong results at best.\nNow, we'll only ever have a Parallel Append for the sub-partitioned\ntable if enable_parallel_append is on and the rel's consider_parallel\nis switched on too, but if for some reason the top-level partitioned\ntable had consider_parallel set to off, then we could end up pulling\nup the subpaths from a Parallel Append path, which could contain a mix\nof partial and parallel safe paths and we'd then throw those into a\nnon-Parallel Append partial path! The parallel safe path just can't\ngo in one of those as multiple workers could then execute that path.\nOnly Parallel Append knows how to execute those safely as it ensures\nonly 1 worker touches it.\n\nNow, looking at the code today, it does seem that a child rel will do\nparallel only if the parent rel does, but I don't think it's a great\nidea to assume that will never change. Additionally, it seems fragile\nto also assume the value of the enable_parallel_append, a global\nvariable would stay the same between calls too.\n\nI wonder if we could fix this by when we call\nadd_paths_to_append_rel() for the top-level partitioned table, just\nrecursively get all the child rels and their children too.\naccumulate_append_subpath() would then never see Appends or\nMergeAppends as we'd just be dealing with scan type Paths.\nadd_paths_to_append_rel() for sub-partitioned tables wouldn't have to\ndo very much then. I'll need to see how that idea would fit in with\npartition-wise joins. My guess is, not very well. There's also\napply_scanjoin_target_to_paths() which calls add_paths_to_append_rel()\ntoo.\n\nThe only other idea I have so far is just to either not generate the\npartial path using the partial_subpaths list in\nadd_paths_to_append_rel() when pa_subpaths_valid and\npa_nonpartial_subpaths != NIL. 
i.e only create the first of those\npartial paths if we're not going to create the 2nd one. Or even just\nswap the order that we add_partial_path() them so that\nadd_partial_path() does not reject the path with the flattened\nAppends.\n\nDavid\n\n\n", "msg_date": "Sat, 25 Apr 2020 23:24:48 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Parallel Append can break run-time partition pruning" }, { "msg_contents": "On Sat, Apr 25, 2020 at 7:25 AM David Rowley <dgrowleyml@gmail.com> wrote:\n> I looked at this again and I don't think what I've got is right.\n\nApart from the considerations which you raised, I think there's also a\ncosting issue here. For instance, suppose we have an Append with three\nchildren. Two of them have partial paths which will complete after\nconsuming 1 second of CPU time, and using a partial path is\nunquestionably the best strategy for those children. The third one has\na partial path which will complete after using 10 seconds of CPU time\nand a non-partial path which will complete after using 8 seconds of\nCPU time. What strategy is best? If we pick the non-partial path, the\ntime that the whole thing takes to complete will be limited by the\nnon-partial path, which, since it cannot use parallelism, will take 8\nseconds of wall clock time. If we pick the partial path, we have a\ntotal of 12 seconds of CPU time which we can finish in 6 wall clock\nseconds with 2 participants, 4 seconds with 3 participants, or 3\nseconds with 4 participants. This is clearly a lot better.\n\nIncidentally, the planner knows about the fact that you can't finish\nan Append faster than you can finish its non-partial participants. See\nappend_nonpartial_cost().\n\nNow, I don't think anything necessarily gets broken here by your patch\nas written. Planner decisions are made on estimated cost, which is a\nproxy for wall clock time, not CPU time. 
Right now, we only prefer the\nnon-partial path if we think it can be executed in less time by ONE\nworker than the partial path could be executed by ALL workers:\n\n /*\n * Either we've got only a non-partial\npath, or we think that\n * a single backend can execute the\nbest non-partial path\n * faster than all the parallel\nbackends working together can\n * execute the best partial path.\n\nSo, in the above example, we wouldn't even consider the non-partial\npath. Even say we just have 2 workers. The partial path will have a\ncost of 6, which is less than 8, so it gets rejected. But as the\ncomment goes on to note:\n\n * It might make sense to be more\naggressive here. Even if\n * the best non-partial path is more\nexpensive than the best\n * partial path, it could still be\nbetter to choose the\n * non-partial path if there are\nseveral such paths that can\n * be given to different workers. For\nnow, we don't try to\n * figure that out.\n\nThis kind of strategy is particularly appealing when the number of\nAppend children is large compared to the number of workers. For\ninstance, if you have an Append with 100 children and you are planning\nwith 4 workers, it's probably a pretty good idea to be very aggressive\nabout picking the path that uses the least resources, which the\ncurrent algorithm would not do. You're unlikely to end up with idle\nworkers, because you have so many plans to which they can be\nallocated. However, it's possible: it could be that there's one child\nwhich is way bigger than all the others and the really important thing\nis to get a partial path for that child, so that it can be attacked in\nparallel, even if that means that overall resource consumption is\nhigher. As the number of children decreases relative to the number of\nworkers, having partial paths becomes increasingly appealing. 
To take\na degenerate case, suppose you have 4 workers but only 2 children.\nPartial paths should look really appealing, because the alternative is\nleaving workers unused.\n\nI *think* your patch is based around the idea of merging the\nturn-the-append-of-partial-paths-into-a-parallel-append case with the\nconsider-a-mix-of-partial-and-nonpartial-paths-for-parallel-append\ncase. That seems possible to do given that the current heuristic is to\ncompare the raw path costs, but I think that it wouldn't work if we\nwanted to be more aggressive about considering the mixed strategy and\nletting the costing machinery sort out which way is better.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Tue, 28 Apr 2020 09:31:15 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Parallel Append can break run-time partition pruning" }, { "msg_contents": "Robert forwarded me a link to an email sent to -general list, where\nthe reported problem seems to be the same problem that was being\ndiscussed here.\n\nhttps://www.postgresql.org/message-id/flat/d0f6d811-8946-eb9f-68e2-1a8a7f80ff21%40a-kretschmer.de\n\nGoing over the last few emails, it seems that David held off from\ncommitting the patch, because of the lack of confidence in its\nrobustness. With the patch, a sub-partitioned child's partial\nAppend's child paths will *always* be pulled up into the parent's set\nof partial child paths thus preventing the nesting of Appends, which\nthe run-time pruning code can't currently cope with. 
The lack of\nconfidence seems to be due to the fact that always pulling up a\nsub-Append's child paths into the parent partial Append's child paths\n*may* cause the latter to behave wrongly under parallelism and the new\ncode structure will prevent add_partial_path() from being the\narbitrator of whether such a path is really the best in terms of cost.\n\nIf we can't be confident in that approach, maybe we should consider\nmaking the run-time pruning code cope with nested Appends?\n\n-- \nAmit Langote\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 15 Oct 2020 23:01:23 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Parallel Append can break run-time partition pruning" }, { "msg_contents": "On Fri, 16 Oct 2020 at 03:01, Amit Langote <amitlangote09@gmail.com> wrote:\n>\n> Going over the last few emails, it seems that David held off from\n> committing the patch, because of the lack of confidence in its\n> robustness. With the patch, a sub-partitioned child's partial\n> Append's child paths will *always* be pulled up into the parent's set\n> of partial child paths thus preventing the nesting of Appends, which\n> the run-time pruning code can't currently cope with. 
The lack of\n> confidence seems to be due to the fact that always pulling up a\n> sub-Append's child paths into the parent partial Append's child paths\n> *may* cause the latter to behave wrongly under parallelism and the new\n> code structure will prevent add_partial_path() from being the\n> arbitrator of whether such a path is really the best in terms of cost.\n>\n> If we can't be confident in that approach, maybe we should consider\n> making the run-time pruning code cope with nested Appends?\n\nI've been thinking about this and my thoughts are:\n\nThere are other cases where we don't pullup sub-Merge Append nodes\nanyway, so I think we should just make run-time pruning work for more\nthan just the top-level Append/Merge Append.\n\nThe case I'm thinking about is the code added in 959d00e9dbe for\nordered Append scans. It's possible a sub-partitioned table has\npartitions which don't participate in the same ordering. We need to\nkeep the sub-Merge Append in those cases... well, at least when\nthere's more than 1 partition remaining after plan-time pruning.\n\nI've attached the patch which, pending a final look, I'm proposing to\ncommit for master only. I just don't quite think this is a bug fix,\nand certainly, due to the invasiveness of the proposed patch, that\nmeans no backpatch.\n\nI fixed all the stuff about the incorrectly set partitioned_rels list.\nWhat I ended up with there is making it accumulate_append_subpath's\njob to also pullup the sub-paths partitioned_rels fields when pulling\nup a nested Append/MergeAppend. If there's no pullup, there then we\ndon't care about the sub-path's partitioned_rels. That's for it to\ndeal with. With that, I think that run-time pruning will only get the\nRT indexes for partitions that we actually have sub-paths for. That's\npretty good, so I added an Assert() to verify that in\nmake_partitionedrel_pruneinfo(). 
(I hope I don't regret that later)\n\nThis does mean we need to maintain a different partitioned_rels list\nfor each Append path we consider. So there's a number (six) of these\nnow between add_paths_to_append_rel() and\ngenerate_orderedappend_paths(). To try to minimise the impact of\nthat, I've changed the field so that instead of being a List of\nIntLists, it's just a List of Relids. The top-level list just\ncontains a single element until you get a UNION ALL that selects from\na partitioned table on each side of the union. Merging sub-path\npartitioned rel RT indexes into the existing element is now pretty\ncheap as it just uses bms_add_members() rather the list_concat we'd\nhave had to have used if it was still a List of IntLists.\n\nAfter fixing up how partitioned_rels is built, I saw there were no\nusages of RelOptInfo.partitioned_child_rels, so I got rid of it.\n\nI did another couple of little cleanups and wrote some regression\ntests to test all this.\n\nOverall I'm fairly happy with this, especially getting rid of a\npartitioned table related field from RelOptInfo.\n\nDavid", "msg_date": "Sun, 25 Oct 2020 14:06:38 +1300", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Parallel Append can break run-time partition pruning" }, { "msg_contents": "On Sun, Oct 25, 2020 at 10:06 AM David Rowley <dgrowleyml@gmail.com> wrote:\n> On Fri, 16 Oct 2020 at 03:01, Amit Langote <amitlangote09@gmail.com> wrote:\n> >\n> > Going over the last few emails, it seems that David held off from\n> > committing the patch, because of the lack of confidence in its\n> > robustness. With the patch, a sub-partitioned child's partial\n> > Append's child paths will *always* be pulled up into the parent's set\n> > of partial child paths thus preventing the nesting of Appends, which\n> > the run-time pruning code can't currently cope with. 
The lack of\n> > confidence seems to be due to the fact that always pulling up a\n> > sub-Append's child paths into the parent partial Append's child paths\n> > *may* cause the latter to behave wrongly under parallelism and the new\n> > code structure will prevent add_partial_path() from being the\n> > arbitrator of whether such a path is really the best in terms of cost.\n> >\n> > If we can't be confident in that approach, maybe we should consider\n> > making the run-time pruning code cope with nested Appends?\n>\n> I've been thinking about this and my thoughts are:\n\nThanks for working on this.\n\n> There are other cases where we don't pullup sub-Merge Append nodes\n> anyway, so I think we should just make run-time pruning work for more\n> than just the top-level Append/Merge Append.\n>\n> The case I'm thinking about is the code added in 959d00e9dbe for\n> ordered Append scans. It's possible a sub-partitioned table has\n> partitions which don't participate in the same ordering. We need to\n> keep the sub-Merge Append in those cases... well, at least when\n> there's more than 1 partition remaining after plan-time pruning.\n\nAh, I guess that case would also likewise fail to use runtime pruning properly.\n\n> I've attached the patch which, pending a final look, I'm proposing to\n> commit for master only. I just don't quite think this is a bug fix,\n> and certainly, due to the invasiveness of the proposed patch, that\n> means no backpatch.\n>\n> I fixed all the stuff about the incorrectly set partitioned_rels list.\n> What I ended up with there is making it accumulate_append_subpath's\n> job to also pullup the sub-paths partitioned_rels fields when pulling\n> up a nested Append/MergeAppend. If there's no pullup, there then we\n> don't care about the sub-path's partitioned_rels. That's for it to\n> deal with. With that, I think that run-time pruning will only get the\n> RT indexes for partitions that we actually have sub-paths for. 
That's\n> pretty good, so I added an Assert() to verify that in\n> make_partitionedrel_pruneinfo(). (I hope I don't regret that later)\n\nSo AIUI, the fix here is to make any given Append/MergeAppend's\npart_prune_info only contain PartitionedRelPruneInfos of the (sub-)\npartitioned tables whose partitions' subplans are directly in\nappendplan/mergeplans, such that the partition indexes can be mapped\nto the subplan indexes. That does imply present_parts must be\nnon-empty, so the Assert seems warranted.\n\n> This does mean we need to maintain a different partitioned_rels list\n> for each Append path we consider. So there's a number (six) of these\n> now between add_paths_to_append_rel() and\n> generate_orderedappend_paths(). To try to minimise the impact of\n> that, I've changed the field so that instead of being a List of\n> IntLists, it's just a List of Relids. The top-level list just\n> contains a single element until you get a UNION ALL that selects from\n> a partitioned table on each side of the union. 
Merging sub-path\n> partitioned rel RT indexes into the existing element is now pretty\n> cheap as it just uses bms_add_members() rather the list_concat we'd\n> have had to have used if it was still a List of IntLists.\n\nThe refactoring seemed complicated on a first look, but overall looks good.\n\n> After fixing up how partitioned_rels is built, I saw there were no\n> usages of RelOptInfo.partitioned_child_rels, so I got rid of it.\n\n+1\n\n> I did another couple of little cleanups and wrote some regression\n> tests to test all this.\n>\n> Overall I'm fairly happy with this, especially getting rid of a\n> partitioned table related field from RelOptInfo.\n\nSome comments:\n\n+ * For partitioned tables, we accumulate a list of the partitioned RT\n+ * indexes for the subpaths that are directly under this Append.\n\nMaybe:\n\nFor partitioned tables, accumulate a list of the RT indexes of\npartitioned tables in the tree whose partitions' subpaths are directly\nunder this Append.\n\n+ * lists for each Append Path that we create as accumulate_append_subpath\n+ * sometimes can't flatten sub-Appends into the top-level Append.\n\nHow about expanding that reason a little bit as:\n\n...can't flatten sub-Appends into the top-level Append which prevents\nthe former's partitioned_rels from being pulled into the latter's.\n\n+ * most one element which is a RelIds of the partitioned relations which there\n\ns/RelIds/Relids\n\n+ * are subpaths for. 
In this case we just add the RT indexes for the\n+ * partitioned tables for the subpath we're pulling up to the single entry in\n+ * 'partitioned_rels'.\n\nHow about:\n\nIn this case, we just pull the RT indexes contained in\nsub-Append/MergeAppend's partitioned_rels into the single entry of\n*partitioned_rels, which belongs to the parent Append.\n\n * relid_subpart_map maps relid of a non-leaf partition to the index\n- * in 'partitioned_rels' of that rel (which will also be the index in\n- * the returned PartitionedRelPruneInfo list of the info for that\n+ * in 'partrelids' of that rel (which will also be the index in the\n+ * returned PartitionedRelPruneInfo list of the info for that\n\n...the index in 'partrelids', which in the new code is a bitmap set,\nsounds a bit odd. Maybe just mention the index in the list of\nPartitionedRelPruneInfos as:\n\nrelid_subpart_map maps relid of a given non-leaf partition in\n'partrelids' to the index of its PartitionedRelPruneInfo in the\nreturned list.\n\n+ /*\n+ * Ensure there were no stray PartitionedRelPruneInfo generated for\n+ * partitioned tables that had no sub-paths for.\n+ */\n+ Assert(!bms_is_empty(present_parts));\n\nMaybe you meant:\n\n...for partitioned tables for which we had neither leaf subpaths nor\nsub-PartitionedRelPruneInfos.\n\n+ List *partitioned_rels; /* List of Relids for each non-leaf\n+ * partitioned table in the partition\n+ * tree. 
One for each partition hierarchy.\n+ */\n List *subpaths; /* list of component Paths */\n\nHow about describing partitioned_rels as follows:\n\nList of Relids set containing RT indexes of non-leaf tables for each\npartition hierarchy whose paths are in 'subpaths'\n\n-- \nAmit Langote\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 27 Oct 2020 15:39:53 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Parallel Append can break run-time partition pruning" }, { "msg_contents": "On Tue, 27 Oct 2020 at 19:40, Amit Langote <amitlangote09@gmail.com> wrote:\n> Some comments:\n\nThanks for having a look at this.\n\nI've made some adjustments to those comments and pushed.\n\nDavid\n\n\n", "msg_date": "Mon, 2 Nov 2020 13:50:57 +1300", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Parallel Append can break run-time partition pruning" }, { "msg_contents": "On Mon, Nov 2, 2020 at 9:51 AM David Rowley <dgrowleyml@gmail.com> wrote:\n>\n> On Tue, 27 Oct 2020 at 19:40, Amit Langote <amitlangote09@gmail.com> wrote:\n> > Some comments:\n>\n> Thanks for having a look at this.\n>\n> I've made some adjustments to those comments and pushed.\n\nThank you.\n\n-- \nAmit Langote\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 2 Nov 2020 10:58:11 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Parallel Append can break run-time partition pruning" }, { "msg_contents": "On Mon, Nov 02, 2020 at 01:50:57PM +1300, David Rowley wrote:\n> On Tue, 27 Oct 2020 at 19:40, Amit Langote <amitlangote09@gmail.com> wrote:\n> > Some comments:\n> \n> Thanks for having a look at this.\n> \n> I've made some adjustments to those comments and pushed.\n\ncommit a929e17e5 doesn't appear in the v14 release notes, but I wanted to\nmention that this appears to allow fixing a rowcount mis-estimate for us,\nwhich I had reported 
before:\nhttps://www.postgresql.org/message-id/20170326193344.GS31628%40telsasoft.com\nhttps://www.postgresql.org/message-id/20170415002322.GA24216@telsasoft.com\nhttps://www.postgresql.org/message-id/20170524211730.GM31097@telsasoft.com\n\nAnd others have reported before:\nhttps://www.postgresql.org/message-id/flat/7DF51702-0F6A-4571-80BB-188AAEF260DA@gmail.com\nhttps://www.postgresql.org/message-id/SG2PR01MB29673BE6F7AA24424FDBFF60BC670%40SG2PR01MB2967.apcprd01.prod.exchangelabs.com\n\nFor years, our reports have included a generated WHERE clause for each table\nbeing queried, to allow each table's partitions to be properly pruned/excluded\n(not just one table, as happened if we used a single WHERE clause).\n\nThat worked, but then the planner underestimates the rowcount, since it doesn't\nrealize that the conditions are redundant (since \"equality classes\" do not\nhandle the inequality conditions).\n\nIn v14, one WHERE clause per table still gives an underestimate; but, now\nmultiple WHERE clauses aren't required, because a single WHERE clause excludes\npartitions from each table, and the rowcount from the elided partitions is\nexcluded from the Append rowcount at plan time.\n\nThanks for this feature !\n\n-- \nJustin\n\n\n", "msg_date": "Fri, 12 Nov 2021 11:31:02 -0600", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: Parallel Append can break run-time partition pruning" } ]
[ { "msg_contents": "I'm trying to restore a backup on a different machine and it terminates\nwith the not really helpful messages:\n\npg_restore: [directory archiver] could not close data file: Success\npg_restore: [parallel archiver] a worker process died unexpectedly\n\nThe backup was made with\n\npg_dump --compress=5 -v -Fd -f \"$dirname\" -j 4 $db\n\n(so it's in directory format)\n\nThe restore command was\n\npg_restore -c --if-exists -d $db -j 4 -v $dirname\n\n(I would use -C, but due to suboptimal partitioning I have to use a\ndifferent tablspace, so I need to create $db before the restore)\n\nBoth machines are running Ubuntu 18.04 and PostgreSQL is version 11.7\nfrom the pgdg repo.\n\nThe error happens while restoring the data for the tables.\n\nMy guess is that maybe one of the data files is damaged (\"Success\"\nprobably means that errno is 0, so it wasn't a system call that failed,\nbut something in the application). Does that sound plausible or should I\nlook somewhere else? A web search returned nothing relevant.\n\n hp\n\n-- \n _ | Peter J. Holzer | Story must make more sense than reality.\n|_|_) | |\n| | | hjp@hjp.at | -- Charles Stross, \"Creative writing\n__/ | http://www.hjp.at/ | challenge!\"", "msg_date": "Wed, 15 Apr 2020 12:01:46 +0200", "msg_from": "\"Peter J. Holzer\" <hjp-pgsql@hjp.at>", "msg_from_op": true, "msg_subject": "pg_restore: could not close data file: Success" }, { "msg_contents": "On 2020-04-15 12:01:46 +0200, Peter J. Holzer wrote:\n> I'm trying to restore a backup on a different machine and it terminates\n> with the not really helpful messages:\n> \n> pg_restore: [directory archiver] could not close data file: Success\n> pg_restore: [parallel archiver] a worker process died unexpectedly\n[...]\n> My guess is that maybe one of the data files is damaged\n\nAs is often the case the matter became obvious a few minutes after\nwriting the mail. \n\nThere were indeed two file with length 0 in the dump. 
That happened\nbecause the backup failed because it couldn't obtain a lock on a table.\n\nI nicer error message (something like \"cannot decompress '13503.dat.gz':\nEmpty file\") would have helped.\n\n hp\n\n-- \n _ | Peter J. Holzer | Story must make more sense than reality.\n|_|_) | |\n| | | hjp@hjp.at | -- Charles Stross, \"Creative writing\n__/ | http://www.hjp.at/ | challenge!\"", "msg_date": "Wed, 15 Apr 2020 12:14:25 +0200", "msg_from": "\"Peter J. Holzer\" <hjp-pgsql@hjp.at>", "msg_from_op": true, "msg_subject": "Re: pg_restore: could not close data file: Success" }, { "msg_contents": "Hello.\n\nAdded -hackers.\n\nAt Wed, 15 Apr 2020 12:14:25 +0200, \"Peter J. Holzer\" <hjp-pgsql@hjp.at> wrote in \n> On 2020-04-15 12:01:46 +0200, Peter J. Holzer wrote:\n> > I'm trying to restore a backup on a different machine and it terminates\n> > with the not really helpful messages:\n> > \n> > pg_restore: [directory archiver] could not close data file: Success\n> > pg_restore: [parallel archiver] a worker process died unexpectedly\n> [...]\n> > My guess is that maybe one of the data files is damaged\n> \n> As is often the case the matter became obvious a few minutes after\n> writing the mail. \n> \n> There were indeed two file with length 0 in the dump. 
That happened\n> because the backup failed because it couldn't obtain a lock on a table.\n> \n> I nicer error message (something like \"cannot decompress '13503.dat.gz':\n> Empty file\") would have helped.\n\nUnfortunately, just emptying the .dat.gz file didn't work for me.\nAnyway, the message is emitted the following way.\n\npg_backup_directory.c:\n> if (cfclose(cfp) !=0)\n> fatal(\"could not close data file: %m\");\n\n%m doesn't work for some kinds of errors about compressed files, but\ncfclose conceals the true cause.\n\nI'm surprised to find an old thread about the same issue.\n\nhttps://www.postgresql.org/message-id/20160307.174354.251049100.horiguchi.kyotaro%40lab.ntt.co.jp\n\nI don't think it's acceptable to use a fake errno for gzclose,\nbut cfclose properly passes through the error code from gzclose, so it\nis enough that the caller recognizes the difference.\n\nPlease find the attached.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center", "msg_date": "Thu, 16 Apr 2020 12:08:09 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pg_restore: could not close data file: Success" }, { "msg_contents": "On Thu, Apr 16, 2020 at 12:08:09PM +0900, Kyotaro Horiguchi wrote:\n> I'm surprised to find an old thread about the same issue.\n> \n> https://www.postgresql.org/message-id/20160307.174354.251049100.horiguchi.kyotaro%40lab.ntt.co.jp\n> \n> I don't think it's acceptable to use a fake errno for gzclose,\n> but cfclose properly passes through the error code from gzclose, so it\n> is enough that the caller recognizes the difference.\n\nA problem with this patch is that we may forget again to add this\nspecial error handling if more code paths use cfclose().\n\nAs of HEAD, there are three code paths where cfclose() is called but\nit does not generate an error: two when ending a blob and one when\nending a data file. 
Perhaps it would make sense to just move all this\nerror within the routine itself? Note that it would also mean\nregistering file names in lclContext or equivalent as that's an\nimportant piece of the error message.\n--\nMichael", "msg_date": "Thu, 16 Apr 2020 14:40:09 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: pg_restore: could not close data file: Success" }, { "msg_contents": "At Thu, 16 Apr 2020 14:40:09 +0900, Michael Paquier <michael@paquier.xyz> wrote in \n> On Thu, Apr 16, 2020 at 12:08:09PM +0900, Kyotaro Horiguchi wrote:\n> > I'm surprised to find an old thread about the same issue.\n> > \n> > https://www.postgresql.org/message-id/20160307.174354.251049100.horiguchi.kyotaro%40lab.ntt.co.jp\n> > \n> > But I don't think it's not acceptable that use fake errno for gzclose,\n> > but cfclose properly passes-through the error code from gzclose, so it\n> > is enought that the caller should recognize the difference.\n> \n> A problem with this patch is that we may forget again to add this\n> special error handling if more code paths use cfclose().\n\nDefinitely. The reason for the patch is the error codes are diffrent\naccording to callers and some of callers don't even checking the error\n(seemingly intentionally).\n\n> As of HEAD, there are three code paths where cfclose() is called but\n> it does not generate an error: two when ending a blob and one when\n> ending a data file. Perhaps it would make sense to just move all this\n> error within the routine itself? Note that it would also mean\n> registering file names in lclContext or equivalent as that's an\n> important piece of the error message.\n\nHmm. Sounds reasonable. I'm going to do that. 
Thanks!\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Thu, 16 Apr 2020 18:19:45 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pg_restore: could not close data file: Success" } ]
[ { "msg_contents": "Hi all\r\n\r\nIn some cases , PGresult is not cleared.\r\n\r\nFile: src\\bin\\pg_basebackup\\streamutil.c\r\n\r\nbool\r\nRetrieveWalSegSize(PGconn *conn)\r\n{\r\n\tPGresult *res;\r\n......\r\n\tres = PQexec(conn, \"SHOW wal_segment_size\");\r\n\tif (PQresultStatus(res) != PGRES_TUPLES_OK)\r\n\t{\r\n\t\tpg_log_error(\"could not send replication command \\\"%s\\\": %s\",\r\n\t\t\t\t\t \"SHOW wal_segment_size\", PQerrorMessage(conn));\r\n\r\n\t\tPQclear(res); // *** res is cleared ***\r\n\t\treturn false;\r\n\t}\r\n......\r\n\t/* fetch xlog value and unit from the result */\r\n\tif (sscanf(PQgetvalue(res, 0, 0), \"%d%s\", &xlog_val, xlog_unit) != 2)\r\n\t{\r\n\t\tpg_log_error(\"WAL segment size could not be parsed\");\r\n\t\treturn false; // *** res is not cleared ***\r\n\t}\r\n......\r\n\tif (!IsValidWalSegSize(WalSegSz))\r\n\t{\r\n\t\tpg_log_error(ngettext(\"WAL segment size must be a power of two between 1 MB and 1 GB, but the remote server reported a value of %d byte\",\r\n\t\t\t\t\t\t\t \"WAL segment size must be a power of two between 1 MB and 1 GB, but the remote server reported a value of %d bytes\",\r\n\t\t\t\t\t\t\t WalSegSz),\r\n\t\t\t\t\t WalSegSz);\r\n\t\treturn false; ; // *** res is not cleared ***\r\n\t}\r\n......\r\n\r\n\r\nHere is a patch.\r\n\r\nBest Regards!", "msg_date": "Wed, 15 Apr 2020 10:06:52 +0000", "msg_from": "\"Zhang, Jie\" <zhangjie2@cn.fujitsu.com>", "msg_from_op": true, "msg_subject": "[PATHC] Fix minor memory leak in pg_basebackup" }, { "msg_contents": "On Wed, Apr 15, 2020 at 10:06:52AM +0000, Zhang, Jie wrote:\n> In some cases , PGresult is not cleared.\n> \n> File: src\\bin\\pg_basebackup\\streamutil.c\n> \n> bool\n> RetrieveWalSegSize(PGconn *conn)\n> {\n> \tPGresult *res;\n\nRetrieveWalSegSize() gets called only once at the beginning of\npg_basebackup and pg_receivewal, so that's not an issue that has major\neffects, still that's an issue. The first one PQclear() is needed\nwhere you say. 
Now for the second one, I would just move it once the\ncode is done with the query result, aka after calling PQgetvalue().\nWhat do you think? Please see the attached.\n--\nMichael", "msg_date": "Thu, 16 Apr 2020 15:30:36 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: [PATHC] Fix minor memory leak in pg_basebackup" }, { "msg_contents": "Hi Michael\r\n\r\nso much the better!\r\n\r\n-----Original Message-----\r\nFrom: Michael Paquier [mailto:michael@paquier.xyz] \r\nSent: Thursday, April 16, 2020 2:31 PM\r\nTo: Zhang, Jie/张 杰 <zhangjie2@cn.fujitsu.com>\r\nCc: pgsql-hackers@lists.postgresql.org\r\nSubject: Re: [PATHC] Fix minor memory leak in pg_basebackup\r\n\r\nOn Wed, Apr 15, 2020 at 10:06:52AM +0000, Zhang, Jie wrote:\r\n> In some cases , PGresult is not cleared.\r\n> \r\n> File: src\\bin\\pg_basebackup\\streamutil.c\r\n> \r\n> bool\r\n> RetrieveWalSegSize(PGconn *conn)\r\n> {\r\n> \tPGresult *res;\r\n\r\nRetrieveWalSegSize() gets called only once at the beginning of pg_basebackup and pg_receivewal, so that's not an issue that has major effects, still that's an issue. The first one PQclear() is needed where you say. Now for the second one, I would just move it once the code is done with the query result, aka after calling PQgetvalue().\r\nWhat do you think? Please see the attached.\r\n--\r\nMichael\r\n\n\n", "msg_date": "Thu, 16 Apr 2020 10:54:09 +0000", "msg_from": "\"Zhang, Jie\" <zhangjie2@cn.fujitsu.com>", "msg_from_op": true, "msg_subject": "RE: [PATHC] Fix minor memory leak in pg_basebackup" }, { "msg_contents": "On Thu, Apr 16, 2020 at 10:54:09AM +0000, Zhang, Jie wrote:\n> So much the better!\n\nThanks for confirming. Fixed, then.\n--\nMichael", "msg_date": "Fri, 17 Apr 2020 10:47:44 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: [PATHC] Fix minor memory leak in pg_basebackup" } ]
[ { "msg_contents": "The following documentation comment has been logged on the website:\n\nPage: https://www.postgresql.org/docs/10/sql-altersubscription.html\nDescription:\n\nIf the logical replication subscription is owned by a role that is not\nallowed to login (for example, if the LOGIN privilege is removed after the\nsubscription is created) then the logical replication worker (which uses the\nowner to connect to the database) will start to fail with this error\n(repeated every 5 seconds), which is pretty much undocumented:\r\n\r\nFATAL: role \"XXX\" is not permitted to log in\r\nLOG: background worker \"logical replication worker\" (PID X) exited with\nexit code 1\r\n\r\nYou might want to include that error message in the docs, to ensure that web\nsearches for it bring the user to this documentation.", "msg_date": "Wed, 15 Apr 2020 14:02:18 +0000", "msg_from": "PG Doc comments form <noreply@postgresql.org>", "msg_from_op": true, "msg_subject": "Logical replication subscription owner" }, { "msg_contents": "On 2020-Apr-15, PG Doc comments form wrote:\n\n> If the logical replication subscription is owned by a role that is not\n> allowed to login (for example, if the LOGIN privilege is removed after the\n> subscription is created) then the logical replication worker (which uses the\n> owner to connect to the database) will start to fail with this error\n> (repeated every 5 seconds), which is pretty much undocumented:\n> \n> FATAL: role \"XXX\" is not permitted to log in\n> LOG: background worker \"logical replication worker\" (PID X) exited with\nexit code 1\n> \n> You might want to include that error message in the docs, to ensure that web\n> searches for it bring the user to this documentation.\n\nI wonder if a better answer is to allow the connection when the\nREPLICATION priv is granted, ignoring the LOGIN prov.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": 
"Wed, 22 Apr 2020 13:40:29 -0400", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Logical replication subscription owner" }, { "msg_contents": "Greetings,\n\n* Alvaro Herrera (alvherre@2ndquadrant.com) wrote:\n> On 2020-Apr-15, PG Doc comments form wrote:\n> > If the logical replication subscription is owned by a role that is not\n> > allowed to login (for example, if the LOGIN privilege is removed after the\n> > subscription is created) then the logical replication worker (which uses the\n> > owner to connect to the database) will start to fail with this error\n> > (repeated every 5 seconds), which is pretty much undocumented:\n> > \n> > FATAL: role \"XXX\" is not permitted to log in\n> > LOG: background worker \"logical replication worker\" (PID X) exited with\n> > exit code 1\n> > \n> > You might want to include that error message in the docs, to ensure that web\n> > searches for it bring the user to this documentation.\n> \n> I wonder if a better answer is to allow the connection when the\n> REPLICATION priv is granted, ignoring the LOGIN prov.\n\nErm, no, I wouldn't have thought that'd make sense- maybe someone\nspecifically wants to stop allowing that role to login and they remove\nLOGIN? That REPLICATION would override that would surely be surprising\nand counter-intuitive..\n\nThanks,\n\nStephen", "msg_date": "Wed, 22 Apr 2020 13:46:11 -0400", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: Logical replication subscription owner" }, { "msg_contents": "On 2020-Apr-22, Stephen Frost wrote:\n> * Alvaro Herrera (alvherre@2ndquadrant.com) wrote:\n\n> > I wonder if a better answer is to allow the connection when the\n> > REPLICATION priv is granted, ignoring the LOGIN prov.\n> \n> Erm, no, I wouldn't have thought that'd make sense- maybe someone\n> specifically wants to stop allowing that role to login and they remove\n> LOGIN? 
That REPLICATION would override that would surely be surprising\n> and counter-intuitive..\n\nWell, I guess if somebody wants to stop replication, they can remove\nthe REPLICATION priv.\n\nI had it in my mind that LOGIN was for regular (SQL-based) login, and\nREPLICATION was for replication login, and that they were orthogonal.\n\nYou're saying that there's no way a role can have REPLICATION privs but\nno LOGIN. Is that sensible?\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Wed, 22 Apr 2020 18:59:11 -0400", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Logical replication subscription owner" }, { "msg_contents": "Alvaro Herrera <alvherre@2ndquadrant.com> writes:\n> I had it in my mind that LOGIN was for regular (SQL-based) login, and\n> REPLICATION was for replication login, and that they were orthogonal.\n\nYeah, that's what I would've expected. Otherwise, is REPLICATION\nwithout LOGIN useful at all?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 22 Apr 2020 19:14:41 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Logical replication subscription owner" }, { "msg_contents": "Greetings,\n\n* Tom Lane (tgl@sss.pgh.pa.us) wrote:\n> Alvaro Herrera <alvherre@2ndquadrant.com> writes:\n> > I had it in my mind that LOGIN was for regular (SQL-based) login, and\n> > REPLICATION was for replication login, and that they were orthogonal.\n> \n> Yeah, that's what I would've expected. Otherwise, is REPLICATION\n> without LOGIN useful at all?\n\nNo, but it's less surprising, at least to me, for all roles that login\nto require having the LOGIN right. Having REPLICATION ignore that would\nbe surprising (and a change from today). 
Maybe if we called it\nREPLICATIONLOGIN or something along those lines it would be less\nsurprising, but it still seems pretty awkward.\n\nI view REPLICATION as allowing a specific kind of connection, but you\nfirst need to be able to login.\n\nAlso- what about per-database connections? Does having REPLICATION mean\nyou get to override the CONNECT privileges on a database, if you're\nconnecting for the purposes of doing logical replication?\n\nI agree we could do better in these areas, but I'd argue that's mostly\naround improving the documentation rather than baking in implications\nthat one privilege implies another. We certainly get people who\ncomplain about getting a permission denied error when they have UPDATE\nrights on a table (but not SELECT) and they include a WHERE clause in\ntheir update statement, but that doesn't mean we should assume that\nhaving UPDATE rights means you also get SELECT rights, just because\nUPDATE is next to useless without SELECT.\n\nThanks,\n\nStephen", "msg_date": "Thu, 23 Apr 2020 07:31:55 -0400", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: Logical replication subscription owner" }, { "msg_contents": "I'd welcome input from other people on this issue; only now I noticed\nthat it's buried in pgsql-docs, so CCing pgsql-hackers now.\n\n\nOn 2020-Apr-23, Stephen Frost wrote:\n\n> Greetings,\n> \n> * Tom Lane (tgl@sss.pgh.pa.us) wrote:\n> > Alvaro Herrera <alvherre@2ndquadrant.com> writes:\n> > > I had it in my mind that LOGIN was for regular (SQL-based) login, and\n> > > REPLICATION was for replication login, and that they were orthogonal.\n> > \n> > Yeah, that's what I would've expected. Otherwise, is REPLICATION\n> > without LOGIN useful at all?\n> \n> No, but it's less surprising, at least to me, for all roles that login\n> to require having the LOGIN right. Having REPLICATION ignore that would\n> be surprising (and a change from today). 
Maybe if we called it\n> REPLICATIONLOGIN or something along those lines it would be less\n> surprising, but it still seems pretty awkward.\n> \n> I view REPLICATION as allowing a specific kind of connection, but you\n> first need to be able to login.\n> \n> Also- what about per-database connections? Does having REPLICATION mean\n> you get to override the CONNECT privileges on a database, if you're\n> connecting for the purposes of doing logical replication?\n> \n> I agree we could do better in these areas, but I'd argue that's mostly\n> around improving the documentation rather than baking in implications\n> that one privilege implies another. We certainly get people who\n> complain about getting a permission denied error when they have UPDATE\n> rights on a table (but not SELECT) and they include a WHERE clause in\n> their update statement, but that doesn't mean we should assume that\n> having UPDATE rights means you also get SELECT rights, just because\n> UPDATE is next to useless without SELECT.\n> \n> Thanks,\n> \n> Stephen\n\n\n\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Thu, 7 May 2020 21:47:34 -0400", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Logical replication subscription owner" }, { "msg_contents": "Alvaro Herrera <alvherre@2ndquadrant.com> writes:\n> I'd welcome input from other people on this issue; only now I noticed\n> that it's buried in pgsql-docs, so CCing pgsql-hackers now.\n\nFWIW, I would argue that LOGIN permits logging in on a regular SQL\nconnection, while REPLICATION should permit logging in on a\nreplication connection, and there's no reason for either to depend on\nor require the other.\n\n> On 2020-Apr-23, Stephen Frost wrote:\n>> Also- what about per-database connections? 
Does having REPLICATION mean\n>> you get to override the CONNECT privileges on a database, if you're\n>> connecting for the purposes of doing logical replication?\n\nNo, why would it? Should LOGIN privilege mean you can override\nCONNECT? That's nonsense. You need the respective privilege\nto connect with the protocol you want to connect with, and you\nalso need CONNECT on the DB you want to connect to.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 07 May 2020 23:30:23 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Logical replication subscription owner" }, { "msg_contents": "On 2020-May-07, Tom Lane wrote:\n\n> FWIW, I would argue that LOGIN permits logging in on a regular SQL\n> connection, while REPLICATION should permit logging in on a\n> replication connection, and there's no reason for either to depend on\n> or require the other.\n\nI agree with this.\n\n> >> Also- what about per-database connections? Does having REPLICATION mean\n> >> you get to override the CONNECT privileges on a database, if you're\n> >> connecting for the purposes of doing logical replication?\n> \n> No, why would it? Should LOGIN privilege mean you can override\n> CONNECT? That's nonsense. 
You need the respective privilege\nto connect with the protocol you want to connect with, and you\nalso need CONNECT on the DB you want to connect to.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 07 May 2020 23:30:23 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Logical replication subscription owner" }, { "msg_contents": "On 2020-May-07, Tom Lane wrote:\n\n> FWIW, I would argue that LOGIN permits logging in on a regular SQL\n> connection, while REPLICATION should permit logging in on a\n> replication connection, and there's no reason for either to depend on\n> or require the other.\n\nI agree with this.\n\n> >> Also- what about per-database connections? Does having REPLICATION mean\n> >> you get to override the CONNECT privileges on a database, if you're\n> >> connecting for the purposes of doing logical replication?\n> \n> No, why would it? Should LOGIN privilege mean you can override\n> CONNECT? That's nonsense. You need the respective privilege\n> to connect with the protocol you want to connect with, and you\n> also need CONNECT on the DB you want to connect to.\n\nAnd this.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Fri, 8 May 2020 01:02:11 -0400", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Logical replication subscription owner" }, { "msg_contents": "At Fri, 8 May 2020 01:02:11 -0400, Alvaro Herrera <alvherre@2ndquadrant.com> wrote in \n> On 2020-May-07, Tom Lane wrote:\n> \n> > FWIW, I would argue that LOGIN permits logging in on a regular SQL\n> > connection, while REPLICATION should permit logging in on a\n> > replication connection, and there's no reason for either to depend on\n> > or require the other.\n> \n> I agree with this.\n\nI agree, too. Anyway, it is unreasonable that a user is banned for\nthe lack of replication-attribute after a successful *replication*\nlogin.\n\nLOG: replication connection authorized: user=user1 application_name=psql\nFATAL: must be superuser or replication role to start walsender\n\n> > >> Also- what about per-database connections? Does having REPLICATION mean\n> > >> you get to override the CONNECT privileges on a database, if you're\n> > >> connecting for the purposes of doing logical replication?\n> > \n> > No, why would it? Should LOGIN privilege mean you can override\n> > CONNECT? That's nonsense. You need the respective privilege\n> > to connect with the protocol you want to connect with, and you\n> > also need CONNECT on the DB you want to connect to.\n> \n> And this.\n\nA user can start physical replication without needing CONNECT on any\ndatabase if it has REPLICATION attribute. 
That means any user that\nis allowed logical replication on a specific database (or even no\ndatabases) can replicate the whole cluster using physical replication.\nI don't think it is a proper behavior from the security perspective.\n\nIt seems to me that we need to restrict physical replication to\nrequire CONNECT privilege on all databases, or separate physical\nreplication privilege from logical replication privilege.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Fri, 08 May 2020 15:03:26 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Logical replication subscription owner" }, { "msg_contents": "On Fri, May 08, 2020 at 03:03:26PM +0900, Kyotaro Horiguchi wrote:\n> At Fri, 8 May 2020 01:02:11 -0400, Alvaro Herrera <alvherre@2ndquadrant.com> wrote in \n>> On 2020-May-07, Tom Lane wrote:\n>>> FWIW, I would argue that LOGIN permits logging in on a regular SQL\n>>> connection, while REPLICATION should permit logging in on a\n>>> replication connection, and there's no reason for either to depend on\n>>> or require the other.\n>> \n>> I agree with this.\n> \n> I agree, too. Anyway, it is unreasonable that a user is banned for\n> the lack of replication-attribute after a successful *replication*\n> login.\n\nNot to make the life of everybody more complicated here, but I don't\nagree. LOGIN and REPLICATION are in my opinion completely orthogonal\nand it sounds more natural IMO that a REPLICATION user should be able\nto log into the server only if it has LOGIN defined.\n--\nMichael", "msg_date": "Sat, 9 May 2020 17:57:01 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Logical replication subscription owner" }, { "msg_contents": "Michael Paquier <michael@paquier.xyz> writes:\n> Not to make the life of everybody more complicated here, but I don't\n> agree. 
LOGIN and REPLICATION are in my opinion completely orthogonal\n> and it sounds more natural IMO that a REPLICATION user should be able\n> to log into the server only if it has LOGIN defined.\n\nISTM those statements are contradictory. The two privileges could\nonly be called orthogonal if it's possible to make use of one without\nhaving the other. As things stand, REPLICATION without LOGIN is an\nentirely useless setting.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 09 May 2020 11:17:26 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Logical replication subscription owner" }, { "msg_contents": "Greetings,\n\n* Tom Lane (tgl@sss.pgh.pa.us) wrote:\n> Michael Paquier <michael@paquier.xyz> writes:\n> > Not to make the life of everybody more complicated here, but I don't\n> > agree. LOGIN and REPLICATION are in my opinion completely orthogonal\n> > and it sounds more natural IMO that a REPLICATION user should be able\n> > to log into the server only if it has LOGIN defined.\n> \n> ISTM those statements are contradictory. The two privileges could\n> only be called orthogonal if it's possible to make use of one without\n> having the other. 
As things stand, REPLICATION without LOGIN is an\n> entirely useless setting.\n\nAllowing a login to the system by a role that doesn't have the LOGIN\nprivilege isn't sensible though.\n\nPerhaps a middle ground would be to set LOGIN on a role when REPLICATION\nis set on it, if it's not already set (maybe with a NOTICE or WARNING or\nsuch saying \"also enabling LOGIN for role X\", or maybe not if people\nreally think it should be obvious).\n\nI don't think taking away login should take away replication though as\nmaybe there's some reason why someone would want that, nor should we\ntake away login if replication is taken away, this would strictly just\nbe a change for when REPLICATION is added to a role that doesn't have\nLOGIN already.\n\nThanks,\n\nStephen", "msg_date": "Sat, 9 May 2020 14:09:33 -0400", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: Logical replication subscription owner" }, { "msg_contents": "Stephen Frost <sfrost@snowman.net> writes:\n> * Tom Lane (tgl@sss.pgh.pa.us) wrote:\n>> ISTM those statements are contradictory. The two privileges could\n>> only be called orthogonal if it's possible to make use of one without\n>> having the other. As things stand, REPLICATION without LOGIN is an\n>> entirely useless setting.\n\n> Allowing a login to the system by a role that doesn't have the LOGIN\n> privilege isn't sensible though.\n\nThe fundamental issue here is whether a replication connection is a\n\"login\". I'd argue that it is not; \"login\" ought to mean a normal\nSQL connection.\n\nI realize that a replication connection can issue SQL commands (which,\nas I recall, Robert has blasted as a crappy design --- and I agree).\nBut it's already the case that a replication connection has much greater\nprivileges than plain SQL, so I don't think that that aspect ought to\ncompel us to design the privilege bits as they are set up now. 
If\nyou think that LOGIN should be required to issue SQL commands, then\nshouldn't doing SET ROLE to a non-LOGIN role disable your ability\nto issue SQL?\n\n> Perhaps a middle ground would be to set LOGIN on a role when REPLICATION\n> is set on it, if it's not already set (maybe with a NOTICE or WARNING or\n> such saying \"also enabling LOGIN for role X\", or maybe not if people\n> really think it should be obvious).\n\nIt seems to me that there's value in having a role that can only\nconnect for replication purposes and not as a regular SQL user.\nThe existing definition doesn't support that, and the rather silly\nkluge you're proposing doesn't fix it.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 09 May 2020 14:22:18 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Logical replication subscription owner" }, { "msg_contents": "On Fri, 8 May 2020 at 03:03, Kyotaro Horiguchi <horikyota.ntt@gmail.com>\nwrote:\n\n>\n> A user can start physical replication without needing CONNECT on any\n> database if it has REPLICATION attribute. That means any user that\n> is allowed logical replication on a specific database (or even no\n> databases) can replicate the whole cluster using physical replication.\n> I don't think it is a proper behavior from the security perspective.\n>\n> Physical replication has a special entry in pg_hba.conf, hence, I\ndon't think you need CONNECT on all databases. However, logical replication\nuses the same entry from a regular connection and I concur with Michael and\nStephen that we should have LOGIN and REPLICATION privileges in those\ncases.\nIf we drop the LOGIN requirement for logical replication, it means that a\nsimple NOLOGIN won't be sufficient to block a certain role to execute\nqueries\nbecause \"replication=database\" could be used to bypass it. Physical\nreplication can't execute queries but logical replication can. 
IMO\nREPLICATION is an additional capability and it is not a superset that\ncontains LOGIN. I prefer a fine-grained control. In sections 26.2.5.1 and\n30.7, LOGIN are documented accordingly. I'm +0.5 to the idea of adding a\nWARNING when you create/alter a role that has REPLICATION but not LOGIN.\n\n\n-- \nEuler Taveira http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\nOn Fri, 8 May 2020 at 03:03, Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote:\nA user can start physical replication without needing CONNECT on any\ndatabase if it has REPLICATION attribute.  That means any user that\nis allowed logical replication on a specific database (or even no\ndatabases) can replicate the whole cluster using physical replication.\nI don't think it is a proper behavior from the security perspective.\nPhysical replication has a special entry in pg_hba.conf, hence, I don't think you need CONNECT on all databases. However, logical replication uses the same entry from a regular connection and I concur with Michael and Stephen that we should have LOGIN and REPLICATION privileges in those cases. If we drop the LOGIN requirement for logical replication, it means that a simple NOLOGIN won't be sufficient to block a certain role to execute queries because \"replication=database\" could be used to bypass it. Physical replication can't execute queries but logical replication can. IMO REPLICATION is an additional capability and it is not a superset that contains LOGIN. I prefer a fine-grained control. In sections 26.2.5.1 and 30.7, LOGIN are documented accordingly. 
I'm +0.5 to the idea of adding a WARNING when you create/alter a role that has REPLICATION but not LOGIN.-- Euler Taveira                 http://www.2ndQuadrant.com/PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Sat, 9 May 2020 15:32:41 -0300", "msg_from": "Euler Taveira <euler.taveira@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Logical replication subscription owner" } ]
[ { "msg_contents": "Hi,\n\nOver at http://postgr.es/m/CADM=JehKgobEknb+_nab9179HzGj=9EiTzWMOd2mpqr_rifm0Q@mail.gmail.com\nthere's a proposal for a parallel backup patch which works in the way\nthat I have always thought parallel backup would work: instead of\nhaving a monolithic command that returns a series of tarballs, you\nrequest individual files from a pool of workers. Leaving aside the\nquality-of-implementation issues in that patch set, I'm starting to\nthink that the design is fundamentally wrong and that we should take a\nwhole different approach. The problem I see is that it makes a\nparallel backup and a non-parallel backup work very differently, and\nI'm starting to realize that there are good reasons why you might want\nthem to be similar.\n\nSpecifically, as Andres recently pointed out[1], almost anything that\nyou might want to do on the client side, you might also want to do on\nthe server side. We already have an option to let the client compress\neach tarball, but you might also want the server to, say, compress\neach tarball[2]. Similarly, you might want either the client or the\nserver to be able to encrypt each tarball, or compress but with a\ndifferent compression algorithm than gzip. If, as is presently the\ncase, the server is always returning a set of tarballs, it's pretty\neasy to see how to make this work in the same way on either the client\nor the server, but if the server returns a set of tarballs in\nnon-parallel backup cases, and a set of tarballs in parallel backup\ncases, it's a lot harder to see how that any sort of server-side\nprocessing should work, or how the same mechanism could be used on\neither the client side or the server side.\n\nSo, my new idea for parallel backup is that the server will return\ntarballs, but just more of them. Right now, you get base.tar and\n${tablespace_oid}.tar for each tablespace. 
I propose that if you do a\nparallel backup, you should get base-${N}.tar and\n${tablespace_oid}-${N}.tar for some or all values of N between 1 and\nthe number of workers, with the server deciding which files ought to\ngo in which tarballs. This is more or less the naming convention that\nBART uses for its parallel backup implementation, which, incidentally,\nI did not write. I don't really care if we pick something else, but it\nseems like a sensible choice. The reason why I say \"some or all\" is\nthat some workers might not get any of the data for a given\ntablespace. In fact, it's probably desirable to have different workers\nwork on different tablespaces as far as possible, to maximize parallel\nI/O, but it's quite likely that you will have more workers than\ntablespaces. So you might end up, with pg_basebackup -j4, having the\nserver send you base-1.tar and base-2.tar and base-4.tar, but not\nbase-3.tar, because worker 3 spent all of its time on user-defined\ntablespaces, or was just out to lunch.\n\nNow, if you use -Fp, those tar files are just going to get extracted\nanyway by pg_basebackup itself, so you won't even know they exist.\nHowever, if you use -Ft, you're going to end up with more files than\nbefore. This seems like something of a wart, because you wouldn't\nnecessarily expect that the set of output files produced by a backup\nwould depend on the degree of parallelism used to take it. However,\nI'm not sure I see a reasonable alternative. The client could try to\nglue all of the related tar files sent by the server together into one\nbig tarfile, but that seems like it would slow down the process of\nwriting the backup by forcing the different server connections to\ncompete for the right to write to the same file. 
Moreover, if you end\nup needing to restore the backup, having a bunch of smaller tar files\ninstead of one big one means you can try to untar them in parallel if\nyou like, so it seems not impossible that it could be advantageous to\nhave them split in that case as well.\n\nThoughts?\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n[1] http://postgr.es/m/20200412191702.ul7ohgv5gus3tsvo@alap3.anarazel.de\n[2] https://www.postgresql.org/message-id/20190823172637.GA16436%40tamriel.snowman.net\n\n\n", "msg_date": "Wed, 15 Apr 2020 11:57:29 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "design for parallel backup" }, { "msg_contents": "On Wed, Apr 15, 2020 at 9:27 PM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> Over at http://postgr.es/m/CADM=JehKgobEknb+_nab9179HzGj=9EiTzWMOd2mpqr_rifm0Q@mail.gmail.com\n> there's a proposal for a parallel backup patch which works in the way\n> that I have always thought parallel backup would work: instead of\n> having a monolithic command that returns a series of tarballs, you\n> request individual files from a pool of workers. Leaving aside the\n> quality-of-implementation issues in that patch set, I'm starting to\n> think that the design is fundamentally wrong and that we should take a\n> whole different approach. The problem I see is that it makes a\n> parallel backup and a non-parallel backup work very differently, and\n> I'm starting to realize that there are good reasons why you might want\n> them to be similar.\n>\n> Specifically, as Andres recently pointed out[1], almost anything that\n> you might want to do on the client side, you might also want to do on\n> the server side. We already have an option to let the client compress\n> each tarball, but you might also want the server to, say, compress\n> each tarball[2]. 
Similarly, you might want either the client or the\n> server to be able to encrypt each tarball, or compress but with a\n> different compression algorithm than gzip. If, as is presently the\n> case, the server is always returning a set of tarballs, it's pretty\n> easy to see how to make this work in the same way on either the client\n> or the server, but if the server returns a set of tarballs in\n> non-parallel backup cases, and a set of individual files in parallel backup\n> cases, it's a lot harder to see how any sort of server-side\n> processing should work, or how the same mechanism could be used on\n> either the client side or the server side.\n>\n> So, my new idea for parallel backup is that the server will return\n> tarballs, but just more of them. Right now, you get base.tar and\n> ${tablespace_oid}.tar for each tablespace. I propose that if you do a\n> parallel backup, you should get base-${N}.tar and\n> ${tablespace_oid}-${N}.tar for some or all values of N between 1 and\n> the number of workers, with the server deciding which files ought to\n> go in which tarballs.\n>\n\nIt is not apparent how you are envisioning this division on the\nserver-side. I think in the currently proposed patch, each worker on\nthe client-side requests the specific files. So, how are workers going\nto request such numbered files and how will we ensure that the work\ndivision among workers is fair?\n\n> This is more or less the naming convention that\n> BART uses for its parallel backup implementation, which, incidentally,\n> I did not write. I don't really care if we pick something else, but it\n> seems like a sensible choice. The reason why I say \"some or all\" is\n> that some workers might not get any of the data for a given\n> tablespace. In fact, it's probably desirable to have different workers\n> work on different tablespaces as far as possible, to maximize parallel\n> I/O, but it's quite likely that you will have more workers than\n> tablespaces. 
So you might end up, with pg_basebackup -j4, having the\n> server send you base-1.tar and base-2.tar and base-4.tar, but not\n> base-3.tar, because worker 3 spent all of its time on user-defined\n> tablespaces, or was just out to lunch.\n>\n> Now, if you use -Fp, those tar files are just going to get extracted\n> anyway by pg_basebackup itself, so you won't even know they exist.\n> However, if you use -Ft, you're going to end up with more files than\n> before. This seems like something of a wart, because you wouldn't\n> necessarily expect that the set of output files produced by a backup\n> would depend on the degree of parallelism used to take it. However,\n> I'm not sure I see a reasonable alternative. The client could try to\n> glue all of the related tar files sent by the server together into one\n> big tarfile, but that seems like it would slow down the process of\n> writing the backup by forcing the different server connections to\n> compete for the right to write to the same file.\n>\n\nI think it also depends to some extent what we decide in the nearby\nthread [1] related to support of compression/encryption. Say, if we\nwant to support a new compression on client-side then we need to\nanyway process the contents of each tar file in which case combining\ninto single tar file might be okay but not sure what is the right\nthing here. I think this part needs some more thoughts.\n\n[1] - https://www.postgresql.org/message-id/CA%2BTgmoYr7%2B-0_vyQoHbTP5H3QGZFgfhnrn6ewDteF%3DkUqkG%3DFw%40mail.gmail.com\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 20 Apr 2020 18:19:50 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: design for parallel backup" }, { "msg_contents": "On Mon, Apr 20, 2020 at 8:50 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> It is not apparent how you are envisioning this division on the\n> server-side. 
I think in the currently proposed patch, each worker on\n> the client-side requests the specific files. So, how are workers going\n> to request such numbered files and how will we ensure that the work\n> division among workers is fair?\n\nI think that the workers would just say \"give me my share of the base\nbackup\" and then the server would divide up the files as it went. It\nwould probably keep a queue of whatever files still need to be\nprocessed in shared memory and each process would pop items from the\nqueue to send to its client.\n\n> I think it also depends to some extent what we decide in the nearby\n> thread [1] related to support of compression/encryption. Say, if we\n> want to support a new compression on client-side then we need to\n> anyway process the contents of each tar file in which case combining\n> into single tar file might be okay but not sure what is the right\n> thing here. I think this part needs some more thoughts.\n\nYes, it needs more thought, but the central idea is to try to create\nsomething that is composable. For example, if we have code to do LZ4\ncompression, and code to do GPG encryption, then we should be able to\ndo both without adding any more code. 
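(As a rough illustration of what \"composable\" could look like, here is a minimal Python sketch; it is not code that exists anywhere in PostgreSQL, and the XOR \"cipher\" is a deliberate toy stand-in for real encryption. The point is only that each transformation is a self-contained bytes-to-bytes filter, so any set of them can be chained, and the same chain can run on either side of the connection:)

```python
import gzip

# Each filter maps bytes -> bytes; the server could apply the chain
# before sending, or the client after receiving, with the same code.
def gzip_filter(data: bytes) -> bytes:
    return gzip.compress(data)

def toy_encrypt_filter(data: bytes) -> bytes:
    # Toy stand-in for a real cipher; XOR with a constant is its own inverse.
    return bytes(b ^ 0x5A for b in data)

def apply_chain(data: bytes, filters) -> bytes:
    for f in filters:
        data = f(data)
    return data

payload = b"contents of base.tar " * 1000
wire = apply_chain(payload, [gzip_filter, toy_encrypt_filter])

# The receiver undoes the chain in reverse order:
restored = gzip.decompress(toy_encrypt_filter(wire))
assert restored == payload
```

Adding a new compression or encryption method would then mean adding one more filter, not another special case.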
Ideally, we should also be able\nto do either of those operations either on the client side or on the\nserver side, using the same code either way.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Mon, 20 Apr 2020 14:09:06 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: design for parallel backup" }, { "msg_contents": "On 2020-04-15 17:57, Robert Haas wrote:\n> Over at http://postgr.es/m/CADM=JehKgobEknb+_nab9179HzGj=9EiTzWMOd2mpqr_rifm0Q@mail.gmail.com\n> there's a proposal for a parallel backup patch which works in the way\n> that I have always thought parallel backup would work: instead of\n> having a monolithic command that returns a series of tarballs, you\n> request individual files from a pool of workers. Leaving aside the\n> quality-of-implementation issues in that patch set, I'm starting to\n> think that the design is fundamentally wrong and that we should take a\n> whole different approach. The problem I see is that it makes a\n> parallel backup and a non-parallel backup work very differently, and\n> I'm starting to realize that there are good reasons why you might want\n> them to be similar.\n\nThat would clearly be a good goal. Non-parallel backup should ideally \nbe parallel backup with one worker.\n\nBut it doesn't follow that the proposed design is wrong. It might just \nbe that the design of the existing backup should change.\n\nI think making the wire format so heavily tied to the tar format is \ndubious. There is nothing particularly fabulous about the tar format. 
\nIf the server just sends a bunch of files with metadata for each file, \nthe client can assemble them in any way they want: unpacked, packed in \nseveral tarballs like now, packed all in one tarball, packed in a zip \nfile, sent to S3, etc.\n\nAnother thing I would like to see sometime is this: Pull a minimal \nbasebackup, start recovery and possibly hot standby before you have \nreceived all the files. When you need to access a file that's not there \nyet, request that as a priority from the server. If you nudge the file \norder a little with perhaps prewarm-like data, you could get a mostly \nfunctional standby without having to wait for the full basebackup to \nfinish. Pull a file on request is a requirement for this.\n\n> So, my new idea for parallel backup is that the server will return\n> tarballs, but just more of them. Right now, you get base.tar and\n> ${tablespace_oid}.tar for each tablespace. I propose that if you do a\n> parallel backup, you should get base-${N}.tar and\n> ${tablespace_oid}-${N}.tar for some or all values of N between 1 and\n> the number of workers, with the server deciding which files ought to\n> go in which tarballs.\n\nI understand the other side of this: Why not compress or encrypt the \nbackup already on the server side? Makes sense. But this way seems \nweird and complicated. If I want a backup, I want one file, not an \nunpredictable set of files. How do I even know I have them all? Do we \nneed a meta-manifest?\n\nA format such as ZIP would offer more flexibility, I think. You can \nbuild a single target file incrementally, you can compress or encrypt \neach member file separately, thus allowing some compression etc. on the \nserver. 
I'm not saying it's perfect for this, but some more thinking \nabout the archive formats would potentially give some possibilities.\n\nAll things considered, we'll probably want more options and more ways of \ndoing things.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Mon, 20 Apr 2020 22:02:32 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: design for parallel backup" }, { "msg_contents": "Hi,\n\nOn 2020-04-15 11:57:29 -0400, Robert Haas wrote:\n> Over at http://postgr.es/m/CADM=JehKgobEknb+_nab9179HzGj=9EiTzWMOd2mpqr_rifm0Q@mail.gmail.com\n> there's a proposal for a parallel backup patch which works in the way\n> that I have always thought parallel backup would work: instead of\n> having a monolithic command that returns a series of tarballs, you\n> request individual files from a pool of workers. Leaving aside the\n> quality-of-implementation issues in that patch set, I'm starting to\n> think that the design is fundamentally wrong and that we should take a\n> whole different approach. The problem I see is that it makes a\n> parallel backup and a non-parallel backup work very differently, and\n> I'm starting to realize that there are good reasons why you might want\n> them to be similar.\n>\n> Specifically, as Andres recently pointed out[1], almost anything that\n> you might want to do on the client side, you might also want to do on\n> the server side. We already have an option to let the client compress\n> each tarball, but you might also want the server to, say, compress\n> each tarball[2]. Similarly, you might want either the client or the\n> server to be able to encrypt each tarball, or compress but with a\n> different compression algorithm than gzip. 
If, as is presently the\n> case, the server is always returning a set of tarballs, it's pretty\n> easy to see how to make this work in the same way on either the client\n> or the server, but if the server returns a set of tarballs in\n> non-parallel backup cases, and a set of individual files in parallel backup\n> cases, it's a lot harder to see how any sort of server-side\n> processing should work, or how the same mechanism could be used on\n> either the client side or the server side.\n>\n> So, my new idea for parallel backup is that the server will return\n> tarballs, but just more of them. Right now, you get base.tar and\n> ${tablespace_oid}.tar for each tablespace. I propose that if you do a\n> parallel backup, you should get base-${N}.tar and\n> ${tablespace_oid}-${N}.tar for some or all values of N between 1 and\n> the number of workers, with the server deciding which files ought to\n> go in which tarballs. This is more or less the naming convention that\n> BART uses for its parallel backup implementation, which, incidentally,\n> I did not write. I don't really care if we pick something else, but it\n> seems like a sensible choice. The reason why I say \"some or all\" is\n> that some workers might not get any of the data for a given\n> tablespace. In fact, it's probably desirable to have different workers\n> work on different tablespaces as far as possible, to maximize parallel\n> I/O, but it's quite likely that you will have more workers than\n> tablespaces. So you might end up, with pg_basebackup -j4, having the\n> server send you base-1.tar and base-2.tar and base-4.tar, but not\n> base-3.tar, because worker 3 spent all of its time on user-defined\n> tablespaces, or was just out to lunch.\n\nOne question I have not really seen answered well:\n\nWhy do we want parallelism here. Or to be more precise: What do we hope\nto accelerate by making what part of creating a base backup\nparallel. 
There's several potential bottlenecks, and I think it's\nimportant to know the design priorities to evaluate a potential design.\n\nBottlenecks (not ordered by importance):\n- compression performance (likely best solved by multiple compression\n threads and a better compression algorithm)\n- unencrypted network performance (I'd like to see benchmarks showing in\n which cases multiple TCP streams help / at which bandwidth it starts\n to help)\n- encrypted network performance, i.e. SSL overhead (not sure this is an\n important problem on modern hardware, given hardware accelerated AES)\n- checksumming overhead (a serious problem for cryptographic checksums,\n but presumably not for others)\n- file IO (presumably multiple facets here, number of concurrent\n in-flight IOs, kernel page cache overhead when reading TBs of data)\n\nI'm not really convinced that design addressing the more crucial\nbottlenecks really needs multiple fe/be connections. But that seems to\nhave been the focus of the discussion so far.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 20 Apr 2020 13:19:22 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: design for parallel backup" }, { "msg_contents": "Thanks for your thoughts.\n\nOn Mon, Apr 20, 2020 at 4:02 PM Peter Eisentraut\n<peter.eisentraut@2ndquadrant.com> wrote:\n> That would clearly be a good goal. Non-parallel backup should ideally\n> be parallel backup with one worker.\n\nRight.\n\n
There is nothing particularly fabulous about the tar format.\n> If the server just sends a bunch of files with metadata for each file,\n> the client can assemble them in any way they want: unpacked, packed in\n> several tarball like now, packed all in one tarball, packed in a zip\n> file, sent to S3, etc.\n\nYeah, that's true, and I agree that there's something a little\nunsatisfying and dubious about the current approach. However, I am not\nsure that there is sufficient reason to change it to something else,\neither. After all, what purpose would such a change serve? The client\ncan already do any of the things you mention here, provided that it\ncan interpret the data sent by the server, and pg_basebackup already\nhas code to do exactly this. Right now, we have pretty good\npg_basebackup compatibility across server versions, and if we change\nthe format, then we won't, unless we make both the client and the\nserver understand both formats. I'm not completely averse to such a\nchange if it has sufficient benefits to make it worthwhile, but it's\nnot clear to me that it does.\n\n> Another thing I would like to see sometime is this: Pull a minimal\n> basebackup, start recovery and possibly hot standby before you have\n> received all the files. When you need to access a file that's not there\n> yet, request that as a priority from the server. If you nudge the file\n> order a little with perhaps prewarm-like data, you could get a mostly\n> functional standby without having to wait for the full basebackup to\n> finish. Pull a file on request is a requirement for this.\n\nTrue, but that can always be implemented as a separate feature. 
I\nwon't be sad if that feature happens to fall out of work in this area,\nbut I don't think the possibility that we'll some day have such\nadvanced wizardry should bias the design of this feature very much.\nOne pretty major problem with this is that you can't open for\nconnections until you've reached a consistent state, and you can't say\nthat you're in a consistent state until you've replayed all the WAL\ngenerated during the backup, and you can't say that you're at the end\nof the backup until you've copied all the files. So, without some\nclever idea, this would only allow you to begin replay sooner; it\nwould not allow you to accept connections sooner. I suspect that makes\nit significantly less appealing.\n\n> > So, my new idea for parallel backup is that the server will return\n> > tarballs, but just more of them. Right now, you get base.tar and\n> > ${tablespace_oid}.tar for each tablespace. I propose that if you do a\n> > parallel backup, you should get base-${N}.tar and\n> > ${tablespace_oid}-${N}.tar for some or all values of N between 1 and\n> > the number of workers, with the server deciding which files ought to\n> > go in which tarballs.\n>\n> I understand the other side of this: Why not compress or encrypt the\n> backup already on the server side? Makes sense. But this way seems\n> weird and complicated. If I want a backup, I want one file, not an\n> unpredictable set of files. How do I even know I have them all? Do we\n> need a meta-manifest?\n\nYes, that's a problem, but...\n\n> A format such as ZIP would offer more flexibility, I think. You can\n> build a single target file incrementally, you can compress or encrypt\n> each member file separately, thus allowing some compression etc. on the\n> server. I'm not saying it's perfect for this, but some more thinking\n> about the archive formats would potentially give some possibilities.\n\n...I don't think this really solves anything. 
I expect you would have\nto write the file more or less sequentially, and I think that Amdahl's\nlaw will not be kind to us.\n\n> All things considered, we'll probably want more options and more ways of\n> doing things.\n\nYes. That's why I'm trying to figure out how to create a flexible framework.\n\nThanks,\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Mon, 20 Apr 2020 16:21:25 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: design for parallel backup" }, { "msg_contents": "On Mon, Apr 20, 2020 at 4:19 PM Andres Freund <andres@anarazel.de> wrote:\n> Why do we want parallelism here. Or to be more precise: What do we hope\n> to accelerate by making what part of creating a base backup\n> parallel. There's several potential bottlenecks, and I think it's\n> important to know the design priorities to evaluate a potential design.\n>\n> Bottlenecks (not ordered by importance):\n> - compression performance (likely best solved by multiple compression\n> threads and a better compression algorithm)\n> - unencrypted network performance (I'd like to see benchmarks showing in\n> which cases multiple TCP streams help / at which bandwidth it starts\n> to help)\n> - encrypted network performance, i.e. SSL overhead (not sure this is an\n> important problem on modern hardware, given hardware accelerated AES)\n> - checksumming overhead (a serious problem for cryptographic checksums,\n> but presumably not for others)\n> - file IO (presumably multiple facets here, number of concurrent\n> in-flight IOs, kernel page cache overhead when reading TBs of data)\n>\n> I'm not really convinced that design addressing the more crucial\n> bottlenecks really needs multiple fe/be connections. But that seems to\n> have been the focus of the discussion so far.\n\nI haven't evaluated this. 
Both BART and pgBackRest offer parallel\nbackup options, and I'm pretty sure both were performance tested and\nfound to be very significantly faster, but I didn't write the code for\neither, nor have I evaluated either to figure out exactly why it was\nfaster.\n\nMy suspicion is that it has mostly to do with adequately utilizing the\nhardware resources on the server side. If you are network-constrained,\nadding more connections won't help, unless there's something shaping\nthe traffic which can be gamed by having multiple connections.\nHowever, as things stand today, at any given point in time the base\nbackup code on the server will EITHER be attempting a single\nfilesystem I/O or a single network I/O, and likewise for the client.\nIf a backup client - either current or hypothetical - is compressing\nand encrypting, then it doesn't have either a filesystem I/O or a\nnetwork I/O in progress while it's doing so. You take not only the hit\nof the time required for compression and/or encryption, but also use\nthat much less of the available network and/or I/O capacity.\n\nWhile I agree that some of these problems could likely be addressed in\nother ways, parallelism seems to offer an approach that could solve\nmultiple issues at the same time. If you want to address it without\nthat, you need asynchronous filesystem I/O and asynchronous network\nI/O and both of those on both the client and server side, plus\nmultithreaded compression and multithreaded encryption and maybe some\nother things. 
That sounds pretty hairy and hard to get right.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Mon, 20 Apr 2020 16:36:16 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: design for parallel backup" }, { "msg_contents": "Hi,\n\nOn 2020-04-20 16:36:16 -0400, Robert Haas wrote:\n> My suspicion is that it has mostly to do with adequately utilizing the\n> hardware resources on the server side. If you are network-constrained,\n> adding more connections won't help, unless there's something shaping\n> the traffic which can be gamed by having multiple connections.\n> However, as things stand today, at any given point in time the base\n> backup code on the server will EITHER be attempting a single\n> filesystem I/O or a single network I/O, and likewise for the client.\n\nWell, kinda, but not really. Both file reads (server)/writes(client) and\nnetwork send(server)/recv(client) are buffered by the OS, and the file\nIO is entirely sequential.\n\nThat's not true for checksum computations / compressions to the same\ndegree. They're largely bottlenecked in userland, without the kernel\ndoing as much async work.\n\n\n> If a backup client - either current or hypothetical - is compressing\n> and encrypting, then it doesn't have either a filesystem I/O or a\n> network I/O in progress while it's doing so. You take not only the hit\n> of the time required for compression and/or encryption, but also use\n> that much less of the available network and/or I/O capacity.\n\nI don't think it's really the time for network/file I/O that's the\nissue. Sure memcpy()'ing from the kernel takes time, but compared to\nencryption/compression it's not that much. 
Especially for compression,\nit's not really lack of cycles for networking that prevents a higher\nthroughput, it's that after buffering a few MB there's just no point\nbuffering more, given compression will plod along with 20-100MB/s.\n\n\n> While I agree that some of these problems could likely be addressed in\n> other ways, parallelism seems to offer an approach that could solve\n> multiple issues at the same time. If you want to address it without\n> that, you need asynchronous filesystem I/O and asynchronous network\n> I/O and both of those on both the client and server side, plus\n> multithreaded compression and multithreaded encryption and maybe some\n> other things. That sounds pretty hairy and hard to get right.\n\nI'm not really convinced. You're complicating the wire protocol by\nhaving multiple tar files with overlapping contents. With the\nconsequence that clients need additional logic to deal with that. We'll\nnot get one manifest, but multiple ones, etc.\n\nWe already do network IO non-blocking, and leaving the copying to the\nkernel, the kernel does the actual network work asynchronously. 
Except\nfor file boundaries the kernel does asynchronous read IO for us (but we\nshould probably hint it to do that even at the start of a new file).\n\nI think we're quite a bit away from where we need to worry about making\nencryption multi-threaded:\nandres@awork3:~/src/postgresql$ openssl speed -evp aes-256-ctr\nDoing aes-256-ctr for 3s on 16 size blocks: 81878709 aes-256-ctr's in 3.00s\nDoing aes-256-ctr for 3s on 64 size blocks: 71062203 aes-256-ctr's in 3.00s\nDoing aes-256-ctr for 3s on 256 size blocks: 31738391 aes-256-ctr's in 3.00s\nDoing aes-256-ctr for 3s on 1024 size blocks: 10043519 aes-256-ctr's in 3.00s\nDoing aes-256-ctr for 3s on 8192 size blocks: 1346933 aes-256-ctr's in 3.00s\nDoing aes-256-ctr for 3s on 16384 size blocks: 674680 aes-256-ctr's in 3.00s\nOpenSSL 1.1.1f 31 Mar 2020\nbuilt on: Tue Mar 31 21:59:59 2020 UTC\noptions:bn(64,64) rc4(16x,int) des(int) aes(partial) blowfish(ptr) \ncompiler: gcc -fPIC -pthread -m64 -Wa,--noexecstack -Wall -Wa,--noexecstack -g -O2 -fdebug-prefix-map=/build/openssl-hsg853/openssl-1.1.1f=. -fstack-protector-strong -Wformat -Werror=format-security -DOPENSSL_USE_NODELETE -DL_ENDIAN -DOPENSSL_PIC -DOPENSSL_CPUID_OBJ -DOPENSSL_IA32_SSE2 -DOPENSSL_BN_ASM_MONT -DOPENSSL_BN_ASM_MONT5 -DOPENSSL_BN_ASM_GF2m -DSHA1_ASM -DSHA256_ASM -DSHA512_ASM -DKECCAK1600_ASM -DRC4_ASM -DMD5_ASM -DAESNI_ASM -DVPAES_ASM -DGHASH_ASM -DECP_NISTZ256_ASM -DX25519_ASM -DPOLY1305_ASM -DNDEBUG -Wdate-time -D_FORTIFY_SOURCE=2\nThe 'numbers' are in 1000s of bytes per second processed.\ntype 16 bytes 64 bytes 256 bytes 1024 bytes 8192 bytes 16384 bytes\naes-256-ctr 436686.45k 1515993.66k 2708342.70k 3428187.82k 3678025.05k 3684652.37k\n\n\nSo that really just leaves compression (and perhaps cryptographic\nchecksumming). 
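(As an illustration of thread-level compression parallelism, here is a minimal Python sketch, not anything pg_basebackup actually does: compress fixed-size blocks on a thread pool and concatenate the results. Concatenated gzip members form a valid multi-member gzip stream, so an ordinary single-threaded reader can still decompress the output.)

```python
import gzip
from concurrent.futures import ThreadPoolExecutor

BLOCK = 256 * 1024  # illustrative block size

def compress_block(block: bytes) -> bytes:
    return gzip.compress(block)

def parallel_compress(data: bytes, workers: int = 4) -> bytes:
    # zlib releases the GIL while compressing, so threads can overlap the work.
    blocks = [data[i:i + BLOCK] for i in range(0, len(data), BLOCK)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return b"".join(pool.map(compress_block, blocks))

data = b"example relation data " * 100_000
compressed = parallel_compress(data)

# gzip.decompress handles multi-member streams, so the result reads back
# exactly like output from a single-threaded compressor:
assert gzip.decompress(compressed) == data
```

The cost is slightly worse compression at block boundaries; the benefit is that the wire format and the client stay unchanged.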
Given that we can provide nearly all of the benefits of\nmulti-stream parallelism in a compatible way by using\nparallelism/threads at that level, I just have a hard time believing the\ncomplexity of doing those tasks in parallel is bigger than multi-stream\nparallelism. And I'd be fairly unsurprised if you'd end up with a lot\nmore \"bubbles\" in the pipeline when using multi-stream parallelism.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 20 Apr 2020 14:10:18 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: design for parallel backup" }, { "msg_contents": "On Tue, Apr 21, 2020 at 2:40 AM Andres Freund <andres@anarazel.de> wrote:\n>\n> On 2020-04-20 16:36:16 -0400, Robert Haas wrote:\n>\n> > If a backup client - either current or hypothetical - is compressing\n> > and encrypting, then it doesn't have either a filesystem I/O or a\n> > network I/O in progress while it's doing so. You take not only the hit\n> > of the time required for compression and/or encryption, but also use\n> > that much less of the available network and/or I/O capacity.\n>\n> I don't think it's really the time for network/file I/O that's the\n> issue. Sure memcpy()'ing from the kernel takes time, but compared to\n> encryption/compression it's not that much. Especially for compression,\n> it's not really lack of cycles for networking that prevent a higher\n> throughput, it's that after buffering a few MB there's just no point\n> buffering more, given compression will plod along with 20-100MB/s.\n>\n\nIt is quite likely that compression can benefit more from parallelism\nas compared to the network I/O as that is mostly a CPU intensive\noperation but I am not sure if we can just ignore the benefit of\nutilizing the network bandwidth. 
In our case, after copying from the\nnetwork we do write that data to disk, so during filesystem I/O the\nnetwork can be used if there is some other parallel worker processing\nother parts of data.\n\nAlso, there may be some users who don't want their data to be\ncompressed due to some reason like the overhead of decompression is so\nhigh that restore takes more time and they are not comfortable with\nthat as for them faster restore is much more critical than compressed\nor fast backup. So, for such things, the parallelism during backup\nas being discussed in this thread will still be helpful.\n\nOTOH, I think without some measurements it is difficult to say that we\nhave significant benefit by parallelizing the backup without compression.\nI have scanned the other thread [1] where the patch for parallel\nbackup was discussed and didn't find any performance numbers, so\nprobably having some performance data with that patch might give us a\nbetter understanding of introducing parallelism in the backup.\n\n[1] - https://www.postgresql.org/message-id/CADM=JehKgobEknb+_nab9179HzGj=9EiTzWMOd2mpqr_rifm0Q@mail.gmail.com\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 21 Apr 2020 10:20:01 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: design for parallel backup" }, { "msg_contents": "Hi,\n\nOn 2020-04-21 10:20:01 +0530, Amit Kapila wrote:\n> It is quite likely that compression can benefit more from parallelism\n> as compared to the network I/O as that is mostly a CPU intensive\n> operation but I am not sure if we can just ignore the benefit of\n> utilizing the network bandwidth. 
In our case, after copying from the\n> network we do write that data to disk, so during filesystem I/O the\n> network can be used if there is some other parallel worker processing\n> other parts of data.\n\nWell, as I said, network and FS IO as done by server / pg_basebackup are\nboth fully buffered by the OS. Unless the OS throttles the userland\nprocess, a large chunk of the work will be done by the kernel, in\nseparate kernel threads.\n\nMy workstation and my laptop can, in a single thread each, get close to\n20GBit/s of network IO (bidirectional 10GBit, I don't have faster - it's\na thunderbolt 10gbe card) and iperf3 is at 55% CPU while doing so. Just\nconnecting locally it's 45Gbit/s. Or over 8GByte/s of buffered\nfilesystem IO. And it doesn't even have that high per-core clock speed.\n\nI just don't see this being the bottleneck for now.\n\n\n> Also, there may be some users who don't want their data to be\n> compressed due to some reason like the overhead of decompression is so\n> high that restore takes more time and they are not comfortable with\n> that as for them faster restore is much more critical than a compressed\n> or fast backup. So, for such things, the parallelism during backup\n> as being discussed in this thread will still be helpful.\n\nI am not even convinced it'll be helpful in a large fraction of\ncases. The added overhead of more connections / processes isn't free.\n\nI believe there are some cases where it'd help. E.g. if there are\nmultiple tablespaces on independent storage, parallelism as described\nhere could lead to a significantly better utilization of the different\ntablespaces. 
But that'd require apportioning work between processes\nappropriately.\n\n\n> OTOH, I think without some measurements it is difficult to say that we\n> have a significant benefit from parallelizing the backup without\n> compression. I have scanned the other thread [1] where the patch for\n> parallel backup was discussed and didn't find any performance numbers,\n> so probably having some performance data with that patch might give us\n> a better understanding of introducing parallelism in the backup.\n\nAgreed, we need some numbers.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 20 Apr 2020 22:31:49 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: design for parallel backup" }, { "msg_contents": "Hi,\n\nOn 2020-04-20 22:31:49 -0700, Andres Freund wrote:\n> On 2020-04-21 10:20:01 +0530, Amit Kapila wrote:\n> > It is quite likely that compression can benefit more from parallelism\n> > as compared to the network I/O as that is mostly a CPU intensive\n> > operation but I am not sure if we can just ignore the benefit of\n> > utilizing the network bandwidth. In our case, after copying from the\n> > network we do write that data to disk, so during filesystem I/O the\n> > network can be used if there is some other parallel worker processing\n> > other parts of data.\n>\n> Well, as I said, network and FS IO as done by server / pg_basebackup are\n> both fully buffered by the OS. Unless the OS throttles the userland\n> process, a large chunk of the work will be done by the kernel, in\n> separate kernel threads.\n>\n> My workstation and my laptop can, in a single thread each, get close to\n> 20GBit/s of network IO (bidirectional 10GBit, I don't have faster - it's\n> a thunderbolt 10gbe card) and iperf3 is at 55% CPU while doing so. Just\n> connecting locally it's 45Gbit/s. Or over 8GByte/s of buffered\n> filesystem IO. 
And it doesn't even have that high per-core clock speed.\n>\n> I just don't see this being the bottleneck for now.\n\nFWIW, I just tested pg_basebackup locally.\n\nWithout compression and a stock postgres I get:\nunix tcp tcp+ssl:\n1.74GiB/s 1.02GiB/s 699MiB/s\n\nThat turns out to be bottlenecked by the backup manifest generation.\n\nWithout compression, a stock postgres, and --no-manifest I get:\nunix tcp tcp+ssl:\n2.51GiB/s 1.63GiB/s 1.00GiB/s\n\nI.e. all of them are already above 10Gbit/s network.\n\nLooking at a profile it's clear that our small output buffer is the\nbottleneck:\n64kB Buffers + --no-manifest:\nunix tcp tcp+ssl:\n2.99GiB/s 2.56GiB/s 1.18GiB/s\n\nAt this point the backend is not actually the bottleneck anymore,\ninstead it's pg_basebackup. Which is in part due to the small buffer\nused for output data (i.e. libc's FILE buffering), and in part because\nwe spend too much time memmove()ing data, because of the \"left-justify\"\nlogic in pqCheckInBufferSpace().\n\n\n- Andres\n\n\n", "msg_date": "Mon, 20 Apr 2020 23:44:20 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: design for parallel backup" }, { "msg_contents": "On Tue, Apr 21, 2020 at 2:44 AM Andres Freund <andres@anarazel.de> wrote:\n> FWIW, I just tested pg_basebackup locally.\n>\n> Without compression and a stock postgres I get:\n> unix tcp tcp+ssl:\n> 1.74GiB/s 1.02GiB/s 699MiB/s\n>\n> That turns out to be bottlenecked by the backup manifest generation.\n\nWhoa. That's unexpected, at least for me. Is that because of the\nCRC-32C overhead, or something else? What do you get with\n--manifest-checksums=none?\n\n> Without compression, a stock postgres, and --no-manifest I get:\n> unix tcp tcp+ssl:\n> 2.51GiB/s 1.63GiB/s 1.00GiB/s\n>\n> I.e. 
all of them area already above 10Gbit/s network.\n>\n> Looking at a profile it's clear that our small output buffer is the\n> bottleneck:\n> 64kB Buffers + --no-manifest:\n> unix tcp tcp+ssl:\n> 2.99GiB/s 2.56GiB/s 1.18GiB/s\n>\n> At this point the backend is not actually the bottleneck anymore,\n> instead it's pg_basebackup. Which is in part due to the small buffer\n> used for output data (i.e. libc's FILE buffering), and in part because\n> we spend too much time memmove()ing data, because of the \"left-justify\"\n> logic in pqCheckInBufferSpace().\n\nHmm.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Tue, 21 Apr 2020 07:18:20 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: design for parallel backup" }, { "msg_contents": "Hi,\n\nOn 2020-04-21 07:18:20 -0400, Robert Haas wrote:\n> On Tue, Apr 21, 2020 at 2:44 AM Andres Freund <andres@anarazel.de> wrote:\n> > FWIW, I just tested pg_basebackup locally.\n> >\n> > Without compression and a stock postgres I get:\n> > unix tcp tcp+ssl:\n> > 1.74GiB/s 1.02GiB/s 699MiB/s\n> >\n> > That turns out to be bottlenecked by the backup manifest generation.\n> \n> Whoa. That's unexpected, at least for me. Is that because of the\n> CRC-32C overhead, or something else? What do you get with\n> --manifest-checksums=none?\n\nIt's all CRC overhead. I don't see a difference with\n--manifest-checksums=none anymore. We really should look for a better\n\"fast\" checksum.\n\nRegards,\n\nAndres\n\n\n", "msg_date": "Tue, 21 Apr 2020 08:36:02 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: design for parallel backup" }, { "msg_contents": "On Tue, Apr 21, 2020 at 11:36 AM Andres Freund <andres@anarazel.de> wrote:\n> It's all CRC overhead. I don't see a difference with\n> --manifest-checksums=none anymore. 
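The size of that gap is easy to see in miniature with a rough sketch (illustrative only: Python's zlib.crc32 is plain CRC-32 rather than the CRC-32C the manifest uses, and neither path matches the server's actual implementation):

```python
import hashlib
import time
import zlib

def mb_per_s(fn, data, reps=20):
    # time `reps` passes over `data` and return throughput in MB/s
    start = time.perf_counter()
    for _ in range(reps):
        fn(data)
    return len(data) * reps / (time.perf_counter() - start) / 1e6

payload = bytes(8 * 1024 * 1024)  # 8 MiB of zeros, roughly like fresh heap pages

crc_mbs = mb_per_s(zlib.crc32, payload)
sha_mbs = mb_per_s(lambda d: hashlib.sha256(d).digest(), payload)

print(f"crc32:  {crc_mbs:8.0f} MB/s")
print(f"sha256: {sha_mbs:8.0f} MB/s")
```

The absolute numbers mean nothing outside the machine they run on; the point is only that per-byte checksum cost is in the same ballpark as the copy itself, so it shows up immediately at multi-GB/s backup rates.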
We really should look for a better\n> \"fast\" checksum.\n\nHmm, OK. I'm wondering exactly what you tested here. Was this over\nyour 20GiB/s connection between laptop and workstation, or was this\nlocal TCP? Also, was the database being read from persistent storage,\nor was it RAM-cached? How do you expect to take advantage of I/O\nparallelism without multiple processes/connections?\n\nMeanwhile, I did some local-only testing on my new 16GB MacBook Pro\nlaptop with all combinations of:\n\n- UNIX socket, local TCP socket, local TCP socket with SSL\n- Plain format, tar format, tar format with gzip\n- No manifest (\"omit\"), manifest with no checksums, manifest with\nCRC-32C checksums, manifest with SHA256 checksums.\n\nThe database is a fresh scale-factor 1000 pgbench database. No\nconcurrent database load. Observations:\n\n- UNIX socket was slower than a local TCP socket, and about the same\nspeed as a TCP socket with SSL.\n- CRC-32C is about 10% slower than no manifest and/or no checksums in\nthe manifest. SHA256 is 1.5-2x slower, but less when compression is\nalso used (see below).\n- Plain format is a little slower than tar format; tar with gzip is\ntypically >~5x slower, but less when the checksum algorithm is SHA256\n(again, see below).\n- SHA256 + tar format with gzip is the slowest combination, but it's\n\"only\" about 15% slower than no manifest, and about 3.3x slower than\nno compression, presumably because the checksumming is slowing down\nthe server and the compression is slowing down the client.\n- Fastest speeds I see in any test are ~650MB/s, and slowest are\n~65MB/s, obviously benefiting greatly from the fact that this is a\nlocal-only test.\n- The time for a raw cp -R of the backup directory is about 10s, and\nthe fastest time to take a backup (tcp+tar+m:omit) is about 22s.\n- In all cases I've checked so far both pg_basebackup and the server\nbackend are pegged at 98-100% CPU usage. 
I haven't looked into where\nthat time is going yet.\n\nFull results and test script attached. I and/or my colleagues will try\nto test out some other environments, but I'm not sure we have easy\naccess to anything as high-powered as a 20GiB/s interconnect.\n\nIt seems to me that the interesting cases may involve having lots of\navailable CPUs and lots of disk spindles, but a comparatively slow\npipe between the machines. I mean, if it takes 36 hours to read the\ndata from disk, you can't realistically expect to complete a full\nbackup in less than 36 hours. Incremental backup might help, but\notherwise you're just dead. On the other hand, if you can read the\ndata from the disk in 2 hours but it takes 36 hours to complete a\nbackup, it seems like you have more justification for thinking that\nthe backup software could perhaps do better. In such cases efficient\nserver-side compression may help a lot, but even then, I wonder\nwhether you can read the data at maximum speed with only a single\nprocess? I tend to doubt it, but I guess you only have to be fast\nenough to saturate the network. Hmm.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company", "msg_date": "Tue, 21 Apr 2020 14:01:28 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: design for parallel backup" }, { "msg_contents": "Hi,\n\nOn 2020-04-21 14:01:28 -0400, Robert Haas wrote:\n> On Tue, Apr 21, 2020 at 11:36 AM Andres Freund <andres@anarazel.de> wrote:\n> > It's all CRC overhead. I don't see a difference with\n> > --manifest-checksums=none anymore. We really should look for a better\n> > \"fast\" checksum.\n> \n> Hmm, OK. I'm wondering exactly what you tested here. Was this over\n> your 20GiB/s connection between laptop and workstation, or was this\n> local TCP?\n\nIt was local TCP. 
The speeds I can reach are faster than the 10GiB/s\n(unidirectional) I can do between the laptop & workstation, so testing\nit over \"actual\" network isn't informative - I basically can reach line\nspeed between them with any method.\n\n\n> Also, was the database being read from persistent storage, or was it\n> RAM-cached?\n\nIt was in kernel buffer cache. But I can reach 100% utilization of\nstorage too (which is slightly slower than what I can do over unix\nsocket).\n\npg_basebackup --manifest-checksums=none -h /tmp/ -D- -Ft -cfast -Xnone |pv -B16M -r -a > /dev/null\n2.59GiB/s\nfind /srv/dev/pgdev-dev/base/ -type f -exec dd if={} bs=32k status=none \\; |pv -B16M -r -a > /dev/null\n2.53GiB/s\nfind /srv/dev/pgdev-dev/base/ -type f -exec cat {} + |pv -B16M -r -a > /dev/null\n2.42GiB/s\n\nI tested this with a -s 5000 DB, FWIW.\n\n\n> How do you expect to take advantage of I/O parallelism without\n> multiple processes/connections?\n\nWhich kind of I/O parallelism are you thinking of? Independent\ntablespaces? Or devices that can handle multiple in-flight IOs? WRT the\nlatter, at least linux will keep many IOs in-flight for sequential\nbuffered reads.\n\n\n> - UNIX socket was slower than a local TCP socket, and about the same\n> speed as a TCP socket with SSL.\n\nHm. Interesting. Wonder if that a question of the unix socket buffer\nsize?\n\n> - CRC-32C is about 10% slower than no manifest and/or no checksums in\n> the manifest. SHA256 is 1.5-2x slower, but less when compression is\n> also used (see below).\n> - Plain format is a little slower than tar format; tar with gzip is\n> typically >~5x slower, but less when the checksum algorithm is SHA256\n> (again, see below).\n\nI see about 250MB/s with -Z1 (from the source side). If I hack\npg_basebackup.c to specify a deflate level of 0 to gzsetparams, which\nzlib docs says should disable compression, I get up to 700MB/s. 
Which\nstill is a factor of ~3.7 to uncompressed.\n\nThis seems largely due to zlib's crc32 computation not being hardware\naccelerated:\n- 99.75% 0.05% pg_basebackup pg_basebackup [.] BaseBackup\n - 99.95% BaseBackup\n - 81.60% writeTarData\n - gzwrite\n - gz_write\n - gz_comp.constprop.0\n - 85.11% deflate\n - 97.66% deflate_stored\n + 87.45% crc32_z\n + 9.53% __memmove_avx_unaligned_erms\n + 3.02% _tr_stored_block\n 2.27% __memmove_avx_unaligned_erms\n + 14.86% __libc_write\n + 18.40% pqGetCopyData3\n\n\n\n> It seems to me that the interesting cases may involve having lots of\n> available CPUs and lots of disk spindles, but a comparatively slow\n> pipe between the machines.\n\nHm, I'm not sure I am following. If network is the bottleneck, we'd\nimmediately fill the buffers, and that'd be that?\n\nISTM all of this is only really relevant if either pg_basebackup or\nwalsender is the bottleneck?\n\n\n> I mean, if it takes 36 hours to read the\n> data from disk, you can't realistically expect to complete a full\n> backup in less than 36 hours. Incremental backup might help, but\n> otherwise you're just dead. On the other hand, if you can read the\n> data from the disk in 2 hours but it takes 36 hours to complete a\n> backup, it seems like you have more justification for thinking that\n> the backup software could perhaps do better. In such cases efficient\n> server-side compression may help a lot, but even then, I wonder\n> whether you can you read the data at maximum speed with only a single\n> process? I tend to doubt it, but I guess you only have to be fast\n> enough to saturate the network. Hmm.\n\nWell, I can do >8GByte/s of buffered reads in a single process\n(obviously cached, because I don't have storage quite that fast -\nuncached I can read at nearly 3GByte/s, the disk's speed). 
So sure,\nthere's a limit to what a single process can do, but I think we're\nfairly far away from it.\n\nI think it's fairly obvious that we need faster compression - and that\nwhile we clearly can win a lot by just using a faster\nalgorithm/implementation than standard zlib, we'll likely also need\nparallelism in some form. I'm doubtful that using multiple connections\nand multiple backends is the best way to achieve that, but it'd be a\nway.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Tue, 21 Apr 2020 13:14:35 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: design for parallel backup" }, { "msg_contents": "On Tue, Apr 21, 2020 at 4:14 PM Andres Freund <andres@anarazel.de> wrote:\n> It was local TCP. The speeds I can reach are faster than the 10GiB/s\n> (unidirectional) I can do between the laptop & workstation, so testing\n> it over \"actual\" network isn't informative - I basically can reach line\n> speed between them with any method.\n\nIs that really a conclusive test, though? In the case of either local\nTCP or a fast local interconnect, you'll have negligible latency. It\nseems at least possible that saturating the available bandwidth is\nharder on a higher-latency connection. Cross-region data center\nconnections figure to have way higher latency than a local wired\nnetwork, let alone the loopback interface.\n\n> It was in kernel buffer cache. 
But I can reach 100% utilization of\n> storage too (which is slightly slower than what I can do over unix\n> socket).\n>\n> pg_basebackup --manifest-checksums=none -h /tmp/ -D- -Ft -cfast -Xnone |pv -B16M -r -a > /dev/null\n> 2.59GiB/s\n> find /srv/dev/pgdev-dev/base/ -type f -exec dd if={} bs=32k status=none \\; |pv -B16M -r -a > /dev/null\n> 2.53GiB/s\n> find /srv/dev/pgdev-dev/base/ -type f -exec cat {} + |pv -B16M -r -a > /dev/null\n> 2.42GiB/s\n>\n> I tested this with a -s 5000 DB, FWIW.\n\nBut that's not a real test either, because you're not writing the data\nanywhere. It's going to be a whole lot easier to saturate the read\nside if the write side is always zero latency.\n\n> > How do you expect to take advantage of I/O parallelism without\n> > multiple processes/connections?\n>\n> Which kind of I/O parallelism are you thinking of? Independent\n> tablespaces? Or devices that can handle multiple in-flight IOs? WRT the\n> latter, at least linux will keep many IOs in-flight for sequential\n> buffered reads.\n\nBoth. I know that the kernel will prefetch for sequential reads, but\nit won't know what file you're going to access next, so I think you'll\ntend to stall when you reach the end of each file. It also seems\npossible that on a large disk array, you could read N files at a time\nwith greater aggregate bandwidth than you can read a single file.\n\n> > It seems to me that the interesting cases may involve having lots of\n> > available CPUs and lots of disk spindles, but a comparatively slow\n> > pipe between the machines.\n>\n> Hm, I'm not sure I am following. If network is the bottleneck, we'd\n> immediately fill the buffers, and that'd be that?\n>\n> ISTM all of this is only really relevant if either pg_basebackup or\n> walsender is the bottleneck?\n\nI agree that if neither pg_basebackup nor walsender is the bottleneck,\nparallelism is unlikely to be very effective. 
I have realized as a\nresult of your comments that I actually don't care intrinsically about\nparallel backup; what I actually care about is making backups very,\nvery fast. I suspect that parallelism is a useful means to that end,\nbut I interpret your comments as questioning that, and specifically\ndrawing attention to the question of where the bottlenecks might be.\nSo I'm trying to think about that.\n\n> I think it's fairly obvious that we need faster compression - and that\n> while we clearly can win a lot by just using a faster\n> algorithm/implementation than standard zlib, we'll likely also need\n> parallelism in some form. I'm doubtful that using multiple connections\n> and multiple backends is the best way to achieve that, but it'd be a\n> way.\n\nI think it has a good chance of being pretty effective, but it's\ncertainly worth casting about for other possibilities that might\ndeliver more benefit or be less work. In terms of better compression,\nI did a little looking around and it seems like LZ4 is generally\nagreed to be a lot faster than gzip, and also significantly faster\nthan most other things that one might choose to use. On the other\nhand, the compression ratio may not be as good; e.g.\nhttps://facebook.github.io/zstd/ cites a 2.1 ratio (on some data set)\nfor lz4 and a 2.9 ratio for zstd. While the compression and\ndecompression speeds are slower, they are close enough that you might\nbe able to make up the difference by using 2x the cores for\ncompression and 3x for decompression. I don't know if that sort of\nthing is worth considering. If your limitation is the line speed, and\nyou have CPU cores to burn, a significantly higher compression\nratio means significantly faster backups. 
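As a back-of-the-envelope illustration of that trade (every number below is invented for the sketch, not measured; only the 2.1 and 2.9 ratios come from the page cited above):

```python
def backup_wall_seconds(size_mb, ratio, compress_mbs, cores, line_mbs):
    # compression and transfer overlap, so the slower stage sets the pace;
    # only compressed bytes cross the wire
    compress = size_mb / (compress_mbs * cores)
    transfer = (size_mb / ratio) / line_mbs
    return max(compress, transfer)

# hypothetical ~1TB cluster over a ~10Gbit (~1200MB/s) link, 4 cores
fast_codec = backup_wall_seconds(1_000_000, ratio=2.1,
                                 compress_mbs=700, cores=4, line_mbs=1200)
dense_codec = backup_wall_seconds(1_000_000, ratio=2.9,
                                  compress_mbs=400, cores=4, line_mbs=1200)

print(f"fast, lighter compression:  {fast_codec:6.0f} s")   # transfer-bound
print(f"slower, denser compression: {dense_codec:6.0f} s")  # CPU-bound
```

With these made-up inputs the denser codec loses, because four cores can't keep the pipe full; drop line_mbs to ~300 and the ranking flips, which is exactly the line-speed-vs-cores-to-burn question.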
On the other hand, if you're\nbacking up over the LAN and the machine is heavily taxed, that's\nprobably not an appealing trade.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Tue, 21 Apr 2020 17:09:50 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: design for parallel backup" }, { "msg_contents": "Hi,\n\nOn 2020-04-21 17:09:50 -0400, Robert Haas wrote:\n> On Tue, Apr 21, 2020 at 4:14 PM Andres Freund <andres@anarazel.de> wrote:\n> > It was local TCP. The speeds I can reach are faster than the 10GiB/s\n> > (unidirectional) I can do between the laptop & workstation, so testing\n> > it over \"actual\" network isn't informative - I basically can reach line\n> > speed between them with any method.\n>\n> Is that really a conclusive test, though? In the case of either local\n> TCP or a fast local interconnect, you'll have negligible latency. It\n> seems at least possible that saturating the available bandwidth is\n> harder on a higher-latency connection. Cross-region data center\n> connections figure to have way higher latency than a local wired\n> network, let alone the loopback interface.\n\nSure. But that's what the TCP window etc should take care of. You might\nhave to tune the OS if you have a high latency multi-GBit link, but\nyou'd have to do that regardless of whether a single process or multiple\nprocesses are used. And the number of people with high-latency\nmulti-gbit links isn't that high, compared to the number taking backups\nwithin a datacenter.\n\n\n> > It was in kernel buffer cache. 
But I can reach 100% utilization of\n> > storage too (which is slightly slower than what I can do over unix\n> > socket).\n> >\n> > pg_basebackup --manifest-checksums=none -h /tmp/ -D- -Ft -cfast -Xnone |pv -B16M -r -a > /dev/null\n> > 2.59GiB/s\n> > find /srv/dev/pgdev-dev/base/ -type f -exec dd if={} bs=32k status=none \\; |pv -B16M -r -a > /dev/null\n> > 2.53GiB/s\n> > find /srv/dev/pgdev-dev/base/ -type f -exec cat {} + |pv -B16M -r -a > /dev/null\n> > 2.42GiB/s\n> >\n> > I tested this with a -s 5000 DB, FWIW.\n>\n> But that's not a real test either, because you're not writing the data\n> anywhere. It's going to be a whole lot easier to saturate the read\n> side if the write side is always zero latency.\n\nI also stored data elsewhere in separate threads. But the bottleneck of\nthat is lower (my storage is faster on reads than on writes, at least\nafter the ram on the nvme is exhausted)...\n\n\n> > > It seems to me that the interesting cases may involve having lots of\n> > > available CPUs and lots of disk spindles, but a comparatively slow\n> > > pipe between the machines.\n> >\n> > Hm, I'm not sure I am following. If network is the bottleneck, we'd\n> > immediately fill the buffers, and that'd be that?\n> >\n> > ISTM all of this is only really relevant if either pg_basebackup or\n> > walsender is the bottleneck?\n>\n> I agree that if neither pg_basebackup nor walsender is the bottleneck,\n> parallelism is unlikely to be very effective. I have realized as a\n> result of your comments that I actually don't care intrinsically about\n> parallel backup; what I actually care about is making backups very,\n> very fast. 
I suspect that parallelism is a useful means to that end,\n> but I interpret your comments as questioning that, and specifically\n> drawing attention to the question of where the bottlenecks might be.\n> So I'm trying to think about that.\n\nI agree that trying to make backups very fast is a good goal (or well, I\nthink not very slow would be a good descriptor for the current\nsituation). I am just trying to make sure we tackle the right problems\nfor that. My gut feeling is that we have to tackle compression first,\nbecause without addressing that \"all hope is lost\" ;)\n\nFWIW, here's the base backup from pgbench -i -s 5000 compressed a number\nof ways. The uncompressed backup is 64622701911 bytes. Unfortunately\npgbench -i -s 5000 is not a particularly good example, it's just too\ncompressible.\n\n\nmethod level parallelism wall-time cpu-user-time cpu-kernel-time size\t\trate format\ngzip 1 1 380.79 368.46 12.15 3892457816 16.6 .gz\ngzip 6 1 976.05 963.10 12.84 3594605389 18.0 .gz\npigz 1 10 34.35 364.14 23.55 3892401867 16.6 .gz\npigz 6 10 101.27 1056.85 28.98 3620724251 17.8 .gz\nzstd-gz 1 1 278.14 265.31 12.81 3897174342 15.6 .gz\nzstd-gz 1 6 906.67 893.58 12.52 3598238594 18.0 .gz\nzstd 1 1 82.95 67.97 11.82 2853193736 22.6 .zstd\nzstd 1 6 228.58 214.65 13.92 2687177334 24.0 .zstd\nzstd 1 10 25.05 151.84 13.35 2847414913 22.7 .zstd\nzstd 6 10 43.47 374.30 12.37 2745211100 23.5 .zstd\nzstd 6 20 32.50 468.18 13.44 2745211100 23.5 .zstd\nzstd 9 20 57.99 949.91 14.13 2606535138 24.8 .zstd\nlz4 1 1 49.94 36.60 13.33 7318668265 8.8 .lz4\nlz4 3 1 201.79 187.36 14.42 6561686116 9.84 .lz4\nlz4 6 1 318.35 304.64 13.55 6560274369 9.9 .lz4\npixz 1 10 92.54 925.52 37.00 1199499772 53.8 .xz\npixz 3 10 210.77 2090.38 37.96 1186219752 54.5 .xz\nbzip2 1 1 2210.04 2190.89 17.67 1276905211 50.6 .bz2\npbzip2 1 10 236.03 2352.09 34.01 1332010572 48.5 .bz2\nplzip 1 10 243.08 2430.18 25.60 915598323 70.6 .lz\nplzip 3 10 359.04 3577.94 27.92 1018585193 63.4 .lz\nplzip 3 20 197.36 
3911.85 22.02 1018585193 63.4 .lz\n\n(zstd-gz is zstd with --format=gzip, zstd with parallelism 1 is with\n--single-thread to avoid a separate IO thread it uses by default, even\nwith -T0)\n\nThese weren't taken on a completely quiesced system, and I tested gzip\nand bzip2 in parallel, because they took so long. But I think this still\ngives a good overview (cpu-user-time is not that affected by smaller\namounts of noise too).\n\nIt looks to me that bzip2/pbzip2 are clearly too slow. pixz looks\ninteresting as it achieves pretty good compression rates at a lower cost\nthan plzip. plzip's rates are impressive, but damn, is it expensive. And\nhigher compression ratios using more space is also a bit \"huh\"?\n\n\nDoes anybody have a better idea what exactly to use as a good test\ncorpus? pgbench -i clearly sucks, but ...\n\n\nOne thing this reminded me of is whether using a format (tar) that\ndoesn't allow efficient addressing of individual files is a good idea\nfor base backups. The compression rates very likely will be better when\nnot compressing tiny files individually, but at the same time it'd be\nvery useful to be able to access individual files more efficiently than\nO(N). I can imagine that being important for some cases of incremental\nbackup assembly.\n\n\n> > I think it's fairly obvious that we need faster compression - and that\n> > while we clearly can win a lot by just using a faster\n> > algorithm/implementation than standard zlib, we'll likely also need\n> > parallelism in some form. I'm doubtful that using multiple connections\n> > and multiple backends is the best way to achieve that, but it'd be a\n> > way.\n>\n> I think it has a good chance of being pretty effective, but it's\n> certainly worth casting about for other possibilities that might\n> deliver more benefit or be less work. 
In terms of better compression,\n> I did a little looking around and it seems like LZ4 is generally\n> agreed to be a lot faster than gzip, and also significantly faster\n> than most other things that one might choose to use. On the other\n> hand, the compression ratio may not be as good; e.g.\n> https://facebook.github.io/zstd/ cites a 2.1 ratio (on some data set)\n> for lz4 and a 2.9 ratio for zstd. While the compression and\n> decompression speeds are slower, they are close enough that you might\n> be able to make up the difference by using 2x the cores for\n> compression and 3x for decompression. I don't know if that sort of\n> thing is worth considering. If your limitation is the line speed, and\n> you have have CPU cores to burn, a significantly higher compression\n> ratio means significantly faster backups. On the other hand, if you're\n> backing up over the LAN and the machine is heavily taxed, that's\n> probably not an appealing trade.\n\nI think zstd with a low compression \"setting\" would be a pretty good\ndefault for most cases. lz4 is considerably faster, true, but the\ncompression rates are also considerably worse. I think lz4 is great for\nmostly in-memory workloads (e.g. 
a compressed cache / live database with\ncompressed data, as it allows you to have reasonably close to memory\nspeeds but with twice the data), but for anything longer lived zstd is\nprobably better.\n\nThe other big benefit is that zstd's library has multi-threaded\ncompression built in, whereas that's not the case for other libraries\nthat I am aware of.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Tue, 21 Apr 2020 15:57:06 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: design for parallel backup" }, { "msg_contents": "On Tue, Apr 21, 2020 at 6:57 PM Andres Freund <andres@anarazel.de> wrote:\n> I agree that trying to make backups very fast is a good goal (or well, I\n> think not very slow would be a good descriptor for the current\n> situation). I am just trying to make sure we tackle the right problems\n> for that. My gut feeling is that we have to tackle compression first,\n> because without addressing that \"all hope is lost\" ;)\n\nOK. I have no objection to the idea of starting with (1) server side\ncompression and (2) a better compression algorithm. However, I'm not\nvery sold on the idea of relying on parallelism that is specific to\ncompression. I think that parallelism across the whole operation -\nmultiple connections, multiple processes, etc. - may be a more\npromising approach than trying to parallelize specific stages of the\nprocess. 
I am not sure about that; it could be wrong, and I'm open to\nthe possibility that it is, in fact, wrong.\n\nLeaving out all the three and four digit wall times from your table:\n\n> method level parallelism wall-time cpu-user-time cpu-kernel-time size rate format\n> pigz 1 10 34.35 364.14 23.55 3892401867 16.6 .gz\n> zstd 1 1 82.95 67.97 11.82 2853193736 22.6 .zstd\n> zstd 1 10 25.05 151.84 13.35 2847414913 22.7 .zstd\n> zstd 6 10 43.47 374.30 12.37 2745211100 23.5 .zstd\n> zstd 6 20 32.50 468.18 13.44 2745211100 23.5 .zstd\n> zstd 9 20 57.99 949.91 14.13 2606535138 24.8 .zstd\n> lz4 1 1 49.94 36.60 13.33 7318668265 8.8 .lz4\n> pixz 1 10 92.54 925.52 37.00 1199499772 53.8 .xz\n\nIt's notable that almost all of the fast wall times here are with\nzstd; the surviving entries with pigz and pixz are with ten-way\nparallelism, and both pigz and lz4 have worse compression ratios than\nzstd. My impression, though, is that LZ4 might be getting a bit of a\nraw deal here because of the repetitive nature of the data. I theorize\nbased on some reading I did yesterday, and general hand-waving, that\nmaybe the compression ratios would be closer together on a more\nrealistic data set. It's also notable that lz4 -1 is BY FAR the winner\nin terms of absolute CPU consumption. So I kinda wonder whether\nsupporting both LZ4 and ZSTD might be the way to go, especially since\nonce we have the LZ4 code we might be able to use it for other things,\ntoo.\n\n> One thing this reminded me of is whether using a format (tar) that\n> doesn't allow efficient addressing of individual files is a good idea\n> for base backups. The compression rates very likely will be better when\n> not compressing tiny files individually, but at the same time it'd be\n> very useful to be able to access individual files more efficiently than\n> O(N). 
I can imagine that being important for some cases of incremental\n> backup assembly.\n\nYeah, being able to operate directly on the compressed version of the\nfile would be very useful, but I'm not sure that we have great options\navailable there. I think the only widely-used format that supports\nthat is \".zip\", and I'm not too sure about emitting zip files.\nApparently, pixz also supports random access to archive members, and\nit did have one entry that survived my arbitrary cut in the table\nabove, but the last release was in 2015, and it seems to be only a\ncommand-line tool, not a library. It also depends on libarchive and\nliblzma, which is not awful, but I'm not sure we want to suck in that\nmany dependencies. But that's really a secondary thing: I can't\nimagine us depending on something that hasn't had a release in 5\nyears, and has less than 300 total commits.\n\nNow, it is based on xz/liblzma, and those seem to have some built-in\nindexing capabilities which it may be leveraging, so possibly we could\nroll our own. I'm not too sure about that, though, and it would limit\nus to using only that form of compression.\n\nOther options include, perhaps, (1) emitting a tarfile of compressed\nfiles instead of a compressed tarfile, and (2) writing our own index\nfiles. We don't know when we begin emitting the tarfile what files\nwe're going to find or how big they will be, so we can't really emit\na directory at the beginning of the file. Even if we thought we knew,\nfiles can disappear or be truncated before we get around to archiving\nthem. 
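For (2), one shape the index could take - sketched here with Python's stdlib tarfile purely for illustration, with invented member names - is to record each member's offset while streaming and append the index itself as the final member:

```python
import io
import json
import tarfile

def tar_with_trailing_index(files):
    # stream members into the archive, noting where each one starts,
    # then append a name -> offset index as the last member
    buf = io.BytesIO()
    index = {}
    with tarfile.open(fileobj=buf, mode="w") as tar:
        for name, payload in files:
            index[name] = buf.tell()  # header offset of this member
            info = tarfile.TarInfo(name)
            info.size = len(payload)
            tar.addfile(info, io.BytesIO(payload))
        blob = json.dumps(index).encode()
        info = tarfile.TarInfo("backup_index.json")
        info.size = len(blob)
        tar.addfile(info, io.BytesIO(blob))
    return buf.getvalue(), index

archive, index = tar_with_trailing_index(
    [(f"base/1/{i}", b"x" * 100) for i in range(3)]
)

# a reader can now seek straight to a member instead of scanning O(N) headers
reader = io.BytesIO(archive)
reader.seek(index["base/1/1"])
print(tarfile.TarFile(fileobj=reader).next().name)  # base/1/1
```

Nothing about this requires changing the tar members themselves; the index could just as well live in the backup manifest instead of a trailing member.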
However, when we reach the end of the file, we do know what we\nincluded and how big it was, so possibly we could generate an index\nfor each tar file, or include something in the backup manifest.\n\n> The other big benefit is that zstd's library has multi-threaded\n> compression built in, whereas that's not the case for other libraries\n> that I am aware of.\n\nWouldn't it be a problem to let the backend become multi-threaded, at\nleast on Windows?\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Wed, 22 Apr 2020 09:52:53 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: design for parallel backup" }, { "msg_contents": "Hi,\n\nOn 2020-04-22 09:52:53 -0400, Robert Haas wrote:\n> On Tue, Apr 21, 2020 at 6:57 PM Andres Freund <andres@anarazel.de> wrote:\n> > I agree that trying to make backups very fast is a good goal (or well, I\n> > think not very slow would be a good descriptor for the current\n> > situation). I am just trying to make sure we tackle the right problems\n> > for that. My gut feeling is that we have to tackle compression first,\n> > because without addressing that \"all hope is lost\" ;)\n> \n> OK. I have no objection to the idea of starting with (1) server side\n> compression and (2) a better compression algorithm. However, I'm not\n> very sold on the idea of relying on parallelism that is specific to\n> compression. I think that parallelism across the whole operation -\n> multiple connections, multiple processes, etc. - may be a more\n> promising approach than trying to parallelize specific stages of the\n> process. I am not sure about that; it could be wrong, and I'm open to\n> the possibility that it is, in fact, wrong.\n\n*My* gut feeling is that you're going to have a harder time using CPU\ntime efficiently when doing parallel compression via multiple processes\nand independent connections. You're e.g. 
going to have a lot more\ncontext switches, I think. And there will be network overhead from doing\nmore connections (including worse congestion control).\n\n\n> Leaving out all the three and four digit wall times from your table:\n> \n> > method level parallelism wall-time cpu-user-time cpu-kernel-time size rate format\n> > pigz 1 10 34.35 364.14 23.55 3892401867 16.6 .gz\n> > zstd 1 1 82.95 67.97 11.82 2853193736 22.6 .zstd\n> > zstd 1 10 25.05 151.84 13.35 2847414913 22.7 .zstd\n> > zstd 6 10 43.47 374.30 12.37 2745211100 23.5 .zstd\n> > zstd 6 20 32.50 468.18 13.44 2745211100 23.5 .zstd\n> > zstd 9 20 57.99 949.91 14.13 2606535138 24.8 .zstd\n> > lz4 1 1 49.94 36.60 13.33 7318668265 8.8 .lz4\n> > pixz 1 10 92.54 925.52 37.00 1199499772 53.8 .xz\n> \n> It's notable that almost all of the fast wall times here are with\n> zstd; the surviving entries with pigz and pixz are with ten-way\n> parallelism, and both pigz and lz4 have worse compression ratios than\n> zstd. My impression, though, is that LZ4 might be getting a bit of a\n> raw deal here because of the repetitive nature of the data. I theorize\n> based on some reading I did yesterday, and general hand-waving, that\n> maybe the compression ratios would be closer together on a more\n> realistic data set.\n\nI agree that most datasets won't get even close to what we've seen\nhere. And that disadvantages e.g. 
lz4.\n\nTo come up with a much less compressible case, I generated data the\nfollowing way:\n\nCREATE TABLE random_data(id serial NOT NULL, r1 float not null, r2 float not null, r3 float not null);\nALTER TABLE random_data SET (FILLFACTOR = 100);\nALTER SEQUENCE random_data_id_seq CACHE 1024;\n-- with pgbench, I ran this in parallel for 100s\nINSERT INTO random_data(r1,r2,r3) SELECT random(), random(), random() FROM generate_series(1, 100000);\n-- then created indexes, using a high fillfactor to ensure few zeroed out parts\nALTER TABLE random_data ADD CONSTRAINT random_data_id_pkey PRIMARY KEY(id) WITH (FILLFACTOR = 100);\nCREATE INDEX random_data_r1 ON random_data(r1) WITH (fillfactor = 100);\n\nthis results in a 16GB base backup. I think this is probably a good bit\nless compressible than most PG databases.\n\n\nmethod level parallelism wall-time cpu-user-time cpu-kernel-time size rate format\ngzip 1 1 305.37 299.72 5.52 7067232465 2.28\nlz4 1 1 33.26 27.26 5.99 8961063439 1.80 .lz4\nlz4 3 1 188.50 182.91 5.58 8204501460 1.97 .lz4\nzstd 1 1 66.41 58.38 6.04 6925634128 2.33 .zstd\nzstd 1 10 9.64 67.04 4.82 6980075316 2.31 .zstd\nzstd 3 1 122.04 115.79 6.24 6440274143 2.50 .zstd\nzstd 3 10 13.65 106.11 5.64 6438439095 2.51 .zstd\nzstd 9 10 100.06 955.63 6.79 5963827497 2.71 .zstd\nzstd 15 10 259.84 2491.39 8.88 5912617243 2.73 .zstd\npixz 1 10 162.59 1626.61 15.52 5350138420 3.02 .xz\nplzip 1 20 135.54 2705.28 9.25 5270033640 3.06 .lz\n\n\n> It's also notable that lz4 -1 is BY FAR the winner in terms of\n> absolute CPU consumption. So I kinda wonder whether supporting both\n> LZ4 and ZSTD might be the way to go, especially since once we have the\n> LZ4 code we might be able to use it for other things, too.\n\nYea. I think the case for lz4 is far stronger in other\nplaces. E.g. 
having lz4 -1 for toast can make a lot of sense, suddenly\nrepeated detoasting is much less of an issue, while still achieving\nhigher compression than pglz.\n\n.oO(Now I really see how pglz compares to the above)\n\n\n> > One thing this reminded me of is whether using a format (tar) that\n> > doesn't allow efficient addressing of individual files is a good idea\n> > for base backups. The compression rates very likely will be better when\n> > not compressing tiny files individually, but at the same time it'd be\n> > very useful to be able to access individual files more efficiently than\n> > O(N). I can imagine that being important for some cases of incremental\n> > backup assembly.\n> \n> Yeah, being able to operate directly on the compressed version of the\n> file would be very useful, but I'm not sure that we have great options\n> available there. I think the only widely-used format that supports\n> that is \".zip\", and I'm not too sure about emitting zip files.\n\nI don't really see a problem with emitting .zip files. It's an extremely\nwidely used container format for all sorts of file formats these days.\nExcept for needing a bit more complicated (and I don't think it's *that*\nbig of a difference) code during generation / unpacking, it seems\nclearly advantageous over .tar.gz etc.\n\n\n> Apparently, pixz also supports random access to archive members, and\n> it did have on entry that survived my arbitrary cut in the table\n> above, but the last release was in 2015, and it seems to be only a\n> command-line tool, not a library. It also depends on libarchive and\n> liblzma, which is not awful, but I'm not sure we want to suck in that\n> many dependencies. But that's really a secondary thing: I can't\n> imagine us depending on something that hasn't had a release in 5\n> years, and has less than 300 total commits.\n\nOh, yea. 
I just looked at the various tools I could find that did\nparallel compression.\n\n\n> Other options include, perhaps, (1) emitting a tarfile of compressed\n> files instead of a compressed tarfile\n\nYea, that'd help some. Although I am not sure how good the tooling to\nseek through tarfiles in an O(files) rather than O(bytes) manner is.\n\nI think there are some cases where using separate compression state for each\nfile would hurt us. Some of the archive formats have support for reusing\ncompression state, but I don't know which.\n\n\n> , and (2) writing our own index files. We don't know when we begin\n> emitting the tarfile what files we're going to find or how big they\n> will be, so we can't really emit a directory at the beginning of the\n> file. Even if we thought we knew, files can disappear or be truncated\n> before we get around to archiving them. However, when we reach the end\n> of the file, we do know what we included and how big it was, so\n> possibly we could generate an index for each tar file, or include\n> something in the backup manifest.\n\nHm. There's some appeal to just store offsets in the manifest, and to\nmake sure it's a seekable offset in the compression stream. OTOH, it\nmakes it pretty hard for other tools to generate a compatible archive.\n\n\n> > The other big benefit is that zstd's library has multi-threaded\n> > compression built in, whereas that's not the case for other libraries\n> > that I am aware of.\n> \n> Wouldn't it be a problem to let the backend become multi-threaded, at\n> least on Windows?\n\nWe already have threads in windows, e.g. the signal handler emulation\nstuff runs in one. Are you thinking of this bit in postmaster.c:\n\n#ifdef HAVE_PTHREAD_IS_THREADED_NP\n\n\t/*\n\t * On macOS, libintl replaces setlocale() with a version that calls\n\t * CFLocaleCopyCurrent() when its second argument is \"\" and every relevant\n\t * environment variable is unset or empty. 
CFLocaleCopyCurrent() makes\n\t * the process multithreaded. The postmaster calls sigprocmask() and\n\t * calls fork() without an immediate exec(), both of which have undefined\n\t * behavior in a multithreaded program. A multithreaded postmaster is the\n\t * normal case on Windows, which offers neither fork() nor sigprocmask().\n\t */\n\tif (pthread_is_threaded_np() != 0)\n\t\tereport(FATAL,\n\t\t\t\t(errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),\n\t\t\t\t errmsg(\"postmaster became multithreaded during startup\"),\n\t\t\t\t errhint(\"Set the LC_ALL environment variable to a valid locale.\")));\n#endif\n\n?\n\nI don't really see any of the concerns there to apply for the base\nbackup case.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 22 Apr 2020 08:24:02 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: design for parallel backup" }, { "msg_contents": "On Wed, Apr 22, 2020 at 11:24 AM Andres Freund <andres@anarazel.de> wrote:\n> *My* gut feeling is that you're going to have a harder time using CPU\n> time efficiently when doing parallel compression via multiple processes\n> and independent connections. You're e.g. going to have a lot more\n> context switches, I think. And there will be network overhead from doing\n> more connections (including worse congestion control).\n\nOK, noted. I'm still doubtful that the optimal number of connections\nis 1, but it might be that the optimal number of CPU cores to apply to\ncompression is much higher than the optimal number of connections. For\ninstance, suppose there are two equally sized tablespaces on separate\ndrives, but zstd with 10-way parallelism is our chosen compression\nstrategy. 
It seems to me that two connections have an excellent chance\nof being faster than one, because with only one connection I don't see\nhow you can benefit from the opportunity to do I/O in parallel.\nHowever, I can also see that having twenty connections just as a way\nto get 10-way parallelism for each tablespace might be undesirable\nand/or inefficient for various reasons.\n\n> this results in a 16GB base backup. I think this is probably a good bit\n> less compressible than most PG databases.\n>\n> method level parallelism wall-time cpu-user-time cpu-kernel-time size rate format\n> gzip 1 1 305.37 299.72 5.52 7067232465 2.28\n> lz4 1 1 33.26 27.26 5.99 8961063439 1.80 .lz4\n> lz4 3 1 188.50 182.91 5.58 8204501460 1.97 .lz4\n> zstd 1 1 66.41 58.38 6.04 6925634128 2.33 .zstd\n> zstd 1 10 9.64 67.04 4.82 6980075316 2.31 .zstd\n> zstd 3 1 122.04 115.79 6.24 6440274143 2.50 .zstd\n> zstd 3 10 13.65 106.11 5.64 6438439095 2.51 .zstd\n> zstd 9 10 100.06 955.63 6.79 5963827497 2.71 .zstd\n> zstd 15 10 259.84 2491.39 8.88 5912617243 2.73 .zstd\n> pixz 1 10 162.59 1626.61 15.52 5350138420 3.02 .xz\n> plzip 1 20 135.54 2705.28 9.25 5270033640 3.06 .lz\n\nSo, picking a better compressor in this case looks a lot less\nexciting. Parallel zstd still compresses somewhat better than\nsingle-core lz4, but the difference in compression ratio is far less,\nand the amount of CPU you have to burn in order to get that extra\ncompression is pretty large.\n\n> I don't really see a problem with emitting .zip files. 
It's an extremely\n> widely used container format for all sorts of file formats these days.\n> Except for needing a bit more complicated (and I don't think it's *that*\n> big of a difference) code during generation / unpacking, it seems\n> clearly advantageous over .tar.gz etc.\n\nWouldn't that imply buying into DEFLATE as our preferred compression algorithm?\n\nEither way, I don't really like the idea of having PostgreSQL have its\nown code to generate and interpret various archive formats. That seems\nlike a maintenance nightmare and a recipe for bugs. How can anyone\neven verify that our existing 'tar' code works with all 'tar'\nimplementations out there, or that it's correct in all cases? Do we\nreally want to maintain similar code for other formats, or even for\nthis one? I'd say \"no\". We should pick archive formats that have good,\nwell-maintained libraries with permissive licenses and then use those.\nI don't know whether \"zip\" falls into that category or not.\n\n> > Other options include, perhaps, (1) emitting a tarfile of compressed\n> > files instead of a compressed tarfile\n>\n> Yea, that'd help some. Although I am not sure how good the tooling to\n> seek through tarfiles in an O(files) rather than O(bytes) manner is.\n\nWell, considering that at present we're using hand-rolled code...\n\n> I think there some cases where using separate compression state for each\n> file would hurt us. Some of the archive formats have support for reusing\n> compression state, but I don't know which.\n\nYeah, I had the same thought. People with mostly 1GB relation segments\nmight not notice much difference, but people with lots of little\nrelations might see a more significant difference.\n\n> Hm. There's some appeal to just store offsets in the manifest, and to\n> make sure it's a seakable offset in the compression stream. 
OTOH, it\n> makes it pretty hard for other tools to generate a compatible archive.\n\nYeah.\n\nFWIW, I don't see it as being entirely necessary to create a seekable\ncompressed archive format, let alone to make all of our compressed\narchive formats seekable. I think supporting multiple compression\nalgorithms in a flexible way that's not too tied to the capabilities\nof particular algorithms is more important. If you want fast restores\nof incremental and differential backups, consider using -Fp rather\nthan -Ft. Or we can have a new option that's like -Fp but every file\nis compressed individually in place, or files larger than N bytes are\ncompressed in place using a configurable algorithm. It might be\nsomewhat less efficient but it's also way less complicated to\nimplement, and I think that should count for something. I don't want\nto get so caught up in advanced features here that we don't make any\nuseful progress at all. If we can add better features without a large\ncomplexity increment, and without drawing objections from others on\nthis list, great. If not, I'm prepared to summarily jettison it as\nnice-to-have but not essential.\n\n> I don't really see any of the concerns there to apply for the base\n> backup case.\n\nI felt like there was some reason that threads were bad, but it may\nhave just been the case you mentioned and not relevant here.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Wed, 22 Apr 2020 12:12:32 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: design for parallel backup" }, { "msg_contents": "On 2020-04-20 22:36, Robert Haas wrote:\n> My suspicion is that it has mostly to do with adequately utilizing the\n> hardware resources on the server side. 
If you are network-constrained,\n> adding more connections won't help, unless there's something shaping\n> the traffic which can be gamed by having multiple connections.\n\nThis is a thing. See \"long fat network\" and \"bandwidth-delay product\" \n(https://en.wikipedia.org/wiki/Bandwidth-delay_product). The proper way \nto address this is presumably with TCP parameter tuning, but in practice \nit's often easier to just start multiple connections, for example, when \ndoing a backup via rsync.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Wed, 22 Apr 2020 18:20:52 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: design for parallel backup" }, { "msg_contents": "On Wed, Apr 22, 2020 at 12:20 PM Peter Eisentraut\n<peter.eisentraut@2ndquadrant.com> wrote:\n> On 2020-04-20 22:36, Robert Haas wrote:\n> > My suspicion is that it has mostly to do with adequately utilizing the\n> > hardware resources on the server side. If you are network-constrained,\n> > adding more connections won't help, unless there's something shaping\n> > the traffic which can be gamed by having multiple connections.\n>\n> This is a thing. See \"long fat network\" and \"bandwidth-delay product\"\n> (https://en.wikipedia.org/wiki/Bandwidth-delay_product). 
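To put rough numbers on it (these figures are made up for illustration, not measurements), the amount of unacknowledged data a single TCP connection needs in flight to keep the link busy is just bandwidth times round-trip time:

```python
def bandwidth_delay_product(bandwidth_bits_per_sec, rtt_seconds):
    """Bytes that must be in flight to keep the link fully utilized."""
    return bandwidth_bits_per_sec / 8 * rtt_seconds

# e.g. a 10 Gbit/s link with 50 ms of round-trip latency:
bdp = bandwidth_delay_product(10e9, 0.050)
print("%.1f MB" % (bdp / 1e6))  # 62.5 MB
```

If the TCP window can't grow to that size, one connection tops out well below link capacity, while N connections get roughly N windows' worth of data in flight — which is the effect the multiple-rsync trick exploits.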
The proper way\n> to address this is presumably with TCP parameter tuning, but in practice\n> it's often easier to just start multiple connections, for example, when\n> doing a backup via rsync.\n\nVery interesting -- thanks!\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Wed, 22 Apr 2020 12:29:57 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: design for parallel backup" }, { "msg_contents": "Hi,\n\nOn 2020-04-22 12:12:32 -0400, Robert Haas wrote:\n> On Wed, Apr 22, 2020 at 11:24 AM Andres Freund <andres@anarazel.de> wrote:\n> > *My* gut feeling is that you're going to have a harder time using CPU\n> > time efficiently when doing parallel compression via multiple processes\n> > and independent connections. You're e.g. going to have a lot more\n> > context switches, I think. And there will be network overhead from doing\n> > more connections (including worse congestion control).\n> \n> OK, noted. I'm still doubtful that the optimal number of connections\n> is 1, but it might be that the optimal number of CPU cores to apply to\n> compression is much higher than the optimal number of connections.\n\nYea, that's basically what I think too.\n\n\n> For instance, suppose there are two equally sized tablespaces on\n> separate drives, but zstd with 10-way parallelism is our chosen\n> compression strategy. It seems to me that two connections has an\n> excellent chance of being faster than one, because with only one\n> connection I don't see how you can benefit from the opportunity to do\n> I/O in parallel.\n\nYea. That's exactly the case for \"connection level\" parallelism I had\nupthread as well. 
It'd require being somewhat careful about different\ntablespaces in the selection for each connection, but that's not that\nhard.\n\nI also can see a case for using N backends and one connection, but I\nthink that'll be too complicated / too much bound by locking around the\nsocket etc.\n\n\n> \n> > this results in a 16GB base backup. I think this is probably a good bit\n> > less compressible than most PG databases.\n> >\n> > method level parallelism wall-time cpu-user-time cpu-kernel-time size rate format\n> > gzip 1 1 305.37 299.72 5.52 7067232465 2.28\n> > lz4 1 1 33.26 27.26 5.99 8961063439 1.80 .lz4\n> > lz4 3 1 188.50 182.91 5.58 8204501460 1.97 .lz4\n> > zstd 1 1 66.41 58.38 6.04 6925634128 2.33 .zstd\n> > zstd 1 10 9.64 67.04 4.82 6980075316 2.31 .zstd\n> > zstd 3 1 122.04 115.79 6.24 6440274143 2.50 .zstd\n> > zstd 3 10 13.65 106.11 5.64 6438439095 2.51 .zstd\n> > zstd 9 10 100.06 955.63 6.79 5963827497 2.71 .zstd\n> > zstd 15 10 259.84 2491.39 8.88 5912617243 2.73 .zstd\n> > pixz 1 10 162.59 1626.61 15.52 5350138420 3.02 .xz\n> > plzip 1 20 135.54 2705.28 9.25 5270033640 3.06 .lz\n> \n> So, picking a better compressor in this case looks a lot less\n> exciting.\n\nOh? I find it *extremely* exciting here. This is pretty close to the\nworst case compressibility-wise, and zstd takes only ~22% of the time as\ngzip does, while still delivering better compression. A nearly 5x\nimprovement in compression times seems pretty exciting to me.\n\nOr do you mean for zstd over lz4, rather than anything over gzip? 1.8x\n-> 2.3x is a pretty decent improvement still, no? And being able to\ndo it in 1/3 of the wall time seems pretty helpful.\n\n> Parallel zstd still compresses somewhat better than single-core lz4,\n> but the difference in compression ratio is far less, and the amount of\n> CPU you have to burn in order to get that extra compression is pretty\n> large.\n\nIt's \"just\" a ~2x difference for \"level 1\" compression, right? 
For\nhaving 1.9GiB less to write / permanently store of a 16GiB base\nbackup that doesn't seem that bad to me.\n\n\n> > I don't really see a problem with emitting .zip files. It's an extremely\n> > widely used container format for all sorts of file formats these days.\n> > Except for needing a bit more complicated (and I don't think it's *that*\n> > big of a difference) code during generation / unpacking, it seems\n> > clearly advantageous over .tar.gz etc.\n> \n> Wouldn't that imply buying into DEFLATE as our preferred compression algorithm?\n\nzip doesn't have to imply DEFLATE although it is the most common\noption. There's a compression method associated with each file.\n\n\n> Either way, I don't really like the idea of having PostgreSQL have its\n> own code to generate and interpret various archive formats. That seems\n> like a maintenance nightmare and a recipe for bugs. How can anyone\n> even verify that our existing 'tar' code works with all 'tar'\n> implementations out there, or that it's correct in all cases? Do we\n> really want to maintain similar code for other formats, or even for\n> this one? I'd say \"no\". We should pick archive formats that have good,\n> well-maintained libraries with permissive licenses and then use those.\n> I don't know whether \"zip\" falls into that category or not.\n\nI agree we should pick one. I think tar is not a great choice. .zip\nseems like it'd be a significant improvement - but not necessarily\noptimal.\n\n\n> > > Other options include, perhaps, (1) emitting a tarfile of compressed\n> > > files instead of a compressed tarfile\n> >\n> > Yea, that'd help some. 
Although I am not sure how good the tooling to\n> > seek through tarfiles in an O(files) rather than O(bytes) manner is.\n> \n> Well, considering that at present we're using hand-rolled code...\n\nGood point.\n\nAlso looks like at least gnu tar supports seeking (when not reading from\na pipe etc).\n\n\n> > I think there some cases where using separate compression state for each\n> > file would hurt us. Some of the archive formats have support for reusing\n> > compression state, but I don't know which.\n> \n> Yeah, I had the same thought. People with mostly 1GB relation segments\n> might not notice much difference, but people with lots of little\n> relations might see a more significant difference.\n\nYea. I suspect it's close to immeasurable for large relations. Reusing\nthe dictionary might help, although it likely would imply some\noverhead. OTOH, the overhead of small relations will usually probably be\nin the number of files, rather than the actual size.\n\n\nFWIW, not that it's really relevant to this discussion, but I played\naround with using trained compression dictionaries for postgres\ncontents. Can improve e.g. lz4's compression ratio a fair bit, in\nparticular when compressing small amounts of data. E.g. per-block\ncompression or such.\n\n\n> FWIW, I don't see it as being entirely necessary to create a seekable\n> compressed archive format, let alone to make all of our compressed\n> archive formats seekable. I think supporting multiple compression\n> algorithms in a flexible way that's not too tied to the capabilities\n> of particular algorithms is more important. 
If you want fast restores\n> of incremental and differential backups, consider using -Fp rather\n> than -Ft.\n\nGiven how compressible many real-world databases are (maybe not quite\nthe 50x as in the pgbench -i case, but still extremely so), I don't\nquite find -Fp a convincing alternative.\n\n\n> Or we can have a new option that's like -Fp but every file\n> is compressed individually in place, or files larger than N bytes are\n> compressed in place using a configurable algorithm. It might be\n> somewhat less efficient but it's also way less complicated to\n> implement, and I think that should count for something.\n\nYea, I think that'd be a decent workaround.\n\n\n> I don't want to get so caught up in advanced features here that we\n> don't make any useful progress at all. If we can add better features\n> without a large complexity increment, and without drawing objections\n> from others on this list, great. If not, I'm prepared to summarily\n> jettison it as nice-to-have but not essential.\n\nJust to be clear: I am not at all advocating tying a change of the\narchive format to compression method / parallelism changes or anything.\n\n\n> > I don't really see any of the concerns there to apply for the base\n> > backup case.\n> \n> I felt like there was some reason that threads were bad, but it may\n> have just been the case you mentioned and not relevant here.\n\nI mean, they do have some serious issues when postgres infrastructure is\nneeded. Not being threadsafe and all. One needs to be careful to not let\n\"threads escape\", to not fork() etc. 
That doesn't seem like a problem\nhere though.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 22 Apr 2020 11:06:41 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: design for parallel backup" }, { "msg_contents": "On Wed, Apr 22, 2020 at 2:06 PM Andres Freund <andres@anarazel.de> wrote:\n> I also can see a case for using N backends and one connection, but I\n> think that'll be too complicated / too much bound by locking around the\n> socket etc.\n\nAgreed.\n\n> Oh? I find it *extremely* exciting here. This is pretty close to the\n> worst case compressibility-wise, and zstd takes only ~22% of the time as\n> gzip does, while still delivering better compression. A nearly 5x\n> improvement in compression times seems pretty exciting to me.\n>\n> Or do you mean for zstd over lz4, rather than anything over gzip? 1.8x\n> -> 2.3x is a pretty decent improvement still, no? And being able to\n> do it in 1/3 of the wall time seems pretty helpful.\n\nI meant the latter thing, not the former. I'm taking it as given that\nwe don't want gzip as the only option. Yes, 1.8x -> 2.3x is decent,\nbut not as earth-shattering as 8.8x -> ~24x.\n\nIn any case, I lean towards adding both lz4 and zstd as options, so I\nguess we're not really disagreeing here.\n\n> > Parallel zstd still compresses somewhat better than single-core lz4,\n> > but the difference in compression ratio is far less, and the amount of\n> > CPU you have to burn in order to get that extra compression is pretty\n> > large.\n>\n> It's \"just\" a ~2x difference for \"level 1\" compression, right? For\n> having 1.9GiB less to write / permanently store of a 16GiB base\n> backup that doesn't seem that bad to me.\n\nSure, sure. 
I'm just saying some people may not be OK with ramping up\nto 10 or more compression threads on their master server, if it's\nalready heavily loaded, and maybe only has 4 vCPUs or whatever, so we\nshould have lighter-weight options for those people. I'm not trying to\nargue against zstd or against the idea of ramping up large numbers of\ncompression threads, just saying that lz4 looks awfully nice for\npeople who need some compression but are tight on CPU cycles.\n\n> I agree we should pick one. I think tar is not a great choice. .zip\n> seems like it'd be a significant improvement - but not necessarily\n> optimal.\n\nOther ideas?\n\n> > I don't want to get so caught up in advanced features here that we\n> > don't make any useful progress at all. If we can add better features\n> > without a large complexity increment, and without drawing objections\n> > from others on this list, great. If not, I'm prepared to summarily\n> > jettison it as nice-to-have but not essential.\n>\n> Just to be clear: I am not at all advocating tying a change of the\n> archive format to compression method / parallelism changes or anything.\n\nGood, thanks.\n\n--\nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Wed, 22 Apr 2020 14:40:17 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: design for parallel backup" }, { "msg_contents": "Hi,\n\nOn 2020-04-22 14:40:17 -0400, Robert Haas wrote:\n> > Oh? I find it *extremely* exciting here. This is pretty close to the\n> > worst case compressability-wise, and zstd takes only ~22% of the time as\n> > gzip does, while still delivering better compression. A nearly 5x\n> > improvement in compression times seems pretty exciting to me.\n> >\n> > Or do you mean for zstd over lz4, rather than anything over gzip? 1.8x\n> > -> 2.3x is a pretty decent improvement still, no? 
And being able to do\n> > do it in 1/3 of the wall time seems pretty helpful.\n> \n> I meant the latter thing, not the former. I'm taking it as given that\n> we don't want gzip as the only option. Yes, 1.8x -> 2.3x is decent,\n> but not as earth-shattering as 8.8x -> ~24x.\n\nAh, good.\n\n\n> In any case, I lean towards adding both lz4 and zstd as options, so I\n> guess we're not really disagreeing here\n\nWe're agreeing, indeed ;)\n\n\n> > I agree we should pick one. I think tar is not a great choice. .zip\n> > seems like it'd be a significant improvement - but not necessarily\n> > optimal.\n> \n> Other ideas?\n\nThe 7zip format, perhaps. Does have format level support to address what\nwe were discussing earlier: \"Support for solid compression, where\nmultiple files of like type are compressed within a single stream, in\norder to exploit the combined redundancy inherent in similar files.\".\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 22 Apr 2020 12:03:24 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: design for parallel backup" }, { "msg_contents": "On Wed, Apr 22, 2020 at 3:03 PM Andres Freund <andres@anarazel.de> wrote:\n> The 7zip format, perhaps. Does have format level support to address what\n> we were discussing earlier: \"Support for solid compression, where\n> multiple files of like type are compressed within a single stream, in\n> order to exploit the combined redundancy inherent in similar files.\".\n\nI think that might not be a great choice. One potential problem is\nthat according to https://www.7-zip.org/license.txt the license is\npartly LGPL, partly three-clause BSD with an advertising clause, and\npartly some strange mostly-free thing with reverse-engineering\nrestrictions. That sounds pretty unappealing to me as a key dependency\nfor core technology. 
It also seems like it's mostly a Windows thing.\np7zip, the \"port of the command line version of 7-Zip to Linux/Posix\",\nlast released a new version in 2016. I therefore think that there is\nroom to question how well supported this is all going to be on the\nsystems where most of us work all day.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Wed, 22 Apr 2020 15:56:56 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: design for parallel backup" }, { "msg_contents": "On Mon, Apr 20, 2020 at 4:19 PM Andres Freund <andres@anarazel.de> wrote:\n> One question I have not really seen answered well:\n>\n> Why do we want parallelism here. Or to be more precise: What do we hope\n> to accelerate by making what part of creating a base backup\n> parallel. There's several potential bottlenecks, and I think it's\n> important to know the design priorities to evaluate a potential design.\n\nI spent some time today trying to understand just one part of this,\nwhich is how long it will take to write the base backup out to disk\nand whether having multiple independent processes helps. I settled on\nwriting and fsyncing 64GB of data, written in 8kB chunks, divided into\n1, 2, 4, 8, or 16 equal size files, with each file written by a\nseparate process, and an fsync() at the end before process exit. So in\nthis test, there is no question of whether the master can read the\ndata fast enough, nor is there any issue of network bandwidth. It's\npurely a test of whether it's faster to have one process write a big\nfile or whether it's faster to have multiple processes each write a\nsmaller file.\n\nI tested this on EDB's cthulhu. It's an older server, but it happens\nto have 4 mount points available for testing, one with XFS + magnetic\ndisks, one with ext4 + magnetic disks, one with XFS + SSD, and one\nwith ext4 + SSD. 
I did the experiment described above on each mount\npoint separately, and then I also tried 4, 8, or 16 equal size files\nspread evenly across the 4 mount points. To summarize the results very\nbriefly:\n\n1. ext4 degraded really badly with >4 concurrent writers. XFS did not.\n2. SSDs were faster than magnetic disks, but you had to use XFS and\n>=4 concurrent writers to get the benefit.\n3. Spreading writes across the mount points works well, but the\nslowest mount point sets the pace.\n\nHere are more detailed results, with times in seconds:\n\nfilesystem media 1@64GB 2@32GB 4@16GB 8@8GB 16@4GB\nxfs mag 97 53 60 67 71\next4 mag 94 68 66 335 549\nxfs ssd 97 55 33 27 25\next4 ssd 116 70 66 227 450\nspread spread n/a n/a 48 42 44\n\nThe spread test with 16 files @ 4GB looks like this:\n\n[/mnt/data-ssd/robert.haas/test14] open: 0, write: 7, fsync: 0, close:\n0, total: 7\n[/mnt/data-ssd/robert.haas/test10] open: 0, write: 7, fsync: 2, close:\n0, total: 9\n[/mnt/data-ssd/robert.haas/test2] open: 0, write: 7, fsync: 2, close:\n0, total: 9\n[/mnt/data-ssd/robert.haas/test6] open: 0, write: 7, fsync: 2, close:\n0, total: 9\n[/mnt/data-ssd2/robert.haas/test3] open: 0, write: 16, fsync: 0,\nclose: 0, total: 16\n[/mnt/data-ssd2/robert.haas/test11] open: 0, write: 16, fsync: 0,\nclose: 0, total: 16\n[/mnt/data-ssd2/robert.haas/test15] open: 0, write: 17, fsync: 0,\nclose: 0, total: 17\n[/mnt/data-ssd2/robert.haas/test7] open: 0, write: 18, fsync: 0,\nclose: 0, total: 18\n[/mnt/data-mag/robert.haas/test16] open: 0, write: 7, fsync: 18,\nclose: 0, total: 25\n[/mnt/data-mag/robert.haas/test4] open: 0, write: 7, fsync: 19, close:\n0, total: 26\n[/mnt/data-mag/robert.haas/test12] open: 0, write: 7, fsync: 19,\nclose: 0, total: 26\n[/mnt/data-mag/robert.haas/test8] open: 0, write: 7, fsync: 22, close:\n0, total: 29\n[/mnt/data-mag2/robert.haas/test9] open: 0, write: 20, fsync: 23,\nclose: 0, total: 43\n[/mnt/data-mag2/robert.haas/test13] open: 0, write: 18, fsync: 25,\nclose: 0, 
total: 43\n[/mnt/data-mag2/robert.haas/test5] open: 0, write: 19, fsync: 24,\nclose: 0, total: 43\n[/mnt/data-mag2/robert.haas/test1] open: 0, write: 18, fsync: 25,\nclose: 0, total: 43\n\nThe fastest write performance of any test was the 16-way XFS-SSD test,\nwhich wrote at about 2.56 gigabytes per second. The fastest\nsingle-file test was on ext4-magnetic, though ext4-ssd and\nxfs-magnetic were similar, around 0.66 gigabytes per second. Your\nsystem must be a LOT faster, because you were seeing pg_basebackup\nrunning at, IIUC, ~3 gigabytes per second, and that would have been a\nsecond process both writing and doing other things. For comparison,\nsome recent local pg_basebackup testing on this machine by some of my\ncolleagues ran at about 0.82 gigabytes per second.\n\nI suspect it would be possible to get significantly higher numbers on\nthis hardware by (1) changing all the filesystems over to XFS and (2)\ndividing the data dynamically based on write speed rather than writing\nthe same amount of it everywhere. I bet we could reach 6-8 gigabytes\nper second if we did all that.\n\nNow, I don't know how much this matters. To get limited by this stuff,\nyou'd need an incredibly fast network - 10 or maybe 40 or 100 Gigabit\nEthernet or something like that - or to be doing a local backup. But I\nthought that it was interesting and that I should share it, so here\nyou go! 
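For reference, the core of each writer process in a test like this can be sketched as below (a minimal reconstruction for illustration, not the attached program; the function name and error handling are mine):\n\n```c\n/* Minimal sketch of one writer process: write `filesize` bytes of\n * zeros in `blocksize` chunks, then fsync() once before exiting.\n * Not the attached program; for illustration only. */\n#include <fcntl.h>\n#include <stdlib.h>\n#include <unistd.h>\n\nint\nwrite_and_fsync(const char *path, long long filesize, size_t blocksize)\n{\n    char       *buf = calloc(1, blocksize); /* zero-filled chunk */\n    int         fd = open(path, O_WRONLY | O_CREAT | O_TRUNC, 0644);\n    long long   written = 0;\n\n    if (buf == NULL || fd < 0)\n        return -1;\n\n    while (written < filesize)\n    {\n        ssize_t     rc = write(fd, buf, blocksize);\n\n        if (rc < 0)\n            return -1;\n        written += rc;\n    }\n\n    /* the final fsync() is what the \"fsync\" column above measures */\n    if (fsync(fd) < 0 || close(fd) < 0)\n        return -1;\n    free(buf);\n    return 0;\n}\n```\n\nEach of the N processes runs this against its own file; timing the open, write loop, and fsync phases separately gives columns like the per-file results above.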
I do wonder if the apparent concurrency problems with ext4\nmight matter on systems with high connection counts just in normal\noperation, backups aside.\n\n--\nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company", "msg_date": "Thu, 30 Apr 2020 14:50:34 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: design for parallel backup" }, { "msg_contents": "Hi,\n\nOn 2020-04-30 14:50:34 -0400, Robert Haas wrote:\n> On Mon, Apr 20, 2020 at 4:19 PM Andres Freund <andres@anarazel.de> wrote:\n> > One question I have not really seen answered well:\n> >\n> > Why do we want parallelism here. Or to be more precise: What do we hope\n> > to accelerate by making what part of creating a base backup\n> > parallel. There's several potential bottlenecks, and I think it's\n> > important to know the design priorities to evaluate a potential design.\n>\n> I spent some time today trying to understand just one part of this,\n> which is how long it will take to write the base backup out to disk\n> and whether having multiple independent processes helps. I settled on\n> writing and fsyncing 64GB of data, written in 8kB chunks\n\nWhy 8kb? That's smaller than what we currently do in pg_basebackup,\nafaict, and you're actually going to be bottlenecked by syscall\noverhead at that point (unless you disable / don't have the whole intel\nsecurity mitigation stuff).\n\n\n> , divided into 1, 2, 4, 8, or 16 equal size files, with each file\n> written by a separate process, and an fsync() at the end before\n> process exit. So in this test, there is no question of whether the\n> master can read the data fast enough, nor is there any issue of\n> network bandwidth. It's purely a test of whether it's faster to have\n> one process write a big file or whether it's faster to have multiple\n> processes each write a smaller file.\n\nThat's not necessarily the only question though, right? 
There's also the\napproach of one process writing out multiple files (via buffered, not async\nIO)? E.g. one basebackup connecting to multiple backends, or just\nshuffling multiple files through one copy stream.\n\n\n> I tested this on EDB's cthulhu. It's an older server, but it happens\n> to have 4 mount points available for testing, one with XFS + magnetic\n> disks, one with ext4 + magnetic disks, one with XFS + SSD, and one\n> with ext4 + SSD.\n\nIIRC cthulhu's SSDs are not that fast, compared to NVMe storage (by\nnearly an order of magnitude IIRC). So this might be disadvantaging the\nparallel case more than it should. Also perhaps the ext4 disadvantage is\nsmaller on more modern kernel versions?\n\nIf you can provide me with the test program, I'd happily run it on some\ndecent, but not upper end, NVMe SSDs.\n\n\n> The fastest write performance of any test was the 16-way XFS-SSD test,\n> which wrote at about 2.56 gigabytes per second. The fastest\n> single-file test was on ext4-magnetic, though ext4-ssd and\n> xfs-magnetic were similar, around 0.66 gigabytes per second.\n\nI think you might also be seeing some interaction with write caching on\nthe raid controller here. The file sizes are small enough to fit in\nthere to a significant degree for the single file tests.\n\n\n> Your system must be a LOT faster, because you were seeing\n> pg_basebackup running at, IIUC, ~3 gigabytes per second, and that\n> would have been a second process both writing and doing other\n> things.\n\nRight. On my workstation I have a NVMe SSD that can do ~2.5 GiB/s\nsustained, in my laptop one that peaks to ~3.2GiB/s but then quickly goes\nto ~2GiB/s.\n\nFWIW, I ran a \"benchmark\" just now just using dd, on my laptop, on\nbattery (so take this with a huge grain of salt). With 1 dd writing out\n150GiB in 8kb blocks I get 1.8GiB/s, and with two writing 75GiB each\n~840MiB/s, with three writing 50GiB each 550MiB/s.\n\n\n> Now, I don't know how much this matters. 
To get limited by this stuff,\n> you'd need an incredibly fast network - 10 or maybe 40 or 100 Gigabit\n> Ethernet or something like that - or to be doing a local backup. But I\n> thought that it was interesting and that I should share it, so here\n> you go! I do wonder if the apparently concurrency problems with ext4\n> might matter on systems with high connection counts just in normal\n> operation, backups aside.\n\nI have seen such problems. Some of them have gotten better though. For\nmost (all?) linux filesystems we can easily run into filesystem\nconcurrency issues from within postgres. There's basically a file level\nexclusive lock for buffered writes (only for the copy into the page\ncache though), due to posix requirements about the effects of a write\nbeing atomic.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Thu, 30 Apr 2020 12:52:22 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: design for parallel backup" }, { "msg_contents": "On Thu, Apr 30, 2020 at 3:52 PM Andres Freund <andres@anarazel.de> wrote:\n> Why 8kb? That's smaller than what we currently do in pg_basebackup,\n> afaictl, and you're actually going to be bottlenecked by syscall\n> overhead at that point (unless you disable / don't have the whole intel\n> security mitigation stuff).\n\nI just picked something. Could easily try other things.\n\n> > , divided into 1, 2, 4, 8, or 16 equal size files, with each file\n> > written by a separate process, and an fsync() at the end before\n> > process exit. So in this test, there is no question of whether the\n> > master can read the data fast enough, nor is there any issue of\n> > network bandwidth. It's purely a test of whether it's faster to have\n> > one process write a big file or whether it's faster to have multiple\n> > processes each write a smaller file.\n>\n> That's not necessarily the only question though, right? 
There's also the\n> approach one process writing out multiple files (via buffered, not async\n> IO)? E.g. one basebackup connecting to multiple backends, or just\n> shuffeling multiple files through one copy stream.\n\nSure, but that seems like it can't scale better than this. You have\nthe scaling limitations of the filesystem, plus the possibility that\nthe process is busy doing something else when it could be writing to\nany particular file.\n\n> If you can provide me with the test program, I'd happily run it on some\n> decent, but not upper end, NVMe SSDs.\n\nIt was attached, but I forgot to mention that in the body of the email.\n\n> I think you might also be seeing some interaction with write caching on\n> the raid controller here. The file sizes are small enough to fit in\n> there to a significant degree for the single file tests.\n\nYeah, that's possible.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Thu, 30 Apr 2020 18:06:23 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: design for parallel backup" }, { "msg_contents": "On Thu, Apr 30, 2020 at 6:06 PM Robert Haas <robertmhaas@gmail.com> wrote:\n> On Thu, Apr 30, 2020 at 3:52 PM Andres Freund <andres@anarazel.de> wrote:\n> > Why 8kb? That's smaller than what we currently do in pg_basebackup,\n> > afaictl, and you're actually going to be bottlenecked by syscall\n> > overhead at that point (unless you disable / don't have the whole intel\n> > security mitigation stuff).\n>\n> I just picked something. 
Could easily try other things.\n\nI tried changing the write size to 64kB, keeping the rest the same.\nHere are the results:\n\nfilesystem media 1@64GB 2@32GB 4@16GB 8@8GB 16@4GB\nxfs mag 65 53 64 74 79\next4 mag 96 68 75 303 437\nxfs ssd 75 43 29 33 38\next4 ssd 96 68 63 214 254\nspread spread n/a n/a 43 38 40\n\nAnd here again are the previous results with an 8kB write size:\n\nxfs mag 97 53 60 67 71\next4 mag 94 68 66 335 549\nxfs ssd 97 55 33 27 25\next4 ssd 116 70 66 227 450\nspread spread n/a n/a 48 42 44\n\nGenerally, those numbers look better than the previous numbers, but\nparallelism still looks fairly appealing on the SSD storage - less so\non magnetic disks, at least in this test.\n\nHmm, now I wonder what write size pg_basebackup is actually using.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Fri, 1 May 2020 16:32:15 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: design for parallel backup" }, { "msg_contents": "Hi,\n\nOn 2020-05-01 16:32:15 -0400, Robert Haas wrote:\n> On Thu, Apr 30, 2020 at 6:06 PM Robert Haas <robertmhaas@gmail.com> wrote:\n> > On Thu, Apr 30, 2020 at 3:52 PM Andres Freund <andres@anarazel.de> wrote:\n> > > Why 8kb? That's smaller than what we currently do in pg_basebackup,\n> > > afaictl, and you're actually going to be bottlenecked by syscall\n> > > overhead at that point (unless you disable / don't have the whole intel\n> > > security mitigation stuff).\n> >\n> > I just picked something. 
Could easily try other things.\n> \n> I tried changing the write size to 64kB, keeping the rest the same.\n> Here are the results:\n> \n> filesystem media 1@64GB 2@32GB 4@16GB 8@8GB 16@4GB\n> xfs mag 65 53 64 74 79\n> ext4 mag 96 68 75 303 437\n> xfs ssd 75 43 29 33 38\n> ext4 ssd 96 68 63 214 254\n> spread spread n/a n/a 43 38 40\n> \n> And here again are the previous results with an 8kB write size:\n> \n> xfs mag 97 53 60 67 71\n> ext4 mag 94 68 66 335 549\n> xfs ssd 97 55 33 27 25\n> ext4 ssd 116 70 66 227 450\n> spread spread n/a n/a 48 42 44\n> \n> Generally, those numbers look better than the previous numbers, but\n> parallelism still looks fairly appealing on the SSD storage - less so\n> on magnetic disks, at least in this test.\n\nI spent a fair bit of time analyzing this, and my conclusion is that you\nmight largely be seeing numa effects. Yay.\n\nI don't have an as large numa machine at hand, but here's what I'm\nseeing on my local machine, during a run of writing out 400GiB (this is\na run with noise on the machine, the benchmarks below are without\nthat). 
The machine has 192GiB of ram, evenly distributed to two sockets\n/ numa domains.\n\n\nAt start I see\nnumastat -m|grep -E 'MemFree|MemUsed|Dirty|Writeback|Active\\(file\\)|Inactive\\(file\\)'\"\nMemFree 91908.20 92209.85 184118.05\nMemUsed 3463.05 4553.33 8016.38\nActive(file) 105.46 328.52 433.98\nInactive(file) 68.29 190.14 258.43\nDirty 0.86 0.90 1.76\nWriteback 0.00 0.00 0.00\nWritebackTmp 0.00 0.00 0.00\n\nFor a while there's pretty decent IO throughput (all 10s samples):\nDevice r/s rMB/s rrqm/s %rrqm r_await rareq-sz w/s wMB/s wrqm/s %wrqm w_await wareq-sz d/s dMB/s drqm/s %drqm d_await dareq-sz f/s f_await aqu-sz %util\nnvme1n1 0.00 0.00 0.00 0.00 0.00 0.00 1955.67 2299.32 0.00 0.00 42.48 1203.94 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 82.10 89.33\n\nThen it starts to be slower on a sustained basis:\nDevice r/s rMB/s rrqm/s %rrqm r_await rareq-sz w/s wMB/s wrqm/s %wrqm w_await wareq-sz d/s dMB/s drqm/s %drqm d_await dareq-sz f/s f_await aqu-sz %util\nnvme1n1 0.00 0.00 0.00 0.00 0.00 0.00 1593.33 1987.85 0.00 0.00 42.90 1277.55 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 67.55 76.53\n\nAnd then performance tanks completely:\nDevice r/s rMB/s rrqm/s %rrqm r_await rareq-sz w/s wMB/s wrqm/s %wrqm w_await wareq-sz d/s dMB/s drqm/s %drqm d_await dareq-sz f/s f_await aqu-sz %util\nnvme1n1 0.00 0.00 0.00 0.00 0.00 0.00 646.33 781.85 0.00 0.00 132.68 1238.70 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 85.43 58.63\n\n\nThat amount of degradation confused me for a while, especially because I\ncouldn't reproduce it the more controlled I made the setups. 
In\nparticular I stopped seeing the same magnitude of issues after pinnning\nprocesses to one numa socket (both running and memory).\n\nAfter a few seconds:\nDevice r/s rMB/s rrqm/s %rrqm r_await rareq-sz w/s wMB/s wrqm/s %wrqm w_await wareq-sz d/s dMB/s drqm/s %drqm d_await dareq-sz f/s f_await aqu-sz %util\nnvme1n1 0.00 0.00 0.00 0.00 0.00 0.00 1882.00 2320.07 0.00 0.00 42.50 1262.35 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 79.05 88.07\n\nMemFree 35356.50 80986.46 116342.96\nMemUsed 60014.75 15776.72 75791.47\nActive(file) 179.44 163.28 342.72\nInactive(file) 58293.18 13385.15 71678.33\nDirty 18407.50 882.00 19289.50\nWriteback 235.78 335.43 571.21\nWritebackTmp 0.00 0.00 0.00\n\nA bit later io starts to get slower:\n\nDevice r/s rMB/s rrqm/s %rrqm r_await rareq-sz w/s wMB/s wrqm/s %wrqm w_await wareq-sz d/s dMB/s drqm/s %drqm d_await dareq-sz f/s f_await aqu-sz %util\nnvme1n1 0.00 0.00 0.00 0.00 0.00 0.00 1556.30 1898.70 0.00 0.00 40.92 1249.29 0.00 0.00 0.00 0.00 0.00 0.00 0.20 24.00 62.90 72.01\n\nMemFree 519.56 36086.14 36605.70\nMemUsed 94851.69 60677.04 155528.73\nActive(file) 303.84 212.96 516.80\nInactive(file) 92776.70 58133.28 150909.97\nDirty 10913.20 5374.07 16287.27\nWriteback 812.94 331.96 1144.90\nWritebackTmp 0.00 0.00 0.00\n\n\nAnd then later it gets worse:\nDevice r/s rMB/s rrqm/s %rrqm r_await rareq-sz w/s wMB/s wrqm/s %wrqm w_await wareq-sz d/s dMB/s drqm/s %drqm d_await dareq-sz f/s f_await aqu-sz %util\nnvme1n1 0.00 0.00 0.00 0.00 0.00 0.00 1384.70 1671.25 0.00 0.00 40.87 1235.91 0.00 0.00 0.00 0.00 0.00 0.00 0.20 7.00 55.89 63.45\n\nMemFree 519.54 242.98 762.52\nMemUsed 94851.71 96520.20 191371.91\nActive(file) 175.82 246.03 421.85\nInactive(file) 92820.19 93985.79 186805.98\nDirty 10482.75 4140.72 14623.47\nWriteback 0.00 0.00 0.00\nWritebackTmp 0.00 0.00 0.00\n\nWhen using a 1s iostat instead of a 10s, it's noticable that performance\nswings widely between very slow (<100MB/s) and very high throughput (>\n2500MB/s).\n\nIt's clearly visible 
that performance degrades substantially first when\nall of a numa node's free memory is exhausted, then when the second numa\nnode's is.\n\nLooking at profile I see a lot of cacheline bouncing between the kernel\nthreads that \"reclaim\" pages (i.e. make them available for reuse), the\nkernel threads that write out dirty pages, the kernel threads where the\nIO completes (i.e. where the dirty bit can be flipped / locks get\nreleased), and the writing process.\n\nI think there's a lot from the kernel side that can improve - but it's\nnot too surprising that letting the kernel cache / forcing it to make\ncaching decisions for a large streaming write has substantial costs.\n\n\nI changed Robert's test program to optionally fallocate,\nsync_file_range(WRITE), posix_fadvise(DONTNEED), to avoid a large\nfootprint in the page cache. The performance\ndifferences are quite substantial:\n\ngcc -Wall -ggdb ~/tmp/write_and_fsync.c -o /tmp/write_and_fsync && \\\n rm -ff /srv/dev/bench/test* && echo 3 |sudo tee /proc/sys/vm/drop_caches && \\\n /tmp/write_and_fsync --sync_file_range=0 --fallocate=0 --fadvise=0 --filesize=$((400*1024*1024*1024)) /srv/dev/bench/test1\n\nrunning test with: numprocs=1 filesize=429496729600 blocksize=8192 fallocate=0 sfr=0 fadvise=0\n[/srv/dev/bench/test1][11450] open: 0, fallocate: 0 write: 214, fsync: 6, close: 0, total: 220\n\ncomparing that with --sync_file_range=1 --fallocate=1 --fadvise=1\nrunning test with: numprocs=1 filesize=429496729600 blocksize=8192 fallocate=1 sfr=1 fadvise=1\n[/srv/dev/bench/test1][14098] open: 0, fallocate: 0 write: 161, fsync: 0, close: 0, total: 161\n\nBelow are the results of running the program with a variation of\nparameters (both file and results attached).\n\nI used perf stat in this run to measure the difference in CPU\nusage.\n\nref_cycles are the number of CPU cycles, across all 20 cores / 40\nthreads, CPUs were doing *something*. 
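(As an aside, the cache-managed variant boils down to a write loop along these lines. This is a Linux-specific sketch of the technique, not the modified program itself; the 8MB flush distance is an assumption of mine:)\n\n```c\n/* Write `filesize` bytes while limiting the page cache footprint:\n * reserve the space up front, kick off writeback for each chunk as\n * soon as it is written, and drop the previously written-back chunk\n * from the cache.  Linux-specific sketch; constants are illustrative. */\n#define _GNU_SOURCE             /* for sync_file_range() */\n#include <fcntl.h>\n#include <stdlib.h>\n#include <unistd.h>\n\n#define FLUSH_EVERY (8 * 1024 * 1024)\n\nint\nwrite_with_cache_control(const char *path, long long filesize, size_t blocksize)\n{\n    char       *buf = calloc(1, blocksize);\n    int         fd = open(path, O_WRONLY | O_CREAT | O_TRUNC, 0644);\n    long long   written = 0,\n                flushed = 0;\n\n    if (buf == NULL || fd < 0)\n        return -1;\n    if (posix_fallocate(fd, 0, filesize) != 0)  /* --fallocate */\n        return -1;\n\n    while (written < filesize)\n    {\n        ssize_t     rc = write(fd, buf, blocksize);\n\n        if (rc < 0)\n            return -1;\n        written += rc;\n\n        if (written - flushed >= FLUSH_EVERY)\n        {\n            /* --sync_file_range: start writeback, don't wait for it */\n            sync_file_range(fd, flushed, written - flushed,\n                            SYNC_FILE_RANGE_WRITE);\n            /* --fadvise: evict the chunk before this one, which should\n             * have finished writeback by now (best effort) */\n            if (flushed >= FLUSH_EVERY)\n                posix_fadvise(fd, flushed - FLUSH_EVERY, FLUSH_EVERY,\n                              POSIX_FADV_DONTNEED);\n            flushed = written;\n        }\n    }\n    if (fsync(fd) < 0 || close(fd) < 0)\n        return -1;\n    free(buf);\n    return 0;\n}\n```\n\nSYNC_FILE_RANGE_WRITE only initiates writeback without waiting, so it doesn't throttle the write loop; by the time POSIX_FADV_DONTNEED runs on an older range, those pages are clean and can actually be released.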
It is not affected by CPU\nfrequency scaling, just by the time CPUs were not \"halted\". Whereas\ncycles is affected by frequency scaling.\n\nA high ref_cycles_sec, combined with a decent number of total\ninstructions/cycles is *good*, because it indicates fewer CPUs\nused. Whereas a very high ref_cycles_tot means that more CPUs were\nrunning doing something for the duration of the benchmark.\n\nThe run-to-run variations between the runs without cache control are\npretty large. So this is probably not the end-all-be-all numbers. But I\nthink the trends are pretty clear.\n\ntest time ref_cycles_tot ref_cycles_sec cycles_tot cycles_sec instructions_tot ipc\nnumprocs=1 filesize=429496729600 blocksize=8192 fallocate=0 sfr=1 fadvise=0 248.430736196 1,497,048,950,014 150.653M/sec 1,226,822,167,960 0.123GHz 705,950,461,166 0.54\nnumprocs=1 filesize=429496729600 blocksize=8192 fallocate=0 sfr=0 fadvise=1 310.275952938 1,921,817,571,226 154.849M/sec 1,499,581,687,133 0.121GHz 944,243,167,053 0.59\nnumprocs=1 filesize=429496729600 blocksize=8192 fallocate=0 sfr=1 fadvise=1 164.175492485 913,991,290,231 139.183M/sec 762,359,320,428 0.116GHz 678,451,556,273 0.84\nnumprocs=1 filesize=429496729600 blocksize=8192 fallocate=1 sfr=0 fadvise=0 243.609959554 1,802,385,405,203 184.970M/sec 1,449,560,513,247 0.149GHz 855,426,288,031 0.56\nnumprocs=1 filesize=429496729600 blocksize=8192 fallocate=1 sfr=1 fadvise=0 230.880100449 1,328,417,418,799 143.846M/sec 1,148,924,667,393 0.124GHz 723,158,246,628 0.63\nnumprocs=1 filesize=429496729600 blocksize=8192 fallocate=1 sfr=0 fadvise=1 253.591234992 1,548,485,571,798 152.658M/sec 1,229,926,994,613 0.121GHz 1,117,352,436,324 0.95\nnumprocs=1 filesize=429496729600 blocksize=8192 fallocate=1 sfr=1 fadvise=1 164.488835158 911,974,902,254 138.611M/sec 760,756,011,483 0.116GHz 672,105,046,261 0.84\nnumprocs=2 filesize=214748364800 blocksize=8192 fallocate=0 sfr=0 fadvise=0 164.052510134 1,561,521,537,336 237.972M/sec 1,404,761,167,120 0.214GHz 
715,274,337,015 0.51\nnumprocs=2 filesize=214748364800 blocksize=8192 fallocate=0 sfr=1 fadvise=0 192.151682414 1,526,440,715,456 198.603M/sec 1,037,135,756,007 0.135GHz 802,754,964,096 0.76\nnumprocs=2 filesize=214748364800 blocksize=8192 fallocate=0 sfr=0 fadvise=1 242.648245159 1,782,637,416,163 183.629M/sec 1,463,696,313,881 0.151GHz 1,000,100,694,932 0.69\nnumprocs=2 filesize=214748364800 blocksize=8192 fallocate=0 sfr=1 fadvise=1 188.772193248 1,418,274,870,697 187.803M/sec 923,133,958,500 0.122GHz 799,212,291,243 0.92\nnumprocs=2 filesize=214748364800 blocksize=8192 fallocate=1 sfr=0 fadvise=0 421.580487642 2,756,486,952,728 163.449M/sec 1,387,708,033,752 0.082GHz 990,478,650,874 0.72\nnumprocs=2 filesize=214748364800 blocksize=8192 fallocate=1 sfr=1 fadvise=0 169.854206542 1,333,619,626,854 196.282M/sec 1,036,261,531,134 0.153GHz 666,052,333,591 0.64\nnumprocs=2 filesize=214748364800 blocksize=8192 fallocate=1 sfr=0 fadvise=1 305.078100578 1,970,042,289,192 161.445M/sec 1,505,706,462,812 0.123GHz 954,963,240,648 0.62\nnumprocs=2 filesize=214748364800 blocksize=8192 fallocate=1 sfr=1 fadvise=1 166.295223626 1,290,699,256,763 194.044M/sec 857,873,391,283 0.129GHz 761,338,026,415 0.89\nnumprocs=4 filesize=107374182400 blocksize=8192 fallocate=0 sfr=0 fadvise=0 455.096916715 2,808,715,616,077 154.293M/sec 1,366,660,063,053 0.075GHz 888,512,073,477 0.66\nnumprocs=4 filesize=107374182400 blocksize=8192 fallocate=0 sfr=1 fadvise=0 256.156100686 2,407,922,637,215 235.003M/sec 1,133,311,037,956 0.111GHz 748,666,206,805 0.65\nnumprocs=4 filesize=107374182400 blocksize=8192 fallocate=0 sfr=0 fadvise=1 215.255015340 1,977,578,120,924 229.676M/sec 1,461,504,758,029 0.170GHz 1,005,270,838,642 0.68\nnumprocs=4 filesize=107374182400 blocksize=8192 fallocate=0 sfr=1 fadvise=1 158.262790654 1,720,443,307,097 271.769M/sec 1,004,079,045,479 0.159GHz 826,905,592,751 0.84\nnumprocs=4 filesize=107374182400 blocksize=8192 fallocate=1 sfr=0 fadvise=0 334.932246893 2,366,388,662,460 
176.628M/sec 1,216,049,589,993 0.091GHz 796,698,831,717 0.68\nnumprocs=4 filesize=107374182400 blocksize=8192 fallocate=1 sfr=1 fadvise=0 161.697270285 1,866,036,713,483 288.576M/sec 1,068,181,502,433 0.165GHz 739,559,279,008 0.70\nnumprocs=4 filesize=107374182400 blocksize=8192 fallocate=1 sfr=0 fadvise=1 231.440889430 1,965,389,749,057 212.391M/sec 1,407,927,406,358 0.152GHz 997,199,361,968 0.72\nnumprocs=4 filesize=107374182400 blocksize=8192 fallocate=1 sfr=1 fadvise=1 214.433248700 2,232,198,239,769 260.300M/sec 1,073,334,918,389 0.125GHz 861,540,079,120 0.80\nnumprocs=1 filesize=429496729600 blocksize=131072 fallocate=0 sfr=0 fadvise=0 644.521613661 3,688,449,404,537 143.079M/sec 2,020,128,131,309 0.078GHz 961,486,630,359 0.48\nnumprocs=1 filesize=429496729600 blocksize=131072 fallocate=0 sfr=1 fadvise=0 243.830464632 1,499,608,983,445 153.756M/sec 1,227,468,439,403 0.126GHz 691,534,661,654 0.59\nnumprocs=1 filesize=429496729600 blocksize=131072 fallocate=0 sfr=0 fadvise=1 292.866419420 1,753,376,415,877 149.677M/sec 1,483,169,463,392 0.127GHz 860,035,914,148 0.56\nnumprocs=1 filesize=429496729600 blocksize=131072 fallocate=0 sfr=1 fadvise=1 162.152397194 925,643,754,128 142.719M/sec 743,208,501,601 0.115GHz 554,462,585,110 0.70\nnumprocs=1 filesize=429496729600 blocksize=131072 fallocate=1 sfr=0 fadvise=0 211.369510165 1,558,996,898,599 184.401M/sec 1,359,343,408,200 0.161GHz 766,769,036,524 0.57\nnumprocs=1 filesize=429496729600 blocksize=131072 fallocate=1 sfr=1 fadvise=0 233.315094908 1,427,133,080,540 152.927M/sec 1,166,000,868,597 0.125GHz 743,027,329,074 0.64\nnumprocs=1 filesize=429496729600 blocksize=131072 fallocate=1 sfr=0 fadvise=1 290.698155820 1,732,849,079,701 149.032M/sec 1,441,508,612,326 0.124GHz 835,039,426,282 0.57\nnumprocs=1 filesize=429496729600 blocksize=131072 fallocate=1 sfr=1 fadvise=1 159.945462440 850,162,390,626 132.892M/sec 724,286,281,548 0.113GHz 670,069,573,150 0.90\nnumprocs=2 filesize=214748364800 blocksize=131072 
fallocate=0 sfr=0 fadvise=0 163.244592275 1,524,807,507,173 233.531M/sec 1,398,319,581,978 0.214GHz 689,514,058,243 0.46\nnumprocs=2 filesize=214748364800 blocksize=131072 fallocate=0 sfr=1 fadvise=0 231.795934322 1,731,030,267,153 186.686M/sec 1,124,935,745,020 0.121GHz 736,084,922,669 0.70\nnumprocs=2 filesize=214748364800 blocksize=131072 fallocate=0 sfr=0 fadvise=1 315.564163702 1,958,199,733,216 155.128M/sec 1,405,115,546,716 0.111GHz 1,000,595,890,394 0.73\nnumprocs=2 filesize=214748364800 blocksize=131072 fallocate=0 sfr=1 fadvise=1 210.945487961 1,527,169,148,899 180.990M/sec 906,023,518,692 0.107GHz 700,166,552,207 0.80\nnumprocs=2 filesize=214748364800 blocksize=131072 fallocate=1 sfr=0 fadvise=0 161.759094088 1,468,321,054,671 226.934M/sec 1,221,167,105,510 0.189GHz 735,855,415,612 0.59\nnumprocs=2 filesize=214748364800 blocksize=131072 fallocate=1 sfr=1 fadvise=0 158.578248952 1,354,770,825,277 213.586M/sec 936,436,363,752 0.148GHz 654,823,079,884 0.68\nnumprocs=2 filesize=214748364800 blocksize=131072 fallocate=1 sfr=0 fadvise=1 274.628500801 1,792,841,068,080 163.209M/sec 1,343,398,055,199 0.122GHz 996,073,874,051 0.73\nnumprocs=2 filesize=214748364800 blocksize=131072 fallocate=1 sfr=1 fadvise=1 179.140070123 1,383,595,004,328 193.095M/sec 850,299,722,091 0.119GHz 706,959,617,654 0.83\nnumprocs=4 filesize=107374182400 blocksize=131072 fallocate=0 sfr=0 fadvise=0 445.496787199 2,663,914,572,687 149.495M/sec 1,267,340,496,930 0.071GHz 787,469,552,454 0.62\nnumprocs=4 filesize=107374182400 blocksize=131072 fallocate=0 sfr=1 fadvise=0 261.866083604 2,325,884,820,091 222.043M/sec 1,094,814,208,219 0.105GHz 649,479,233,453 0.57\nnumprocs=4 filesize=107374182400 blocksize=131072 fallocate=0 sfr=0 fadvise=1 172.963505544 1,717,387,683,260 248.228M/sec 1,356,381,335,831 0.196GHz 822,256,638,370 0.58\nnumprocs=4 filesize=107374182400 blocksize=131072 fallocate=0 sfr=1 fadvise=1 157.934678897 1,650,503,807,778 261.266M/sec 970,705,561,971 0.154GHz 
637,953,927,131 0.66\nnumprocs=4 filesize=107374182400 blocksize=131072 fallocate=1 sfr=0 fadvise=0 225.623143601 1,804,402,820,599 199.938M/sec 1,086,394,788,362 0.120GHz 656,392,112,807 0.62\nnumprocs=4 filesize=107374182400 blocksize=131072 fallocate=1 sfr=1 fadvise=0 157.930900998 1,797,506,082,342 284.548M/sec 1,001,509,813,741 0.159GHz 644,107,150,289 0.66\nnumprocs=4 filesize=107374182400 blocksize=131072 fallocate=1 sfr=0 fadvise=1 165.772265335 1,805,895,001,689 272.353M/sec 1,514,173,918,970 0.228GHz 823,435,044,810 0.54\nnumprocs=4 filesize=107374182400 blocksize=131072 fallocate=1 sfr=1 fadvise=1 187.664764448 1,964,118,348,429 261.660M/sec 978,060,510,880 0.130GHz 668,316,194,988 0.67\n\n\nGreetings,\n\nAndres Freund", "msg_date": "Sat, 2 May 2020 19:36:43 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: design for parallel backup" }, { "msg_contents": "On Sat, May 2, 2020 at 10:36 PM Andres Freund <andres@anarazel.de> wrote:\n> I changed Robert's test program to optionall fallocate,\n> sync_file_range(WRITE), posix_fadvise(DONTNEED), to avoid a large\n> footprint in the page cache. 
The performance\n> differences are quite substantial:\n>\n> gcc -Wall -ggdb ~/tmp/write_and_fsync.c -o /tmp/write_and_fsync && \\\n> rm -ff /srv/dev/bench/test* && echo 3 |sudo tee /proc/sys/vm/drop_caches && \\\n> /tmp/write_and_fsync --sync_file_range=0 --fallocate=0 --fadvise=0 --filesize=$((400*1024*1024*1024)) /srv/dev/bench/test1\n>\n> running test with: numprocs=1 filesize=429496729600 blocksize=8192 fallocate=0 sfr=0 fadvise=0\n> [/srv/dev/bench/test1][11450] open: 0, fallocate: 0 write: 214, fsync: 6, close: 0, total: 220\n>\n> comparing that with --sync_file_range=1 --fallocate=1 --fadvise=1\n> running test with: numprocs=1 filesize=429496729600 blocksize=8192 fallocate=1 sfr=1 fadvise=1\n> [/srv/dev/bench/test1][14098] open: 0, fallocate: 0 write: 161, fsync: 0, close: 0, total: 161\n\nAh, nice.\n\n> The run-to-run variations between the runs without cache control are\n> pretty large. So this is probably not the end-all-be-all numbers. But I\n> think the trends are pretty clear.\n\nCould you be explicit about what you think those clear trends are?\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Sun, 3 May 2020 09:12:59 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: design for parallel backup" }, { "msg_contents": "Hi,\n\nOn 2020-05-03 09:12:59 -0400, Robert Haas wrote:\n> On Sat, May 2, 2020 at 10:36 PM Andres Freund <andres@anarazel.de> wrote:\n> > I changed Robert's test program to optionall fallocate,\n> > sync_file_range(WRITE), posix_fadvise(DONTNEED), to avoid a large\n> > footprint in the page cache. 
The performance\n> > differences are quite substantial:\n> >\n> > gcc -Wall -ggdb ~/tmp/write_and_fsync.c -o /tmp/write_and_fsync && \\\n> > rm -ff /srv/dev/bench/test* && echo 3 |sudo tee /proc/sys/vm/drop_caches && \\\n> > /tmp/write_and_fsync --sync_file_range=0 --fallocate=0 --fadvise=0 --filesize=$((400*1024*1024*1024)) /srv/dev/bench/test1\n> >\n> > running test with: numprocs=1 filesize=429496729600 blocksize=8192 fallocate=0 sfr=0 fadvise=0\n> > [/srv/dev/bench/test1][11450] open: 0, fallocate: 0 write: 214, fsync: 6, close: 0, total: 220\n> >\n> > comparing that with --sync_file_range=1 --fallocate=1 --fadvise=1\n> > running test with: numprocs=1 filesize=429496729600 blocksize=8192 fallocate=1 sfr=1 fadvise=1\n> > [/srv/dev/bench/test1][14098] open: 0, fallocate: 0 write: 161, fsync: 0, close: 0, total: 161\n>\n> Ah, nice.\n\nBtw, I forgot to include the result for 0 / 0 / 0 in the results\n(off-by-one error in a script :))\n\nnumprocs=1 filesize=429496729600 blocksize=8192 fallocate=0 sfr=0 fadvise=0 220.210155081\t 1,569,524,602,961\t178.188M/sec\t1,363,686,761,705 0.155GHz 833,345,334,408 0.68\n\n> > The run-to-run variations between the runs without cache control are\n> > pretty large. So this is probably not the end-all-be-all numbers. But I\n> > think the trends are pretty clear.\n>\n> Could you be explicit about what you think those clear trends are?\n\nLargely that concurrency can help a bit, but also hurt\ntremendously. 
Below is some more detailed analysis, it'll be a bit\nlong...\n\nTaking the no concurrency / cache management as a baseline:\n\n> test time ref_cycles_tot ref_cycles_sec cycles_tot cycles_sec instructions_tot ipc\n> numprocs=1 filesize=429496729600 blocksize=8192 fallocate=0 sfr=0 fadvise=0 220.210155081\t 1,569,524,602,961\t178.188M/sec\t1,363,686,761,705 0.155GHz 833,345,334,408 0.68\n\nand comparing cache management with using some concurrency:\n\n> test time ref_cycles_tot ref_cycles_sec cycles_tot cycles_sec instructions_tot ipc\n> numprocs=1 filesize=429496729600 blocksize=8192 fallocate=0 sfr=1 fadvise=1 164.175492485 913,991,290,231 139.183M/sec 762,359,320,428 0.116GHz 678,451,556,273 0.84\n> numprocs=2 filesize=214748364800 blocksize=8192 fallocate=0 sfr=0 fadvise=0 164.052510134 1,561,521,537,336 237.972M/sec 1,404,761,167,120 0.214GHz 715,274,337,015 0.51\n\nwe can see very similar timing. Which makes sense, because that's\nroughly the device's max speed. But then going to higher concurrency,\nthere's clearly regressions:\n\n> test time ref_cycles_tot ref_cycles_sec cycles_tot cycles_sec instructions_tot ipc\n> numprocs=4 filesize=107374182400 blocksize=8192 fallocate=0 sfr=0 fadvise=0 455.096916715 2,808,715,616,077 154.293M/sec 1,366,660,063,053 0.075GHz 888,512,073,477 0.66\n\nAnd I think it is instructive to look at the\nref_cycles_tot/cycles_tot/instructions_tot vs\nref_cycles_sec/cycles_sec/ipc. The units are confusing because they are\nacross all cores and most are idle. But it's pretty obvious that\nnumprocs=1 sfr=1 fadvise=1 has cores running for a lot shorter time\n(reference cycles basically count the time cores were running on a\nabsolute time scale). 
Compared to numprocs=2 sfr=0 fadvise=0, which has\nthe same resulting performance, it's clear that cores were busier, but\nless efficient (lower ipc).\n\nWith cache management there's very little benefit, and some risk (1->2\nregression) in this workload with increasing concurrency:\n\n> test time ref_cycles_tot ref_cycles_sec cycles_tot cycles_sec instructions_tot ipc\n> numprocs=1 filesize=429496729600 blocksize=8192 fallocate=0 sfr=1 fadvise=1 164.175492485 913,991,290,231 139.183M/sec 762,359,320,428 0.116GHz 678,451,556,273 0.84\n> numprocs=2 filesize=214748364800 blocksize=8192 fallocate=0 sfr=1 fadvise=1 188.772193248 1,418,274,870,697 187.803M/sec 923,133,958,500 0.122GHz 799,212,291,243 0.92\n> numprocs=4 filesize=107374182400 blocksize=8192 fallocate=0 sfr=1 fadvise=1 158.262790654 1,720,443,307,097 271.769M/sec 1,004,079,045,479 0.159GHz 826,905,592,751 0.84\n\n\nAnd there's good benefit, but tremendous risk, of concurrency in the no\ncache control case:\n\n> test time ref_cycles_tot ref_cycles_sec cycles_tot cycles_sec instructions_tot ipc\n> numprocs=1 filesize=429496729600 blocksize=8192 fallocate=0 sfr=0 fadvise=0 220.210155081\t 1,569,524,602,961\t178.188M/sec\t1,363,686,761,705 0.155GHz 833,345,334,408 0.68\n> numprocs=2 filesize=214748364800 blocksize=8192 fallocate=0 sfr=0 fadvise=0 164.052510134 1,561,521,537,336 237.972M/sec 1,404,761,167,120 0.214GHz 715,274,337,015 0.51\n> numprocs=4 filesize=107374182400 blocksize=8192 fallocate=0 sfr=0 fadvise=0 455.096916715 2,808,715,616,077 154.293M/sec 1,366,660,063,053 0.075GHz 888,512,073,477 0.66\n\n\nsync file range without fadvise isn't a benefit at low concurrency, but prevents bad regressions at high concurrency:\n> test time ref_cycles_tot ref_cycles_sec cycles_tot cycles_sec instructions_tot ipc\n> numprocs=1 filesize=429496729600 blocksize=8192 fallocate=0 sfr=0 fadvise=0 220.210155081\t 1,569,524,602,961\t178.188M/sec\t1,363,686,761,705 0.155GHz 833,345,334,408 0.68\n> numprocs=1 
filesize=429496729600 blocksize=8192 fallocate=0 sfr=1 fadvise=0 248.430736196 1,497,048,950,014\t150.653M/sec\t1,226,822,167,960 0.123GHz 705,950,461,166\t0.54\n\n> numprocs=2 filesize=214748364800 blocksize=8192 fallocate=0 sfr=0 fadvise=0 164.052510134 1,561,521,537,336 237.972M/sec 1,404,761,167,120 0.214GHz 715,274,337,015 0.51\n> numprocs=2 filesize=214748364800 blocksize=8192 fallocate=0 sfr=1 fadvise=0 192.151682414 1,526,440,715,456 198.603M/sec 1,037,135,756,007 0.135GHz 802,754,964,096 0.76\n\n> numprocs=4 filesize=107374182400 blocksize=8192 fallocate=0 sfr=0 fadvise=0 455.096916715 2,808,715,616,077 154.293M/sec 1,366,660,063,053 0.075GHz 888,512,073,477 0.66\n> numprocs=4 filesize=107374182400 blocksize=8192 fallocate=0 sfr=1 fadvise=0 256.156100686 2,407,922,637,215 235.003M/sec 1,133,311,037,956 0.111GHz 748,666,206,805 0.65\n\nfadvise alone is similar:\n> test time ref_cycles_tot ref_cycles_sec cycles_tot cycles_sec instructions_tot ipc\n> numprocs=1 filesize=429496729600 blocksize=8192 fallocate=0 sfr=0 fadvise=0 220.210155081\t 1,569,524,602,961\t178.188M/sec\t1,363,686,761,705 0.155GHz 833,345,334,408 0.68\n> numprocs=1 filesize=429496729600 blocksize=8192 fallocate=0 sfr=0 fadvise=1 310.275952938 1,921,817,571,226 154.849M/sec 1,499,581,687,133 0.121GHz 944,243,167,053 0.59\n\n> numprocs=2 filesize=214748364800 blocksize=8192 fallocate=0 sfr=0 fadvise=0 164.052510134 1,561,521,537,336 237.972M/sec 1,404,761,167,120 0.214GHz 715,274,337,015 0.51\n> numprocs=2 filesize=214748364800 blocksize=8192 fallocate=0 sfr=0 fadvise=1 242.648245159 1,782,637,416,163 183.629M/sec 1,463,696,313,881 0.151GHz 1,000,100,694,932 0.69\n\n> numprocs=4 filesize=107374182400 blocksize=8192 fallocate=0 sfr=0 fadvise=0 455.096916715 2,808,715,616,077 154.293M/sec 1,366,660,063,053 0.075GHz 888,512,073,477 0.66\n> numprocs=4 filesize=107374182400 blocksize=8192 fallocate=0 sfr=0 fadvise=1 215.255015340 1,977,578,120,924 229.676M/sec 1,461,504,758,029 0.170GHz 
1,005,270,838,642 0.68\n\n\nThere does not appear to be a huge benefit in fallocate in this\nworkload, the OS's delayed allocation works well. Compare:\n\nnumprocs=1\n> test time ref_cycles_tot ref_cycles_sec cycles_tot cycles_sec instructions_tot ipc\n> numprocs=1 filesize=429496729600 blocksize=8192 fallocate=0 sfr=0 fadvise=0 220.210155081\t 1,569,524,602,961\t178.188M/sec\t1,363,686,761,705 0.155GHz 833,345,334,408 0.68\n> numprocs=1 filesize=429496729600 blocksize=8192 fallocate=1 sfr=0 fadvise=0 243.609959554 1,802,385,405,203 184.970M/sec 1,449,560,513,247 0.149GHz 855,426,288,031 0.56\n\n> numprocs=1 filesize=429496729600 blocksize=8192 fallocate=0 sfr=1 fadvise=0 248.430736196 1,497,048,950,014 150.653M/sec 1,226,822,167,960 0.123GHz 705,950,461,166 0.54\n> numprocs=1 filesize=429496729600 blocksize=8192 fallocate=1 sfr=1 fadvise=0 230.880100449 1,328,417,418,799 143.846M/sec 1,148,924,667,393 0.124GHz 723,158,246,628 0.63\n\n> numprocs=1 filesize=429496729600 blocksize=8192 fallocate=0 sfr=0 fadvise=1 310.275952938 1,921,817,571,226 154.849M/sec 1,499,581,687,133 0.121GHz 944,243,167,053 0.59\n> numprocs=1 filesize=429496729600 blocksize=8192 fallocate=1 sfr=0 fadvise=1 253.591234992 1,548,485,571,798 152.658M/sec 1,229,926,994,613 0.121GHz 1,117,352,436,324 0.95\n\n> numprocs=1 filesize=429496729600 blocksize=8192 fallocate=0 sfr=1 fadvise=1 164.175492485 913,991,290,231 139.183M/sec 762,359,320,428 0.116GHz 678,451,556,273 0.84\n> numprocs=1 filesize=429496729600 blocksize=8192 fallocate=1 sfr=1 fadvise=1 164.488835158 911,974,902,254 138.611M/sec 760,756,011,483 0.116GHz 672,105,046,261 0.84\n\nnumprocs=2\n> test time ref_cycles_tot ref_cycles_sec cycles_tot cycles_sec instructions_tot ipc\n> numprocs=2 filesize=214748364800 blocksize=8192 fallocate=0 sfr=0 fadvise=0 164.052510134 1,561,521,537,336 237.972M/sec 1,404,761,167,120 0.214GHz 715,274,337,015 0.51\n> numprocs=2 filesize=214748364800 blocksize=8192 fallocate=1 sfr=0 fadvise=0 421.580487642 
2,756,486,952,728 163.449M/sec 1,387,708,033,752 0.082GHz 990,478,650,874 0.72\n\n> numprocs=2 filesize=214748364800 blocksize=8192 fallocate=0 sfr=1 fadvise=0 192.151682414 1,526,440,715,456 198.603M/sec 1,037,135,756,007 0.135GHz 802,754,964,096 0.76\n> numprocs=2 filesize=214748364800 blocksize=8192 fallocate=1 sfr=1 fadvise=0 169.854206542 1,333,619,626,854 196.282M/sec 1,036,261,531,134 0.153GHz 666,052,333,591 0.64\n\n> numprocs=2 filesize=214748364800 blocksize=8192 fallocate=0 sfr=0 fadvise=1 242.648245159 1,782,637,416,163 183.629M/sec 1,463,696,313,881 0.151GHz 1,000,100,694,932 0.69\n> numprocs=2 filesize=214748364800 blocksize=8192 fallocate=1 sfr=0 fadvise=1 305.078100578 1,970,042,289,192 161.445M/sec 1,505,706,462,812 0.123GHz 954,963,240,648 0.62\n\n> numprocs=2 filesize=214748364800 blocksize=8192 fallocate=0 sfr=1 fadvise=1 188.772193248 1,418,274,870,697 187.803M/sec 923,133,958,500 0.122GHz 799,212,291,243 0.92\n> numprocs=2 filesize=214748364800 blocksize=8192 fallocate=1 sfr=1 fadvise=1 166.295223626 1,290,699,256,763 194.044M/sec 857,873,391,283 0.129GHz 761,338,026,415 0.89\n\nnumprocs=4\n> test time ref_cycles_tot ref_cycles_sec cycles_tot cycles_sec instructions_tot ipc\n> numprocs=4 filesize=107374182400 blocksize=8192 fallocate=0 sfr=0 fadvise=0 455.096916715 2,808,715,616,077 154.293M/sec 1,366,660,063,053 0.075GHz 888,512,073,477 0.66\n> numprocs=4 filesize=107374182400 blocksize=8192 fallocate=1 sfr=0 fadvise=0 334.932246893 2,366,388,662,460 176.628M/sec 1,216,049,589,993 0.091GHz 796,698,831,717 0.68\n\n> numprocs=4 filesize=107374182400 blocksize=8192 fallocate=0 sfr=1 fadvise=0 256.156100686 2,407,922,637,215 235.003M/sec 1,133,311,037,956 0.111GHz 748,666,206,805 0.65\n> numprocs=4 filesize=107374182400 blocksize=8192 fallocate=1 sfr=1 fadvise=0 161.697270285 1,866,036,713,483 288.576M/sec 1,068,181,502,433 0.165GHz 739,559,279,008 0.70\n\n> numprocs=4 filesize=107374182400 blocksize=8192 fallocate=0 sfr=0 fadvise=1 215.255015340 
1,977,578,120,924 229.676M/sec 1,461,504,758,029 0.170GHz 1,005,270,838,642 0.68\n> numprocs=4 filesize=107374182400 blocksize=8192 fallocate=1 sfr=0 fadvise=1 231.440889430 1,965,389,749,057 212.391M/sec 1,407,927,406,358 0.152GHz 997,199,361,968 0.72\n\n> numprocs=4 filesize=107374182400 blocksize=8192 fallocate=1 sfr=1 fadvise=1 214.433248700 2,232,198,239,769 260.300M/sec 1,073,334,918,389 0.125GHz 861,540,079,120 0.80\n\nI would say that it seems to help concurrent cases without cache\ncontrol, but not particularly reliably so. At higher concurrency it\nseems to hurt with cache control, not sure I understand why.\n\n\nI was at first confused why 128kb write sizes hurt (128kb is probably on\nthe higher end of useful, but I wanted to see a more extreme\ndifference):\n\n> test time ref_cycles_tot ref_cycles_sec cycles_tot cycles_sec instructions_tot ipc\n> numprocs=1 filesize=429496729600 blocksize=8192 fallocate=0 sfr=0 fadvise=0 220.210155081 1,569,524,602,961 178.188M/sec 1,363,686,761,705 0.155GHz 833,345,334,408 0.68\n> numprocs=1 filesize=429496729600 blocksize=131072 fallocate=0 sfr=0 fadvise=0 644.521613661 3,688,449,404,537 143.079M/sec 2,020,128,131,309 0.078GHz 961,486,630,359 0.48\n\n> numprocs=1 filesize=429496729600 blocksize=8192 fallocate=0 sfr=1 fadvise=0 248.430736196 1,497,048,950,014 150.653M/sec 1,226,822,167,960 0.123GHz 705,950,461,166 0.54\n> numprocs=1 filesize=429496729600 blocksize=131072 fallocate=0 sfr=1 fadvise=0 243.830464632 1,499,608,983,445 153.756M/sec 1,227,468,439,403 0.126GHz 691,534,661,654 0.59\n\n> numprocs=1 filesize=429496729600 blocksize=8192 fallocate=0 sfr=0 fadvise=1 310.275952938 1,921,817,571,226 154.849M/sec 1,499,581,687,133 0.121GHz 944,243,167,053 0.59\n> numprocs=1 filesize=429496729600 blocksize=131072 fallocate=0 sfr=0 fadvise=1 
292.866419420 1,753,376,415,877 149.677M/sec 1,483,169,463,392 0.127GHz 860,035,914,148 0.56\n\n> numprocs=1 filesize=429496729600 blocksize=8192 fallocate=0 sfr=1 fadvise=1 164.175492485 913,991,290,231 139.183M/sec 762,359,320,428 0.116GHz 678,451,556,273 0.84\n> numprocs=1 filesize=429496729600 blocksize=131072 fallocate=0 sfr=1 fadvise=1 162.152397194 925,643,754,128 142.719M/sec 743,208,501,601 0.115GHz 554,462,585,110 0.70\n\n> numprocs=1 filesize=429496729600 blocksize=8192 fallocate=1 sfr=0 fadvise=0 243.609959554 1,802,385,405,203 184.970M/sec 1,449,560,513,247 0.149GHz 855,426,288,031 0.56\n> numprocs=1 filesize=429496729600 blocksize=131072 fallocate=1 sfr=0 fadvise=0 211.369510165 1,558,996,898,599 184.401M/sec 1,359,343,408,200 0.161GHz 766,769,036,524 0.57\n\n> numprocs=1 filesize=429496729600 blocksize=8192 fallocate=1 sfr=1 fadvise=0 230.880100449 1,328,417,418,799 143.846M/sec 1,148,924,667,393 0.124GHz 723,158,246,628 0.63\n> numprocs=1 filesize=429496729600 blocksize=131072 fallocate=1 sfr=1 fadvise=0 233.315094908 1,427,133,080,540 152.927M/sec 1,166,000,868,597 0.125GHz 743,027,329,074 0.64\n\n> numprocs=1 filesize=429496729600 blocksize=8192 fallocate=1 sfr=0 fadvise=1 253.591234992 1,548,485,571,798 152.658M/sec 1,229,926,994,613 0.121GHz 1,117,352,436,324 0.95\n> numprocs=1 filesize=429496729600 blocksize=131072 fallocate=1 sfr=0 fadvise=1 290.698155820 1,732,849,079,701 149.032M/sec 1,441,508,612,326 0.124GHz 835,039,426,282 0.57\n\n> numprocs=1 filesize=429496729600 blocksize=8192 fallocate=1 sfr=1 fadvise=1 164.488835158 911,974,902,254 138.611M/sec 760,756,011,483 0.116GHz 672,105,046,261 0.84\n> numprocs=1 filesize=429496729600 blocksize=131072 fallocate=1 sfr=1 fadvise=1 159.945462440 850,162,390,626 132.892M/sec 724,286,281,548 0.113GHz 670,069,573,150 0.90\n\n> numprocs=2 filesize=214748364800 blocksize=8192 fallocate=0 sfr=0 fadvise=0 164.052510134 1,561,521,537,336 237.972M/sec 1,404,761,167,120 0.214GHz 715,274,337,015 0.51\n> 
numprocs=2 filesize=214748364800 blocksize=131072 fallocate=0 sfr=0 fadvise=0 163.244592275 1,524,807,507,173 233.531M/sec 1,398,319,581,978 0.214GHz 689,514,058,243 0.46\n\n> numprocs=2 filesize=214748364800 blocksize=8192 fallocate=0 sfr=1 fadvise=0 192.151682414 1,526,440,715,456 198.603M/sec 1,037,135,756,007 0.135GHz 802,754,964,096 0.76\n> numprocs=2 filesize=214748364800 blocksize=131072 fallocate=0 sfr=1 fadvise=0 231.795934322 1,731,030,267,153 186.686M/sec 1,124,935,745,020 0.121GHz 736,084,922,669 0.70\n\n> numprocs=2 filesize=214748364800 blocksize=8192 fallocate=0 sfr=0 fadvise=1 242.648245159 1,782,637,416,163 183.629M/sec 1,463,696,313,881 0.151GHz 1,000,100,694,932 0.69\n> numprocs=2 filesize=214748364800 blocksize=131072 fallocate=0 sfr=0 fadvise=1 315.564163702 1,958,199,733,216 155.128M/sec 1,405,115,546,716 0.111GHz 1,000,595,890,394 0.73\n\n> numprocs=2 filesize=214748364800 blocksize=8192 fallocate=0 sfr=1 fadvise=1 188.772193248 1,418,274,870,697 187.803M/sec 923,133,958,500 0.122GHz 799,212,291,243 0.92\n> numprocs=2 filesize=214748364800 blocksize=131072 fallocate=0 sfr=1 fadvise=1 210.945487961 1,527,169,148,899 180.990M/sec 906,023,518,692 0.107GHz 700,166,552,207 0.80\n\n> numprocs=2 filesize=214748364800 blocksize=8192 fallocate=1 sfr=0 fadvise=0 421.580487642 2,756,486,952,728 163.449M/sec 1,387,708,033,752 0.082GHz 990,478,650,874 0.72\n> numprocs=2 filesize=214748364800 blocksize=131072 fallocate=1 sfr=0 fadvise=0 161.759094088 1,468,321,054,671 226.934M/sec 1,221,167,105,510 0.189GHz 735,855,415,612 0.59\n\n> numprocs=2 filesize=214748364800 blocksize=8192 fallocate=1 sfr=1 fadvise=0 169.854206542 1,333,619,626,854 196.282M/sec 1,036,261,531,134 0.153GHz 666,052,333,591 0.64\n> numprocs=2 filesize=214748364800 blocksize=131072 fallocate=1 sfr=1 fadvise=0 158.578248952 1,354,770,825,277 213.586M/sec 936,436,363,752 0.148GHz 654,823,079,884 0.68\n\n> numprocs=2 filesize=214748364800 blocksize=8192 fallocate=1 sfr=0 fadvise=1 
305.078100578 1,970,042,289,192 161.445M/sec 1,505,706,462,812 0.123GHz 954,963,240,648 0.62\n> numprocs=2 filesize=214748364800 blocksize=131072 fallocate=1 sfr=0 fadvise=1 274.628500801 1,792,841,068,080 163.209M/sec 1,343,398,055,199 0.122GHz 996,073,874,051 0.73\n\n> numprocs=2 filesize=214748364800 blocksize=8192 fallocate=1 sfr=1 fadvise=1 166.295223626 1,290,699,256,763 194.044M/sec 857,873,391,283 0.129GHz 761,338,026,415 0.89\n> numprocs=2 filesize=214748364800 blocksize=131072 fallocate=1 sfr=1 fadvise=1 179.140070123 1,383,595,004,328 193.095M/sec 850,299,722,091 0.119GHz 706,959,617,654 0.83\n\n> numprocs=4 filesize=107374182400 blocksize=8192 fallocate=0 sfr=0 fadvise=0 455.096916715 2,808,715,616,077 154.293M/sec 1,366,660,063,053 0.075GHz 888,512,073,477 0.66\n> numprocs=4 filesize=107374182400 blocksize=131072 fallocate=0 sfr=0 fadvise=0 445.496787199 2,663,914,572,687 149.495M/sec 1,267,340,496,930 0.071GHz 787,469,552,454 0.62\n\n> numprocs=4 filesize=107374182400 blocksize=8192 fallocate=0 sfr=1 fadvise=0 256.156100686 2,407,922,637,215 235.003M/sec 1,133,311,037,956 0.111GHz 748,666,206,805 0.65\n> numprocs=4 filesize=107374182400 blocksize=131072 fallocate=0 sfr=1 fadvise=0 261.866083604 2,325,884,820,091 222.043M/sec 1,094,814,208,219 0.105GHz 649,479,233,453 0.57\n\n> numprocs=4 filesize=107374182400 blocksize=8192 fallocate=0 sfr=0 fadvise=1 215.255015340 1,977,578,120,924 229.676M/sec 1,461,504,758,029 0.170GHz 1,005,270,838,642 0.68\n> numprocs=4 filesize=107374182400 blocksize=131072 fallocate=0 sfr=0 fadvise=1 172.963505544 1,717,387,683,260 248.228M/sec 1,356,381,335,831 0.196GHz 822,256,638,370 0.58\n\n> numprocs=4 filesize=107374182400 blocksize=8192 fallocate=0 sfr=1 fadvise=1 158.262790654 1,720,443,307,097 271.769M/sec 1,004,079,045,479 0.159GHz 826,905,592,751 0.84\n> numprocs=4 filesize=107374182400 blocksize=131072 fallocate=0 sfr=1 fadvise=1 157.934678897 1,650,503,807,778 261.266M/sec 970,705,561,971 0.154GHz 637,953,927,131 
0.66\n\n> numprocs=4 filesize=107374182400 blocksize=8192 fallocate=1 sfr=0 fadvise=0 334.932246893 2,366,388,662,460 176.628M/sec 1,216,049,589,993 0.091GHz 796,698,831,717 0.68\n> numprocs=4 filesize=107374182400 blocksize=131072 fallocate=1 sfr=0 fadvise=0 225.623143601 1,804,402,820,599 199.938M/sec 1,086,394,788,362 0.120GHz 656,392,112,807 0.62\n\n> numprocs=4 filesize=107374182400 blocksize=8192 fallocate=1 sfr=1 fadvise=0 161.697270285 1,866,036,713,483 288.576M/sec 1,068,181,502,433 0.165GHz 739,559,279,008 0.70\n> numprocs=4 filesize=107374182400 blocksize=131072 fallocate=1 sfr=1 fadvise=0 157.930900998 1,797,506,082,342 284.548M/sec 1,001,509,813,741 0.159GHz 644,107,150,289 0.66\n\n> numprocs=4 filesize=107374182400 blocksize=8192 fallocate=1 sfr=0 fadvise=1 231.440889430 1,965,389,749,057 212.391M/sec 1,407,927,406,358 0.152GHz 997,199,361,968 0.72\n> numprocs=4 filesize=107374182400 blocksize=131072 fallocate=1 sfr=0 fadvise=1 165.772265335 1,805,895,001,689 272.353M/sec 1,514,173,918,970 0.228GHz 823,435,044,810 0.54\n\n> numprocs=4 filesize=107374182400 blocksize=8192 fallocate=1 sfr=1 fadvise=1 214.433248700 2,232,198,239,769 260.300M/sec 1,073,334,918,389 0.125GHz 861,540,079,120 0.80\n> numprocs=4 filesize=107374182400 blocksize=131072 fallocate=1 sfr=1 fadvise=1 187.664764448 1,964,118,348,429 261.660M/sec 978,060,510,880 0.130GHz 668,316,194,988 0.67\n\nIt's pretty clear that the larger write block size can hurt quite\nbadly. I was somewhat confused by this at first, but after thinking\nabout it for a while longer it actually makes sense: For the OS to\nfinish an 8k write it needs to find two free pagecache pages. For an\n128k write it needs to find 32. 
Which means that it's much more likely\nthat kernel threads and the writes are going to fight over locks /\ncachelines: In the 8k page it's quite likely that often the kernel\nthreads will do so while the memcpy() from userland is happening, but\nthat's less the case with 32 pages that need to be acquired before the\nmemcpy() can happen.\n\nWith cache control that problem doesn't exist, which is why the larger\nblock size is beneficial:\n\n> test time ref_cycles_tot ref_cycles_sec cycles_tot cycles_sec instructions_tot ipc\n\n> numprocs=1 filesize=429496729600 blocksize=8192 fallocate=0 sfr=1 fadvise=1 164.175492485 913,991,290,231 139.183M/sec 762,359,320,428 0.116GHz 678,451,556,273 0.84\n> numprocs=1 filesize=429496729600 blocksize=131072 fallocate=0 sfr=1 fadvise=1 162.152397194 925,643,754,128 142.719M/sec 743,208,501,601 0.115GHz 554,462,585,110 0.70\n\n> numprocs=1 filesize=429496729600 blocksize=8192 fallocate=1 sfr=1 fadvise=1 164.488835158 911,974,902,254 138.611M/sec 760,756,011,483 0.116GHz 672,105,046,261 0.84\n> numprocs=1 filesize=429496729600 blocksize=131072 fallocate=1 sfr=1 fadvise=1 159.945462440 850,162,390,626 132.892M/sec 724,286,281,548 0.113GHz 670,069,573,150 0.90\n\n> numprocs=2 filesize=214748364800 blocksize=8192 fallocate=0 sfr=1 fadvise=1 188.772193248 1,418,274,870,697 187.803M/sec 923,133,958,500 0.122GHz 799,212,291,243 0.92\n> numprocs=2 filesize=214748364800 blocksize=131072 fallocate=0 sfr=1 fadvise=1 210.945487961 1,527,169,148,899 180.990M/sec 906,023,518,692 0.107GHz 700,166,552,207 0.80\n\n> numprocs=2 filesize=214748364800 blocksize=8192 fallocate=1 sfr=1 fadvise=1 166.295223626 1,290,699,256,763 194.044M/sec 857,873,391,283 0.129GHz 761,338,026,415 0.89\n> numprocs=2 filesize=214748364800 blocksize=131072 fallocate=1 sfr=1 fadvise=1 179.140070123 1,383,595,004,328 193.095M/sec 850,299,722,091 0.119GHz 706,959,617,654 0.83\n\n> numprocs=4 filesize=107374182400 blocksize=8192 fallocate=0 sfr=1 fadvise=1 158.262790654 
1,720,443,307,097 271.769M/sec 1,004,079,045,479 0.159GHz 826,905,592,751 0.84\n> numprocs=4 filesize=107374182400 blocksize=131072 fallocate=0 sfr=1 fadvise=1 157.934678897 1,650,503,807,778 261.266M/sec 970,705,561,971 0.154GHz 637,953,927,131 0.66\n\n> numprocs=4 filesize=107374182400 blocksize=8192 fallocate=1 sfr=1 fadvise=1 214.433248700 2,232,198,239,769 260.300M/sec 1,073,334,918,389 0.125GHz 861,540,079,120 0.80\n> numprocs=4 filesize=107374182400 blocksize=131072 fallocate=1 sfr=1 fadvise=1 187.664764448 1,964,118,348,429 261.660M/sec 978,060,510,880 0.130GHz 668,316,194,988 0.67\n\nNote how especially in the first few cases the total number of\ninstructions required is improved (although due to the way I did the\nperf stat the sampling error is pretty large).\n\n\nI haven't run that test yet, but after looking at all this I would bet\nthat reducing the block size to 4kb (i.e. a single os/hw page) would\nhelp the no cache control case significantly, in particular in the\nconcurrent case.\n\nAnd conversely, I'd expect that the CPU efficiency will be improved by\nlarger block size for the cache control case for just about any\nrealistic block size.\n\n\nI'd love to have a faster storage available (faster NVMes, or multiple\nones I can use for benchmarking) to see what the cutoff point for\nactually benefiting from concurrency is.\n\n\nAlso worthwhile to note that even the \"best case\" from a CPU usage point\nof view here absolutely *pales* against using direct-IO. 
It's not an\napples/apples comparison, but comparing buffered io using\nwrite_and_fsync, and unbuffered io using fio:\n\n128KiB blocksize:\n\nwrite_and_fsync:\necho 3 |sudo tee /proc/sys/vm/drop_caches && /usr/bin/time perf stat -a -e cpu-clock,ref-cycles,cycles,instructions /tmp/write_and_fsync --blocksize $((128*1024)) --sync_file_range=1 --fallocate=1 --fadvise=1 --sequential=0 --filesize=$((400*1024*1024*1024)) /srv/dev/bench/test1\n\n Performance counter stats for 'system wide':\n\n 6,377,903.65 msec cpu-clock # 39.999 CPUs utilized\n 628,014,590,200 ref-cycles # 98.467 M/sec\n 634,468,623,514 cycles # 0.099 GHz\n 795,771,756,320 instructions # 1.25 insn per cycle\n\n 159.451492209 seconds time elapsed\n\nfio:\nrm -f /srv/dev/bench/test* && echo 3 |sudo tee /proc/sys/vm/drop_caches && /usr/bin/time perf stat -a -e cpu-clock,ref-cycles,cycles,instructions fio --name=test --iodepth=512 --iodepth_low=8 --iodepth_batch_submit=8 --iodepth_batch_complete_min=8 --iodepth_batch_complete_max=128 --ioengine=libaio --rw=write --bs=128k --filesize=$((400*1024*1024*1024)) --direct=1 --numjobs=1\n\n Performance counter stats for 'system wide':\n\n 6,313,522.71 msec cpu-clock # 39.999 CPUs utilized\n 458,476,185,800 ref-cycles # 72.618 M/sec\n 196,148,015,054 cycles # 0.031 GHz\n 158,921,457,853 instructions # 0.81 insn per cycle\n\n 157.842080440 seconds time elapsed\n\nCPU usage for fio most of the time was around 98% for write_and_fsync\nand 40% for fio.\n\nI.e. system-wide CPUs were active 0.73x the time, and 0.2x as many\ninstructions had to be executed in the DIO case.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Sun, 3 May 2020 10:49:22 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: design for parallel backup" }, { "msg_contents": "On Sun, May 3, 2020 at 1:49 PM Andres Freund <andres@anarazel.de> wrote:\n> > > The run-to-run variations between the runs without cache control are\n> > > pretty large. 
So this is probably not the end-all-be-all numbers. But I\n> > > think the trends are pretty clear.\n> >\n> > Could you be explicit about what you think those clear trends are?\n>\n> Largely that concurrency can help a bit, but also hurt\n> tremendously. Below is some more detailed analysis, it'll be a bit\n> long...\n\nOK, thanks. Let me see if I can summarize here. On the strength of\nprevious experience, you'll probably tell me that some parts of this\nsummary are wildly wrong or at least \"not quite correct\" but I'm going\nto try my best.\n\n- Server-side compression seems like it has the potential to be a\nsignificant win by stretching bandwidth. We likely need to do it with\n10+ parallel threads, at least for stronger compressors, but these\nmight be threads within a single PostgreSQL process rather than\nmultiple separate backends.\n\n- Client-side cache management -- that is, use of\nposix_fadvise(DONTNEED), posix_fallocate, and sync_file_range, where\navailable -- looks like it can improve write rates and CPU efficiency\nsignificantly. Larger block sizes show a win when used together with\nsuch techniques.\n\n- The benefits of multiple concurrent connections remain somewhat\nelusive. Peter Eisentraut hypothesized upthread that such an approach\nmight be the most practical way forward for networks with a high\nbandwidth-delay product, and I hypothesized that such an approach\nmight be beneficial when there are multiple tablespaces on independent\ndisks, but we don't have clear experimental support for those\npropositions. Also, both your data and mine indicate that too much\nparallelism can lead to major regressions.\n\n- Any work we do while trying to make backup super-fast should also\nlend itself to super-fast restore, possibly including parallel\nrestore. Compressed tarfiles don't permit random access to member\nfiles. Uncompressed tarfiles do, but software that works this way is\nnot commonplace. 
The only mainstream archive format that seems to\nsupport random access seems to be zip. Adopting that wouldn't be\ncrazy, but might limit our choice of compression options more than\nwe'd like. A tar file of individually compressed files might be a\nplausible alternative, though there would probably be some hit to\ncompression ratios for small files. Then again, if a single,\nhighly-efficient process can handle a server-to-client backup, maybe\nthe same is true for extracting a compressed tarfile...\n\nThoughts?\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Mon, 4 May 2020 14:04:32 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: design for parallel backup" }, { "msg_contents": "Hi,\n\nOn 2020-05-04 14:04:32 -0400, Robert Haas wrote:\n> OK, thanks. Let me see if I can summarize here. On the strength of\n> previous experience, you'll probably tell me that some parts of this\n> summary are wildly wrong or at least \"not quite correct\" but I'm going\n> to try my best.\n\n> - Server-side compression seems like it has the potential to be a\n> significant win by stretching bandwidth. We likely need to do it with\n> 10+ parallel threads, at least for stronger compressors, but these\n> might be threads within a single PostgreSQL process rather than\n> multiple separate backends.\n\nThat seems right. I think it might be reasonable to just support\n\"compression parallelism\" for zstd, as the library has all the code\ninternally. So we basically wouldn't have to care about it.\n\n\n> - Client-side cache management -- that is, use of\n> posix_fadvise(DONTNEED), posix_fallocate, and sync_file_range, where\n> available -- looks like it can improve write rates and CPU efficiency\n> significantly. Larger block sizes show a win when used together with\n> such techniques.\n\nYea. 
Alternatively direct io, but I am not sure we want to go there for\nnow.\n\n\n> - The benefits of multiple concurrent connections remain somewhat\n> elusive. Peter Eisentraut hypothesized upthread that such an approach\n> might be the most practical way forward for networks with a high\n> bandwidth-delay product, and I hypothesized that such an approach\n> might be beneficial when there are multiple tablespaces on independent\n> disks, but we don't have clear experimental support for those\n> propositions. Also, both your data and mine indicate that too much\n> parallelism can lead to major regressions.\n\nI think for that we'd basically have to create two high bandwidth nodes\nacross the pond. My experience in the somewhat recent past is that I\ncould saturate multi-gbit cross-atlantic links without too much trouble,\nat least once I changed sys.net.ipv4.tcp_congestion_control to something\nappropriate for such setups (BBR is probably the thing to use here these\ndays).\n\n\n> - Any work we do while trying to make backup super-fast should also\n> lend itself to super-fast restore, possibly including parallel\n> restore.\n\nI'm not sure I see a super clear case for parallel restore in any of the\nexperiments done so far. The only case we know it's a clear win is when\nthere's independent filesystems for parts of the data. There's an\nobvious case for parallel decompression however.\n\n\n> Compressed tarfiles don't permit random access to member files.\n\nThis is an issue for selective restores too, not just parallel\nrestore. I'm not sure how important a case that is, although it'd\ncertainly be useful if e.g. pg_rewind could read from compressed base\nbackups.\n\n> Uncompressed tarfiles do, but software that works this way is not\n> commonplace.\n\nI am not 100% sure which part you comment on not being commonplace\nhere. Supporting randomly accessing data in tarfiles?\n\nMy understanding of that is that one still has to \"skip\" through the\nentire archive, right? 
What not being compressed allows is to not have\nto read the files in between. Given the size of our data files compared\nto the metadata size that's probably fine?\n\n\n> The only mainstream archive format that seems to support random access\n> seems to be zip. Adopting that wouldn't be crazy, but might limit our\n> choice of compression options more than we'd like.\n\nI'm not sure that's *really* an issue - there's compression format codes\nin zip ([1] 4.4.5, also 4.3.14.3 & 4.5 for another approach), and\nseveral tools seem to have used that to add additional compression\nmethods.\n\n\n> A tar file of individually compressed files might be a plausible\n> alternative, though there would probably be some hit to compression\n> ratios for small files.\n\nI'm not entirely sure using zip over\nuncompressed-tar-over-compressed-files gains us all that much. AFAIU zip\ncompresses each file individually. So the advantage would be a more\nefficient (less seeking) storage of archive metadata (i.e. which file is\nwhere) and that the metadata could be compressed.\n\n\n> Then again, if a single, highly-efficient process can handle a\n> server-to-client backup, maybe the same is true for extracting a\n> compressed tarfile...\n\nYea. I'd expect that to be the case, at least for the single filesystem\ncase. Depending on the way multiple tablespaces / filesystems are\nhandled, it could even be doable to handle that reasonably - but it'd\nprobably be harder.\n\nGreetings,\n\nAndres Freund\n\n[1] https://pkware.cachefly.net/webdocs/casestudies/APPNOTE.TXT\n\n\n", "msg_date": "Mon, 4 May 2020 12:41:45 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: design for parallel backup" } ]
[ { "msg_contents": "Hi,\n\n> On 2020-Apr-13, I wrote to buildfarm-admins:\n> > As skate and snapper are increasingly difficult to keep alive and \n> > building on debian sparc, and as there aren't many sparc animals \n> > in general, I've set up four new debian sparc64 animals, two on \n> > stretch and two on buster.\n> > \n> > All four animals are already successfully building all current \n> > branches, just not submitting results yet.\n\nIt turns out I'd missed one failure:\n\nhttps://buildfarm.postgresql.org/cgi-bin/show_history.pl?nm=tadarida&br=REL_12_STABLE\n\nOnly tadarida fails the sequence regression test, and only \nfor REL_12_STABLE. It fails with -O2 and -O1 but succeeds with -O0.\n\nOther than that, all branches for all four animals succeed:\n\n> mussurana | Debian | 9 Stretch | gcc | 6 | with cassert\n> tadarida | Debian | 9 Stretch | gcc | 6 | without cassert\n> ibisbill | Debian | 10 Buster | gcc | 8 | with cassert\n> kittiwake | Debian | 10 Buster | gcc | 8 | without cassert\n\nBoth mussurana and tadarida are Stretch 9.12 with gcc 6.3.0-18+deb9u1.\nThere is no newer gcc source pkg for stretch or for stretch-backports.\n\nShould I keep the -O0 flag for REL_12_STABLE for tadarida?\n\nThanks,\nTom\n\n\n", "msg_date": "Wed, 15 Apr 2020 23:10:27 +0200", "msg_from": "\"Tom Turelinckx\" <pgbf@twiska.com>", "msg_from_op": true, "msg_subject": "tadarida vs REL_12_STABLE" }, { "msg_contents": "\"Tom Turelinckx\" <pgbf@twiska.com> writes:\n> It turns out I'd missed one failure:\n> https://buildfarm.postgresql.org/cgi-bin/show_history.pl?nm=tadarida&br=REL_12_STABLE\n> Only tadarida fails the sequence regression test, and only \n> for REL_12_STABLE. It fails with -O2 and -O1 but succeeds with -O0.\n\nYeah, I saw that. 
The failure mode is really pretty odd:\n\n CREATE SEQUENCE sequence_test9 AS integer INCREMENT BY -1;\n+ERROR: MINVALUE (-9223372036854775808) is out of range for sequence data type integer\n\nIt's not difficult to see where things must be going wrong: sequence.c,\naround line 1490 in HEAD, must be choosing to set the sequence's seqmin\nto PG_INT64_MIN instead of PG_INT32_MIN as it should. But that code is\nexactly the same from HEAD back to v11 (and probably further, though\nI didn't look).\n\nThe next two failures are the same thing for smallint, and the rest is\njust fallout from the sequence-creation failures. So that's one extremely\nspecific codegen bug in the whole test suite. I wonder if it's related to\nthe branch-delay-slot codegen bug we identified for sparc32 awhile back.\n\nNot sure what to tell you, other than that it's darn odd that this only\nfails in v12. But I don't have much faith that \"use -O0 in v12 only\"\nis going to be a long-term answer.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 15 Apr 2020 20:40:28 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: tadarida vs REL_12_STABLE" } ]
[ { "msg_contents": "Hi,\n\ncommit a96c41feec6b6616eb9d5baee9a9e08c20533c38\nAuthor: Robert Haas <rhaas@postgresql.org>\nDate: 2019-04-04 14:58:53 -0400\n\n Allow VACUUM to be run with index cleanup disabled.\n\n This commit adds a new reloption, vacuum_index_cleanup, which\n controls whether index cleanup is performed for a particular\n relation by default. It also adds a new option to the VACUUM\n command, INDEX_CLEANUP, which can be used to override the\n reloption. If neither the reloption nor the VACUUM option is\n used, the default is true, as before.\n\n Masahiko Sawada, reviewed and tested by Nathan Bossart, Alvaro\n Herrera, Kyotaro Horiguchi, Darafei Praliaskouski, and me.\n The wording of the documentation is mostly due to me.\n\n Discussion: http://postgr.es/m/CAD21AoAt5R3DNUZSjOoXDUY=naYPUOuffVsRzuTYMz29yLzQCA@mail.gmail.com\n\nmade the index scan that is part of vacuum optional. I'm afraid that it\nis not safe to do so unconditionally. Until this commit indexes could\nrely on at least the amvacuumcleanup callback being called once per\nvacuum. Which guaranteed that an index could ensure that there are no\ntoo-old xids anywhere in the index.\n\nBut now that's not the case anymore:\n\n\tvacrelstats->useindex = (nindexes > 0 &&\n\t\t\t\t\t\t\t params->index_cleanup == VACOPT_TERNARY_ENABLED);\n...\n\t/* Do post-vacuum cleanup */\n\tif (vacrelstats->useindex)\n\t\tlazy_cleanup_all_indexes(Irel, indstats, vacrelstats, lps, nindexes);\n\nE.g. 
btree has xids both in the metapage contents, as well as using it\non normal index pages as part of page deletion.\n\nThe slightly older feature to avoid unnecessary scans during cleanup\nprotects against this issue by skipping the scan inside the index AM:\n\n/*\n * _bt_vacuum_needs_cleanup() -- Checks if index needs cleanup assuming that\n *\t\t\tbtbulkdelete() wasn't called.\n */\nstatic bool\n_bt_vacuum_needs_cleanup(IndexVacuumInfo *info)\n{\n...\n\telse if (TransactionIdIsValid(metad->btm_oldest_btpo_xact) &&\n\t\t\t TransactionIdPrecedes(metad->btm_oldest_btpo_xact,\n\t\t\t\t\t\t\t\t RecentGlobalXmin))\n\t{\n\t\t/*\n\t\t * If oldest btpo.xact in the deleted pages is older than\n\t\t * RecentGlobalXmin, then at least one deleted page can be recycled.\n\t\t */\n\t\tresult = true;\n\t}\n\nwhich will afaict result in all such xids getting removed (or at least\ngive the AM the choice to do so).\n\n\nIt's possible that something protects against dangers in the case of\nINDEX_CLEANUP false, or that the consequences aren't too bad. But I\ndidn't see any comments about the dangers in the patch.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 15 Apr 2020 16:38:48 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "xid wraparound danger due to INDEX_CLEANUP false" }, { "msg_contents": "On Wed, Apr 15, 2020 at 7:38 PM Andres Freund <andres@anarazel.de> wrote:\n> It's possible that something protects against dangers in the case of\n> INDEX_CLEANUP false, or that the consequences aren't too bad. But I\n> didn't see any comments about the dangers in the patch.\n\nI seem to recall Simon raising this issue at the time that the patch\nwas being discussed, and I thought that we had eventually decided that\nit was OK for some reason. But I don't remember the details, and it is\npossible that we got it wrong. 
:-(\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Wed, 15 Apr 2020 19:57:45 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: xid wraparound danger due to INDEX_CLEANUP false" }, { "msg_contents": "On Wed, Apr 15, 2020 at 4:57 PM Robert Haas <robertmhaas@gmail.com> wrote:\n> I seem to recall Simon raising this issue at the time that the patch\n> was being discussed, and I thought that we had eventually decided that\n> it was OK for some reason. But I don't remember the details, and it is\n> possible that we got it wrong. :-(\n\nIt must be unreliable because it's based on something that is known to\nbe unreliable:\n\nhttps://git.postgresql.org/gitweb/?p=postgresql.git;a=blob;f=src/backend/access/nbtree/README;h=c5b0a30e4ebd4fe3bd4a6f8192284c452d1170b9;hb=refs/heads/REL_12_STABLE#l331\n\nAlso, the commit message of 6655a729 says that nbtree has had this\nproblem \"since time immemorial\". I am planning to work on that\nproblem, eventually.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Wed, 15 Apr 2020 18:11:40 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: xid wraparound danger due to INDEX_CLEANUP false" }, { "msg_contents": "On Thu, Apr 16, 2020 at 8:38 AM Andres Freund <andres@anarazel.de> wrote:\n>\n> Hi,\n>\n> commit a96c41feec6b6616eb9d5baee9a9e08c20533c38\n> Author: Robert Haas <rhaas@postgresql.org>\n> Date: 2019-04-04 14:58:53 -0400\n>\n> Allow VACUUM to be run with index cleanup disabled.\n>\n> This commit adds a new reloption, vacuum_index_cleanup, which\n> controls whether index cleanup is performed for a particular\n> relation by default. It also adds a new option to the VACUUM\n> command, INDEX_CLEANUP, which can be used to override the\n> reloption. 
If neither the reloption nor the VACUUM option is\n> used, the default is true, as before.\n>\n> Masahiko Sawada, reviewed and tested by Nathan Bossart, Alvaro\n> Herrera, Kyotaro Horiguchi, Darafei Praliaskouski, and me.\n> The wording of the documentation is mostly due to me.\n>\n> Discussion: http://postgr.es/m/CAD21AoAt5R3DNUZSjOoXDUY=naYPUOuffVsRzuTYMz29yLzQCA@mail.gmail.com\n>\n> made the index scan that is part of vacuum optional. I'm afraid that it\n> is not safe to do so unconditionally. Until this commit indexes could\n> rely on at least the amvacuumcleanup callback being called once per\n> vacuum. Which guaranteed that an index could ensure that there are no\n> too-old xids anywhere in the index.\n>\n> But now that's not the case anymore:\n>\n> vacrelstats->useindex = (nindexes > 0 &&\n> params->index_cleanup == VACOPT_TERNARY_ENABLED);\n> ...\n> /* Do post-vacuum cleanup */\n> if (vacrelstats->useindex)\n> lazy_cleanup_all_indexes(Irel, indstats, vacrelstats, lps, nindexes);\n>\n> E.g. 
btree has xids both in the metapage contents, as well as using it\n> on normal index pages as part of page deletion.\n>\n> The slightly older feature to avoid unnecessary scans during cleanup\n> protects against this issue by skipping the scan inside the index AM:\n>\n> /*\n> * _bt_vacuum_needs_cleanup() -- Checks if index needs cleanup assuming that\n> * btbulkdelete() wasn't called.\n> */\n> static bool\n> _bt_vacuum_needs_cleanup(IndexVacuumInfo *info)\n> {\n> ...\n> else if (TransactionIdIsValid(metad->btm_oldest_btpo_xact) &&\n> TransactionIdPrecedes(metad->btm_oldest_btpo_xact,\n> RecentGlobalXmin))\n> {\n> /*\n> * If oldest btpo.xact in the deleted pages is older than\n> * RecentGlobalXmin, then at least one deleted page can be recycled.\n> */\n> result = true;\n> }\n>\n> which will afaict result in all such xids getting removed (or at least\n> give the AM the choice to do so).\n\nFor btree indexes, IIRC skipping index cleanup could not be a cause of\ncorruption, but be a cause of index bloat since it leaves recyclable\npages which are not marked as recyclable. The index bloat is the main\nside effect of skipping index cleanup. When a user executes VACUUM with\nINDEX_CLEANUP to reclaim index garbage, such pages will also be\nrecycled sooner or later? Or skipping index cleanup can be a cause of\nrecyclable page never being recycled?\n\nRegards,\n\n--\nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Thu, 16 Apr 2020 16:30:02 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: xid wraparound danger due to INDEX_CLEANUP false" }, { "msg_contents": "Hi,\n\nOn 2020-04-16 16:30:02 +0900, Masahiko Sawada wrote:\n> For btree indexes, IIRC skipping index cleanup could not be a cause of\n> corruption, but be a cause of index bloat since it leaves recyclable\n> pages which are not marked as recyclable. 
The index bloat is the main\n> side effect of skipping index cleanup. When user executes VACUUM with\n> INDEX_CLEANUP to reclaim index garbage, such pages will also be\n> recycled sooner or later? Or skipping index cleanup can be a cause of\n> recyclable page never being recycled?\n\nWell, it depends on what you define as \"never\". Once the xids on the\npages have wrapped around, the page level xids will appear to be from\nthe future for a long time. And the metapage xid appearing to be from\nthe future will prevent some vacuums from actually doing the scan too,\neven if INDEX_CLEANUP is reenabled. So a VACUUM, even with\nINDEX_CLEANUP on, will not be able to recycle those pages anymore. At\nsome point the wrapped around xids will be \"current\" again, if there's\nenough new xids.\n\n\nIt's not ok for vacuumlazy.c to make decisions like this. I think the\nINDEX_CLEANUP logic clearly needs to be pushed down into the\namvacuumcleanup callbacks, and it needs to be left to the index AMs to\ndecide what the correct behaviour is.\n\n\nYou can't just change things like this without reviewing the\nconsequences to AMs and documenting them?\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Thu, 16 Apr 2020 10:58:46 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: xid wraparound danger due to INDEX_CLEANUP false" }, { "msg_contents": "Hi,\n\nOn 2020-04-15 18:11:40 -0700, Peter Geoghegan wrote:\n> On Wed, Apr 15, 2020 at 4:57 PM Robert Haas <robertmhaas@gmail.com> wrote:\n> > I seem to recall Simon raising this issue at the time that the patch\n> > was being discussed, and I thought that we had eventually decided that\n> > it was OK for some reason. But I don't remember the details, and it is\n> > possible that we got it wrong. 
:-(\n> \n> It must be unreliable because it's based on something that is known to\n> be unreliable:\n> \n> https://git.postgresql.org/gitweb/?p=postgresql.git;a=blob;f=src/backend/access/nbtree/README;h=c5b0a30e4ebd4fe3bd4a6f8192284c452d1170b9;hb=refs/heads/REL_12_STABLE#l331\n\nSure, there is some pre-existing wraparound danger for individual\npages. But it's a pretty narrow corner case before INDEX_CLEANUP\noff.\n\nThat comment says something about \"shared-memory free space map\", making\nit sound like any crash would lose the page. But it's a normal FSM\nthese days. Vacuum will insert the deleted page into the free space\nmap. So either the FSM would need to be corrupted to not find the\ninserted page anymore, or the index would need to grow slow enough to\nnot use a page before the wraparound. And then such wrapped around xids\nwould exist on individual pages. Not on all deleted pages, like with\nINDEX_CLEANUP false.\n\nAnd, what's worse, in the INDEX_CLEANUP off case, future VACUUMs with\nINDEX_CLEANUP on might not even visit the index. As there very well\nmight not be many dead heap tuples around anymore (previous vacuums with\ncleanup off will have removed them), the\nvacuum_cleanup_index_scale_factor logic may prevent index vacuums. In\ncontrast to the normal situations where the btm_oldest_btpo_xact check\nwill prevent that from becoming a problem.\n\n\nPeter, as far as I can tell, with INDEX_CLEANUP off, nbtree will never\nbe able to recycle half-dead pages? And thus would effectively never\nrecycle any dead space? Is that correct?\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Thu, 16 Apr 2020 11:27:14 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: xid wraparound danger due to INDEX_CLEANUP false" }, { "msg_contents": "On Thu, Apr 16, 2020 at 11:27 AM Andres Freund <andres@anarazel.de> wrote:\n> Sure, there is some pre-existing wraparound danger for individual\n> pages. 
But it's a pretty narrow corner case before INDEX_CLEANUP\n> off.\n\nIt's a matter of degree. Hard to judge something like that.\n\n> And, what's worse, in the INDEX_CLEANUP off case, future VACUUMs with\n> INDEX_CLEANUP on might not even visit the index. As there very well\n> might not be many dead heap tuples around anymore (previous vacuums with\n> cleanup off will have removed them), the\n> vacuum_cleanup_index_scale_factor logic may prevent index vacuums. In\n> contrast to the normal situations where the btm_oldest_btpo_xact check\n> will prevent that from becoming a problem.\n\nI guess that they should visit the metapage to see if they need to do\nthat much. That would allow us to fix the problem while mostly\nhonoring INDEX_CLEANUP off, I think.\n\n> Peter, as far as I can tell, with INDEX_CLEANUP off, nbtree will never\n> be able to recycle half-dead pages? And thus would effectively never\n> recycle any dead space? Is that correct?\n\nI agree. The fact that btm_oldest_btpo_xact is an all-or-nothing thing\n(with wraparound hazards) is bad in itself, and introduced new risk to\nv11 compared to previous versions (without the INDEX_CLEANUP = off\nfeature entering into it). The simple fact that we don't even check\nit with INDEX_CLEANUP = off is a bigger problem, though, and one that\nnow seems unrelated.\n\nBTW, a lot of people get confused about what half-dead pages are. I\nwould like to make something clear that may not be obvious: While it's\nbad that the implementation leaks pages that should go in the FSM,\nit's not the end of the world. They should get evicted from\nshared_buffers pretty quickly if there is any pressure, and impose no\nreal cost on index scans.\n\nThere are (roughly) 3 types of pages that we're concerned about here\nin the common case where we're just deleting a leaf page:\n\n* A half-dead page -- no downlink in its parent, marked dead.\n\n* A deleted page -- now no sidelinks, either. 
Not initially safe to recycle.\n\n* A deleted page in the FSM -- this is what we have the interlock for.\n\nHalf-dead pages are pretty rare, because VACUUM really has to have a\nhard crash for that to happen (that might not be 100% true, but it's\nat least 99% true). That's always been the case, and we don't really\nneed to talk about them here at all. We're just concerned with deleted\npages in the context of this discussion (and whether or not they can\nbe recycled without confusing in-flight index scans). These are the\nonly pages that are marked with an XID at all.\n\nAnother thing that's worth pointing out is that this whole\nRecentGlobalXmin business is how we opted to implement what Lanin &\nShasha call \"the drain technique\". It is rather different to the usual\nways in which we use RecentGlobalXmin. We're only using it as a proxy\n(an absurdly conservative proxy) for whether or not there might be an\nin-flight index scan that lands on a concurrently recycled index page\nand gets completely confused. So it is purely about the integrity of\nthe data structure itself. It is a consequence of doing so little\nlocking when descending the tree -- our index scans don't need to\ncouple buffer locks on the way down the tree at all. So we make VACUUM\nworry about that, rather than making index scans worry about VACUUM\n(though the latter design is a reasonable and common one).\n\nThere is absolutely no reason why we have to delay recycling for very\nlong, even in cases with long running transactions or whatever. I\nagree that it's just an accident that it works that way. VACUUM could\nprobably remember deleted pages, and then revisit those pages at the\nend of the index vacuuming -- that might make a big difference in a\nlot of workloads. 
Or it could chain them together as a linked list\nwhich can be accessed much more eagerly in some cases.\n\n--\nPeter Geoghegan\n\n\n", "msg_date": "Thu, 16 Apr 2020 13:28:00 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: xid wraparound danger due to INDEX_CLEANUP false" }, { "msg_contents": "Hi,\n\nOn 2020-04-16 13:28:00 -0700, Peter Geoghegan wrote:\n> > And, what's worse, in the INDEX_CLEANUP off case, future VACUUMs with\n> > INDEX_CLEANUP on might not even visit the index. As there very well\n> > might not be many dead heap tuples around anymore (previous vacuums with\n> > cleanup off will have removed them), the\n> > vacuum_cleanup_index_scale_factor logic may prevent index vacuums. In\n> > contrast to the normal situations where the btm_oldest_btpo_xact check\n> > will prevent that from becoming a problem.\n> \n> I guess that they should visit the metapage to see if they need to do\n> that much. That would allow us to fix the problem while mostly\n> honoring INDEX_CLEANUP off, I think.\n\nYea. _bt_vacuum_needs_cleanup() needs to check if\nmetad->btm_oldest_btpo_xact is older than the FreezeLimit computed by\nvacuum_set_xid_limits() and vacuum the index if so even if INDEX_CLEANUP\nfalse.\n\n\n> BTW, a lot of people get confused about what half-dead pages are. I\n> would like to make something clear that may not be obvious: While it's\n> bad that the implementation leaks pages that should go in the FSM,\n> it's not the end of the world. They should get evicted from\n> shared_buffers pretty quickly if there is any pressure, and impose no\n> real cost on index scans.\n\nYea, half-dead pages aren't the main problem. It's pages that contain\nonly dead tuples, but aren't unlinked from the tree. Without a vacuum\nscan we'll never reuse them - even if we know they're all dead.\n\nNote that the page being in the FSM is not protection against wraparound\n:(. 
We recheck whether a page is recyclable when getting it from the FSM\n(probably required, due to the FSM not being crashsafe). It's of course\nmuch less likely to happen at that stage, because the pages can get\nreused.\n\nI think we should really just stop being miserly and update the xid to be\nFrozenTransactionId or InvalidTransactionId when vacuum encounters one\nthat's from before the xid cutoff used by vacuum (i.e. what could\nbecome the new relfrozenxid). That seems like it'd be a few lines, not\nmore.\n\n\n> Another thing that's worth pointing out is that this whole\n> RecentGlobalXmin business is how we opted to implement what Lanin &\n> Shasha call \"the drain technique\". It is rather different to the usual\n> ways in which we use RecentGlobalXmin. We're only using it as a proxy\n> (an absurdly conservative proxy) for whether or not there might be an\n> in-flight index scan that lands on a concurrently recycled index page\n> and gets completely confused. So it is purely about the integrity of\n> the data structure itself. It is a consequence of doing so little\n> locking when descending the tree -- our index scans don't need to\n> couple buffer locks on the way down the tree at all. So we make VACUUM\n> worry about that, rather than making index scans worry about VACUUM\n> (though the latter design is a reasonable and common one).\n>\n> There is absolutely no reason why we have to delay recycling for very\n> long, even in cases with long running transactions or whatever. I\n> agree that it's just an accident that it works that way. VACUUM could\n> probably remember deleted pages, and then revisit those pages at the\n> end of the index vacuuming -- that might make a big difference in a\n> lot of workloads. Or it could chain them together as a linked list\n> which can be accessed much more eagerly in some cases.\n\nI think it doesn't really help meaningfully for vacuum to be a bit\nsmarter about when to recognize pages as being recyclable. 
IMO the big\nissue is that vacuum won't be very frequent, so we'll grow the index\nuntil that time, even if there's many \"effectively empty\" pages.\n\nI.e. even if the killtuples logic allows us to recognize that all actual\nindex tuples are fully dead, we'll not benefit from that unless there's\na new insertion that belongs onto the \"empty\" page. That's fine for\nindexes that are updated roughly evenly across the value range, but\nterrible for indexes that \"grow\" mostly on one side, and \"shrink\" on the\nother.\n\nI'd bet it'd be beneficial if we were to either have scans unlink such\npages directly, or if they just entered the page into the FSM and have\n_bt_getbuf() do the unlinking. I'm not sure if the current locking\nmodel assumes anywhere that there is only one process (vacuum) unlinking\npages though?\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Thu, 16 Apr 2020 15:49:01 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: xid wraparound danger due to INDEX_CLEANUP false" }, { "msg_contents": "On Thu, Apr 16, 2020 at 3:49 PM Andres Freund <andres@anarazel.de> wrote:\n> I think we should really just stop being miserly and update the xid to be\n> FrozenTransactionId or InvalidTransactionId when vacuum encounters one\n> that's from before the xid cutoff used by vacuum (i.e. what could\n> become the new relfrozenxid). That seems like it'd be a few lines, not\n> more.\n\nOkay.\n\n> > There is absolutely no reason why we have to delay recycling for very\n> > long, even in cases with long running transactions or whatever. I\n> > agree that it's just an accident that it works that way. VACUUM could\n> > probably remember deleted pages, and then revisit those pages at the\n> > end of the index vacuuming -- that might make a big difference in a\n> > lot of workloads. 
Or it could chain them together as a linked list\n> > which can be accessed much more eagerly in some cases.\n>\n> I think it doesn't really help meaningfully for vacuum to be a bit\n> smarter about when to recognize pages as being recyclable. IMO the big\n> issue is that vacuum won't be very frequent, so we'll grow the index\n> until that time, even if there's many \"effectively empty\" pages.\n\n(It seems like you're talking about the big picture now, not the\nproblems in Postgres 11 and 12 features in this area -- you're talking\nabout what happens to empty pages, not what happens to deleted pages.)\n\nI'll say some more things about the less ambitious goal of more eager\nrecycling of pages, as they are deleted:\n\nAn individual VACUUM operation cannot recycle a page right after\n_bt_pagedel() is called to delete the page. VACUUM will both set a\ntarget leaf page half dead and delete it all at once in _bt_pagedel()\n(it does that much in the simple and common case). Again, this is\nbecause recycling very soon after the call to _bt_pagedel() will\nintroduce races with concurrent index scans -- they could fail to\nobserve the deletion in the parent (i.e. see its downlink, since child\nisn't even half dead), and land on a concurrently recycled page\n(VACUUM concurrently marks the page half-dead, fully dead/deleted, and\nthen even goes as far as recycling it). So the current design makes a\ncertain amount of sense -- we can't be super aggressive like that.\n(Actually, maybe it doesn't make sense to not just put the page in the\nFSM there and then -- see \"Thinks some more\" below.)\n\nEven still, nothing stops the same VACUUM operation from (for example)\nremembering a list of pages it has deleted during the current scan,\nand then coming back at the end of the bulk scan of the index to\nreconsider if it can recycle the pages now (2 minutes later instead of\n2 months later). 
With a new RecentGlobalXmin (or something that's\nconceptually like a new RecentGlobalXmin).\n\nSimilarly, we could do limited VACUUMs that only visit previously\ndeleted pages, once VACUUM is taught to chain deleted pages together\nto optimize recycling. We don't have to repeat another pass over the\nentire index to recycle the pages because of this special deleted page\nlinking. This is something that we use when we have to recycle pages,\nbut it's a \" INDEX_CLEANUP = off\" index VACUUM -- we don't really want\nto do most of the stuff that index vacuuming needs to do, but we must\nstill visit the metapage to check btm_oldest_btpo_xact, and then maybe\nwalk the deleted page linked list.\n\n*** Thinks some more ***\n\nAs you pointed out, _bt_getbuf() already distrusts the FSM -- it has\nits own _bt_page_recyclable() check, probably because the FSM isn't\ncrash safe. Maybe we could improve matters by teaching _bt_pagedel()\nto put a page it deleted in the FSM immediately -- no need to wait\nuntil the next index VACUUM for the RecordFreeIndexPage() call. It\nstill isn't quite enough that _bt_getbuf() distrusts the FSM, so we'd\nalso have to teach _bt_getbuf() some heuristics that made it\nunderstand that VACUUM is now designed to put stuff in the FSM\nimmediately, so we don't have to wait for the next VACUUM operation to\nget to it. Maybe _bt_getbuf() should try the FSM a few times before\ngiving up and allocating a new page, etc.\n\nThis wouldn't make VACUUM delete any more pages any sooner, but it\nwould make those pages reclaimable much sooner. Also, it wouldn't\nsolve the wraparound problem, but that is a bug, not a\nperformance/efficiency issue.\n\n> I.e. even if the killtuples logic allows us to recognize that all actual\n> index tuples are fully dead, we'll not benefit from that unless there's\n> a new insertion that belongs onto the \"empty\" page. 
That's fine for\n> indexes that are updated roughly evenly across the value range, but\n> terrible for indexes that \"grow\" mostly on one side, and \"shrink\" on the\n> other.\n\nThat could be true, but there are certain things about B-Tree space\nutilization that might surprise you:\n\nhttps://www.drdobbs.com/reexamining-b-trees/184408694?pgno=3\n\n> I'd bet it'd be beneficial if we were to either have scans unlink such\n> pages directly, or if they just entered the page into the FSM and have\n> _bt_getbuf() do the unlinking.\n\nThat won't work, since you're now talking about pages that aren't\ndeleted (or even half-dead) that are just candidates to be deleted\nbecause they're empty. So you'd have to do all the steps in\n_bt_pagedel() within a new _bt_getbuf() path, which would have many\ndeadlock hazards. Unlinking the page from the tree itself (deleting)\nis really complicated.\n\n> I'm not sure if the current locking\n> model assumes anywhere that there is only one process (vacuum) unlinking\n> pages though?\n\nI'm not sure, though _bt_unlink_halfdead_page() has comments supposing\nthat there could be concurrent page deletions like that.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Thu, 16 Apr 2020 18:35:29 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: xid wraparound danger due to INDEX_CLEANUP false" }, { "msg_contents": "On Fri, 17 Apr 2020 at 02:58, Andres Freund <andres@anarazel.de> wrote:\n>\n> Hi,\n>\n> On 2020-04-16 16:30:02 +0900, Masahiko Sawada wrote:\n> > For btree indexes, IIRC skipping index cleanup could not be a cause of\n> > corruption, but be a cause of index bloat since it leaves recyclable\n> > pages which are not marked as recyclable. The index bloat is the main\n> > side effect of skipping index cleanup. When user executes VACUUM with\n> > INDEX_CLEANUP to reclaim index garbage, such pages will also be\n> > recycled sooner or later? 
Or skipping index cleanup can be a cause of\n> > recyclable page never being recycled?\n>\n> Well, it depends on what you define as \"never\". Once the xids on the\n> pages have wrapped around, the page level xids will appear to be from\n> the future for a long time. And the metapage xid appearing to be from\n> the future will prevent some vacuums from actually doing the scan too,\n> even if INDEX_CLEANUP is reenabled. So a VACUUM, even with\n> INDEX_CLEANUP on, will not be able to recycle those pages anymore. At\n> some point the wrapped around xids will be \"current\" again, if there's\n> enough new xids.\n>\n>\n> It's not ok for vacuumlazy.c to make decisions like this. I think the\n> INDEX_CLEANUP logic clearly needs to be pushed down into the\n> amvacuumcleanup callbacks, and it needs to be left to the index AMs to\n> decide what the correct behaviour is.\n\nI wanted to clarify the impact of this bug. I agree with you.\n\nOn Fri, 17 Apr 2020 at 07:49, Andres Freund <andres@anarazel.de> wrote:\n>\n> Hi,\n>\n> On 2020-04-16 13:28:00 -0700, Peter Geoghegan wrote:\n> > > And, what's worse, in the INDEX_CLEANUP off case, future VACUUMs with\n> > > INDEX_CLEANUP on might not even visit the index. As there very well\n> > > might not be many dead heap tuples around anymore (previous vacuums with\n> > > cleanup off will have removed them), the\n> > > vacuum_cleanup_index_scale_factor logic may prevent index vacuums.\n\nI think this doesn't happen because, in the INDEX_CLEANUP off case,\nvacuum marks linepointers of dead tuples as dead but leaves them.\nTherefore future VACUUMs with INDEX_CLEANUP on will see these dead\nlinepointers and invoke ambulkdelete.\n\n> > I guess that they should visit the metapage to see if they need to do\n> > that much. That would allow us to fix the problem while mostly\n> > honoring INDEX_CLEANUP off, I think.\n>\n> Yea. 
_bt_vacuum_needs_cleanup() needs to check if\n> metad->btm_oldest_btpo_xact is older than the FreezeLimit computed by\n> vacuum_set_xid_limits() and vacuum the index if so even if INDEX_CLEANUP\n> false.\n\nAgreed. So _bt_vacuum_needs_cleanup() would become something like the following?\n\nif (metad->btm_version < BTREE_NOVAC_VERSION)\n result = true;\nelse if (TransactionIdIsValid(metad->btm_oldest_btpo_xact) &&\n TransactionIdPrecedes(metad->btm_oldest_btpo_xact,\n FreezeLimit))\n result = true;\nelse if (index_cleanup_disabled)\n result = false;\nelse if (TransactionIdIsValid(metad->btm_oldest_btpo_xact) &&\n TransactionIdPrecedes(metad->btm_oldest_btpo_xact,\n RecentGlobalXmin))\n result = true;\nelse\n result = determine based on vacuum_cleanup_index_scale_factor;\n\nOr perhaps we can change _bt_vacuum_needs_cleanup() so that it does\nindex cleanup if metad->btm_oldest_btpo_xact is older than the\nFreezeLimit *and* it's an aggressive vacuum.\n\nAnyway, a problem is that if we change IndexVacuumInfo to tell the AM\nthat the INDEX_CLEANUP option is disabled and to pass FreezeLimit, it\nwould break compatibility.\n\nRegards,\n\n--\nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Fri, 17 Apr 2020 15:18:51 +0900", "msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: xid wraparound danger due to INDEX_CLEANUP false" }, { "msg_contents": "On Thu, Apr 16, 2020 at 6:49 PM Andres Freund <andres@anarazel.de> wrote:\n> Yea. _bt_vacuum_needs_cleanup() needs to check if\n> metad->btm_oldest_btpo_xact is older than the FreezeLimit computed by\n> vacuum_set_xid_limits() and vacuum the index if so even if INDEX_CLEANUP\n> false.\n\nI'm still fairly unclear on what the actual problem is here, and on\nhow we propose to fix it. 
It seems to me that we probably don't have a\nproblem in the case where we don't advance relfrozenxid or relminmxid,\nbecause in that case there's not much difference between the behavior\ncreated by this patch and a case where we just error out due to an\ninterrupt or something before reaching the index cleanup stage. I\nthink that the problem is that in the case where we do advance relfrozenxid,\nwe might advance it past some XID value stored in the index metadata.\nIs that right?\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Fri, 17 Apr 2020 14:21:51 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: xid wraparound danger due to INDEX_CLEANUP false" }, { "msg_contents": "On Thu, Apr 16, 2020 at 12:30 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> For btree indexes, IIRC skipping index cleanup could not be a cause of\n> corruption, but be a cause of index bloat since it leaves recyclable\n> pages which are not marked as recyclable.\n\nI spotted a bug in \"Skip full index scan during cleanup of B-tree\nindexes when possible\" which is unrelated to the index cleanup issue.\n\nThis code is wrong, because we don't have a buffer lock (or a buffer\npin) on the buffer anymore:\n\n ndel = _bt_pagedel(rel, buf);\n\n /* count only this page, else may double-count parent */\n if (ndel)\n {\n stats->pages_deleted++;\n if (!TransactionIdIsValid(vstate->oldestBtpoXact) ||\n TransactionIdPrecedes(opaque->btpo.xact,\nvstate->oldestBtpoXact))\n vstate->oldestBtpoXact = opaque->btpo.xact;\n }\n\n MemoryContextSwitchTo(oldcontext);\n /* pagedel released buffer, so we shouldn't */\n\n(As the comment says, _bt_pagedel() releases it.)\n\nThere is another, more fundamental issue, though: _bt_pagedel() can\ndelete more than one page. 
That might be okay if the \"extra\" pages\nwere always internal pages, but they're not -- it says so in the\ncomments above _bt_pagedel(). See the code at the end of\n_bt_pagedel(), that says something about why we delete the right\nsibling page in some cases.\n\nI think that the fix is to push down the vstate into lower level code\nin nbtpage.c. Want to have a go at fixing it?\n\n(It would be nice if we could teach Valgrind to \"poison\" buffers when\nwe don't have a pin held...that would probably have caught this issue\nalmost immediately.)\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Wed, 22 Apr 2020 18:05:47 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: xid wraparound danger due to INDEX_CLEANUP false" }, { "msg_contents": "On Wed, Apr 22, 2020 at 6:05 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> (It would be nice if we could teach Valgrind to \"poison\" buffers when\n> we don't have a pin held...that would probably have caught this issue\n> almost immediately.)\n\nI can get Valgrind to complain about it when the regression tests are\nrun with the attached patch applied. 
I see this in the logs at several\npoints when \"make installcheck\" runs:\n\n==1082059== VALGRINDERROR-BEGIN\n==1082059== Invalid read of size 4\n==1082059== at 0x21D8DE: btvacuumpage (nbtree.c:1370)\n==1082059== by 0x21DA61: btvacuumscan (nbtree.c:1039)\n==1082059== by 0x21DBD5: btbulkdelete (nbtree.c:879)\n==1082059== by 0x215821: index_bulk_delete (indexam.c:698)\n==1082059== by 0x20FDCE: lazy_vacuum_index (vacuumlazy.c:2427)\n==1082059== by 0x2103EA: lazy_vacuum_all_indexes (vacuumlazy.c:1794)\n==1082059== by 0x211EA1: lazy_scan_heap (vacuumlazy.c:1681)\n==1082059== by 0x211EA1: heap_vacuum_rel (vacuumlazy.c:510)\n==1082059== by 0x360414: table_relation_vacuum (tableam.h:1457)\n==1082059== by 0x360414: vacuum_rel (vacuum.c:1880)\n==1082059== by 0x361785: vacuum (vacuum.c:449)\n==1082059== by 0x361F0E: ExecVacuum (vacuum.c:249)\n==1082059== by 0x4D979C: standard_ProcessUtility (utility.c:823)\n==1082059== by 0x4D9C7F: ProcessUtility (utility.c:522)\n==1082059== by 0x4D6791: PortalRunUtility (pquery.c:1157)\n==1082059== by 0x4D725F: PortalRunMulti (pquery.c:1303)\n==1082059== by 0x4D7CEF: PortalRun (pquery.c:779)\n==1082059== by 0x4D3BB7: exec_simple_query (postgres.c:1239)\n==1082059== by 0x4D4ABD: PostgresMain (postgres.c:4315)\n==1082059== by 0x45B0C9: BackendRun (postmaster.c:4510)\n==1082059== by 0x45B0C9: BackendStartup (postmaster.c:4202)\n==1082059== by 0x45B0C9: ServerLoop (postmaster.c:1727)\n==1082059== by 0x45C754: PostmasterMain (postmaster.c:1400)\n==1082059== by 0x3BDD68: main (main.c:210)\n==1082059== Address 0x6cc7378 is in a rw- anonymous segment\n==1082059==\n==1082059== VALGRINDERROR-END\n\n(The line numbers might be slightly different to master here, but the\nline from btvacuumpage() is definitely the one that accesses the\nspecial area of the B-Tree page after we drop the pin.)\n\nThis patch is very rough -- it was just the first thing that I tried.\nI don't know how Valgrind remembers the status of shared memory\nregions across backends 
when they're marked with\nVALGRIND_MAKE_MEM_NOACCESS(). Even still, this idea looks promising. I\nshould try to come up with a committable patch before too long.\n\nThe good news is that the error I showed is the only error that I see,\nat least with this rough patch + \"make installcheck\". It's possible\nthat the patch isn't as effective as it could be, though. For one\nthing, it definitely won't detect incorrect buffer accesses where a\npin is held but a buffer lock is not held. That seems possible, but a\nbit harder.\n\n-- \nPeter Geoghegan", "msg_date": "Wed, 22 Apr 2020 20:08:42 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: xid wraparound danger due to INDEX_CLEANUP false" }, { "msg_contents": "Hi,\n\nOn 2020-04-22 20:08:42 -0700, Peter Geoghegan wrote:\n> I can get Valgrind to complain about it when the regression tests are\n> run with the attached patch applied.\n\nNice! Have you checked how much of an incremental slowdown this causes?\n\n\n> This patch is very rough -- it was just the first thing that I tried.\n> I don't know how Valgrind remembers the status of shared memory\n> regions across backends when they're marked with\n> VALGRIND_MAKE_MEM_NOACCESS(). Even still, this idea looks promising. I\n> should try to come up with a committable patch before too long.\n\nIIRC valgrind doesn't at all share access markings across processes.\n\n\n> The good news is that the error I showed is the only error that I see,\n> at least with this rough patch + \"make installcheck\". It's possible\n> that the patch isn't as effective as it could be, though. For one\n> thing, it definitely won't detect incorrect buffer accesses where a\n> pin is held but a buffer lock is not held. 
That seems possible, but a\n> bit harder.\n\nGiven hint bits it seems fairly hard to make that a reliable check.\n\n\n> +#ifdef USE_VALGRIND\n> +\tif (!isLocalBuf)\n> +\t{\n> +\t\tBuffer b = BufferDescriptorGetBuffer(bufHdr);\n> +\t\tVALGRIND_MAKE_MEM_DEFINED(BufferGetPage(b), BLCKSZ);\n> +\t}\n> +#endif\n\nHm. It's a bit annoying that we have to mark the contents defined. It'd\nbe kinda useful to be able to mark unused parts of pages as undefined\ninitially. But there's afaictl no way to just set/unset addressability,\nwhile not touching definedness. So this is probably the best we can do\nwithout adding a lot of complexity.\n\n\n> \tif (isExtend)\n> \t{\n> \t\t/* new buffers are zero-filled */\n> @@ -1039,6 +1047,12 @@ BufferAlloc(SMgrRelation smgr, char relpersistence, ForkNumber forkNum,\n> \t\tbuf = GetBufferDescriptor(buf_id);\n> \n> \t\tvalid = PinBuffer(buf, strategy);\n> +#ifdef USE_VALGRIND\n> +\t\t{\n> +\t\t\tBuffer b = BufferDescriptorGetBuffer(buf);\n> +\t\t\tVALGRIND_MAKE_MEM_DEFINED(BufferGetPage(b), BLCKSZ);\n> +\t\t}\n> +#endif\n\nWhy aren't we doing this in PinBuffer() and PinBuffer_Locked(), but at\ntheir callsites?\n\n\n> @@ -1633,6 +1653,12 @@ PinBuffer(BufferDesc *buf, BufferAccessStrategy strategy)\n> \t\t\t\t\t\t\t\t\t\t\t buf_state))\n> \t\t\t{\n> \t\t\t\tresult = (buf_state & BM_VALID) != 0;\n> +#ifdef USE_VALGRIND\n> +\t\t\t\t{\n> +\t\t\t\t\tBuffer b = BufferDescriptorGetBuffer(buf);\n> +\t\t\t\t\tVALGRIND_MAKE_MEM_DEFINED(BufferGetPage(b), BLCKSZ);\n> +\t\t\t\t}\n> +#endif\n> \t\t\t\tbreak;\n> \t\t\t}\n> \t\t}\n\nOh, but we actually are doing it in PinBuffer() too?\n\n\n> \t\t/*\n> \t\t * Decrement the shared reference count.\n> @@ -2007,6 +2039,12 @@ BufferSync(int flags)\n> \t\t */\n> \t\tif (pg_atomic_read_u32(&bufHdr->state) & BM_CHECKPOINT_NEEDED)\n> \t\t{\n> +#ifdef USE_VALGRIND\n> +\t\t\t{\n> +\t\t\t\tBuffer b = BufferDescriptorGetBuffer(bufHdr);\n> +\t\t\t\tVALGRIND_MAKE_MEM_DEFINED(BufferGetPage(b), BLCKSZ);\n> +\t\t\t}\n> 
+#endif\n> \t\t\tif (SyncOneBuffer(buf_id, false, &wb_context) & BUF_WRITTEN)\n> \t\t\t{\n> \t\t\t\tTRACE_POSTGRESQL_BUFFER_SYNC_WRITTEN(buf_id);\n\nShouldn't the pin we finally acquire in SyncOneBuffer() be sufficient?\n\n\n> @@ -2730,6 +2768,12 @@ FlushBuffer(BufferDesc *buf, SMgrRelation reln)\n> \t * Run PageGetLSN while holding header lock, since we don't have the\n> \t * buffer locked exclusively in all cases.\n> \t */\n> +#ifdef USE_VALGRIND\n> +\t{\n> +\t\tBuffer b = BufferDescriptorGetBuffer(buf);\n> +\t\tVALGRIND_MAKE_MEM_DEFINED(BufferGetPage(b), BLCKSZ);\n> +\t}\n> +#endif\n> \trecptr = BufferGetLSN(buf);\n\nThis shouldn't be needed, as the caller ought to hold a pin:\n *\n * The caller must hold a pin on the buffer and have share-locked the\n * buffer contents. (Note: a share-lock does not prevent updates of\n * hint bits in the buffer, so the page could change while the write\n * is in progress, but we assume that that will not invalidate the data\n * written.)\n *\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 22 Apr 2020 20:32:56 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: xid wraparound danger due to INDEX_CLEANUP false" }, { "msg_contents": "On Wed, Apr 22, 2020 at 8:33 PM Andres Freund <andres@anarazel.de> wrote:\n> On 2020-04-22 20:08:42 -0700, Peter Geoghegan wrote:\n> > I can get Valgrind to complain about it when the regression tests are\n> > run with the attached patch applied.\n>\n> Nice! Have you checked how much of an incremental slowdown this causes?\n\nNo, but I didn't notice much of a slowdown.\n\n> > This patch is very rough -- it was just the first thing that I tried.\n> > I don't know how Valgrind remembers the status of shared memory\n> > regions across backends when they're marked with\n> > VALGRIND_MAKE_MEM_NOACCESS(). Even still, this idea looks promising. 
I\n> > should try to come up with a committable patch before too long.\n>\n> IIRC valgrind doesn't at all share access markings across processes.\n\nI didn't think so.\n\n> > The good news is that the error I showed is the only error that I see,\n> > at least with this rough patch + \"make installcheck\". It's possible\n> > that the patch isn't as effective as it could be, though. For one\n> > thing, it definitely won't detect incorrect buffer accesses where a\n> > pin is held but a buffer lock is not held. That seems possible, but a\n> > bit harder.\n>\n> Given hint bits it seems fairly hard to make that a reliable check.\n\nI don't follow. It doesn't have to be a perfect check. Detecting if\nthere is *any* buffer lock held at all would be a big improvement.\n\n> Why aren't we doing this in PinBuffer() and PinBuffer_Locked(), but at\n> their callsites?\n\nI wrote this patch in a completely careless manner in less than 10\nminutes, just to see how hard it was (I thought that it might have\nbeen much harder). I wasn't expecting you to review it. I thought that\nI was clear about that.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Wed, 22 Apr 2020 21:05:45 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: xid wraparound danger due to INDEX_CLEANUP false" }, { "msg_contents": "On Wed, Apr 22, 2020 at 9:05 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> > Given hint bits it seems fairly hard to make that a reliable check.\n>\n> I don't follow. It doesn't have to be a perfect check. Detecting if\n> there is *any* buffer lock held at all would be a big improvement.\n\nIt is true that the assumptions that heapam makes about what a buffer\npin will prevent (that a pin will prevent any kind of page\ndefragmentation) are not really compatible with marking pages as\nundefined in lower level code like bufmgr.c. 
There are too many\nexceptions for it to work like that.\n\nThe case I was really thinking about was the nbtree\n_bt_drop_lock_and_maybe_pin() stuff, which is very confusing. The\nconfusing structure of the\nBTScanPosIsPinned()/_bt_drop_lock_and_maybe_pin() code more or less\ncaused the skip scan patch to have numerous bugs involving code\nholding a buffer pin, but not a buffer lock, at least when I last\nlooked at it a couple of months ago. The only thing having a pin on a\nleaf page guarantees is that the TIDs from tuples on the page won't be\nconcurrently recycled by VACUUM. This is a very weak guarantee -- in\nparticular, it's much weaker than the guarantees around buffer pins\nthat apply in heapam. It's certainly not going to prevent any kind of\ndefragmentation of the page -- the page can even split, for example.\nAny code that relies on holding a pin to prevent anything more than\nthat is broken, but possibly only in a subtle way. It's not like page\nsplits happen all that frequently.\n\nGiven that I was concerned about a fairly specific situation, a\nspecific solution seems like it might be the best way to structure the\nextra checks. The attached rough patch shows the kind of approach that\nmight be practical in specific index access methods. This works on top\nof the patch I posted yesterday. The idea is to mark the buffer's page\nas a noaccess region within _bt_drop_lock_and_maybe_pin(), and then\nmark it defined again at either of the two points that we might have\nto relock (but not repin) the buffer to re-read the page. 
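The marking pattern being discussed here can be sketched without Valgrind at all: track an addressability flag per buffer and assert on every access. This is an editor's toy model of the idea, not the actual patch; the names (ToyBuffer, buffer_mark_noaccess, and so on) are invented for illustration:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

#define TOY_BLCKSZ 8192

/*
 * Toy buffer: page bytes plus a flag that plays the role of Valgrind's
 * addressability state for the page.
 */
typedef struct ToyBuffer
{
	bool		accessible;
	uint8_t		page[TOY_BLCKSZ];
} ToyBuffer;

/*
 * Analogue of VALGRIND_MAKE_MEM_NOACCESS(); the real patch would do this
 * where the lock (and possibly pin) is dropped, as in
 * _bt_drop_lock_and_maybe_pin().
 */
static void
buffer_mark_noaccess(ToyBuffer *buf)
{
	buf->accessible = false;
}

/* Analogue of VALGRIND_MAKE_MEM_DEFINED(), for the relock path. */
static void
buffer_mark_defined(ToyBuffer *buf)
{
	buf->accessible = true;
}

/*
 * All page reads go through here; reading while the page is marked
 * inaccessible is exactly the bug class the instrumentation catches.
 */
static uint8_t
buffer_read(const ToyBuffer *buf, int off)
{
	assert(buf->accessible);	/* a Valgrind error in the real setup */
	return buf->page[off];
}
```

With accesses funneled through buffer_read(), forgetting to re-mark the page after a relock shows up immediately as an assertion failure, which is the same class of report Valgrind produces for the real thing.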
This doesn't\ncause any regression test failures, so maybe there are no bugs like\nthis currently, but it still seems like it might be worth pursuing on\ntop of the buffer pin stuff.\n\n--\nPeter Geoghegan", "msg_date": "Thu, 23 Apr 2020 17:44:08 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: xid wraparound danger due to INDEX_CLEANUP false" }, { "msg_contents": "On Thu, Apr 16, 2020 at 11:27 AM Andres Freund <andres@anarazel.de> wrote:\n> Sure, there is some pre-existing wraparound danger for individual\n> pages. But it's a pretty narrow corner case before INDEX_CLEANUP\n> off.\n>\n> That comment says something about \"shared-memory free space map\", making\n> it sound like any crash would lose the page. But it's a normal FSM\n> these days. Vacuum will insert the deleted page into the free space\n> map. So either the FSM would need to be corrupted to not find the\n> inserted page anymore, or the index would need to grow slow enough to\n> not use a page before the wraparound. And then such wrapped around xids\n> would exist on individual pages. Not on all deleted pages, like with\n> INDEX_CLEANUP false.\n\nIs that really that narrow, even without \"INDEX_CLEANUP false\"? It's\nnot as if the index needs to grow very slowly to have only very few\npage splits hour to hour (it depends on whether the inserts are random\nor not, and so on). Especially if you had a bulk DELETE affecting many\nrows, which is hardly that uncommon.\n\nFundamentally, btvacuumpage() doesn't freeze 32-bit XIDs (from\nbtpo.xact) when it recycles deleted pages. It simply puts them in the\nFSM without changing anything about the page itself. This means\nsurprisingly little in the context of nbtree: the\n_bt_page_recyclable() XID check that takes place in btvacuumpage()\nalso takes place in _bt_getbuf(), at the point where the page actually\ngets recycled by the client. 
That's not great.\n\nIt wouldn't be so unreasonable if btvacuumpage() actually did freeze\nthe btpo.xact value at the point where it puts the page in the FSM. It\ndoesn't need to be crash safe; it can work as a hint. Maybe \"freezing\"\nis the wrong word (too much baggage). More like we'd have VACUUM\nrepresent that \"this deleted B-Tree page is definitely not considered\nto still be a part of the tree by any possible other backend\" using a\npage flag hint -- btvacuumpage() would \"mark the deleted page as\nrecyclable\" explicitly. Note that we still need to keep the original\nbtpo.xact XID around for _bt_log_reuse_page() (also, do we need to\nworry about _bt_log_reuse_page() with a wrapped-around XID?).\n\n--\nPeter Geoghegan\n\n\n", "msg_date": "Wed, 29 Apr 2020 11:28:00 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: xid wraparound danger due to INDEX_CLEANUP false" }, { "msg_contents": "Hi,\n\nOn 2020-04-29 11:28:00 -0700, Peter Geoghegan wrote:\n> On Thu, Apr 16, 2020 at 11:27 AM Andres Freund <andres@anarazel.de> wrote:\n> > Sure, there is some pre-existing wraparound danger for individual\n> > pages. But it's a pretty narrow corner case before INDEX_CLEANUP\n> > off.\n> >\n> > That comment says something about \"shared-memory free space map\", making\n> > it sound like any crash would lose the page. But it's a normal FSM\n> > these days. Vacuum will insert the deleted page into the free space\n> > map. So either the FSM would need to be corrupted to not find the\n> > inserted page anymore, or the index would need to grow slow enough to\n> > not use a page before the wraparound. And then such wrapped around xids\n> > would exist on individual pages. Not on all deleted pages, like with\n> > INDEX_CLEANUP false.\n> \n> Is that really that narrow, even without \"INDEX_CLEANUP false\"? 
It's\n> not as if the index needs to grow very slowly to have only very few\n> page splits hour to hour (it depends on whether the inserts are random\n> or not, and so on). Especially if you had a bulk DELETE affecting many\n> rows, which is hardly that uncommon.\n\nWell, you'd need to have a workload that has bulk deletes, high xid\nusage *and* doesn't insert new data to use those empty pages\n\n\n> Fundamentally, btvacuumpage() doesn't freeze 32-bit XIDs (from\n> bpto.xact) when it recycles deleted pages. It simply puts them in the\n> FSM without changing anything about the page itself. This means\n> surprisingly little in the context of nbtree: the\n> _bt_page_recyclable() XID check that takes place in btvacuumpage()\n> also takes place in _bt_getbuf(), at the point where the page actually\n> gets recycled by the client. That's not great.\n\nI think it's quite foolish for btvacuumpage() to not freeze xids. If we\nonly do so when necessary (i.e. older than a potential new relfrozenxid,\nand only when the vacuum didn't yet skip pages), the costs are pretty\nminiscule.\n\n\n> It wouldn't be so unreasonable if btvacuumpage() actually did freeze\n> the bpto.xact value at the point where it puts the page in the FSM. It\n> doesn't need to be crash safe; it can work as a hint.\n\nI'd much rather make sure the xid is guaranteed to be removed. As\noutlined above, the cost would be small, and I think the likelihood of\nthe consequences of wrapped around xids getting worse over time is\nsubstantial.\n\n\n> Note that we still need to keep the original bpto.xact XID around for\n> _bt_log_reuse_page() (also, do we need to worry _bt_log_reuse_page()\n> with a wrapped-around XID?).\n\nI'd just WAL log the reuse when freezing the xid. Then there's no worry\nabout wraparounds. 
And I don't think it'd cause additional conflicts;\nthe vacuum itself (or a prior vacuum) would also have to cause them, I\nthink?\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 29 Apr 2020 12:54:43 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: xid wraparound danger due to INDEX_CLEANUP false" }, { "msg_contents": "On Wed, Apr 29, 2020 at 12:54 PM Andres Freund <andres@anarazel.de> wrote:\n> > Fundamentally, btvacuumpage() doesn't freeze 32-bit XIDs (from\n> > bpto.xact) when it recycles deleted pages. It simply puts them in the\n> > FSM without changing anything about the page itself. This means\n> > surprisingly little in the context of nbtree: the\n> > _bt_page_recyclable() XID check that takes place in btvacuumpage()\n> > also takes place in _bt_getbuf(), at the point where the page actually\n> > gets recycled by the client. That's not great.\n>\n> I think it's quite foolish for btvacuumpage() to not freeze xids. If we\n> only do so when necessary (i.e. older than a potential new relfrozenxid,\n> and only when the vacuum didn't yet skip pages), the costs are pretty\n> miniscule.\n\nI wonder if we should just bite the bullet and mark pages that are\nrecycled by VACUUM as explicitly recycled, with WAL-logging and\neverything (this is like freezing, but stronger). That way, the\n_bt_page_recyclable() call within _bt_getbuf() would only be required\nto check that state (while btvacuumpage() would use something like a\n_bt_page_eligible_for_recycling(), which would do the same thing as\nthe current _bt_page_recyclable()).\n\nI'm not sure how low the costs would be, but at least we'd only have\nto do it once per already-deleted page (i.e. only the first time a\nVACUUM operation found _bt_page_eligible_for_recycling() returned true\nfor the page and marked it recycled in a crash safe manner). 
That\ndesign would be quite a lot simpler, because it expresses the problem\nin terms that make sense to the nbtree code. _bt_getbuf() should not\nhave to make a distinction between \"possibly recycled\" versus\n\"definitely recycled\".\n\nIt makes sense that the FSM is not crash safe, I suppose, but we're\ntalking about relatively large amounts of free space here. Can't we\njust do it properly/reliably?\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Wed, 29 Apr 2020 13:40:55 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: xid wraparound danger due to INDEX_CLEANUP false" }, { "msg_contents": "On Wed, Apr 29, 2020 at 1:40 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> I'm not sure how low the costs would be, but at least we'd only have\n> to do it once per already-deleted page (i.e. only the first time a\n> VACUUM operation found _bt_page_eligible_for_recycling() returned true\n> for the page and marked it recycled in a crash safe manner). That\n> design would be quite a lot simpler, because it expresses the problem\n> in terms that make sense to the nbtree code. _bt_getbuf() should not\n> have to make a distinction between \"possibly recycled\" versus\n> \"definitely recycled\".\n\nAs a bonus, we now have an easy correctness cross-check: if some\nrandom piece of nbtree code lands on a page (having followed a\ndownlink or sibling link) that is marked recycled, then clearly\nsomething is very wrong -- throw a \"can't happen\" error.\n\nThis would be especially useful in places like _bt_readpage(), I\nsuppose. Think of extreme cases like cursors, which can have a scan\nthat remembers a block number of a leaf page, that only actually gets\naccessed hours or days later (for whatever reason). 
If that code was\nbuggy in some way, we might have a hope of figuring it out at some\npoint with this cross-check.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Wed, 29 Apr 2020 14:04:40 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: xid wraparound danger due to INDEX_CLEANUP false" }, { "msg_contents": "On Wed, Apr 29, 2020 at 2:04 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> As a bonus, we now have an easy correctness cross-check: if some\n> random piece of nbtree code lands on a page (having followed a\n> downlink or sibling link) that is marked recycled, then clearly\n> something is very wrong -- throw a \"can't happen\" error.\n\nIf this doesn't sound that appealing to any of you, then bear this in\nmind: nbtree has a terrifying tendency to *not* produce wrong answers\nto queries when we land on a concurrently-recycled page.\n_bt_moveright() is willing to move right as many times as it takes to\narrive at the correct page, even though in typical cases having to\nmove right once is rare -- twice is exceptional. I suppose that there\nis roughly a 50% chance that we'll end up landing at a point in the\nkey space that is to the left of the point where we're supposed to\narrive at. It might take many, many page accesses before\n_bt_moveright() finds the correct page, but that often won't be very\nnoticeable. Or if it is noticed, corruption won't be suspected --\nwe're still getting a correct answer.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Wed, 29 Apr 2020 14:25:15 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: xid wraparound danger due to INDEX_CLEANUP false" }, { "msg_contents": "Hi,\n\nOn 2020-04-29 13:40:55 -0700, Peter Geoghegan wrote:\n> On Wed, Apr 29, 2020 at 12:54 PM Andres Freund <andres@anarazel.de> wrote:\n> > > Fundamentally, btvacuumpage() doesn't freeze 32-bit XIDs (from\n> > > bpto.xact) when it recycles deleted pages. 
It simply puts them in the\n> > > FSM without changing anything about the page itself. This means\n> > > surprisingly little in the context of nbtree: the\n> > > _bt_page_recyclable() XID check that takes place in btvacuumpage()\n> > > also takes place in _bt_getbuf(), at the point where the page actually\n> > > gets recycled by the client. That's not great.\n> >\n> > I think it's quite foolish for btvacuumpage() to not freeze xids. If we\n> > only do so when necessary (i.e. older than a potential new relfrozenxid,\n> > and only when the vacuum didn't yet skip pages), the costs are pretty\n> > miniscule.\n>\n> I wonder if we should just bite the bullet and mark pages that are\n> recycled by VACUUM as explicitly recycled, with WAL-logging and\n> everything (this is like freezing, but stronger). That way, the\n> _bt_page_recyclable() call within _bt_getbuf() would only be required\n> to check that state (while btvacuumpage() would use something like a\n> _bt_page_eligible_for_recycling(), which would do the same thing as\n> the current _bt_page_recyclable()).\n\nI'm not sure I see the advantage. Only doing so in the freezing case\nseems unlikely to cause additional conflicts, but I'm less sure about\ndoing it always. btpo.xact will often be quite recent, right? So it'd\nlikely cause more conflicts.\n\n\n> I'm not sure how low the costs would be, but at least we'd only have\n> to do it once per already-deleted page (i.e. only the first time a\n> VACUUM operation found _bt_page_eligible_for_recycling() returned true\n> for the page and marked it recycled in a crash safe manner). That\n> design would be quite a lot simpler, because it expresses the problem\n> in terms that make sense to the nbtree code. _bt_getbuf() should not\n> have to make a distinction between \"possibly recycled\" versus\n> \"definitely recycled\".\n\nI don't really see the problem with the check in _bt_getbuf()? 
I'd\nactually like to be *more* aggressive about putting pages in the FSM (or\nwhatever), and that'd probably require checks like this. E.g. whenever\nwe unlink a page, we should put it into the FSM (or something different,\nsee below). And then do all the necessary checks in _bt_getbuf().\n\nIt's pretty sad that one right now basically needs to vacuum twice to\nreuse pages in nbtree (once to delete the page, once to record it in the\nfsm). Usually the xmin horizon should advance much more quickly than\nthat, allowing reuse earlier.\n\nAs far as I can tell, even just adding them to the FSM when setting\nISDELETED would be advantageous. There's obviously the issue that we'll cause\nbackends to visit a lot of pages that can't actually be reused... But if\nwe did what I suggest below, that danger probably could largely be\navoided.\n\n\n> It makes sense that the FSM is not crash safe, I suppose, but we're\n> talking about relatively large amounts of free space here. Can't we\n> just do it properly/reliably?\n\nWhat do you mean by that? To have the FSM be crash-safe?\n\nIt could make sense to just not have the FSM, and have a linked-list\nstyle stack of pages reachable from the meta page. That'd be especially\nadvantageous if we kept xids associated with the \"about to be\nrecyclable\" pages in the metapage, so it'd be cheap to evaluate.\n\nThere's possibly some not-entirely-trivial locking concerns around such\na list. Adding entries seems easy enough, because we currently only\ndelete pages from within vacuum. 
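A linked-list style free-page stack of that sort might look, in heavily simplified form, like the following editor's sketch (invented names, no locking or WAL, and a plain "<" comparison standing in for the wraparound-aware XID tests real code would need):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

typedef uint32_t TransactionId;

#define TOY_NBLOCKS 16
#define TOY_INVALID_BLOCK UINT32_MAX

/*
 * Per-page state for a deleted page: the XID gating reuse (standing in
 * for btpo.xact) plus a next-free link forming the chain.
 */
typedef struct ToyPage
{
	TransactionId deletion_xid;	/* reuse only once this XID is old */
	uint32_t	next_free;		/* next deleted page in the chain */
} ToyPage;

/* Toy "metapage": just the head of the free-page chain. */
static ToyPage toy_pages[TOY_NBLOCKS];
static uint32_t toy_first_free = TOY_INVALID_BLOCK;

/* VACUUM side: after unlinking a page, push it onto the chain. */
static void
free_page_push(uint32_t blkno, TransactionId deletion_xid)
{
	toy_pages[blkno].deletion_xid = deletion_xid;
	toy_pages[blkno].next_free = toy_first_free;
	toy_first_free = blkno;
}

/*
 * _bt_getbuf()-style side: pop the head entry, but only if its gating
 * XID is older than the caller's horizon.  Simplification: entries are
 * pushed LIFO, so an unreusable head can hide older, reusable entries
 * deeper in the chain -- a real design would order the chain or track
 * the oldest XID separately.
 */
static bool
free_page_pop(TransactionId horizon, uint32_t *blkno)
{
	uint32_t	head = toy_first_free;

	if (head == TOY_INVALID_BLOCK || toy_pages[head].deletion_xid >= horizon)
		return false;			/* nothing reusable yet */
	toy_first_free = toy_pages[head].next_free;
	*blkno = head;
	return true;
}
```

A real version would keep the gating XIDs (or at least the oldest one) in the metapage so reusability could be tested cheaply, and would need crash-safety and lock-ordering rules for both the push and pop paths.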
But popping entries could be more\ncomplicated, I'm not exactly sure if there are potential lock nesting\nissues (nor am I actually sure there aren't existing ones).\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 29 Apr 2020 14:55:59 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: xid wraparound danger due to INDEX_CLEANUP false" }, { "msg_contents": "On Wed, Apr 29, 2020 at 2:56 PM Andres Freund <andres@anarazel.de> wrote:\n> I'm not sure I see the advantage. Only doing so in the freezing case\n> seems unlikely to cause additional conflicts, but I'm less sure about\n> doing it always. btpo.xact will often be quite recent, right? So it'd\n> likely cause more conflicts.\n\nbtpo.xact values come from a call to ReadNewTransactionId(). There\npretty much has to be one call to ReadNewTransactionId() per page\ndeletion (see comments about it within _bt_unlink_halfdead_page()). So\nyes, I'd say that it could be very recent.\n\nI don't mind continuing to do the conflicts in _bt_getbuf(), which\nwould delay it until the point where we really need the page --\nespecially if we could do that in a way that captures temporal\nlocality (e.g. your recyclable page chaining idea).\n\n> I don't really see the problem with the check in _bt_getbuf()? I'd\n> actually like to be *more* aggressive about putting pages in the FSM (or\n> whatever), and that'd probably require checks like this. E.g. whenever\n> we unlink a page, we should put it into the FSM (or something different,\n> see below). And then do all the necessary checks in _bt_getbuf().\n\nBasically, I would like to have a page state that represents \"it\nshould be impossible for any scan to land on this page, except for\nbtvacuumscan()\". Without relying on 32-bit XIDs, and ideally without\nrelying on any other state to interpret what it really means. 
In\nprinciple we can set a deleted page to that state at the earliest\npossible point when that becomes true, without it meaning anything\nelse, or requiring that we do anything else at the same time (e.g.\nactually using it for the right half of a page in a page split,\ngenerating recovery conflicts).\n\n> It's pretty sad that one right now basically needs to vacuum twice to\n> reuse pages in nbtree (once to delete the page, once to record it in the\n> fsm). Usually the xmin horizon should advance much more quickly than\n> that, allowing reuse earlier.\n\nYes, that's definitely bad. I like the general idea of making us more\naggressive with recycling. Marking pages as \"recyclable\" (not\n\"recycled\") not too long after they first get deleted in VACUUM, and\nthen separately recycling them in _bt_getbuf() is a cleaner design.\nSeparation of concerns. That would build confidence in a more\naggressive approach -- we could add lots of cross-checks against\nlanding on a recyclable page. Note that we have had a bug in this\nexact mechanism in the past -- see commit d3abbbebe52.\n\nIf there was a bug then we might still land on the page after it gets\nfully recycled, in which case the cross-checks won't detect the bug.\nBut ISTM that we always have a good chance of landing on the page\nbefore that happens, in which case the cross-check complains and we\nget a log message, and possibly even a bug report. We don't have to be\ntruly lucky to see when our approach is buggy when we go on to make\npage deletion more aggressive (in whatever way). And we get the same\ncross-checks on standbys.\n\n> > It makes sense that the FSM is not crash safe, I suppose, but we're\n> > talking about relatively large amounts of free space here. Can't we\n> > just do it properly/reliably?\n>\n> What do you mean by that? To have the FSM be crash-safe?\n\nThat, or the equivalent (pretty much your chaining idea) may well be a\ngood idea. 
But what I really meant was an explicit \"recyclable\" page\nstate. That's all. We may or may not also continue to rely on the FSM\nin the same way.\n\nI suppose that we should try to get rid of the FSM in nbtree. I see\nthe advantages. It's not essential to my proposal, though.\n\n> It could make sense to just not have the FSM, and have a linked-list\n> style stack of pages reachable from the meta page. That'd be especially\n> advantageous if we kept xids associated with the \"about to be\n> recyclable\" pages in the metapage, so it'd be cheap to evaluate.\n\nI like that idea. But doesn't that also argue for delaying the\nconflicts until we actually recycle a \"recyclable\" page?\n\n> There's possibly some not-entirely-trivial locking concerns around such\n> a list. Adding entries seems easy enough, because we currently only\n> delete pages from within vacuum. But popping entries could be more\n> complicated, I'm not exactly sure if there are potential lock nesting\n> issues (nor am I actually sure there aren't existing ones).\n\nA \"recyclable\" page state might help with this, too. _bt_getbuf() is a\nbag of tricks, even leaving aside generating recovery conflicts.\n\nIf we are going to be more eager, then the cost of dirtying the page a\nsecond time to mark it \"recyclable\" might mostly not matter.\nEspecially if we chain the pages. That is, maybe VACUUM recomputes\nRecentGlobalXmin at the end of its first btvacuumscan() scan (or at\nthe end of the whole VACUUM operation), when it notices that it is\nalready possible to mark many pages as \"recyclable\". Perhaps we won't\nwrite out the page twice much of the time, because it won't have been\nthat long since VACUUM dirtied the page in order to delete it.\n\nYeah, we could be a lot more aggressive here, in a bunch of ways. As\nI've said quite a few times, it seems like our implementation of \"the\ndrain technique\" is way more conservative than it needs to be (i.e. 
we\nuse ReadNewTransactionId() without considering any of the specifics of\nthe index). But if we mess up, we can't expect amcheck to detect the\nproblems, which would be totally transient. We're talking about\nincredibly subtle concurrency bugs. So IMV it's just not going to\nhappen until the whole thing becomes way less scary.\n\n--\nPeter Geoghegan\n\n\n", "msg_date": "Wed, 29 Apr 2020 16:58:34 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: xid wraparound danger due to INDEX_CLEANUP false" }, { "msg_contents": "On Sat, 18 Apr 2020 at 03:22, Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> On Thu, Apr 16, 2020 at 6:49 PM Andres Freund <andres@anarazel.de> wrote:\n> > Yea. _bt_vacuum_needs_cleanup() needs to check if\n> > metad->btm_oldest_btpo_xact is older than the FreezeLimit computed by\n> > vacuum_set_xid_limits() and vacuum the index if so even if INDEX_CLEANUP\n> > false.\n>\n> I'm still fairly unclear on what the actual problem is here, and on\n> how we propose to fix it. It seems to me that we probably don't have a\n> problem in the case where we don't advance relfrozenxid or relminmxid,\n> because in that case there's not much difference between the behavior\n> created by this patch and a case where we just error out due to an\n> interrupt or something before reaching the index cleanup stage. I\n> think that the problem is that in the case where we do relfrozenxid,\n> we might advance it past some XID value stored in the index metadata.\n> Is that right?\n\nI think advancing relfrozenxid past oldest_btpo_xact actually cannot\nbe a problem. If a subsequent vacuum sees oldest_btpo_xact is an old\nxid, we can recycle pages. Before the introduction of INDEX_CLEANUP =\nfalse, we used to invoke either bulkdelete or vacuumcleanup at least\nonce in each vacuum. And thanks to relfrozenxid, a table is\nperiodically vacuumed by an anti-wraparound vacuum. 
But with this\nfeature, we can unconditionally skip both bulkdelete and\nvacuumcleanup. So IIUC the problem is that since we skip both,\noldest_btpo_xact could be seen as a \"future\" xid during vacuum, which\ncauses vacuum to miss pages that can actually be recycled.\n\nI think we can fix this issue by calling the vacuumcleanup callback\nduring an anti-wraparound vacuum even if INDEX_CLEANUP is false. 
That way we can\nlet the index AM decide whether to do index cleanup at least once\nbefore XID wraparound, same as before. Originally, the motivation for\ndisabling INDEX_CLEANUP was to skip the index full scan during\nanti-wraparound vacuum to reduce the execution time. By this\nchange, we will end up doing an index full scan in some\nanti-wraparound vacuum cases, but we can still skip that if there is no\nrecyclable page in the index.\n\nRegards,\n\n--\nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Wed, 6 May 2020 06:51:35 +0900", "msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: xid wraparound danger due to INDEX_CLEANUP false" }, { "msg_contents": "On Tue, May 5, 2020 at 2:52 PM Masahiko Sawada\n<masahiko.sawada@2ndquadrant.com> wrote:\n> So IIUC the problem is that since we skip both,\n> oldst_btpo_xact could be seen as a \"future\" xid during vacuum. Which\n> will be a cause of that vacuum misses pages which can actually be\n> recycled.\n\nThis is also my understanding of the problem.\n\n> I think we can fix this issue by calling vacuumcleanup callback when\n> an anti-wraparound vacuum even if INDEX_CLEANUP is false. That way we can\n> let index AM make decisions whether doing cleanup index at least once\n> until XID wraparound, same as before.\n\n+1\n\nCan you work on a patch?\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Tue, 5 May 2020 15:13:56 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: xid wraparound danger due to INDEX_CLEANUP false" }, { "msg_contents": "On Wed, 6 May 2020 at 07:14, Peter Geoghegan <pg@bowt.ie> wrote:\n>\n> On Tue, May 5, 2020 at 2:52 PM Masahiko Sawada\n> <masahiko.sawada@2ndquadrant.com> wrote:\n> > So IIUC the problem is that since we skip both,\n> > oldst_btpo_xact could be seen as a \"future\" xid during vacuum. Which\n> > will be a cause of that vacuum misses pages which can actually be\n> > recycled.\n>\n> This is also my understanding of the problem.\n>\n> > I think we can fix this issue by calling vacuumcleanup callback when\n> > an anti-wraparound vacuum even if INDEX_CLEANUP is false. That way we can\n> > let index AM make decisions whether doing cleanup index at least once\n> > until XID wraparound, same as before.\n>\n> +1\n>\n> Can you work on a patch?\n\nYes, I'll submit a bug fix patch.\n\nRegards,\n\n-- \nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Wed, 6 May 2020 07:17:48 +0900", "msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: xid wraparound danger due to INDEX_CLEANUP false" }, { "msg_contents": "On Wed, 6 May 2020 at 07:17, Masahiko Sawada\n<masahiko.sawada@2ndquadrant.com> wrote:\n>\n> On Wed, 6 May 2020 at 07:14, Peter Geoghegan <pg@bowt.ie> wrote:\n> >\n> > On Tue, May 5, 2020 at 2:52 PM Masahiko Sawada\n> > <masahiko.sawada@2ndquadrant.com> wrote:\n> > > So IIUC the problem is that since we skip both,\n> > > oldst_btpo_xact could be seen as a \"future\" xid during vacuum. 
Which\n> > > will be a cause of that vacuum misses pages which can actually be\n> > > recycled.\n> >\n> > This is also my understanding of the problem.\n> >\n> > > I think we can fix this issue by calling vacuumcleanup callback when\n> > > an anti-wraparound vacuum even if INDEX_CLEANUP is false. That way we can\n> > > let index AM make decisions whether doing cleanup index at least once\n> > > until XID wraparound, same as before.\n> >\n> > +1\n> >\n> > Can you work on a patch?\n>\n> Yes, I'll submit a bug fix patch.\n>\n\nI've attached a patch that fixes this issue.\n\nWith this patch, we no longer skip the index cleanup phase when\nperforming an aggressive vacuum. The reason why I re-enable only the\nindex cleanup phase is that the index vacuum phase can be called multiple\ntimes, which takes a very long time. 
Since the purpose of this index\ncleanup is to process recyclable pages, it's enough to do only the index\ncleanup phase. However, it also means we do index cleanup even when the\ntable might have garbage, whereas we used to call index cleanup only\nwhen there is no garbage on a table. As far as I can tell it's no\nproblem, but perhaps it needs more research.\n\n\nRegards,\n\n--\nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Wed, 6 May 2020 18:27:53 +0900", "msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: xid wraparound danger due to INDEX_CLEANUP false" }, { "msg_contents": "On Wed, May 6, 2020 at 2:28 AM Masahiko Sawada\n<masahiko.sawada@2ndquadrant.com> wrote:\n> I've attached the patch fixes this issue.\n>\n> With this patch, we don't skip only index cleanup phase when\n> performing an aggressive vacuum. The reason why I don't skip only\n> index cleanup phase is that index vacuum phase can be called multiple\n> times, which takes a very long time. Since the purpose of this index\n> cleanup is to process recyclable pages it's enough to do only index\n> cleanup phase.\n\nThat's only true in nbtree, though. The way that I imagined we'd want\nto fix this is by putting control in each index access method. So,\nwe'd revise the way that amvacuumcleanup() worked -- the\namvacuumcleanup routine for each index AM would sometimes be called in\na mode that means \"I don't really want you to do any cleanup, but if\nyou absolutely have to for your own reasons then you can\". This could\nbe represented using something similar to\nIndexVacuumInfo.analyze_only.\n\nThis approach has an obvious disadvantage: the patch really has to\nteach *every* index AM to do something with that state (most will\nsimply do no work). It seems logical to have the index AM control what\nhappens, though. This allows the logic to live inside\n_bt_vacuum_needs_cleanup() in the case of nbtree, so there is only one\nplace where we make decisions like this.\n\nMost other AMs don't have this problem. GiST has a similar issue with\nrecyclable pages, except that it doesn't use 32-bit XIDs so it doesn't\nneed to care about this stuff at all. Besides, it seems like it might\nbe a good idea to do other basic maintenance of the index when we're\n\"skipping\" index cleanup. We probably should always do things that\nrequire only a small, fixed amount of work. Things like maintaining\nmetadata in the metapage.\n\nThere may be practical reasons why this approach isn't suitable for\nbackpatch even if it is a superior approach. What do you think? 
Also,\nwhat do you think about this Robert and Andres?\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Wed, 6 May 2020 11:28:40 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: xid wraparound danger due to INDEX_CLEANUP false" }, { "msg_contents": "On Wed, May 6, 2020 at 11:28 AM Peter Geoghegan <pg@bowt.ie> wrote:\n> This approach has an obvious disadvantage: the patch really has to\n> teach *every* index AM to do something with that state (most will\n> simply do no work). It seems logical to have the index AM control what\n> happens, though. This allows the logic to live inside\n> _bt_vacuum_needs_cleanup() in the case of nbtree, so there is only one\n> place where we make decisions like this.\n\nAlso, do we really want to skip summarization of BRIN indexes? This\ncleanup is rather dissimilar to the cleanup that takes place in most\nother AMs -- it isn't really that related to garbage collection (BRIN\nis rather unique in this respect). I think that BRIN might be an\ninappropriate target for \"index_cleanup off\" VACUUMs for that reason.\n\nSee the explanation of how this takes place from the docs:\nhttps://www.postgresql.org/docs/devel/brin-intro.html#BRIN-OPERATION\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Wed, 6 May 2020 12:04:49 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: xid wraparound danger due to INDEX_CLEANUP false" }, { "msg_contents": "On 2020-May-06, Peter Geoghegan wrote:\n\n> Also, do we really want to skip summarization of BRIN indexes? This\n> cleanup is rather dissimilar to the cleanup that takes place in most\n> other AMs -- it isn't really that related to garbage collection (BRIN\n> is rather unique in this respect). 
I think that BRIN might be an\n> inappropriate target for \"index_cleanup off\" VACUUMs for that reason.\n> \n> See the explanation of how this takes place from the docs:\n> https://www.postgresql.org/docs/devel/brin-intro.html#BRIN-OPERATION\n\nGood question. I agree that BRIN summarization is not at all related to\nwhat other index AMs do during the cleanup phase. However, if the\nindex_cleanup feature is there to make it faster to process a table\nthat's nearing wraparound hazards (or at least the warnings), then I\nthink it makes sense to complete the vacuum as fast as possible -- which\nincludes not trying to summarize it for brin indexes.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Wed, 6 May 2020 16:06:25 -0400", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: xid wraparound danger due to INDEX_CLEANUP false" }, { "msg_contents": "On Wed, May 6, 2020 at 1:06 PM Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\n> Good question. I agree that BRIN summarization is not at all related to\n> what other index AMs do during the cleanup phase. However, if the\n> index_cleanup feature is there to make it faster to process a table\n> that's nearing wraparound hazards (or at least the warnings), then I\n> think it makes sense to complete the vacuum as fast as possible -- which\n> includes not trying to summarize it for brin indexes.\n\nI forgot about the fact that the AutoVacuumRequestWork() interface\nexists at all until just now. That's how \"autosummarize = on\" makes\nsure that autosummarization takes place. These work items are not\naffected by the fact that the VACUUM happens to be a \"index_cleanup\noff\" VACUUM. 
Fortunately, the user is required to explicitly opt-in to\nautosummarization (by setting \"autosummarize = on\") in order for\nautovacuum to spend extra time processing work items when it might be\nimportant to advance relfrozenxid ASAP. (My initial assumption was\nthat the autosummarization business happened within\nbrinvacuumcleanup(), but I now see that I was mistaken.)\n\nThere is a separate question (nothing to do with summarization) about\nthe cleanup steps performed in brinvacuumcleanup(), which are unlike\nany of the cleanup/maintenance that we expect for an amvacuumcleanup\nroutine in general. As I said in my last e-mail, these steps have\nnothing to do with garbage tuples. Rather, it's deferred maintenance\nthat we need to do even with append-only workloads (including when\nautosummarization has not been enabled). What about that? Is that\nokay?\n\nISTM that the fundamental issue is that BRIN imagines that it is in\ncontrol, which isn't quite true in light of the \"index_cleanup off\"\nstuff -- a call to brinvacuumcleanup() is expected to take place at\nfairly consistent intervals to take care of revmap processing, which,\nin general, might not happen now. I blame commit a96c41feec6 for this,\nnot BRIN. ISTM that whatever behavior we deem appropriate, the proper\nplace to decide on it is within BRIN. Not within vacuumlazy.c.\n\n--\nPeter Geoghegan\n\n\n", "msg_date": "Wed, 6 May 2020 14:15:01 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: xid wraparound danger due to INDEX_CLEANUP false" }, { "msg_contents": "On Thu, 7 May 2020 at 03:28, Peter Geoghegan <pg@bowt.ie> wrote:\n>\n> On Wed, May 6, 2020 at 2:28 AM Masahiko Sawada\n> <masahiko.sawada@2ndquadrant.com> wrote:\n> > I've attached the patch fixes this issue.\n> >\n> > With this patch, we don't skip only index cleanup phase when\n> > performing an aggressive vacuum. 
The reason why I don't skip only\n> > index cleanup phase is that index vacuum phase can be called multiple\n> > times, which takes a very long time. Since the purpose of this index\n> > cleanup is to process recyclable pages it's enough to do only index\n> > cleanup phase.\n>\n> That's only true in nbtree, though. The way that I imagined we'd want\n> to fix this is by putting control in each index access method. So,\n> we'd revise the way that amvacuumcleanup() worked -- the\n> amvacuumcleanup routine for each index AM would sometimes be called in\n> a mode that means \"I don't really want you to do any cleanup, but if\n> you absolutely have to for your own reasons then you can\". This could\n> be represented using something similar to\n> IndexVacuumInfo.analyze_only.\n>\n> This approach has an obvious disadvantage: the patch really has to\n> teach *every* index AM to do something with that state (most will\n> simply do no work). It seems logical to have the index AM control what\n> happens, though. This allows the logic to live inside\n> _bt_vacuum_needs_cleanup() in the case of nbtree, so there is only one\n> place where we make decisions like this.\n>\n> Most other AMs don't have this problem. GiST has a similar issue with\n> recyclable pages, except that it doesn't use 32-bit XIDs so it doesn't\n> need to care about this stuff at all. Besides, it seems like it might\n> be a good idea to do other basic maintenance of the index when we're\n> \"skipping\" index cleanup. We probably should always do things that\n> require only a small, fixed amount of work. Things like maintaining\n> metadata in the metapage.\n>\n> There may be practical reasons why this approach isn't suitable for\n> backpatch even if it is a superior approach. What do you think?\n\nI agree this idea is better. I was thinking the same approach but I\nwas concerned about backpatching. 
Especially since I was thinking to\nadd one or two fields to IndexVacuumInfo existing index AM might not\nwork with the new VacuumInfo structure.\n\nIf we go with this idea, we need to change lazy vacuum so that it uses\ntwo-pass strategy vacuum even if INDEX_CLEANUP is false. Also in\nparallel vacuum, we need to launch workers. But I think these changes\nare no big problem.\n\nRegards,\n\n-- \nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Thu, 7 May 2020 15:40:26 +0900", "msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: xid wraparound danger due to INDEX_CLEANUP false" }, { "msg_contents": "On Thu, 7 May 2020 at 15:40, Masahiko Sawada\n<masahiko.sawada@2ndquadrant.com> wrote:\n>\n> On Thu, 7 May 2020 at 03:28, Peter Geoghegan <pg@bowt.ie> wrote:\n> >\n> > On Wed, May 6, 2020 at 2:28 AM Masahiko Sawada\n> > <masahiko.sawada@2ndquadrant.com> wrote:\n> > > I've attached the patch fixes this issue.\n> > >\n> > > With this patch, we don't skip only index cleanup phase when\n> > > performing an aggressive vacuum. The reason why I don't skip only\n> > > index cleanup phase is that index vacuum phase can be called multiple\n> > > times, which takes a very long time. Since the purpose of this index\n> > > cleanup is to process recyclable pages it's enough to do only index\n> > > cleanup phase.\n> >\n> > That's only true in nbtree, though. The way that I imagined we'd want\n> > to fix this is by putting control in each index access method. So,\n> > we'd revise the way that amvacuumcleanup() worked -- the\n> > amvacuumcleanup routine for each index AM would sometimes be called in\n> > a mode that means \"I don't really want you to do any cleanup, but if\n> > you absolutely have to for your own reasons then you can\". 
This could\n> > be represented using something similar to\n> > IndexVacuumInfo.analyze_only.\n> >\n> > This approach has an obvious disadvantage: the patch really has to\n> > teach *every* index AM to do something with that state (most will\n> > simply do no work). It seems logical to have the index AM control what\n> > happens, though. This allows the logic to live inside\n> > _bt_vacuum_needs_cleanup() in the case of nbtree, so there is only one\n> > place where we make decisions like this.\n> >\n> > Most other AMs don't have this problem. GiST has a similar issue with\n> > recyclable pages, except that it doesn't use 32-bit XIDs so it doesn't\n> > need to care about this stuff at all. Besides, it seems like it might\n> > be a good idea to do other basic maintenance of the index when we're\n> > \"skipping\" index cleanup. We probably should always do things that\n> > require only a small, fixed amount of work. Things like maintaining\n> > metadata in the metapage.\n> >\n> > There may be practical reasons why this approach isn't suitable for\n> > backpatch even if it is a superior approach. What do you think?\n>\n> I agree this idea is better. I was thinking the same approach but I\n> was concerned about backpatching. 
Especially since I was thinking to\n> add one or two fields to IndexVacuumInfo existing index AM might not\n> work with the new VacuumInfo structure.\n>\n> It would be ok if we added these fields at the end of VacuumInfo structure?\n>\n\nI've attached a WIP patch for HEAD. With this patch, the core passes\nindex_cleanup to the bulkdelete and vacuumcleanup callbacks so that they\ncan decide whether to run vacuum or not.\n\nIf the direction of this patch seems good, I'll change the patch so\nthat we pass additional information to these callbacks indicating\nwhether this vacuum is an anti-wraparound vacuum. 
This could\n> > > be represented using something similar to\n> > > IndexVacuumInfo.analyze_only.\n> > >\n> > > This approach has an obvious disadvantage: the patch really has to\n> > > teach *every* index AM to do something with that state (most will\n> > > simply do no work). It seems logical to have the index AM control what\n> > > happens, though. This allows the logic to live inside\n> > > _bt_vacuum_needs_cleanup() in the case of nbtree, so there is only one\n> > > place where we make decisions like this.\n> > >\n> > > Most other AMs don't have this problem. GiST has a similar issue with\n> > > recyclable pages, except that it doesn't use 32-bit XIDs so it doesn't\n> > > need to care about this stuff at all. Besides, it seems like it might\n> > > be a good idea to do other basic maintenance of the index when we're\n> > > \"skipping\" index cleanup. We probably should always do things that\n> > > require only a small, fixed amount of work. Things like maintaining\n> > > metadata in the metapage.\n> > >\n> > > There may be practical reasons why this approach isn't suitable for\n> > > backpatch even if it is a superior approach. What do you think?\n> >\n> > I agree this idea is better. I was thinking the same approach but I\n> > was concerned about backpatching. Especially since I was thinking to\n> > add one or two fields to IndexVacuumInfo existing index AM might not\n> > work with the new VacuumInfo structure.\n>\n> It would be ok if we added these fields at the end of VacuumInfo structure?\n>\n\nI've attached WIP patch for HEAD. With this patch, the core pass\nindex_cleanup to bulkdelete and vacuumcleanup callbacks so that they\ncan make decision whether run vacuum or not.\n\nIf the direction of this patch seems good, I'll change the patch so\nthat we pass something information to these callbacks indicating\nwhether this vacuum is anti-wraparound vacuum. 
This is necessary\n> because it's enough to invoke index cleanup before XID wraparound as\n> per discussion so far.\n\nIt seems like the right direction to me. Robert?\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Fri, 22 May 2020 13:40:17 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: xid wraparound danger due to INDEX_CLEANUP false" }, { "msg_contents": "On Fri, May 22, 2020 at 1:40 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> It. seems like the right direction to me. 
Robert?\n\nBTW, this is an interesting report of somebody using the INDEX_CLEANUP\nfeature when they had to deal with a difficult production issue:\n\nhttps://www.buildkitestatus.com/incidents/h0vnx4gp7djx\n\nThis report is not really relevant to our discussion, but I thought\nyou might find it interesting.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Fri, 22 May 2020 17:25:04 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: xid wraparound danger due to INDEX_CLEANUP false" }, { "msg_contents": "On Tue, 19 May 2020 at 11:32, Masahiko Sawada\n<masahiko.sawada@2ndquadrant.com> wrote:\n>\n> On Thu, 7 May 2020 at 16:26, Masahiko Sawada\n> <masahiko.sawada@2ndquadrant.com> wrote:\n> >\n> > On Thu, 7 May 2020 at 15:40, Masahiko Sawada\n> > <masahiko.sawada@2ndquadrant.com> wrote:\n> > >\n> > > On Thu, 7 May 2020 at 03:28, Peter Geoghegan <pg@bowt.ie> wrote:\n> > > >\n> > > > On Wed, May 6, 2020 at 2:28 AM Masahiko Sawada\n> > > > <masahiko.sawada@2ndquadrant.com> wrote:\n> > > > > I've attached the patch fixes this issue.\n> > > > >\n> > > > > With this patch, we don't skip only index cleanup phase when\n> > > > > performing an aggressive vacuum. The reason why I don't skip only\n> > > > > index cleanup phase is that index vacuum phase can be called multiple\n> > > > > times, which takes a very long time. Since the purpose of this index\n> > > > > cleanup is to process recyclable pages it's enough to do only index\n> > > > > cleanup phase.\n> > > >\n> > > > That's only true in nbtree, though. The way that I imagined we'd want\n> > > > to fix this is by putting control in each index access method. So,\n> > > > we'd revise the way that amvacuumcleanup() worked -- the\n> > > > amvacuumcleanup routine for each index AM would sometimes be called in\n> > > > a mode that means \"I don't really want you to do any cleanup, but if\n> > > > you absolutely have to for your own reasons then you can\". 
This could\n> > > > be represented using something similar to\n> > > > IndexVacuumInfo.analyze_only.\n> > > >\n> > > > This approach has an obvious disadvantage: the patch really has to\n> > > > teach *every* index AM to do something with that state (most will\n> > > > simply do no work). It seems logical to have the index AM control what\n> > > > happens, though. This allows the logic to live inside\n> > > > _bt_vacuum_needs_cleanup() in the case of nbtree, so there is only one\n> > > > place where we make decisions like this.\n> > > >\n> > > > Most other AMs don't have this problem. GiST has a similar issue with\n> > > > recyclable pages, except that it doesn't use 32-bit XIDs so it doesn't\n> > > > need to care about this stuff at all. Besides, it seems like it might\n> > > > be a good idea to do other basic maintenance of the index when we're\n> > > > \"skipping\" index cleanup. We probably should always do things that\n> > > > require only a small, fixed amount of work. Things like maintaining\n> > > > metadata in the metapage.\n> > > >\n> > > > There may be practical reasons why this approach isn't suitable for\n> > > > backpatch even if it is a superior approach. What do you think?\n> > >\n> > > I agree this idea is better. I was thinking the same approach but I\n> > > was concerned about backpatching. Especially since I was thinking to\n> > > add one or two fields to IndexVacuumInfo existing index AM might not\n> > > work with the new VacuumInfo structure.\n> >\n> > It would be ok if we added these fields at the end of VacuumInfo structure?\n> >\n>\n> I've attached WIP patch for HEAD. With this patch, the core pass\n> index_cleanup to bulkdelete and vacuumcleanup callbacks so that they\n> can make decision whether run vacuum or not.\n>\n> If the direction of this patch seems good, I'll change the patch so\n> that we pass something information to these callbacks indicating\n> whether this vacuum is anti-wraparound vacuum. 
This is necessary\n> because it's enough to invoke index cleanup before XID wraparound as\n> per discussion so far.\n>\n\nI've updated the patch so that vacuum passes is_wraparound flag to\nbulkdelete and vacuumcleanup. Therefore I've added two new variables\nin total: index_cleanup and is_wraparound. Index AMs can make the\ndecision of whether to skip bulkdelete and indexcleanup or not.\n\nAlso, I've added this item to Older Bugs so as not to forget.\n\nRegards,\n\n-- \nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Tue, 23 Jun 2020 15:25:09 +0900", "msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: xid wraparound danger due to INDEX_CLEANUP false" }, { "msg_contents": "On Fri, May 22, 2020 at 4:40 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> On Mon, May 18, 2020 at 7:32 PM Masahiko Sawada\n> <masahiko.sawada@2ndquadrant.com> wrote:\n> > I've attached WIP patch for HEAD. With this patch, the core pass\n> > index_cleanup to bulkdelete and vacuumcleanup callbacks so that they\n> > can make decision whether run vacuum or not.\n> >\n> > If the direction of this patch seems good, I'll change the patch so\n> > that we pass something information to these callbacks indicating\n> > whether this vacuum is anti-wraparound vacuum. This is necessary\n> > because it's enough to invoke index cleanup before XID wraparound as\n> > per discussion so far.\n>\n> It. seems like the right direction to me. Robert?\n\nSorry, I'm so far behind on my email. Argh.\n\nI think, especially on the blog post you linked, that we should aim to\nhave INDEX_CLEANUP OFF mode do the minimum possible amount of work\nwhile still keeping us safe against transaction ID wraparound. 
So if,\nfor example, it's desirable but not imperative for BRIN to\nresummarize, then it's OK normally but should be skipped with\nINDEX_CLEANUP OFF.\n\nI find the patch itself confusing and the comments inadequate,\nespecially the changes to lazy_scan_heap(). Before the INDEX_CLEANUP\npatch went into the tree, LVRelStats had a member hasindex indicating\nwhether or not the table had any indexes. The patch changed that\nmember to useindex, indicating whether or not we were going to do\nindex vacuuming; thus, it would be false if either there were no\nindexes or if we were going to ignore them. This patch redefines\nuseindex to mean whether or not the table has any indexes, but without\nrenaming the variable. There are also really no comments anywhere in\nvacuumlazy.c explaining the overall scheme here; what are we actually\ndoing? Apparently, what we're really doing here is that even when\nINDEX_CLEANUP is OFF, we're still going to keep all the dead tuples.\nThat seems sad, but if it's what we have to do then it at least needs\ncomments explaining it.\n\nAs for the btree portion of the change, I expect you understand this\nbetter than I do, so I'm reluctant to stick my neck out, but it seems\nthat what the patch does is force index cleanup to happen even when\nINDEX_CLEANUP is OFF provided that the vacuum is for wraparound and\nthe btree index has at least 1 recyclable page. My first reaction is\nto wonder whether that doesn't nerf this feature into oblivion. Isn't\nit likely that an index that is being vacuumed for wraparound will\nhave a recyclable page someplace? If the presence of that 1 recyclable\npage causes the \"help, I'm about to run out of XIDs, please do the\nleast possible work\" flag to become a no-op, I don't think users are\ngoing to be too happy with that. 
Maybe I am misunderstanding.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Wed, 24 Jun 2020 13:21:04 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: xid wraparound danger due to INDEX_CLEANUP false" }, { "msg_contents": "On Wed, Jun 24, 2020 at 10:21 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> Sorry, I'm so far behind on my email. Argh.\n\nThat's okay.\n\n> I think, especially on the blog post you linked, that we should aim to\n> have INDEX_CLEANUP OFF mode do the minimum possible amount of work\n> while still keeping us safe against transaction ID wraparound. So if,\n> for example, it's desirable but not imperative for BRIN to\n> resummarize, then it's OK normally but should be skipped with\n> INDEX_CLEANUP OFF.\n\nI agree that that's very important.\n\n> Apparently, what we're really doing here is that even when\n> INDEX_CLEANUP is OFF, we're still going to keep all the dead tuples.\n> That seems sad, but if it's what we have to do then it at least needs\n> comments explaining it.\n\n+1. Though I think that AMs should technically have the right to\nconsider it advisory.\n\n> As for the btree portion of the change, I expect you understand this\n> better than I do, so I'm reluctant to stick my neck out, but it seems\n> that what the patch does is force index cleanup to happen even when\n> INDEX_CLEANUP is OFF provided that the vacuum is for wraparound and\n> the btree index has at least 1 recyclable page. My first reaction is\n> to wonder whether that doesn't nerf this feature into oblivion. Isn't\n> it likely that an index that is being vacuumed for wraparound will\n> have a recyclable page someplace? If the presence of that 1 recyclable\n> page causes the \"help, I'm about to run out of XIDs, please do the\n> least possible work\" flag to become a no-op, I don't think users are\n> going to be too happy with that. 
Maybe I am misunderstanding.\n\nI have mixed feelings about it myself. These are valid concerns.\n\nThis is a problem to the extent that the user has a table that they'd\nlike to use INDEX_CLEANUP with, that has indexes that regularly\nrequire cleanup due to page deletion. ISTM, then, that the really\nrelevant high level design questions for this patch are:\n\n1. How often is that likely to happen in The Real World™?\n\n2. If we fail to do cleanup and leak already-deleted pages, how bad is\nthat? ( Both in general, and in the worst case.)\n\nI'll hazard a guess for 1: I think that it might not come up that\noften. Page deletion is often something that we hardly ever need. And,\nunlike some DB systems, we only do it when pages are fully empty\n(which, as it turns out, isn't necessarily better than our simple\napproach [1]). I tend to think it's unlikely to happen in cases where\nINDEX_CLEANUP is used, because those are cases that also must not have\nthat much index churn to begin with.\n\nThen there's question 2. The intuition behind the approach from\nSawada-san's patch was that allowing wraparound here feels wrong -- it\nshould be in the AM's hands. However, it's not like I can point to\nsome ironclad guarantee about not leaking deleted pages that existed\nbefore the INDEX_CLEANUP feature. We know that the FSM is not crash\nsafe, and that's that. Is this really all that different? Maybe it is,\nbut it seems like a quantitative difference to me.\n\nI'm kind of arguing against myself even as I try to advance my\noriginal argument. If workloads that use INDEX_CLEANUP don't need to\ndelete and recycle pages in any case, then why should we care that\nthose same workloads might leak pages on account of the wraparound\nhazard? There's nothing to leak! Maybe some compromise is possible, at\nleast for the backbranches. Perhaps nbtree is told about the\nrequirements in a bit more detail than we'd imagined. It's not just an\nINDEX_CLEANUP boolean. 
It could be an enum or something. Not sure,\nthough.\n\nThe real reason that I want to push the mechanism down into index\naccess methods is because that design is clearly better overall; it\njust so happens that the specific way in which we currently defer\nrecycling in nbtree makes very little sense, so it's harder to see the\nbig picture. The xid-cleanup design that we have was approximately the\neasiest way to do it, so that's what we got. We should figure out a\nway to recycle the pages at something close to the earliest possible\nopportunity, without having to perform a full scan on the index\nrelation within btvacuumscan(). Maybe we can use the autovacuum work\nitem mechanism for that. For indexes that get VACUUMed once a week on\naverage, it makes zero sense to wait another week to recycle the pages\nthat get deleted, in a staggered fashion. It should be possible to\nrecycle the pages a minute or two after VACUUM proper finishes, with\nextra work that's proportionate to the number of deleted pages. This\nis still conservative. I am currently very busy with an unrelated\nB-Tree prototype, so I might not get around to it this year. Maybe\nSawada-san can think about this?\n\nAlso, handling of the vacuum_cleanup_index_scale_factor stuff (added\nto Postgres 11 by commit 857f9c36) should live next to code for the\nconfusingly-similar INDEX_CLEANUP stuff (added to Postgres 12 by\ncommit a96c41feec6), on general principle. 
I think that that\norganization is a lot easier to follow.\n\n[1] https://www.sciencedirect.com/science/article/pii/002200009390020W\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Wed, 24 Jun 2020 13:02:20 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: xid wraparound danger due to INDEX_CLEANUP false" }, { "msg_contents": "On Thu, 25 Jun 2020 at 05:02, Peter Geoghegan <pg@bowt.ie> wrote:\n>\n> On Wed, Jun 24, 2020 at 10:21 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> > Sorry, I'm so far behind on my email. Argh.\n>\n> That's okay.\n>\n> > I think, especially on the blog post you linked, that we should aim to\n> > have INDEX_CLEANUP OFF mode do the minimum possible amount of work\n> > while still keeping us safe against transaction ID wraparound. So if,\n> > for example, it's desirable but not imperative for BRIN to\n> > resummarize, then it's OK normally but should be skipped with\n> > INDEX_CLEANUP OFF.\n>\n> I agree that that's very important.\n>\n> > Apparently, what we're really doing here is that even when\n> > INDEX_CLEANUP is OFF, we're still going to keep all the dead tuples.\n> > That seems sad, but if it's what we have to do then it at least needs\n> > comments explaining it.\n>\n> +1. Though I think that AMs should technically have the right to\n> consider it advisory.\n>\n> > As for the btree portion of the change, I expect you understand this\n> > better than I do, so I'm reluctant to stick my neck out, but it seems\n> > that what the patch does is force index cleanup to happen even when\n> > INDEX_CLEANUP is OFF provided that the vacuum is for wraparound and\n> > the btree index has at least 1 recyclable page. My first reaction is\n> > to wonder whether that doesn't nerf this feature into oblivion. Isn't\n> > it likely that an index that is being vacuumed for wraparound will\n> > have a recyclable page someplace? 
If the presence of that 1 recyclable\n> > page causes the \"help, I'm about to run out of XIDs, please do the\n> > least possible work\" flag to become a no-op, I don't think users are\n> > going to be too happy with that. Maybe I am misunderstanding.\n>\n> I have mixed feelings about it myself. These are valid concerns.\n>\n> This is a problem to the extent that the user has a table that they'd\n> like to use INDEX_CLEANUP with, that has indexes that regularly\n> require cleanup due to page deletion. ISTM, then, that the really\n> relevant high level design questions for this patch are:\n>\n> 1. How often is that likely to happen in The Real World™?\n>\n> 2. If we fail to do cleanup and leak already-deleted pages, how bad is\n> that? ( Both in general, and in the worst case.)\n>\n> I'll hazard a guess for 1: I think that it might not come up that\n> often. Page deletion is often something that we hardly ever need. And,\n> unlike some DB systems, we only do it when pages are fully empty\n> (which, as it turns out, isn't necessarily better than our simple\n> approach [1]). I tend to think it's unlikely to happen in cases where\n> INDEX_CLEANUP is used, because those are cases that also must not have\n> that much index churn to begin with.\n>\n> Then there's question 2. The intuition behind the approach from\n> Sawada-san's patch was that allowing wraparound here feels wrong -- it\n> should be in the AM's hands. However, it's not like I can point to\n> some ironclad guarantee about not leaking deleted pages that existed\n> before the INDEX_CLEANUP feature. We know that the FSM is not crash\n> safe, and that's that. Is this really all that different? 
Maybe it is,\n> but it seems like a quantitative difference to me.\n\nI think that with the approach implemented in my patch, it could be a\nproblem that users cannot easily know in advance\nwhether vacuum with INDEX_CLEANUP false will perform index cleanup,\neven if page deletion doesn’t happen in most cases. They can check\nwhether the vacuum will be a wraparound vacuum, but it’s relatively\nhard for users to check in advance whether there are recyclable pages\nin the btree index. This will be a restriction for users, especially\nthose who want to use the INDEX_CLEANUP feature to speed up an\nimpending XID wraparound vacuum, as described in the blog post that\nPeter shared.\n\nI haven’t come up with a good solution to keep us safe against XID\nwraparound yet, but it seems to me that it’s better to have an option\nthat forces index cleanup not to happen.\n\n>\n> I'm kind of arguing against myself even as I try to advance my\n> original argument. If workloads that use INDEX_CLEANUP don't need to\n> delete and recycle pages in any case, then why should we care that\n> those same workloads might leak pages on account of the wraparound\n> hazard?\n\nI had the same impression at first.\n\n> The real reason that I want to push the mechanism down into index\n> access methods is because that design is clearly better overall; it\n> just so happens that the specific way in which we currently defer\n> recycling in nbtree makes very little sense, so it's harder to see the\n> big picture. The xid-cleanup design that we have was approximately the\n> easiest way to do it, so that's what we got. We should figure out a\n> way to recycle the pages at something close to the earliest possible\n> opportunity, without having to perform a full scan on the index\n> relation within btvacuumscan().\n\n+1\n\n> Maybe we can use the autovacuum work\n> item mechanism for that. 
For indexes that get VACUUMed once a week on\n> average, it makes zero sense to wait another week to recycle the pages\n> that get deleted, in a staggered fashion. It should be possible to\n> recycle the pages a minute or two after VACUUM proper finishes, with\n> extra work that's proportionate to the number of deleted pages. This\n> is still conservative. I am currently very busy with an unrelated\n> B-Tree prototype, so I might not get around to it this year. Maybe\n> Sawada-san can think about this?\n\nI thought that btbulkdelete and/or btvacuumcleanup could register an\nautovacuum work item to recycle the pages that get deleted, but it\nmight not be able to recycle those pages enough because the autovacuum\nwork items could be taken just after vacuum. And if page deletion is\nrelatively rare in practice, we might be able to take an\noptimistic approach in which vacuum registers deleted pages in the FSM\nat deletion time and a process that takes a free page checks whether\nthe page is really recyclable. Anyway, I’ll try to think more about this.\n\nRegards,\n\n--\nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Thu, 25 Jun 2020 22:59:08 +0900", "msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: xid wraparound danger due to INDEX_CLEANUP false" }, { "msg_contents": "On Wed, Jun 24, 2020 at 4:02 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> > Apparently, what we're really doing here is that even when\n> > INDEX_CLEANUP is OFF, we're still going to keep all the dead tuples.\n> > That seems sad, but if it's what we have to do then it at least needs\n> > comments explaining it.\n>\n> +1. Though I think that AMs should technically have the right to\n> consider it advisory.\n\nI'm not really convinced. 
I agree that from a theoretical point of\nview an index can have arbitrary needs and is the arbiter of its own\nneeds, but when I pull the emergency brake, I want the vehicle to\nstop, not think about stopping. There's a fine argument for the idea\nthat depressing the regular brake pedal entitles the vehicle to\nexercise some discretion, and on modern cars it does (think ABS, if\nnothing else). But pulling the emergency brake is a statement that I\nwish to override any contrary judgement about whether stopping is a\ngood idea. I think this option is rightly viewed as an emergency\nbrake, and giving AMs the right to decide that we'll instead pull off\nat the next exit doesn't sit well with me. At the end of the day, the\nhuman being should be in charge, not the program.\n\n(Great, now Skynet will be gunning for me...)\n\n> 1. How often is that likely to happen in The Real World™?\n>\n> 2. If we fail to do cleanup and leak already-deleted pages, how bad is\n> that? ( Both in general, and in the worst case.)\n>\n> I'll hazard a guess for 1: I think that it might not come up that\n> often. Page deletion is often something that we hardly ever need. And,\n> unlike some DB systems, we only do it when pages are fully empty\n> (which, as it turns out, isn't necessarily better than our simple\n> approach [1]). I tend to think it's unlikely to happen in cases where\n> INDEX_CLEANUP is used, because those are cases that also must not have\n> that much index churn to begin with.\n\nI don't think I believe this. All you need is one small range-deletion, right?\n\n> Then there's question 2. The intuition behind the approach from\n> Sawada-san's patch was that allowing wraparound here feels wrong -- it\n> should be in the AM's hands. However, it's not like I can point to\n> some ironclad guarantee about not leaking deleted pages that existed\n> before the INDEX_CLEANUP feature. We know that the FSM is not crash\n> safe, and that's that. Is this really all that different? 
Maybe it is,\n> but it seems like a quantitative difference to me.\n\nI don't think I believe this, either. In the real-world example to\nwhich you linked, the user ran REINDEX afterward to recover from index\nbloat, and we could advise other people who use this option that it\nmay leak space that a subsequent VACUUM may fail to recover, and\ntherefore they too should consider REINDEX. Bloat sucks and I hate it,\nbut in the vehicle analogy from up above, it's the equivalent of\ngetting lost while driving someplace. It is inconvenient and may cause\nyou many problems, but you will not be dead. Running out of XIDs is a\nbrick wall. Either the car stops or you hit the wall. Ideally you can\nmanage to both not get lost and also not hit a brick wall, but in an\nemergency situation where you have to choose either to get lost or to\nhit a brick wall, there's only one right answer. As bad as bloat is,\nand it's really bad, there are users who manage to run incredibly\nbloated databases for long periods of time just because the stuff that\ngets slow is either stuff that they're not doing at all, or only doing\nin batch jobs where it's OK if it runs super-slow and where it may\neven be possible to disable the batch job altogether, at least for a\nwhile. The set of users who can survive running out of XIDs is limited\nto those who can get by with just read-only queries, and that's\npractically nobody. 
I have yet to encounter a customer who didn't\nconsider running out of XIDs to be an emergency.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Thu, 25 Jun 2020 11:28:29 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: xid wraparound danger due to INDEX_CLEANUP false" }, { "msg_contents": "On Thu, Jun 25, 2020 at 6:59 AM Masahiko Sawada\n<masahiko.sawada@2ndquadrant.com> wrote:\n> I think that with the approach implemented in my patch, it could be a\n> problem for the user that the user cannot easily know in advance\n> whether vacuum with INDEX_CLEANUP false will perform index cleanup,\n> even if page deletion doesn’t happen in most cases.\n\nI was unclear. I agree that the VACUUM command with \"INDEX_CLEANUP =\noff\" is an emergency mechanism that should be fully respected, even\nwhen that means that we'll leak deleted pages.\n\nPerhaps it would make sense to behave differently when the index is on\na table that has \"vacuum_index_cleanup = off\" set, and the vacuum is\nstarted by autovacuum, and is not an anti-wraparound vacuum. That\ndoesn't seem all that appealing now that I write it down, though,\nbecause it's a non-obvious behavioral difference among cases that\nusers probably expect to behave similarly. On the other hand, what\nuser knows that there is something called an aggressive vacuum, which\nisn't exactly the same thing as an anti-wraparound vacuum?\n\nI find it hard to decide what the least-worst thing is for the\nbackbranches. What do you think?\n\n> I don’t come up with a good solution to keep us safe against XID\n> wraparound yet but it seems to me that it’s better to have an option\n> that forces index cleanup not to happen.\n\nI don't think that there is a good solution that is suitable for\nbackpatching. 
The real solution is to redesign the recycling along the\nlines I described.\n\nI don't think that it's terrible that we can leak deleted pages,\nespecially considering the way that users are expected to use the\nINDEX_CLEANUP feature. I would like to be sure that the problem is\nwell understood, though -- we should at least have a plan for Postgres\nv14.\n\n> I thought that btbulkdelete and/or btvacuumcleanup could register an\n> autovacuum work item to recycle the pages that get deleted, but it\n> might not be able to recycle those pages enough because the autovacuum\n> work items could be taken just after vacuum. And if page deletion is\n> relatively rare in practice, we might be able to take an\n> optimistic approach in which vacuum registers deleted pages in the FSM\n> at deletion time and a process that takes a free page checks whether\n> the page is really recyclable. Anyway, I’ll try to think more about this.\n\nRight -- just putting the pages in the FSM immediately, and making it\na problem that we deal with within _bt_getbuf() is an alternative\napproach that might be better.\n\n--\nPeter Geoghegan\n\n\n", "msg_date": "Thu, 25 Jun 2020 17:18:22 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: xid wraparound danger due to INDEX_CLEANUP false" }, { "msg_contents": "On Thu, Jun 25, 2020 at 8:28 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> I'm not really convinced. I agree that from a theoretical point of\n> view an index can have arbitrary needs and is the arbiter of its own\n> needs, but when I pull the emergency brake, I want the vehicle to\n> stop, not think about stopping.\n\nMaking this theoretical argument in the context of this discussion was\nprobably a mistake. I agree that this is the emergency brake, and it\nneeds to work like one.\n\nIt might be worth considering some compromise in the event of using\nthe \"vacuum_index_cleanup\" reloption (i.e. 
when the user has set it to\n'off'), provided there is good reason to believe that we're not in an\nemergency -- I mentioned this to Masahiko just now. I admit that that\nisn't very appealing for other reasons, but it's worth considering a\nway of ameliorating the problem in back branches. We really ought to\nchange how recycling works, so that it happens at the tail end of the\nsame VACUUM operation that deleted the pages -- but that cannot be\nbackpatched.\n\nIt might be that the most appropriate mitigation in the back branches\nis a log message that reports on the fact that we've probably leaked\npages due to this issue. Plus some documentation. Though even that\nwould require calling nbtree to check if that is actually true (by\nchecking the metapage), so it still requires backpatching something\nclose to Masahiko's draft patch.\n\n> I don't think I believe this. All you need is one small range-deletion, right?\n\nRight.\n\n> > Then there's question 2. The intuition behind the approach from\n> > Sawada-san's patch was that allowing wraparound here feels wrong -- it\n> > should be in the AM's hands. However, it's not like I can point to\n> > some ironclad guarantee about not leaking deleted pages that existed\n> > before the INDEX_CLEANUP feature. We know that the FSM is not crash\n> > safe, and that's that. Is this really all that different? Maybe it is,\n> > but it seems like a quantitative difference to me.\n>\n> I don't think I believe this, either. In the real-world example to\n> which you linked, the user ran REINDEX afterward to recover from index\n> bloat, and we could advise other people who use this option that it\n> may leak space that a subsequent VACUUM may fail to recover, and\n> therefore they too should consider REINDEX.\n\nI was talking about the intuition behind the design. 
I did not intend\nto suggest that nbtree should ignore \"INDEX_CLEANUP = off\" regardless\nof the consequences.\n\nI am sure about this much: The design embodied by Masahiko's patch is\nclearly a better one overall, even if it doesn't fix the problem on\nits own. I agree that we cannot allow nbtree to ignore \"INDEX_CLEANUP\n= off\", even if that means leaking pages that could otherwise be\nrecycled. I'm not sure what we should do about any of this in the back\nbranches, though. I wish I had a simple idea about what to do there.\n\n--\nPeter Geoghegan\n\n\n", "msg_date": "Thu, 25 Jun 2020 17:44:40 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: xid wraparound danger due to INDEX_CLEANUP false" }, { "msg_contents": "On Thu, Jun 25, 2020 at 8:44 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> I am sure about this much: The design embodied by Masahiko's patch is\n> clearly a better one overall, even if it doesn't fix the problem on\n> its own. I agree that we cannot allow nbtree to ignore \"INDEX_CLEANUP\n> = off\", even if that means leaking pages that could otherwise be\n> recycled. I'm not sure what we should do about any of this in the back\n> branches, though. I wish I had a simple idea about what to do there.\n\nMy opinion is that there's no need to change the code in the\nback-branches, and that I don't really like the approach in master\neither.\n\nI think what we're saying is that there is no worse consequence to\nturning off index_cleanup than some bloat that isn't likely to be\nrecovered unless you REINDEX. If the problem in question were going to\ncause data loss or data corruption or something, we'd have to take\nstronger action, but I don't think anyone's saying that this is the\ncase. Therefore, I think we can handle the back-branches by letting\nusers know about the bloat hazard and suggesting that they avoid this\noption unless it's necessary to avoid running out of XIDs.\n\nNow, what about master? 
I think it's fine to offer the AM a callback\neven when index_cleanup = false, for example so that it can freeze\nsomething in its metapage, but I don't agree with passing it the TIDs.\nThat seems like it's just inviting it to ignore the emergency brake,\nand it's also incurring real overhead, because remembering all those\nTIDs can use a lot of memory. If that API limitation causes a problem\nfor some future index AM, that will be a good point to discuss when\nthe patch for said AM is submitted for review. I entirely agree with\nyou that the way btree arranges for btree recycling is crude, and I\nwould be delighted if you want to improve it, either for v14 or for\nany future release, or if somebody else wants to do so. However, even\nif that never happens, so what?\n\nIn retrospect, I regret committing this patch without better\nunderstanding the issues in this area. That was a fail on my part. At\nthe same time, it doesn't really sound like the issues are all that\nbad. The potential index bloat does suck, but it can still suck less\nthan the alternatives, and we have evidence that for at least one\nuser, it was worth a major version upgrade just to replace the\nsuckitude they had with the suckitude this patch creates.\n\n--\nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Fri, 26 Jun 2020 08:39:01 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: xid wraparound danger due to INDEX_CLEANUP false" }, { "msg_contents": "On Fri, Jun 26, 2020 at 5:39 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> My opinion is that there's no need to change the code in the\n> back-branches, and that I don't really like the approach in master\n> either.\n\nI guess it's hard to see a way that we could fix this in the\nbackbranches, provided we aren't willing to tolerate a big refactor,\nor a cleanup scan of the index (note that I mean btvacuumcleanup(),\nnot 
btvacuumscan(), which is quite different).\n\n> I think what we're saying is that there is no worse consequence to\n> turning off index_cleanup than some bloat that isn't likely to be\n> recovered unless you REINDEX.\n\nThat's true.\n\n> Now, what about master? I think it's fine to offer the AM a callback\n> even when index_cleanup = false, for example so that it can freeze\n> something in its metapage, but I don't agree with passing it the TIDs.\n> That seems like it's just inviting it to ignore the emergency brake,\n> and it's also incurring real overhead, because remembering all those\n> TIDs can use a lot of memory.\n\nYou don't have to do anything with TIDs passed from vacuumlazy.c to\nrecycle pages that need to be recycled, since you only have to go\nthrough btvacuumcleanup() to avoid the problem that we're talking\nabout (you don't have to call btvacuumscan() to kill TIDs that\nvacuumlazy.c will have pruned). Killing TIDs/tuples in the index was\nnever something that would make sense, even within the confines of the\nexisting flawed nbtree recycling design. However, you do need to scan\nthe entire index to do that much. FWIW, that doesn't seem like it\n*totally* violates the spirit of \"index_cleanup = false\", since you're\nstill not doing most of the usual nbtree vacuuming stuff (even though\nyou have to scan the index, there is still much less work total).\n\n> If that API limitation causes a problem\n> for some future index AM, that will be a good point to discuss when\n> the patch for said AM is submitted for review. I entirely agree with\n> you that the way btree arranges for btree recycling is crude, and I\n> would be delighted if you want to improve it, either for v14 or for\n> any future release, or if somebody else wants to do so. However, even\n> if that never happens, so what?\n\nI think that it's important to be able to describe an ideal (though\nstill realistic) design, even if it might remain aspirational for a\nlong time. 
I suspect that pushing the mechanism down into index AMs\nhas other non-obvious benefits.\n\n> In retrospect, I regret committing this patch without better\n> understanding the issues in this area. That was a fail on my part. At\n> the same time, it doesn't really sound like the issues are all that\n> bad. The potential index bloat does suck, but it can still suck less\n> than the alternatives, and we have evidence that for at least one\n> user, it was worth a major version upgrade just to replace the\n> suckitude they had with the suckitude this patch creates.\n\nI actually agree -- this is a really important feature, and I'm glad\nthat we have it. Even in this slightly flawed form. I remember a great\nneed for the feature back when I was involved in supporting Postgres\nin production.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Fri, 26 Jun 2020 16:00:28 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: xid wraparound danger due to INDEX_CLEANUP false" }, { "msg_contents": "On Sat, 27 Jun 2020 at 08:00, Peter Geoghegan <pg@bowt.ie> wrote:\n>\n> On Fri, Jun 26, 2020 at 5:39 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> > My opinion is that there's no need to change the code in the\n> > back-branches, and that I don't really like the approach in master\n> > either.\n>\n> I guess it's hard to see a way that we could fix this in the\n> backbranches, provided we aren't willing to tolerate a big refactor,\n> or a cleanup scan of the index (note that I mean btvacuumcleanup(),\n> not btvacuumscan(), which is quite different).\n\nAgreed.\n\n>\n> > I think what we're saying is that there is no worse consequence to\n> > turning off index_cleanup than some bloat that isn't likely to be\n> > recovered unless you REINDEX.\n>\n> That's true.\n\nRegarding the extent of the impact, this bug will affect the user\nwho turned vacuum_index_cleanup off or manually executed vacuum with\nINDEX_CLEANUP off for a long time, after some 
vacuums. On the other\nhand, the user who uses INDEX_CLEANUP off on the spot, or turns\nvacuum_index_cleanup off for the table from the start, would not be\naffected, or would be less affected.\n\n>\n> > In retrospect, I regret committing this patch without better\n> > understanding the issues in this area. That was a fail on my part. At\n> > the same time, it doesn't really sound like the issues are all that\n> > bad. The potential index bloat does suck, but it can still suck less\n> > than the alternatives, and we have evidence that for at least one\n> > user, it was worth a major version upgrade just to replace the\n> > suckitude they had with the suckitude this patch creates.\n>\n> I actually agree -- this is a really important feature, and I'm glad\n> that we have it. Even in this slightly flawed form. I remember a great\n> need for the feature back when I was involved in supporting Postgres\n> in production.\n\nI apologize for writing this patch without enough consideration. I\nshould have been more careful, as I had learned the nbtree page\nrecycling strategy when discussing the\nvacuum_cleanup_index_scale_factor patch.\n\nRegards,\n\n-- \nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Sat, 27 Jun 2020 14:14:28 +0900", "msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: xid wraparound danger due to INDEX_CLEANUP false" }, { "msg_contents": "On Fri, Jun 26, 2020 at 10:15 PM Masahiko Sawada\n<masahiko.sawada@2ndquadrant.com> wrote:\n> Regarding the extent of the impact, this bug will affect the user\n> who turned vacuum_index_cleanup off or manually executed vacuum with\n> INDEX_CLEANUP off for a long time, after some vacuums. 
On the other\n> hand, the user who uses INDEX_CLEANUP off on the spot or turns\n> vacuum_index_cleanup off of the table from the start would not be\n> affected or less affected.\n\nI don't think that it's likely to cause too much trouble. It's already\npossible to leak deleted pages, if only because the FSM isn't crash\nsafe. Actually, the nbtree README says this, and has since 2003:\n\n\"\"\"\n(Note: if we find a deleted page with an extremely old transaction\nnumber, it'd be worthwhile to re-mark it with FrozenTransactionId so that\na later xid wraparound can't cause us to think the page is unreclaimable.\nBut in more normal situations this would be a waste of a disk write.)\n\"\"\"\n\nBut, uh, isn't the btvacuumcleanup() call supposed to avoid\nwraparound? Who knows?!\n\nIt doesn't seem like the recycling aspect of page deletion was\nrigorously designed, possibly because it's harder to test than page\ndeletion itself. This is a problem that we should fix.\n\n> I apologize for writing this patch without enough consideration. I\n> should have been more careful as I learned the nbtree page recycle\n> strategy when discussing vacuum_cleanup_index_scale_factor patch.\n\nWhile it's unfortunate that this was missed, let's not lose\nperspective. Anybody using the INDEX_CLEANUP feature (whether it's\nthrough a direct VACUUM, or by using the reloption) is already asking\nfor an extreme behavior: skipping regular index vacuuming. I imagine\nthat the vast majority of users that are in that position just don't\ncare about the possibility of leaking deleted pages. 
They care about\navoiding a real disaster from XID wraparound.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Sat, 27 Jun 2020 10:44:41 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: xid wraparound danger due to INDEX_CLEANUP false" }, { "msg_contents": "On Sun, 28 Jun 2020 at 02:44, Peter Geoghegan <pg@bowt.ie> wrote:\n>\n> On Fri, Jun 26, 2020 at 10:15 PM Masahiko Sawada\n> <masahiko.sawada@2ndquadrant.com> wrote:\n> > Regarding to the extent of the impact, this bug will affect the user\n> > who turned vacuum_index_cleanup off or executed manually vacuum with\n> > INDEX_CLEANUP off for a long time, after some vacuums. On the other\n> > hand, the user who uses INDEX_CLEANUP off on the spot or turns\n> > vacuum_index_cleanup off of the table from the start would not be\n> > affected or less affected.\n>\n> I don't think that it's likely to cause too much trouble. It's already\n> possible to leak deleted pages, if only because the FSM isn't crash\n> safe. Actually, the nbtree README says this, and has since 2003:\n>\n> \"\"\"\n> (Note: if we find a deleted page with an extremely old transaction\n> number, it'd be worthwhile to re-mark it with FrozenTransactionId so that\n> a later xid wraparound can't cause us to think the page is unreclaimable.\n> But in more normal situations this would be a waste of a disk write.)\n> \"\"\"\n>\n> But, uh, isn't the btvacuumcleanup() call supposed to avoid\n> wraparound? Who knows?!\n>\n> It doesn't seem like the recycling aspect of page deletion was\n> rigorously designed, possibly because it's harder to test than page\n> deletion itself. This is a problem that we should fix.\n\nAgreed.\n\n>\n> > I apologize for writing this patch without enough consideration. I\n> > should have been more careful as I learned the nbtree page recycle\n> > strategy when discussing vacuum_cleanup_index_scale_factor patch.\n>\n> While it's unfortunate that this was missed, let's not lose\n> perspective. 
Anybody using the INDEX_CLEANUP feature (whether it's\n> through a direct VACUUM, or by using the reloption) is already asking\n> for an extreme behavior: skipping regular index vacuuming. I imagine\n> that the vast majority of users that are in that position just don't\n> care about the possibility of leaking deleted pages. They care about\n> avoiding a real disaster from XID wraparound.\n\nFor back branches, I'm considering how we let users know about this.\nFor safety, we can let users know, in the documentation and/or the\nrelease notes, that we recommend avoiding INDEX_CLEANUP false unless\nit's necessary to avoid running out of XIDs. But on the other hand,\nsince leaving recyclable pages behind is already possible, as you\nmentioned, I'm concerned that this gets users confused and might\nneedlessly incite unrest. I'm thinking about what we can do for\nusers, in addition to leaving a summary of this discussion as a\nsource code comment. What do you think?\n\nRegards,\n\n--\nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Mon, 29 Jun 2020 21:50:55 +0900", "msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: xid wraparound danger due to INDEX_CLEANUP false" }, { "msg_contents": "On Mon, Jun 29, 2020 at 9:51 PM Masahiko Sawada\n<masahiko.sawada@2ndquadrant.com> wrote:\n>\n> On Sun, 28 Jun 2020 at 02:44, Peter Geoghegan <pg@bowt.ie> wrote:\n> >\n> > On Fri, Jun 26, 2020 at 10:15 PM Masahiko Sawada\n> > <masahiko.sawada@2ndquadrant.com> wrote:\n> > > Regarding to the extent of the impact, this bug will affect the user\n> > > who turned vacuum_index_cleanup off or executed manually vacuum with\n> > > INDEX_CLEANUP off for a long time, after some vacuums. 
On the other\n> > > hand, the user who uses INDEX_CLEANUP off on the spot or turns\n> > > vacuum_index_cleanup off of the table from the start would not be\n> > > affected or less affected.\n> >\n> > I don't think that it's likely to cause too much trouble. It's already\n> > possible to leak deleted pages, if only because the FSM isn't crash\n> > safe. Actually, the nbtree README says this, and has since 2003:\n> >\n> > \"\"\"\n> > (Note: if we find a deleted page with an extremely old transaction\n> > number, it'd be worthwhile to re-mark it with FrozenTransactionId so that\n> > a later xid wraparound can't cause us to think the page is unreclaimable.\n> > But in more normal situations this would be a waste of a disk write.)\n> > \"\"\"\n> >\n> > But, uh, isn't the btvacuumcleanup() call supposed to avoid\n> > wraparound? Who knows?!\n> >\n> > It doesn't seem like the recycling aspect of page deletion was\n> > rigorously designed, possibly because it's harder to test than page\n> > deletion itself. This is a problem that we should fix.\n>\n> Agreed.\n>\n> >\n> > > I apologize for writing this patch without enough consideration. I\n> > > should have been more careful as I learned the nbtree page recycle\n> > > strategy when discussing vacuum_cleanup_index_scale_factor patch.\n> >\n> > While it's unfortunate that this was missed, let's not lose\n> > perspective. Anybody using the INDEX_CLEANUP feature (whether it's\n> > through a direct VACUUM, or by using the reloption) is already asking\n> > for an extreme behavior: skipping regular index vacuuming. I imagine\n> > that the vast majority of users that are in that position just don't\n> > care about the possibility of leaking deleted pages. 
They care about\n> > avoiding a real disaster from XID wraparound.\n>\n> For back branches, I'm considering how we let users know about this.\n> For safety, we can let users know that we recommend avoiding\n> INDEX_CLEANUP false unless it's necessary to avoid running out of XIDs\n> on the documentation and/or the release note. But on the other hand,\n> since there is the fact that leaving recyclable pages is already\n> possible to happen as you mentioned I'm concerned it gets the user\n> into confusion and might needlessly incite unrest of users. I'm\n> thinking what we can do for users, in addition to leaving the summary\n> of this discussion as a source code comment. What do you think?\n>\n\nSeveral months have passed since that discussion. We decided not to do\nanything on back branches, but IIUC the fundamental issue is not fixed\nyet. The issue pointed out by Andres -- that we should leave it to the\nindex AM to decide whether to do vacuum cleanup or not when\nINDEX_CLEANUP is specified -- is still valid. Is that right?\n\nFor HEAD, there was a discussion that we change the lazy vacuum,\nbulkdelete, and vacuumcleanup APIs so that lazy vacuum calls these\nAPIs even when INDEX_CLEANUP false is specified. That is, when\nINDEX_CLEANUP false is specified, it collects dead tuple TIDs into\nmaintenance_work_mem space and passes a flag indicating whether\nINDEX_CLEANUP is specified or not to the index AMs. The index AM\ndecides whether to do bulkdelete/vacuumcleanup. A downside of this\nidea would be that we will end up using maintenance_work_mem even if\nall index AMs of the table don't do bulkdelete/vacuumcleanup at all.\n\nThe second idea I came up with is to add an index AM API (say,\namcanskipindexcleanup = true/false) telling whether index cleanup is\nskippable or not. Lazy vacuum checks this flag for each index on the\ntable before starting. If index cleanup is skippable in all indexes,\nit can choose one-pass vacuum, meaning no need to collect dead tuple\nTIDs in maintenance_work_mem. 
All in-core index AMs will set it to true. Perhaps\nit’s true (skippable) by default for backward compatibility.\n\nThe in-core AMs, including btree indexes, will work the same as\nbefore. This fix is to make the behavior more desirable and possibly\nto help other AMs that require vacuumcleanup to be called in all\ncases. Once we fix it, I wonder if we can disable index cleanup during\nautovacuum’s anti-wraparound vacuum.\n\nRegards,\n\n-- \nMasahiko Sawada\nEnterpriseDB: https://www.enterprisedb.com/\n\n\n", "msg_date": "Fri, 20 Nov 2020 10:57:49 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: xid wraparound danger due to INDEX_CLEANUP false" }, { "msg_contents": "On Thu, Nov 19, 2020 at 5:58 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> Several months passed from the discussion. We decided not to do\n> anything on back branches but IIUC the fundamental issue is not fixed\n> yet. The issue pointed out by Andres that we should leave the index AM\n> to decide whether doing vacuum cleanup or not when INDEX_CLEANUP is\n> specified is still valid. Is that right?\n\nI don't remember if Andres actually said that publicly, but I\ndefinitely did. I do remember discussing this with him privately, at\nwhich point he agreed with what I said. Which you just summarized\nwell.\n\n> For HEAD, there was a discussion that we change lazy vacuum and\n> bulkdelete and vacuumcleanup APIs so that it calls these APIs even\n> when INDEX_CLEANUP is specified. That is, when INDEX_CLEANUP false is\n> specified, it collects dead tuple TIDs into maintenance_work_mem space\n> and passes the flag indicating INDEX_CLEANUP is specified or not to\n> index AMs.\n\nRight.\n\n> Index AM decides whether doing bulkdelete/vacuumcleanup. 
A\n> downside of this idea would be that we will end up using\n> maintenance_work_mem even if all index AMs of the table don't do\n> bulkdelete/vacuumcleanup at all.\n\nThat is a downside, but I don't think that it's a serious downside.\nBut it may not matter, because there are lots of reasons to move in\nthis direction.\n\n> The second idea I came up with is to add an index AM API (say,\n> amcanskipindexcleanup = true/false) telling index cleanup is skippable\n> or not. Lazy vacuum checks this flag for each index on the table\n> before starting. If index cleanup is skippable in all indexes, it can\n> choose one-pass vacuum, meaning no need to collect dead tuple TIDs in\n> maintenance_work_mem. All in-core index AM will set to true. Perhaps\n> it’s true (skippable) by default for backward compatibility.\n\n(The terminology here is very confusing, because the goal of the\nINDEX_CLEANUP feature in v12 is not really to skip a call to\nbtvacuumcleanup(). The goal is really to skip a call to\nbtbulkdelete(). I will try to be careful.)\n\nI think that the ideal design here would be a new hybrid of two\nexisting features:\n\n1.) Your INDEX_CLEANUP feature from Postgres 12.\n\nand:\n\n2.) Your vacuum_cleanup_index_scale_factor feature from Postgres 11.\n\nThe INDEX_CLEANUP feature is very useful, because skipping indexes\nentirely can be very helpful for many reasons (e.g. faster manual\nVACUUM in the event of wraparound related emergencies). But\nINDEX_CLEANUP has 2 problems today:\n\nA. It doesn't interact well with vacuum_cleanup_index_scale_factor.\nThis is the problem that has been discussed on this thread.\n\nand:\n\nB. It is an \"all or nothing\" thing. Unlike the\nvacuum_cleanup_index_scale_factor feature, it does not care about what\nthe index AM/individual index wants. But it should.\n\n(**Thinks some more***)\n\nActually, on second thought, maybe INDEX_CLEANUP only has one problem.\nProblem A is actually just a special case of problem B. 
There are many\ninteresting opportunities created by solving problem B\ncomprehensively.\n\nSo, what useful enhancements to VACUUM are possible once we have\nsomething like INDEX_CLEANUP, that is sensitive to the needs of\nindexes? Well, you already identified one yourself, so obviously\nyou're thinking about this in a similar way already:\n\n> The in-core AMs including btree indexes will work same as before. This\n> fix is to make it more desirable behavior and possibly to help other\n> AMs that require to call vacuumcleanup in all cases. Once we fix it I\n> wonder if we can disable index cleanup when autovacuum’s\n> anti-wraparound vacuum.\n\nObviously this is a good idea. The fact that anti-wraparound vacuum\nisn't really special compared to regular autovacuum is *bad*.\nObviously anti-wraparound is in some sense more important than regular\nvacuum. Making it as similar as possible to vacuum simply isn't\nhelpful. Maybe it is slightly more elegant in theory, but in the real\nworld it is a poor design. (See also: every single PostgreSQL post\nmortem that has ever been written.)\n\nBut why stop with this? There are other big advantages to allowing\nindividual indexes/index AMs influence of the INDEX_CLEANUP behavior.\nEspecially if they're sensitive to the needs of particular indexes on\na table (not just all of the indexes on the table taken together).\n\nAs you may know, my bottom-up index deletion patch can more or less\neliminate index bloat in indexes that don't get \"logically changed\" by\nmany non-HOT updates. It's *very* effective with non-HOT updates and\nlots of indexes. See this benchmark result for a recent example:\n\nhttps://postgr.es/m/CAGnEbohYF_K6b0v=2uc289=v67qNhc3n01Ftic8X94zP7kKqtw@mail.gmail.com\n\nThe feature is effective enough to make it almost unnecessary to\nVACUUM certain indexes -- though not necessarily other indexes on the\nsame table. Of course, in the long term it will eventually be\nnecessary to really vacuum these indexes. 
Not because the indexes\nthemselves care, though -- they really don't (if they don't receive\nlogical changes from non-HOT updates, and so benefit very well from\nthe proposed bottom-up index deletion mechanism, they really have no\nselfish reason to care if they ever get vacuumed by autovacuum).\n\nThe reason we eventually need to call ambulkdelete() with these\nindexes (with all indexes, actually) even with these enhancements is\nrelated to the heap. We eventually want to make LP_DEAD line pointers\nin the heap LP_UNUSED. But we should be lazy about it, and wait until\nit becomes a real problem. Maybe we can only do a traditional VACUUM\n(with a second pass of the heap for heap LP_UNUSED stuff) much much\nless frequently than today. At the same time, we can set the FSM for\nindex-only scans much more frequently.\n\nIt's also important that we really make index vacuuming a per-index\nthing. You can see this in the example benchmark I linked to, which\nwas posted by Victor: no page splits in one never-logically-modified\nindex, and some page splits in other indexes that were actually\nchanged by UPDATEs again and again. Clearly you can have several\ndifferent indexes on the same table that have very different needs.\n\nWith some indexes we want to be extra lazy (these are indexes that\nmake good use of bottom-up deletion). But with other indexes on the\nsame table we want to be eager. Maybe even very eager. If we can make\nper-index decisions, then every individual part of the system works\nwell. Currently, the problem with autovacuum scheduling is that it\nprobably makes sense for the heap with the defaults (or something like\nthem), and probably doesn't make any sense for indexes (though it also\nvaries among indexes). So today we have maybe 7 different things\n(maybe 6 indexes + 1 table), and we pretend that they are only one\nthing. It's just a fantasy. The reality is that we have 7 things that\nhave only a very loose and complicated relationship to each other. 
We\nneed to stop believing in this fantasy, and start paying attention to\nthe more complicated reality. The only way to do that is ask each\nindex directly, while being prepared to get very different answers\nfrom each index on the same table.\n\nHere is what I mean by that: it would also probably be very useful to\ndo something like a ambulkdelete() call for only a subset of indexes\nthat really need it. So you aggressively vacuum the one index that\nreally does get logically modified by an UPDATE, and not the other 6\nthat don't. (Of course it's still true that we cannot have a second\nheap pass to make LP_DEAD line pointers in the heap LP_UNUSED --\nobviously that's unsafe unless we're 100% sure that nothing in any\nindex points to the now-LP_UNUSED line pointer. But many big\nimprovements are possible without violating this basic invariant.)\n\nIf you are able to pursue this project, in whole or in part, I would\ndefinitely be supportive of that. I may be able to commit it. I think\nthat this project has many benefits, not just one or two. It seems\nstrategic.\n\n--\nPeter Geoghegan\n\n\n", "msg_date": "Thu, 19 Nov 2020 19:56:18 -0800", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: xid wraparound danger due to INDEX_CLEANUP false" }, { "msg_contents": "On Fri, Nov 20, 2020 at 12:56 PM Peter Geoghegan <pg@bowt.ie> wrote:\n>\n> On Thu, Nov 19, 2020 at 5:58 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > Several months passed from the discussion. We decided not to do\n> > anything on back branches but IIUC the fundamental issue is not fixed\n> > yet. The issue pointed out by Andres that we should leave the index AM\n> > to decide whether doing vacuum cleanup or not when INDEX_CLEANUP is\n> > specified is still valid. Is that right?\n>\n> I don't remember if Andres actually said that publicly, but I\n> definitely did. I do remember discussing this with him privately, at\n> which point he agreed with what I said. 
Which you just summarized\n> well.\n>\n> > For HEAD, there was a discussion that we change lazy vacuum and\n> > bulkdelete and vacuumcleanup APIs so that it calls these APIs even\n> > when INDEX_CLEANUP is specified. That is, when INDEX_CLEANUP false is\n> > specified, it collects dead tuple TIDs into maintenance_work_mem space\n> > and passes the flag indicating INDEX_CLEANUP is specified or not to\n> > index AMs.\n>\n> Right.\n>\n> > Index AM decides whether doing bulkdelete/vacuumcleanup. A\n> > downside of this idea would be that we will end up using\n> > maintenance_work_mem even if all index AMs of the table don't do\n> > bulkdelete/vacuumcleanup at all.\n>\n> That is a downside, but I don't think that it's a serious downside.\n> But it may not matter, because there are lots of reasons to move in\n> this direction.\n>\n> > The second idea I came up with is to add an index AM API (say,\n> > amcanskipindexcleanup = true/false) telling index cleanup is skippable\n> > or not. Lazy vacuum checks this flag for each index on the table\n> > before starting. If index cleanup is skippable in all indexes, it can\n> > choose one-pass vacuum, meaning no need to collect dead tuple TIDs in\n> > maintenance_work_mem. All in-core index AM will set to true. Perhaps\n> > it’s true (skippable) by default for backward compatibility.\n>\n> (The terminology here is very confusing, because the goal of the\n> INDEX_CLEANUP feature in v12 is not really to skip a call to\n> btvacuumcleanup(). The goal is really to skip a call to\n> btbulkdelete(). I will try to be careful.)\n>\n> I think that the ideal design here would be a new hybrid of two\n> existing features:\n>\n> 1.) Your INDEX_CLEANUP feature from Postgres 12.\n>\n> and:\n>\n> 2.) Your vacuum_cleanup_index_scale_factor feature from Postgres 11.\n>\n> The INDEX_CLEANUP feature is very useful, because skipping indexes\n> entirely can be very helpful for many reasons (e.g. 
faster manual\n> VACUUM in the event of wraparound related emergencies). But\n> INDEX_CLEANUP has 2 problems today:\n>\n> A. It doesn't interact well with vacuum_cleanup_index_scale_factor.\n> This is the problem that has been discussed on this thread.\n>\n> and:\n>\n> B. It is an \"all or nothing\" thing. Unlike the\n> vacuum_cleanup_index_scale_factor feature, it does not care about what\n> the index AM/individual index wants. But it should.\n\nAgreed.\n\n>\n> (**Thinks some more***)\n>\n> Actually, on second thought, maybe INDEX_CLEANUP only has one problem.\n> Problem A is actually just a special case of problem B. There are many\n> interesting opportunities created by solving problem B\n> comprehensively.\n>\n> So, what useful enhancements to VACUUM are possible once we have\n> something like INDEX_CLEANUP, that is sensitive to the needs of\n> indexes? Well, you already identified one yourself, so obviously\n> you're thinking about this in a similar way already:\n>\n> > The in-core AMs including btree indexes will work same as before. This\n> > fix is to make it more desirable behavior and possibly to help other\n> > AMs that require to call vacuumcleanup in all cases. Once we fix it I\n> > wonder if we can disable index cleanup when autovacuum’s\n> > anti-wraparound vacuum.\n>\n> Obviously this is a good idea. The fact that anti-wraparound vacuum\n> isn't really special compared to regular autovacuum is *bad*.\n> Obviously anti-wraparound is in some sense more important than regular\n> vacuum. Making it as similar as possible to vacuum simply isn't\n> helpful. Maybe it is slightly more elegant in theory, but in the real\n> world it is a poor design. (See also: every single PostgreSQL post\n> mortem that has ever been written.)\n>\n> But why stop with this? 
There are other big advantages to allowing\n> individual indexes/index AMs influence of the INDEX_CLEANUP behavior.\n> Especially if they're sensitive to the needs of particular indexes on\n> a table (not just all of the indexes on the table taken together).\n>\n> As you may know, my bottom-up index deletion patch can more or less\n> eliminate index bloat in indexes that don't get \"logically changed\" by\n> many non-HOT updates. It's *very* effective with non-HOT updates and\n> lots of indexes. See this benchmark result for a recent example:\n>\n> https://postgr.es/m/CAGnEbohYF_K6b0v=2uc289=v67qNhc3n01Ftic8X94zP7kKqtw@mail.gmail.com\n>\n> The feature is effective enough to make it almost unnecessary to\n> VACUUM certain indexes -- though not necessarily other indexes on the\n> same table. Of course, in the long term it will eventually be\n> necessary to really vacuum these indexes. Not because the indexes\n> themselves care, though -- they really don't (if they don't receive\n> logical changes from non-HOT updates, and so benefit very well from\n> the proposed bottom-up index deletion mechanism, they really have no\n> selfish reason to care if they ever get vacuumed by autovacuum).\n>\n> The reason we eventually need to call ambulkdelete() with these\n> indexes (with all indexes, actually) even with these enhancements is\n> related to the heap. We eventually want to make LP_DEAD line pointers\n> in the heap LP_UNUSED. But we should be lazy about it, and wait until\n> it becomes a real problem. Maybe we can only do a traditional VACUUM\n> (with a second pass of the heap for heap LP_UNUSED stuff) much much\n> less frequently than today. At the same time, we can set the FSM for\n> index-only scans much more frequently.\n>\n> It's also important that we really make index vacuuming a per-index\n> thing. 
You can see this in the example benchmark I linked to, which\n> was posted by Victor: no page splits in one never-logically-modified\n> index, and some page splits in other indexes that were actually\n> changed by UPDATEs again and again. Clearly you can have several\n> different indexes on the same table that have very different needs.\n>\n> With some indexes we want to be extra lazy (these are indexes that\n> make good use of bottom-up deletion). But with other indexes on the\n> same table we want to be eager. Maybe even very eager. If we can make\n> per-index decisions, then every individual part of the system works\n> well. Currently, the problem with autovacuum scheduling is that it\n> probably makes sense for the heap with the defaults (or something like\n> them), and probably doesn't make any sense for indexes (though it also\n> varies among indexes). So today we have maybe 7 different things\n> (maybe 6 indexes + 1 table), and we pretend that they are only one\n> thing. It's just a fantasy. The reality is that we have 7 things that\n> have only a very loose and complicated relationship to each other. We\n> need to stop believing in this fantasy, and start paying attention to\n> the more complicated reality. The only way to do that is ask each\n> index directly, while being prepared to get very different answers\n> from each index on the same table.\n>\n> Here is what I mean by that: it would also probably be very useful to\n> do something like a ambulkdelete() call for only a subset of indexes\n> that really need it. So you aggressively vacuum the one index that\n> really does get logically modified by an UPDATE, and not the other 6\n> that don't. (Of course it's still true that we cannot have a second\n> heap pass to make LP_DEAD line pointers in the heap LP_UNUSED --\n> obviously that's unsafe unless we're 100% sure that nothing in any\n> index points to the now-LP_UNUSED line pointer. 
But many big\n> improvements are possible without violating this basic invariant.)\n\nI had missed your bottom-up index deletion patch, but it's a promising\nimprovement. With that patch, the number of dead tuples in individual\nindexes may differ. So it's important that we make index vacuuming a\nper-index thing.\n\nGiven that patch, it seems to me that it would be better to ask each\nindividual index AM, before calling bulkdelete, about its need for\nbulkdelete. That is, passing the VACUUM options and the number of\ncollected dead tuples etc. to the index AM, we ask the index AM via a\nnew index AM API whether it wants to do bulkdelete or not. We call\nbulkdelete only for indexes that answered 'yes'. If we got 'no' from\nany one of the indexes, we cannot have a second heap pass.\nINDEX_CLEANUP is not enforceable. When INDEX_CLEANUP is set to false,\nwe expect index AMs to return 'no' unless they have a special reason\nto need bulkdelete.\n\nOne possible benefit of this idea, even without the bottom-up index\ndeletion patch, would be something like\nvacuum_index_cleanup_scale_factor for bulkdelete. For example, in the\ncase where the amount of dead tuples is slightly larger than\nmaintenance_work_mem, the second call to bulkdelete will be made with\na small number of dead tuples, which is less efficient. If an index AM\nis able to determine not to do bulkdelete by comparing the number of\ndead tuples to a threshold, it can avoid such a bulkdelete call.\n\nAlso, as a future extension, once we have a retail index deletion\nfeature, we might be able to make that API return a ternary value:\n'no', 'do_bulkdelete', 'do_indexscandelete', so that the index AM can\nchoose the appropriate method of index deletion based on the\nstatistics.\n\nBut to make index vacuuming a per-index thing, we need to deal with\nthe problem that we cannot know which indexes of the table still have\nindex tuples pointing to the collected dead tuples. 
For example, if an\nindex always says 'no' (it does not need bulkdelete, and therefore we\nneed to keep the dead line pointers), the collected dead tuples might\nalready be marked as LP_DEAD, and there might no longer be index\ntuples pointing to them in the other indexes. In that case we don't\nwant to call bulkdelete for the other indexes. Probably we can have\nadditional statistics, like the number of dead tuples in individual\nindexes, so that they can determine the need for bulkdelete. But it’s\nnot a comprehensive solution.\n\n>\n> If you are able to pursue this project, in whole or in part, I would\n> definitely be supportive of that. I may be able to commit it. I think\n> that this project has many benefits, not just one or two. It seems\n> strategic.\n\nThanks, that’s really helpful. I’m going to work on that. Since things\nbecame complicated by these two features that I proposed, I’ll do my\nbest to sort out the problem and improve it in PG14.\n\nRegards,\n\n--\nMasahiko Sawada\nEnterpriseDB: https://www.enterprisedb.com/\n\n\n", "msg_date": "Fri, 20 Nov 2020 20:16:56 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: xid wraparound danger due to INDEX_CLEANUP false" }, { "msg_contents": "On Thu, Nov 19, 2020 at 8:58 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> For HEAD, there was a discussion that we change lazy vacuum and\n> bulkdelete and vacuumcleanup APIs so that it calls these APIs even\n> when INDEX_CLEANUP is specified. That is, when INDEX_CLEANUP false is\n> specified, it collects dead tuple TIDs into maintenance_work_mem space\n> and passes the flag indicating INDEX_CLEANUP is specified or not to\n> index AMs. Index AM decides whether doing bulkdelete/vacuumcleanup. 
A\n> downside of this idea would be that we will end up using\n> maintenance_work_mem even if all index AMs of the table don't do\n> bulkdelete/vacuumcleanup at all.\n>\n> The second idea I came up with is to add an index AM API (say,\n> amcanskipindexcleanup = true/false) telling index cleanup is skippable\n> or not. Lazy vacuum checks this flag for each index on the table\n> before starting. If index cleanup is skippable in all indexes, it can\n> choose one-pass vacuum, meaning no need to collect dead tuple TIDs in\n> maintenance_work_mem. All in-core index AM will set to true. Perhaps\n> it’s true (skippable) by default for backward compatibility.\n>\n> The in-core AMs including btree indexes will work same as before. This\n> fix is to make it more desirable behavior and possibly to help other\n> AMs that require to call vacuumcleanup in all cases. Once we fix it I\n> wonder if we can disable index cleanup when autovacuum’s\n> anti-wraparound vacuum.\n\nIt (still) doesn't seem very sane to me to have an index that requires\ncleanup in all cases. I mean, VACUUM could error or be killed just\nbefore the index cleanup phase happens anyway, so it's not like an\nindex AM can licitly depend on getting called just because we visited\nthe heap. It could, of course, depend on getting called before\nrelfrozenxid is advanced, or before the heap's dead line pointers are\nmarked unused, or something like that, but it can't just be like, hey,\nyou have to call me.\n\nI think this whole discussion is to some extent a product of the\ncontract between the index AM and the table AM being more than\nslightly unclear. 
Maybe we need to clear up the definitional problems\nfirst.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Fri, 20 Nov 2020 10:59:30 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: xid wraparound danger due to INDEX_CLEANUP false" }, { "msg_contents": "On Fri, Nov 20, 2020 at 3:17 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> I had missed your bottom-up index deletion patch but it's a promising\n> improvement. With that patch, the number of dead tuples in individual\n> indexes may differ. So it's important that we make index vacuuming a\n> per-index thing.\n\nRight, but it's important to remember that even bottom-up index\ndeletion isn't really special in theory. Bottom-up index deletion is\n\"just\" a reliable version of the existing LP_DEAD index deletion thing\n(which has been around since Postgres 8.2). In theory it doesn't\nchange the fundamental nature of the problem. In practice it does,\nbecause it makes it very obvious to pgsql-hackers that indexes on the\nsame table can have very different needs from VACUUM. And the actual\ndifferences we see are much bigger now. Even still, the fact that you\nhad certain big differences across indexes on the same table is not a\nnew thing. (Actually, you can even see this on the master branch in\nVictor's bottom-up deletion benchmark, where the primary key index\nactually doesn't grow on the master branch, even after 8 hours.)\n\nThe bottom-up index deletion patch (and the enhancements we're talking\nabout here, for VACUUM itself) are based on \"the generational\nhypothesis\" that underlies generational garbage collection. The\nphilosophy is the same. 
See:\n\nhttps://plumbr.io/handbook/garbage-collection-in-java/generational-hypothesis\n\nIn theory, \"most garbage comes from new objects\" is \"just\" an\nempirical observation, that may or may not be true with each\nworkload/Java program/whatever. In practice it is important enough to\nbe a big part of how every modern garbage collector works -- it's\nalmost always true, and even when it isn't true it doesn't actually\nhurt to make the assumption that it is true and then be wrong. I\nbelieve that we have to take a holistic view of the problem to make\nreal progress.\n\nAndres said something similar in a recent blog post:\n\nhttps://techcommunity.microsoft.com/t5/azure-database-for-postgresql/improving-postgres-connection-scalability-snapshots/ba-p/1806462#interlude-removing-the-need-for-recentglobalxminhorizon\n\n\"In most workloads the majority of accesses are to live tuples, and\nwhen encountering non-live tuple versions they are either very old, or\nvery new.\"\n\n(This was just a coincidence, but it was good to see that he made the\nsame observation.)\n\n> Given that patch, it seems to me that it would be better to ask\n> individual index AM before calling to bulkdelete about the needs of\n> bulkdelete. That is, passing VACUUM options and the number of\n> collected dead tuples etc. to index AM, we ask index AM via a new\n> index AM API whether it wants to do bulkdelete or not. We call\n> bulkdelete for only indexes that answered 'yes'. If we got 'no' from\n> any one of the indexes, we cannot have a second heap pass.\n> INDEX_CLEANUP is not enforceable. When INDEX_CLEANUP is set to false,\n> we expect index AMs to return 'no' unless they have a special reason\n> for the needs of bulkdelete.\n\nI don't have a very detailed idea of the interface or anything. There\nare a few questions that naturally present themselves, that I don't\nhave good answers to right now. Obviously vacuumlazy.c will only treat\nthis feedback from each index as an advisory thing. 
So what happens\nwhen 50% of the indexes say yes and 50% say no? This is a subproblem\nthat must be solved as part of this work. Ideally it will be solved by\nyou. :-)\n\n> One possible benefit of this idea even without bottom-up index\n> deleteion patch would be something like\n> vacuum_index_cleanup_scale_factor for bulkdelete. For example, in the\n> case where the amount of dead tuple is slightly larger than\n> maitenance_work_mem the second time calling to bulkdelete will be\n> called with a small amount of dead tuples, which is less efficient. If\n> an index AM is able to determine not to do bulkdelete by comparing the\n> number of dead tuples to a threshold, it can avoid such bulkdelete\n> calling.\n\nI agree. Actually, I thought the same thing myself, even before I\nrealized that bottom-up index deletion was possible.\n\n> Also, as a future extension, once we have retail index deletion\n> feature, we might be able to make that API return a ternary value:\n> 'no', 'do_bulkdelete', ‘do_indexscandelete, so that index AM can\n> choose the appropriate method of index deletion based on the\n> statistics.\n\nI agree again!\n\nWe may eventually be able to make autovacuum run very frequently\nagainst each table in many important cases, with each VACUUM taking\nvery little wall-clock time. We don't have to change the fundamental\ndesign to fix most of the current problems. I suspect that the \"top\ndown\" nature of VACUUM is sometimes helpful. We just need to\ncompensate when this design is insufficient. Getting the \"best of both\nworlds\" is possible.\n\n> But for making index vacuuming per-index thing, we need to deal with\n> the problem that we cannot know which indexes of the table still has\n> index tuples pointing to the collected dead tuple. 
For example, if an\n> index always says 'no' (not need bulkdelete therefore we need to keep\n> dead line pointers), the collected dead tuples might already be marked\n> as LP_DEAD and there might already not be index tuples pointing to\n> them in other index AMs. In that case we don't want to call to\n> bulkdelete for other indexes. Probably we can have additional\n> statistics like the number of dead tuples in individual indexes so\n> that they can determine the needs of bulkdelete. But it’s not a\n> comprehensive solution.\n\nRight. Maybe we don't ask the index AMs for discrete yes/no answers.\nMaybe we can ask them for a continuous answer, such as a value between\n0.0 and 1.0 that represents the urgency/bloat, or something like that.\nAnd so the final yes/no answer that really does have to be made for\nthe table as a whole (does VACUUM do a second pass over the heap to\nmake LP_DEAD items into LP_UNUSED items?) can at least consider the\nworst case for each index. And maybe the average case, too.\n\n(I am just making random suggestions to stimulate discussion. Don't\ntake these specific suggestions about the am interface too seriously.)\n\n> > If you are able to pursue this project, in whole or in part, I would\n> > definitely be supportive of that. I may be able to commit it. I think\n> > that this project has many benefits, not just one or two. It seems\n> > strategic.\n>\n> Thanks, that’s really helpful. I’m going to work on that. Since things\n> became complicated by these two features that I proposed I’ll do my\n> best to sort out the problem and improve it in PG14.\n\nExcellent! Thank you.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Fri, 20 Nov 2020 11:37:22 -0800", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: xid wraparound danger due to INDEX_CLEANUP false" }, { "msg_contents": "On Fri, Nov 20, 2020 at 2:37 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> Right. 
Maybe we don't ask the index AMs for discrete yes/no answers.\n> Maybe we can ask them for a continuous answer, such as a value between\n> 0.0 and 1.0 that represents the urgency/bloat, or something like that.\n> And so the final yes/no answer that really does have to be made for\n> the table as a whole (does VACUUM do a second pass over the heap to\n> make LP_DEAD items into LP_UNUSED items?) can at least consider the\n> worst case for each index. And maybe the average case, too.\n\nThat's an interesting idea. We should think about the needs of brin\nindexes when designing something better than the current system. They\nhave the interesting property that the heap deciding to change LP_DEAD\nto LP_UNUSED doesn't break anything even if nothing's been done to the\nindex, because they don't store TIDs anyway. So that's an example of\nan index AM that might want to do some work to keep performance up,\nbut it's not actually required. This might be orthogonal to the\n0.0-1.0 scale you were thinking about, but it might be good to factor\nit into the thinking somehow.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 20 Nov 2020 15:04:03 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: xid wraparound danger due to INDEX_CLEANUP false" }, { "msg_contents": "On Fri, Nov 20, 2020 at 12:04 PM Robert Haas <robertmhaas@gmail.com> wrote:\n> That's an interesting idea. We should think about the needs of brin\n> indexes when designing something better than the current system. They\n> have the interesting property that the heap deciding to change LP_DEAD\n> to LP_UNUSED doesn't break anything even if nothing's been done to the\n> index, because they don't store TIDs anyway. So that's an example of\n> an index AM that might want to do some work to keep performance up,\n> but it's not actually required. 
This might be orthogonal to the\n> 0.0-1.0 scale you were thinking about, but it might be good to factor\n> it into the thinking somehow.\n\nI actually made this exact suggestion about BRIN myself, several years ago.\n\nAs I've said, it seems like it would be a good idea to ask the exact\nsame generic question of each index in turn (which is answered using\nlocal smarts added to the index AM). Again, the question is: How\nimportant is it that you get vacuumed now, from your own\nnarrow/selfish point of view? The way that BRIN answers this question\nis not the novel thing about BRIN among other index access methods,\nthough. (Not that you claimed otherwise -- just framing the discussion\ncarefully.)\n\nBRIN has no selfish reason to care if the table never gets to have its\nLP_DEAD line pointers set to LP_UNUSED -- that's just not something\nthat it can be expected to understand directly. But all index access\nmethods should be thought of as not caring about this, because it's\njust not their problem. (Especially with bottom-up index deletion, but\neven without it.)\n\nThe interesting and novel thing about BRIN here is this: lazyvacuum.c\ncan be taught that a BRIN index alone is no reason to have to do a\nsecond pass over the heap (to make the LP_DEAD/pruned-by-VACUUM line\npointers LP_UNUSED). A BRIN index never gets affected by the usual\nconsiderations about the heapam invariant (the usual thing about TIDs\nin an index not pointing to a line pointer that is at risk of being\nrecycled), which presents us with a unique-to-BRIN opportunity. Which\nis exactly what you said.\n\n(***Thinks some more***)\n\nActually, now I think that BRIN shouldn't be special to vacuumlazy.c\nin any way. 
It doesn't make sense as part of this future world in\nwhich index vacuuming can be skipped for individual indexes (which is\nwhat I talked to Sawada-san about a little earlier in this thread).\nWhy should it be useful to exploit the \"no-real-TIDs\" property of BRIN\nin this future world? It can only solve a problem that the main\nenhancement is itself expected to solve without any special help from\nBRIN (just the generic am callback that asks the same generic question\nabout index vacuuming urgency).\n\nThe only reason we press ahead with a second scan (the\nLP_DEAD-to-LP_UNUSED thing) in this ideal world is a heap/table\nproblem. The bloat eventually gets out of hand *in the table*. We have\nnow conceptually decoupled the problems experienced in the table/heap\nfrom the problems for each index (mostly), so this actually makes\nsense. The theory behind AV scheduling becomes much closer to reality\n-- by changing the reality! (The need to \"prune the table to VACUUM\nany one index\" notwithstanding -- that's still necessary, of course,\nbut we still basically decouple table bloat from index bloat at the\nconceptual level.)\n\nDoes that make sense?\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Fri, 20 Nov 2020 13:21:42 -0800", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: xid wraparound danger due to INDEX_CLEANUP false" }, { "msg_contents": "On Fri, Nov 20, 2020 at 4:21 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> Actually, now I think that BRIN shouldn't be special to vacuumlazy.c\n> in any way. It doesn't make sense as part of this future world in\n> which index vacuuming can be skipped for individual indexes (which is\n> what I talked to Sawada-san about a little earlier in this thread).\n> Why should it be useful to exploit the \"no-real-TIDs\" property of BRIN\n> in this future world? 
It can only solve a problem that the main\n> enhancement is itself expected to solve without any special help from\n> BRIN (just the generic am callback that asks the same generic question\n> about index vacuuming urgency).\n>\n> The only reason we press ahead with a second scan (the\n> LP_DEAD-to-LP_UNUSED thing) in this ideal world is a heap/table\n> problem. The bloat eventually gets out of hand *in the table*. We have\n> now conceptually decoupled the problems experienced in the table/heap\n> from the problems for each index (mostly), so this actually makes\n> sense. The theory behind AV scheduling becomes much closer to reality\n> -- by changing the reality! (The need to \"prune the table to VACUUM\n> any one index\" notwithstanding -- that's still necessary, of course,\n> but we still basically decouple table bloat from index bloat at the\n> conceptual level.)\n>\n> Does that make sense?\n\nI *think* so. For me the point is that the index never has a right to\ninsist on being vacuumed, but it can offer an opinion on how helpful\nit would be.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 20 Nov 2020 17:16:55 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: xid wraparound danger due to INDEX_CLEANUP false" }, { "msg_contents": "On Fri, Nov 20, 2020 at 2:17 PM Robert Haas <robertmhaas@gmail.com> wrote:\n> > Does that make sense?\n>\n> I *think* so. For me the point is that the index never has a right to\n> insist on being vacuumed, but it can offer an opinion on how helpful\n> it would be.\n\nRight, that might be the single most important point. It's a somewhat\nmore bottom-up direction for VACUUM, that is still fundamentally\ntop-down. Because that's still necessary.\n\nOpportunistic heap pruning is usually very effective, so today we\nrealistically have these 4 byte line pointers accumulating in heap\npages. 
The corresponding \"bloatum\" in index pages is an index tuple +\nline pointer (at least 16 bytes + 4 bytes). Meaning that we accumulate\nthat *at least* 20 bytes for each 4 bytes in the table. And, indexes\ncare about *where* items go, making the problem even worse. So in the\nabsence of index tuple LP_DEAD setting/deletion (or bottom-up index\ndeletion in Postgres 14), the problem in indexes is probably at least\n5x worse.\n\nThe precise extent to which this is true will vary. It's a mistake to\ntry to reason about it at a high level, because there is just too much\nvariation for that approach to work. We should just give index access\nmethods *some* say. Sometimes this allows index vacuuming to be very\nlazy, other times it allows index vacuuming to be very eager. Often\nthis variation exists among indexes on the same table.\n\nOf course, vacuumlazy.c is still responsible for not letting the\naccumulation of LP_DEAD heap line pointers get out of hand (without\nallowing index TIDs to point to the wrong thing due to dangerous TID\nrecycling issues/bugs). The accumulation of LP_DEAD heap line pointers\nwill often take a very long time to get out of hand. But when it does\nfinally get out of hand, index access methods don't get to veto being\nvacuumed. Because this isn't actually about their needs anymore.\n\nActually, the index access methods never truly veto anything. They\nmerely give some abstract signal about how urgent it is to them (like\nthe 0.0 - 1.0 thing). This difference actually matters. One index\namong many on a table saying \"meh, I guess I could benefit from some\nindex vacuuming if it's no real trouble to you vacuumlazy.c\" rather\nthan saying \"it's absolutely unnecessary, don't waste CPU cycles\nvacuumlazy.c\" may actually shift how vacuumlazy.c processes the heap\n(at least occasionally). 
Maybe the high level VACUUM operation decides\nthat it is worth taking care of everything all at once -- if all the\nindexes together either say \"meh\" or \"now would be a good time\", and\nvacuumlazy.c then notices that the accumulation of LP_DEAD line\npointers is *also* becoming a problem (it's also a \"meh\" situation),\nthen it can be *more* ambitious. It can do a traditional VACUUM early.\nWhich might still make sense.\n\nThis also means that vacuumlazy.c would ideally think about this as an\noptimization problem. It may be lazy or eager for the whole table,\njust as it may be lazy or eager for individual indexes. (Though the\neagerness/laziness dynamic is probably much more noticeable with\nindexes in practice.)\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Fri, 20 Nov 2020 15:03:21 -0800", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: xid wraparound danger due to INDEX_CLEANUP false" }, { "msg_contents": "On Sat, Nov 21, 2020 at 8:03 AM Peter Geoghegan <pg@bowt.ie> wrote:\n>\n> On Fri, Nov 20, 2020 at 2:17 PM Robert Haas <robertmhaas@gmail.com> wrote:\n> > > Does that make sense?\n> >\n> > I *think* so. For me the point is that the index never has a right to\n> > insist on being vacuumed, but it can offer an opinion on how helpful\n> > it would be.\n>\n> Right, that might be the single most important point. It's a somewhat\n> more bottom-up direction for VACUUM, that is still fundamentally\n> top-down. Because that's still necessary.\n>\n> Opportunistic heap pruning is usually very effective, so today we\n> realistically have these 4 byte line pointers accumulating in heap\n> pages. The corresponding \"bloatum\" in index pages is an index tuple +\n> line pointer (at least 16 bytes + 4 bytes). Meaning that we accumulate\n> that *at least* 20 bytes for each 4 bytes in the table. And, indexes\n> care about *where* items go, making the problem even worse. 
So in the\n> absence of index tuple LP_DEAD setting/deletion (or bottom-up index\n> deletion in Postgres 14), the problem in indexes is probably at least\n> 5x worse.\n>\n> The precise extent to which this is true will vary. It's a mistake to\n> try to reason about it at a high level, because there is just too much\n> variation for that approach to work. We should just give index access\n> methods *some* say. Sometimes this allows index vacuuming to be very\n> lazy, other times it allows index vacuuming to be very eager. Often\n> this variation exists among indexes on the same table.\n>\n> Of course, vacuumlazy.c is still responsible for not letting the\n> accumulation of LP_DEAD heap line pointers get out of hand (without\n> allowing index TIDs to point to the wrong thing due to dangerous TID\n> recycling issues/bugs). The accumulation of LP_DEAD heap line pointers\n> will often take a very long time to get out of hand. But when it does\n> finally get out of hand, index access methods don't get to veto being\n> vacuumed. Because this isn't actually about their needs anymore.\n>\n> Actually, the index access methods never truly veto anything. They\n> merely give some abstract signal about how urgent it is to them (like\n> the 0.0 - 1.0 thing). This difference actually matters. One index\n> among many on a table saying \"meh, I guess I could benefit from some\n> index vacuuming if it's no real trouble to you vacuumlazy.c\" rather\n> than saying \"it's absolutely unnecessary, don't waste CPU cycles\n> vacuumlazy.c\" may actually shift how vacuumlazy.c processes the heap\n> (at least occasionally). Maybe the high level VACUUM operation decides\n> that it is worth taking care of everything all at once -- if all the\n> indexes together either say \"meh\" or \"now would be a good time\", and\n> vacuumlazy.c then notices that the accumulation of LP_DEAD line\n> pointers is *also* becoming a problem (it's also a \"meh\" situation),\n> then it can be *more* ambitious. 
It can do a traditional VACUUM early.\n> Which might still make sense.\n>\n> This also means that vacuumlazy.c would ideally think about this as an\n> optimization problem. It may be lazy or eager for the whole table,\n> just as it may be lazy or eager for individual indexes. (Though the\n> eagerness/laziness dynamic is probably much more noticeable with\n> indexes in practice.)\n>\n\nI discussed this topic off-list with Peter Geoghegan. And we think\nthat we can lead this fix to future improvement. Let me summarize the\nproposals.\n\nThe first proposal is the fix of this inappropriate behavior discussed\non this thread. We pass a new flag in calling bulkdelete(), indicating\nwhether or not the index can safely skip this bulkdelete() call. This\nis equivalent to whether or not lazy vacuum will do the heap clean\n(making LP_DEAD LP_UNUSED in lazy_vacuum_heap()). If it's true\n(meaning to do heap clean), since dead tuples referenced by index\ntuples will be physically removed, index AM would have to delete the\nindex tuples. If it's false, we call to bulkdelete() with this flag so\nthat index AM can safely skip bulkdelete(). Of course index AM also\ncan dare not to skip it because of its personal reason. Index AM\nincluding BRIN that doesn't store heap TID can decide whether or not\nto do regardless of this flag.\n\nThe next proposal upon the above proposal is to add a new index AM\nAPI, say ambulkdeletestrategy(), which is called before bulkdelete()\nfor each index and asks the index bulk-deletion strategy. In this API,\nlazy vacuum asks, \"Hey index X, I collected garbage heap tuples during\nheap scanning, how urgent is vacuuming for you?\", and the index\nanswers either \"it's urgent\" when it wants to do bulk-deletion or\n\"it's not urgent, I can skip it\". The point of this proposal is to\nisolate heap vacuum and index vacuum for each index so that we can\nemploy different strategies for each index. 
Lazy vacuum can decide\nwhether or not to do heap clean based on the answers from the indexes.\nLazy vacuum can set the flag I proposed above according to the\ndecision. If all indexes answer 'yes' (meaning it will do\nbulkdelete()), lazy vacuum can do heap clean. On the other hand, if\neven one index answers 'no' (meaning it will not do bulkdelete()),\nlazy vacuum cannot do the heap clean. On the other hand, lazy vacuum\nwould also be able to require indexes to do bulkdelete() for its\npersonal reason. It’s something like saying \"Hey index X, you answered\nnot to do bulkdelete() but since heap clean is necessary for me please\ndon't skip bulkdelete()\".\n\nIn connection with this change, we would need to rethink the meaning\nof the INDEX_CLEANUP option. As of now, if it's not set (i.g.\nVACOPT_TERNARY_DEFAULT in the code), it's treated as true and will do\nheap clean. But I think we can make it something like a neutral state\nby default. This neutral state could be \"on\" and \"off\" depending on\nseveral factors including the answers of ambulkdeletestrategy(), the\ntable status, and user's request. In this context, specifying\nINDEX_CLEANUP would mean making the neutral state \"on\" or \"off\" by\nuser's request. The table status that could influence the decision\ncould concretely be, for instance:\n\n* Removing LP_DEAD accumulation due to skipping bulkdelete() for a long time.\n* Making pages all-visible for index-only scan.\n\nWe would not benefit much from the bulkdeletestrategy() idea for now.\nBut there are potential enhancements using this API:\n\n* If bottom-up index deletion feature[1] is introduced, individual\nindexes could be a different situation in terms of dead tuple\naccumulation; some indexes on the table can delete its garbage index\ntuples without bulkdelete(). A problem will appear that doing\nbulkdelete() for such indexes would not be efficient. 
This problem is\nsolved by this proposal because we can do bulkdelete() for a subset of\nindexes on the table.\n\n* If retail index deletion feature[2] is introduced, we can make the\nreturn value of bulkdeletestrategy() a ternary value: \"do_bulkdelete\",\n\"do_indexscandelete\", and \"no\".\n\n* We probably can introduce a threshold of the number of dead tuples\nto control whether or not to do index tuple bulk-deletion (like\nbulkdelete() version of vacuum_cleanup_index_scale_factor). In the\ncase where the amount of dead tuples is slightly larger than\nmaintenance_work_mem the second time calling to bulkdelete will be\ncalled with a small number of dead tuples, which is inefficient. This\nproblem is also solved by this proposal by allowing a subset of\nindexes to skip bulkdelete() if the number of dead tuple doesn't\nexceed the threshold.\n\nAny thoughts?\n\nI'm writing a PoC patch so will share it.\n\nRegards,\n\n[1] https://www.postgresql.org/message-id/CAH2-Wzm%2BmaE3apHB8NOtmM%3Dp-DO65j2V5GzAWCOEEuy3JZgb2g%40mail.gmail.com\n[2] https://www.postgresql.org/message-id/425db134-8bba-005c-b59d-56e50de3b41e%40postgrespro.ru\n\n--\nMasahiko Sawada\nEnterpriseDB: https://www.enterprisedb.com/\n\n\n", "msg_date": "Wed, 16 Dec 2020 11:43:24 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: xid wraparound danger due to INDEX_CLEANUP false" }, { "msg_contents": "On Tue, Dec 15, 2020 at 6:44 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> In connection with this change, we would need to rethink the meaning\n> of the INDEX_CLEANUP option. As of now, if it's not set (i.g.\n> VACOPT_TERNARY_DEFAULT in the code), it's treated as true and will do\n> heap clean. But I think we can make it something like a neutral state\n> by default. This neutral state could be \"on\" and \"off\" depending on\n> several factors including the answers of ambulkdeletestrategy(), the\n> table status, and user's request. 
In this context, specifying\n> INDEX_CLEANUP would mean making the neutral state \"on\" or \"off\" by\n> user's request.\n\nI think a new value such as \"smart\" should be introduced, which can\nbecome the default.\n\n> The table status that could influence the decision\n> could concretely be, for instance:\n>\n> * Removing LP_DEAD accumulation due to skipping bulkdelete() for a long time.\n> * Making pages all-visible for index-only scan.\n>\n> We would not benefit much from the bulkdeletestrategy() idea for now.\n> But there are potential enhancements using this API:\n>\n> * If bottom-up index deletion feature[1] is introduced, individual\n> indexes could be a different situation in terms of dead tuple\n> accumulation; some indexes on the table can delete its garbage index\n> tuples without bulkdelete(). A problem will appear that doing\n> bulkdelete() for such indexes would not be efficient. This problem is\n> solved by this proposal because we can do bulkdelete() for a subset of\n> indexes on the table.\n\nThe chances of the bottom-up index deletion being committed for\nPostgreSQL 14 are very high. While it hasn't received too much review,\nthere seems to be very little downside, and lots of upside.\n\n> * If retail index deletion feature[2] is introduced, we can make the\n> return value of bulkdeletestrategy() a ternary value: \"do_bulkdete\",\n> \"do_indexscandelete\", and \"no\".\n\nMakes sense.\n\n> * We probably can introduce a threshold of the number of dead tuples\n> to control whether or not to do index tuple bulk-deletion (like\n> bulkdelete() version of vacuum_cleanup_index_scale_factor). In the\n> case where the amount of dead tuples is slightly larger than\n> maitenance_work_mem the second time calling to bulkdelete will be\n> called with a small number of dead tuples, which is inefficient. 
This\n> problem is also solved by this proposal by allowing a subset of\n> indexes to skip bulkdelete() if the number of dead tuple doesn't\n> exceed the threshold.\n\nGood idea. Maybe this won't be possible for PostgreSQL 14, but this is\nthe kind of possibility that we should try to unlock. I had a\nsimilar-yet-different idea to this idea of Masahiko's, actually, which\nis to use LSN to determine (unreliably) if a B-Tree leaf page is\nlikely to have garbage tuples within VACUUM.\n\nThis other idea probably also won't happen for PostgreSQL. That's not\nimportant. The truly important thing is that we come up with the right\n*general* design, that can support either technique in the future. I'm\nnot sure which precise design will work best, but I am confident that\n*some* combination of these two ideas (or other ideas) will work very\nwell. Right now we don't have the appropriate general framework.\n\n> Any thoughts?\n\nNothing to add to what you said, really. I agree that it makes sense\nto think of all of these things at the same time.\n\nIt'll be easier to see how far these different ideas can be pushed\nonce a prototype is available.\n\n> I'm writing a PoC patch so will share it.\n\nGreat! I suggest starting a new thread for that.\n\n\n--\nPeter Geoghegan\n\n\n", "msg_date": "Thu, 17 Dec 2020 23:46:27 -0800", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: xid wraparound danger due to INDEX_CLEANUP false" } ]
[ { "msg_contents": "Hi,\n\nWhile testing for a feature I got this tablespace related error while\nrunning script.\nThe Issue is not reproducible everytime, but If I am running the same set\nof commands after 2-3 runs I am able to reproduce the same error.\n\n--first run - pass\n# master slave setup\n+ mkdir /tmp/test_bkp/tblsp01\n+ ./psql postgres -p 5432 -c 'create tablespace tblsp01 location\n'\\''/tmp/test_bkp/tblsp01'\\'';'\nCREATE TABLESPACE\n+ ./psql postgres -p 5432 -c 'create table test (a text) tablespace\ntblsp01;'\nCREATE TABLE\n#cleanup\n\n--next\n#master-slave setup\n+ mkdir /tmp/test_bkp/tblsp01\n+ ./psql postgres -p 5432 -c 'create tablespace tblsp01 location\n'\\''/tmp/test_bkp/tblsp01'\\'';'\nCREATE TABLESPACE\n+ ./psql postgres -p 5432 -c 'create table test (a text) tablespace\ntblsp01;'\nERROR: could not open file \"pg_tblspc/16384/PG_13_202004074/13530/16388\":\nNo such file or directory\n\n\nAttaching command and script which help to reproduce it.\n[edb@localhost bin]$ while sh pg_tblsp_wal.sh; do :; done\n\nThanks & Regards,\nRajkumar Raghuwanshi", "msg_date": "Thu, 16 Apr 2020 13:56:47 +0530", "msg_from": "Rajkumar Raghuwanshi <rajkumar.raghuwanshi@enterprisedb.com>", "msg_from_op": true, "msg_subject": "ERROR: could not open file \"pg_tblspc/ issue with replication setup." }, { "msg_contents": "On Thu, Apr 16, 2020 at 01:56:47PM +0530, Rajkumar Raghuwanshi wrote:\n> While testing for a feature I got this tablespace related error while\n> running script.\n\nPrimary and standby are running on the same host, so they would\ninteract with each other as the tablespace path used by both clusters\nwould be the same (primary uses the path defined by the DDL, which is\nregistered in the WAL record the standby replays). 
What you are\nlooking for here is to create the tablespace before taking the base\nbackup, and then use the option --tablespace-mapping with\npg_basebackup to avoid the issue.\n--\nMichael", "msg_date": "Fri, 17 Apr 2020 13:21:02 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: ERROR: could not open file \"pg_tblspc/ issue with replication\n setup." }, { "msg_contents": "On Fri, Apr 17, 2020 at 9:51 AM Michael Paquier <michael@paquier.xyz> wrote:\n\n> On Thu, Apr 16, 2020 at 01:56:47PM +0530, Rajkumar Raghuwanshi wrote:\n> > While testing for a feature I got this tablespace related error while\n> > running script.\n>\n> Primary and standby are running on the same host, so they would\n> interact with each other as the tablespace path used by both clusters\n> would be the same (primary uses the path defined by the DDL, which is\n> registered in the WAL record the standby replays). What you are\n> looking for here is to create the tablespace before taking the base\n> backup, and then use the option --tablespace-mapping with\n> pg_basebackup to avoid the issue.\n>\nThanks for the help.\n\n\n\n> --\n> Michael\n>", "msg_date": "Fri, 17 Apr 2020 12:09:51 +0530", "msg_from": "Rajkumar Raghuwanshi <rajkumar.raghuwanshi@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: ERROR: could not open file \"pg_tblspc/ issue with replication\n setup." } ]
[ { "msg_contents": "While try to setup a cascading replication, I have observed that if we\nset the REPLICA IDENTITY to FULL on the subscriber side then there is\nan Assert hit.\n\nAfter analysis I have found that, when we set the REPLICA IDENTITY to\nFULL on subscriber side (because I wanted to make this a publisher for\nanother subscriber).\nthen it will set relation->rd_replidindex to InvalidOid refer below code snippet\nRelationGetIndexList()\n{\n....\nif (replident == REPLICA_IDENTITY_DEFAULT && OidIsValid(pkeyIndex))\nrelation->rd_replidindex = pkeyIndex;\nelse if (replident == REPLICA_IDENTITY_INDEX && OidIsValid(candidateIndex))\nrelation->rd_replidindex = candidateIndex;\nelse\nrelation->rd_replidindex = InvalidOid;\n}\n\nBut, while appying the update and if the table have an index we have\nthis assert in build_replindex_scan_key\n\nstatic bool\nbuild_replindex_scan_key(ScanKey skey, Relation rel, Relation idxrel,\nTupleTableSlot *searchslot)\n{\n...\nAssert(RelationGetReplicaIndex(rel) == RelationGetRelid(idxrel));\n}\n\nTo me it appears like this assert is not correct. 
Attached patch has\nremoved this assert and things works fine.\n\n\n#0 0x00007ff2a0c8d5d7 in raise () from /lib64/libc.so.6\n#1 0x00007ff2a0c8ecc8 in abort () from /lib64/libc.so.6\n#2 0x0000000000aa7c7d in ExceptionalCondition (conditionName=0xc1bb30\n\"RelationGetReplicaIndex(rel) == RelationGetRelid(idxrel)\",\nerrorType=0xc1bad9 \"FailedAssertion\",\n fileName=0xc1bb1c \"execReplication.c\", lineNumber=60) at assert.c:67\n#3 0x00000000007153c3 in build_replindex_scan_key\n(skey=0x7fff25711560, rel=0x7ff2a1b0b800, idxrel=0x7ff2a1b0bd98,\nsearchslot=0x21328c8) at execReplication.c:60\n#4 0x00000000007156ac in RelationFindReplTupleByIndex\n(rel=0x7ff2a1b0b800, idxoid=16387, lockmode=LockTupleExclusive,\nsearchslot=0x21328c8, outslot=0x2132bb0) at execReplication.c:141\n#5 0x00000000008aeba5 in FindReplTupleInLocalRel (estate=0x2150170,\nlocalrel=0x7ff2a1b0b800, remoterel=0x214a7c8, remoteslot=0x21328c8,\nlocalslot=0x7fff25711f28) at worker.c:989\n#6 0x00000000008ae6f2 in apply_handle_update_internal\n(relinfo=0x21327b0, estate=0x2150170, remoteslot=0x21328c8,\nnewtup=0x7fff25711fd0, relmapentry=0x214a7c8) at worker.c:820\n#7 0x00000000008ae609 in apply_handle_update (s=0x7fff25719560) at worker.c:788\n#8 0x00000000008af8b1 in apply_dispatch (s=0x7fff25719560) at worker.c:1362\n#9 0x00000000008afd52 in LogicalRepApplyLoop (last_received=22926832)\nat worker.c:1570\n#10 0x00000000008b0c3a in ApplyWorkerMain (main_arg=0) at worker.c:2114\n#11 0x0000000000869c15 in StartBackgroundWorker () at bgworker.c:813\n#12 0x000000000087d28f in do_start_bgworker (rw=0x20a07a0) at postmaster.c:5852\n#13 0x000000000087d63d in maybe_start_bgworkers () at postmaster.c:6078\n#14 0x000000000087c685 in sigusr1_handler (postgres_signal_arg=10) at\npostmaster.c:5247\n#15 <signal handler called>\n#16 0x00007ff2a0d458d3 in __select_nocancel () from /lib64/libc.so.6\n#17 0x0000000000878153 in ServerLoop () at postmaster.c:1691\n#18 0x0000000000877b42 in PostmasterMain (argc=3, 
argv=0x2079120) at\npostmaster.c:1400\n#19 0x000000000077f256 in main (argc=3, argv=0x2079120) at main.c:210\n\nTo reproduce this issue\nrun start1.sh\nthen execute below commands on publishers.\ninsert into pgbench_accounts values(1,2);\nupdate pgbench_accounts set b=30 where a=1;\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Thu, 16 Apr 2020 14:18:22 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": true, "msg_subject": "Problem with logical replication" }, { "msg_contents": "Confirmed and added to opened items - this is a live bug back to v10.\n\nfor a in data data1; do ./tmp_install/usr/local/pgsql/bin/initdb -D $a --no-sync& done; wait\necho \"wal_level = logical\">> data/postgresql.conf; echo \"port=5433\" >> data1/postgresql.conf\nfor a in data data1; do ./tmp_install/usr/local/pgsql/bin/postmaster -D $a& done\nfor a in \"CREATE TABLE pgbench_accounts(a int primary key, b int)\" \"ALTER TABLE pgbench_accounts REPLICA IDENTITY FULL\" \"CREATE PUBLICATION mypub FOR TABLE pgbench_accounts\"; do \\\n\tfor p in 5432 5433; do psql -d postgres -h /tmp -p \"$p\" -c \"$a\"; done; done\n\npsql -d postgres -h /tmp -p 5433 -c \"CREATE SUBSCRIPTION mysub CONNECTION 'host=127.0.0.1 port=5432 dbname=postgres' PUBLICATION mypub\"\npsql -d postgres -h /tmp -p 5432 -c \"insert into pgbench_accounts values(1,2); update pgbench_accounts set b=30 where a=1;\"\n\n-- \nJustin\n\n\n", "msg_date": "Thu, 16 Apr 2020 19:27:27 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: Problem with logical replication (crash with REPLICA IDENTITY\n FULL and cascading replication)" }, { "msg_contents": "On Thu, 16 Apr 2020 at 17:48, Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>\n> While try to setup a cascading replication, I have observed that if we\n> set the REPLICA IDENTITY to FULL on the subscriber side then there is\n> an Assert hit.\n>\n> After analysis I have found that, 
when we set the REPLICA IDENTITY to\n> FULL on subscriber side (because I wanted to make this a publisher for\n> another subscriber).\n> then it will set relation->rd_replidindex to InvalidOid refer below code snippet\n> RelationGetIndexList()\n> {\n> ....\n> if (replident == REPLICA_IDENTITY_DEFAULT && OidIsValid(pkeyIndex))\n> relation->rd_replidindex = pkeyIndex;\n> else if (replident == REPLICA_IDENTITY_INDEX && OidIsValid(candidateIndex))\n> relation->rd_replidindex = candidateIndex;\n> else\n> relation->rd_replidindex = InvalidOid;\n> }\n>\n> But, while appying the update and if the table have an index we have\n> this assert in build_replindex_scan_key\n>\n> static bool\n> build_replindex_scan_key(ScanKey skey, Relation rel, Relation idxrel,\n> TupleTableSlot *searchslot)\n> {\n> ...\n> Assert(RelationGetReplicaIndex(rel) == RelationGetRelid(idxrel));\n> }\n>\n> To me it appears like this assert is not correct. Attached patch has\n> removed this assert and things works fine.\n>\n>\n> #0 0x00007ff2a0c8d5d7 in raise () from /lib64/libc.so.6\n> #1 0x00007ff2a0c8ecc8 in abort () from /lib64/libc.so.6\n> #2 0x0000000000aa7c7d in ExceptionalCondition (conditionName=0xc1bb30\n> \"RelationGetReplicaIndex(rel) == RelationGetRelid(idxrel)\",\n> errorType=0xc1bad9 \"FailedAssertion\",\n> fileName=0xc1bb1c \"execReplication.c\", lineNumber=60) at assert.c:67\n> #3 0x00000000007153c3 in build_replindex_scan_key\n> (skey=0x7fff25711560, rel=0x7ff2a1b0b800, idxrel=0x7ff2a1b0bd98,\n> searchslot=0x21328c8) at execReplication.c:60\n> #4 0x00000000007156ac in RelationFindReplTupleByIndex\n> (rel=0x7ff2a1b0b800, idxoid=16387, lockmode=LockTupleExclusive,\n> searchslot=0x21328c8, outslot=0x2132bb0) at execReplication.c:141\n> #5 0x00000000008aeba5 in FindReplTupleInLocalRel (estate=0x2150170,\n> localrel=0x7ff2a1b0b800, remoterel=0x214a7c8, remoteslot=0x21328c8,\n> localslot=0x7fff25711f28) at worker.c:989\n> #6 0x00000000008ae6f2 in apply_handle_update_internal\n> 
(relinfo=0x21327b0, estate=0x2150170, remoteslot=0x21328c8,\n> newtup=0x7fff25711fd0, relmapentry=0x214a7c8) at worker.c:820\n> #7 0x00000000008ae609 in apply_handle_update (s=0x7fff25719560) at worker.c:788\n> #8 0x00000000008af8b1 in apply_dispatch (s=0x7fff25719560) at worker.c:1362\n> #9 0x00000000008afd52 in LogicalRepApplyLoop (last_received=22926832)\n> at worker.c:1570\n> #10 0x00000000008b0c3a in ApplyWorkerMain (main_arg=0) at worker.c:2114\n> #11 0x0000000000869c15 in StartBackgroundWorker () at bgworker.c:813\n> #12 0x000000000087d28f in do_start_bgworker (rw=0x20a07a0) at postmaster.c:5852\n> #13 0x000000000087d63d in maybe_start_bgworkers () at postmaster.c:6078\n> #14 0x000000000087c685 in sigusr1_handler (postgres_signal_arg=10) at\n> postmaster.c:5247\n> #15 <signal handler called>\n> #16 0x00007ff2a0d458d3 in __select_nocancel () from /lib64/libc.so.6\n> #17 0x0000000000878153 in ServerLoop () at postmaster.c:1691\n> #18 0x0000000000877b42 in PostmasterMain (argc=3, argv=0x2079120) at\n> postmaster.c:1400\n> #19 0x000000000077f256 in main (argc=3, argv=0x2079120) at main.c:210\n>\n> To reproduce this issue\n> run start1.sh\n> then execute below commands on publishers.\n> insert into pgbench_accounts values(1,2);\n> update pgbench_accounts set b=30 where a=1;\n>\n\nI could reproduce this issue by the steps you shared. 
For the bug fix\npatch, I basically agree to remove that assertion from\nbuild_replindex_scan_key() but I think it's better to update the\nassertion instead of removal and update the following comment:\n\n* This is not generic routine, it expects the idxrel to be replication\n* identity of a rel and meet all limitations associated with that.\n*/\nstatic bool\nbuild_replindex_scan_key(ScanKey skey, Relation rel, Relation idxrel,\n TupleTableSlot *searchslot)\n{\n\nAn alternative solution would be that logical replication worker\ndetermines the access path based on its replica identity instead of\nseeking the chance to use the primary key as follows:\n\n@@ -981,7 +981,7 @@ FindReplTupleInLocalRel(EState *estate, Relation localrel,\n\n *localslot = table_slot_create(localrel, &estate->es_tupleTable);\n\n- idxoid = GetRelationIdentityOrPK(localrel);\n+ idxoid = RelationGetReplicaIndex(localrel);\n Assert(OidIsValid(idxoid) ||\n (remoterel->replident == REPLICA_IDENTITY_FULL));\n\nThat way, we can avoid such mismatch between replica identity and an\nindex for index scans. But a downside is that it will end up with a\nsequential scan even if the local table has the primary key. IIUC if\nthe table has the primary key, a logical replication worker can use\nthe primary key for the update and delete even if its replica identity\nis FULL, because the columns of the primary key are always a subset of\nall columns. 
So I'll look at this closely but I agree with your idea.\n\nRegards,\n\n-- \nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Mon, 20 Apr 2020 22:24:40 +0900", "msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Problem with logical replication" }, { "msg_contents": "On Mon, 20 Apr 2020 at 10:25, Masahiko Sawada <\nmasahiko.sawada@2ndquadrant.com> wrote:\n\n> On Thu, 16 Apr 2020 at 17:48, Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>\n> I could reproduce this issue by the steps you shared. For the bug fix\n> patch, I basically agree to remove that assertion from\n> build_replindex_scan_key() but I think it's better to update the\n> assertion instead of removal and update the following comment:\n>\n\nIMO the assertion is using the wrong function because it should test a replica\nidentity or primary key (GetRelationIdentityOrPK). RelationGetReplicaIndex\nreturns InvalidOid even though the table has a primary key.\nGetRelationIdentityOrPK tries to obtain a replica identity and if it fails, it\ntries a primary key. That's exactly what this assertion should use. We should\nalso notice that FindReplTupleInLocalRel uses GetRelationIdentityOrPK and after\na code path like RelationFindReplTupleByIndex -> build_replindex_scan_key it\nshould also use the same function.\n\nSince GetRelationIdentityOrPK is a fallback function that\nuses RelationGetReplicaIndex and RelationGetPrimaryKeyIndex, I propose that we\nmove this static function to execReplication.c.\n\nI attached a patch with the described solution. 
I also included a test that\ncovers this scenario.\n\n\n-- \nEuler Taveira http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Sun, 10 May 2020 19:08:03 -0300", "msg_from": "Euler Taveira <euler.taveira@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Problem with logical replication" }, { "msg_contents": "On Sun, May 10, 2020 at 07:08:03PM -0300, Euler Taveira wrote:\n> I attached a patch with the described solution. I also included a test that\n> covers this scenario.\n\n- Assert(RelationGetReplicaIndex(rel) == RelationGetRelid(idxrel));\n+ Assert(GetRelationIdentityOrPK(rel) == RelationGetRelid(idxrel));\n\nNot much a fan of adding a routine to relcache.c to do the work of two\nroutines already present, so I think that we had better add an extra\ncondition based on RelationGetPrimaryKeyIndex, and give up on\nGetRelationIdentityOrPK() in execReplication.c. Wouldn't it also be\nbetter to cross-check the replica identity here depending on if\nRelationGetReplicaIndex() returns an invalid OID or not?\n--\nMichael", "msg_date": "Mon, 11 May 2020 16:28:52 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Problem with logical replication" }, { "msg_contents": "On Mon, 11 May 2020 at 16:28, Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Sun, May 10, 2020 at 07:08:03PM -0300, Euler Taveira wrote:\n> > I attached a patch with the described solution. 
I also included a test that\n> > covers this scenario.\n>\n> - Assert(RelationGetReplicaIndex(rel) == RelationGetRelid(idxrel));\n> + Assert(GetRelationIdentityOrPK(rel) == RelationGetRelid(idxrel));\n>\n> Not much a fan of adding a routine to relcache.c to do the work of two\n> routines already present, so I think that we had better add an extra\n> condition based on RelationGetPrimaryKeyIndex, and give up on\n> GetRelationIdentityOrPK() in execReplication.c.\n\n+1\n\nIn any case, it seems to me that the comment of\nbuild_replindex_scan_key needs to be updated.\n\n * This is not generic routine, it expects the idxrel to be replication\n * identity of a rel and meet all limitations associated with that.\n\nFor example, we can update the above:\n\n * This is not generic routine, it expects the idxrel to be replication\n * identity or primary key of a rel and meet all limitations associated\n* with that.\n\nRegards,\n\n-- \nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Tue, 12 May 2020 18:35:27 +0900", "msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Problem with logical replication" }, { "msg_contents": "On Tue, 12 May 2020 at 06:36, Masahiko Sawada <\nmasahiko.sawada@2ndquadrant.com> wrote:\n\n> On Mon, 11 May 2020 at 16:28, Michael Paquier <michael@paquier.xyz> wrote:\n> >\n> > On Sun, May 10, 2020 at 07:08:03PM -0300, Euler Taveira wrote:\n> > > I attached a patch with the described solution. 
I also included a test\n> that\n> > > covers this scenario.\n> >\n> > - Assert(RelationGetReplicaIndex(rel) == RelationGetRelid(idxrel));\n> > + Assert(GetRelationIdentityOrPK(rel) == RelationGetRelid(idxrel));\n> >\n> > Not much a fan of adding a routine to relcache.c to do the work of two\n> > routines already present, so I think that we had better add an extra\n> > condition based on RelationGetPrimaryKeyIndex, and give up on\n> > GetRelationIdentityOrPK() in execReplication.c.\n>\n> Although, I think this solution is fragile, I updated the patch\naccordingly.\n(When/If someone changed GetRelationIdentityOrPK() it will break this\nassert)\n\n\n> In any case, it seems to me that the comment of\n> build_replindex_scan_key needs to be updated.\n>\n> * This is not generic routine, it expects the idxrel to be replication\n> * identity of a rel and meet all limitations associated with that.\n>\n> It is implicit that a primary key can be a replica identity so I think this\ncomment is fine.\n\n\n-- \nEuler Taveira http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Tue, 12 May 2020 21:45:45 -0300", "msg_from": "Euler Taveira <euler.taveira@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Problem with logical replication" }, { "msg_contents": "On Wed, May 13, 2020 at 6:15 AM Euler Taveira\n<euler.taveira@2ndquadrant.com> wrote:\n>\n> On Tue, 12 May 2020 at 06:36, Masahiko Sawada <masahiko.sawada@2ndquadrant.com> wrote:\n>>\n>> On Mon, 11 May 2020 at 16:28, Michael Paquier <michael@paquier.xyz> wrote:\n>> >\n>> > On Sun, May 10, 2020 at 07:08:03PM -0300, Euler Taveira wrote:\n>> > > I attached a patch with the described solution. 
I also included a test that\n>> > > covers this scenario.\n>> >\n>> > - Assert(RelationGetReplicaIndex(rel) == RelationGetRelid(idxrel));\n>> > + Assert(GetRelationIdentityOrPK(rel) == RelationGetRelid(idxrel));\n>> >\n>> > Not much a fan of adding a routine to relcache.c to do the work of two\n>> > routines already present, so I think that we had better add an extra\n>> > condition based on RelationGetPrimaryKeyIndex, and give up on\n>> > GetRelationIdentityOrPK() in execReplication.c.\n>>\n> Although, I think this solution is fragile, I updated the patch accordingly.\n> (When/If someone changed GetRelationIdentityOrPK() it will break this assert)\n>\n>>\n>> In any case, it seems to me that the comment of\n>> build_replindex_scan_key needs to be updated.\n>>\n>> * This is not generic routine, it expects the idxrel to be replication\n>> * identity of a rel and meet all limitations associated with that.\n>>\n> It is implicit that a primary key can be a replica identity so I think this\n> comment is fine.\n\nI like your idea of modifying the assert instead of completely removing.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 13 May 2020 10:29:09 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Problem with logical replication" }, { "msg_contents": "On Tue, May 12, 2020 at 09:45:45PM -0300, Euler Taveira wrote:\n>> In any case, it seems to me that the comment of\n>> build_replindex_scan_key needs to be updated.\n>>\n>> * This is not generic routine, it expects the idxrel to be replication\n>> * identity of a rel and meet all limitations associated with that.\n>\n> It is implicit that a primary key can be a replica identity so I think this\n> comment is fine.\n\nAgreed. I don't think either that we need to update this comment. 
I\nwas playing with this patch and what you have here looks fine by me.\nTwo nits: the extra parenthesis in the assert are not necessary, and\nthe indentation had some diffs. Tom has just reindented the whole\ntree, so let's keep things clean.\n--\nMichael", "msg_date": "Fri, 15 May 2020 14:47:45 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Problem with logical replication" }, { "msg_contents": "On Fri, 15 May 2020 at 02:47, Michael Paquier <michael@paquier.xyz> wrote:\n\n>\n> Agreed. I don't think either that we need to update this comment. I\n> was playing with this patch and what you have here looks fine by me.\n> Two nits: the extra parenthesis in the assert are not necessary, and\n> the indentation had some diffs. Tom has just reindented the whole\n> tree, so let's keep things clean.\n>\n>\nLGTM.\n\n\n-- \nEuler Taveira http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\nOn Fri, 15 May 2020 at 02:47, Michael Paquier <michael@paquier.xyz> wrote:\nAgreed.  I don't think either that we need to update this comment.  I\nwas playing with this patch and what you have here looks fine by me.\nTwo nits: the extra parenthesis in the assert are not necessary, and\nthe indentation had some diffs.  Tom has just reindented the whole\ntree, so let's keep things clean.\nLGTM.-- Euler Taveira                 http://www.2ndQuadrant.com/PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Fri, 15 May 2020 08:48:53 -0300", "msg_from": "Euler Taveira <euler.taveira@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Problem with logical replication" }, { "msg_contents": "On Fri, May 15, 2020 at 08:48:53AM -0300, Euler Taveira wrote:\n> On Fri, 15 May 2020 at 02:47, Michael Paquier <michael@paquier.xyz> wrote:\n>> Agreed. I don't think either that we need to update this comment. 
I\n>> was playing with this patch and what you have here looks fine by me.\n>> Two nits: the extra parenthesis in the assert are not necessary, and\n>> the indentation had some diffs. Tom has just reindented the whole\n>> tree, so let's keep things clean.\n>\n> LGTM.\n\nThanks for double-checking. Applied and back-patched down to 10\nthen.\n--\nMichael", "msg_date": "Sat, 16 May 2020 18:20:46 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Problem with logical replication" } ]
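The fix this thread converges on is the GetRelationIdentityOrPK-style fallback: use the replica identity index when one is configured, otherwise fall back to the primary key, and only when neither exists (as with REPLICA IDENTITY FULL on a table without a primary key) resort to a sequential scan. A toy model of that decision, with invented names and OIDs for illustration (not PostgreSQL's actual API):

```python
INVALID_OID = 0

def identity_or_pk(replident_index, pk_index):
    """Mimics GetRelationIdentityOrPK: prefer the replica identity
    index; if it is invalid, fall back to the primary key index."""
    if replident_index != INVALID_OID:
        return replident_index
    return pk_index

def choose_scan(replident_index, pk_index):
    """Pick the tuple-lookup strategy the apply worker would use."""
    idx = identity_or_pk(replident_index, pk_index)
    return ("index scan", idx) if idx != INVALID_OID else ("seq scan", None)

# REPLICA IDENTITY FULL leaves rd_replidindex invalid, but a primary
# key (oid 16387 here, arbitrary) can still serve the lookup.
print(choose_scan(INVALID_OID, 16387))
```

This is why the original assertion comparing against RelationGetReplicaIndex() alone fired: with REPLICA IDENTITY FULL that function reports an invalid OID even though the lookup legitimately went through the primary key index.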
[ { "msg_contents": "Commit 896fcdb230e72 (sorry for chiming in too late, I missed that thread)\nadded a TLS init hook which is OpenSSL specific: openssl_tls_init_hook. Since\nthe rest of the TLS support in the backend is library agnostic, we should IMO\nmake this hook follow that pattern, else this will make a non-OpenSSL backend\nnot compile.\n\nIf we make the hook generic, extension authors must have a way to tell which\nbackend invoked it, so maybe the best option is to simply wrap this hook in\nUSE_OPENSSL ifdefs and keep the name/signature? Looking at the Secure\nTransport patch I wrote, there is really no equivalent callsite; the same goes\nfor a libnss patch which I haven't yet submitted.\n\nThe attached adds USE_OPENSSL guards.\n\ncheers ./daniel", "msg_date": "Thu, 16 Apr 2020 14:17:33 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": true, "msg_subject": "Making openssl_tls_init_hook OpenSSL specific" }, { "msg_contents": "On Thu, Apr 16, 2020 at 02:17:33PM +0200, Daniel Gustafsson wrote:\n> Commit 896fcdb230e72 (sorry for chiming in too late, I missed that thread)\n> added a TLS init hook which is OpenSSL specific: openssl_tls_init_hook. Since\n> the rest of the TLS support in the backend is library agnostic, we should IMO\n> make this hook follow that pattern, else this will make a non-OpenSSL backend\n> not compile.\n\nBetter sooner than later, thanks for the report.\n\n> If we make the hook generic, extension authors must have a way to tell which\n> backend invoked it, so maybe the best option is to simply wrap this hook in\n> USE_OPENSSL ifdefs and keep the name/signature? 
Looking at the Secure\n> Transport patch I wrote, there is really no equivalent callsite; the same goes\n> for a libnss patch which I haven't yet submitted.\n> \n> The attached adds USE_OPENSSL guards.\n\nI agree that this looks like an oversight of the original commit\nintroducing the hook as it gets called in the OpenSSL code path of\nbe_tls_init(), so I think that your patch is right (though I would\nhave just used #ifdef USE_OPENSSL here). And if the future proves\nthat this hook has more uses for other SSL implementations, we could\nalways rework it at this point, if necessary. Andrew, would you\nprefer fixing that yourself?\n--\nMichael", "msg_date": "Fri, 17 Apr 2020 10:57:09 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Making openssl_tls_init_hook OpenSSL specific" }, { "msg_contents": "\nOn 4/16/20 9:57 PM, Michael Paquier wrote:\n> On Thu, Apr 16, 2020 at 02:17:33PM +0200, Daniel Gustafsson wrote:\n>> Commit 896fcdb230e72 (sorry for chiming in too late, I missed that thread)\n>> added a TLS init hook which is OpenSSL specific: openssl_tls_init_hook. Since\n>> the rest of the TLS support in the backend is library agnostic, we should IMO\n>> make this hook follow that pattern, else this will make a non-OpenSSL backend\n>> not compile.\n> Better sooner than later, thanks for the report.\n>\n>> If we make the hook generic, extension authors must have a way to tell which\n>> backend invoked it, so maybe the best option is to simply wrap this hook in\n>> USE_OPENSSL ifdefs and keep the name/signature? 
Looking at the Secure\n>> Transport patch I wrote, there is really no equivalent callsite; the same goes\n>> for a libnss patch which I haven't yet submitted.\n>>\n>> The attached adds USE_OPENSSL guards.\n> I agree that this looks like an oversight of the original commit\n> introducing the hook as it gets called in the OpenSSL code path of\n> be_tls_init(), so I think that your patch is right (though I would\n> have just used #ifdef USE_OPENSSL here). And if the future proves\n> that this hook has more uses for other SSL implementations, we could\n> always rework it at this point, if necessary. Andrew, would you\n> prefer fixing that yourself?\n\n\n\n\nSure, I'll do it.\n\n\ncheers\n\n\nandrew\n\n\n-- \nAndrew Dunstan https://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n\n", "msg_date": "Fri, 17 Apr 2020 12:01:27 -0400", "msg_from": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Making openssl_tls_init_hook OpenSSL specific" } ]
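The design question Daniel raises — a generic hook would need to tell extensions which TLS backend invoked it — can be modelled as a per-backend hook registry, where only hooks registered for the compiled-in backend ever fire. A loose sketch in Python (the registry and all names are invented for illustration; the committed fix simply wraps the OpenSSL-specific hook in USE_OPENSSL guards):

```python
tls_init_hooks = {}

def register_tls_init_hook(backend, hook):
    """Register a hook that only fires for one specific TLS backend."""
    tls_init_hooks.setdefault(backend, []).append(hook)

def be_tls_init(backend, context):
    # Only hooks registered for the backend that is actually compiled
    # in are invoked, so an extension never receives a context object
    # from a TLS library it does not understand.
    for hook in tls_init_hooks.get(backend, []):
        hook(context)

calls = []
register_tls_init_hook("openssl", lambda ctx: calls.append(("openssl", ctx)))
be_tls_init("openssl", {"ssl_ctx": "..."})   # fires the OpenSSL hook
be_tls_init("nss", {"nss_ctx": "..."})       # no hook registered: no-op
print(calls)
```

Keeping the hook name and signature OpenSSL-specific, as the patch does, sidesteps this dispatch problem entirely at the cost of a per-library hook symbol.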
[ { "msg_contents": "Hi !\r\n\r\nWhen using sepgsql module, I got warning \"WARNING: cache reference leak”.\r\n```\r\npostgres=# UPDATE range_parted set c = 95 WHERE a = 'b' and b > 10 and c > 100 returning (range_parted), *;\r\nWARNING: cache reference leak: cache pg_attribute (7), tuple 38/54 has count 1\r\nWARNING: cache reference leak: cache pg_attribute (7), tuple 39/56 has count 1\r\nWARNING: cache reference leak: cache pg_attribute (7), tuple 53/51 has count 1\r\nWARNING: cache reference leak: cache pg_attribute (7), tuple 53/50 has count 1\r\n range_parted | a | b | c | d | e\r\n---------------+---+----+----+----+---\r\n (b,15,95,16,) | b | 15 | 95 | 16 |\r\n (b,17,95,19,) | b | 17 | 95 | 19 |\r\n(2 rows)\r\n\r\nUPDATE 2\r\npostgres=#\r\n```\r\nI am using the codes of Postgres REL_12_STABLE branch.\r\nThis issue can be reproduced by the SQLs below, and I test that on CentOS7 with “permissive” mode of SeLinux.\r\n\r\n```\r\nCREATE TABLE range_parted (a text, b bigint, c numeric, d int, e varchar) PARTITION BY RANGE (b);\r\nCREATE TABLE part_b_10_b_20 (e varchar, c numeric, a text, b bigint, d int) PARTITION BY RANGE (c);\r\nALTER TABLE range_parted ATTACH PARTITION part_b_10_b_20 FOR VALUES FROM (10) TO (20);\r\nCREATE TABLE part_c_100_200 (e varchar, c numeric, a text, b bigint, d int);\r\nALTER TABLE part_c_100_200 DROP COLUMN e, DROP COLUMN c, DROP COLUMN a;\r\nALTER TABLE part_c_100_200 ADD COLUMN c numeric, ADD COLUMN e varchar, ADD COLUMN a text;\r\nALTER TABLE part_c_100_200 DROP COLUMN b;\r\nALTER TABLE part_c_100_200 ADD COLUMN b bigint;\r\nCREATE TABLE part_c_1_100 (e varchar, d int, c numeric, b bigint, a text);\r\nALTER TABLE part_b_10_b_20 ATTACH PARTITION part_c_1_100 FOR VALUES FROM (1) TO (100);\r\nALTER TABLE part_b_10_b_20 ATTACH PARTITION part_c_100_200 FOR VALUES FROM (100) TO (200);\r\n\r\n\\set init_range_parted 'truncate range_parted; insert into range_parted VALUES(''b'', 12, 96, 1), (''b'', 13, 97, 2), (''b'', 15, 105, 16), (''b'', 
17, 105, 19)'\r\n:init_range_parted;\r\nUPDATE range_parted set c = 95 WHERE a = 'b' and b > 10 and c > 100 returning (range_parted), *;\r\n```\r\n\r\nThe patch attached to fix this issue, please check it.\r\n\r\n```\r\n--- a/contrib/sepgsql/dml.c\r\n+++ b/contrib/sepgsql/dml.c\r\n@@ -69,7 +69,10 @@ fixup_whole_row_references(Oid relOid, Bitmapset *columns)\r\n continue;\r\n\r\n if (((Form_pg_attribute) GETSTRUCT(tuple))->attisdropped)\r\n+ {\r\n+ ReleaseSysCache(tuple);\r\n continue;\r\n+ }\r\n\r\n index = attno - FirstLowInvalidHeapAttributeNumber;\r\n````\r\n\r\n\r\n\r\n\r\n\r\n骆政丞 / Michael Luo\r\n成都文武信息技术有限公司 / ChengDu WenWu Information Technology Co.,Ltd.\r\n地址:成都高新区天府软件园 D 区 5 栋 1705 官网:http://w3.ww-it.cn.", "msg_date": "Thu, 16 Apr 2020 13:46:12 +0000", "msg_from": "Michael Luo <mkluo666@outlook.com>", "msg_from_op": true, "msg_subject": "\"cache reference leak\" issue happened when using sepgsql module" }, { "msg_contents": "Michael Luo <mkluo666@outlook.com> writes:\n> When using sepgsql module, I got warning \"WARNING: cache reference leak”.\n> ...\n> The patch attached to fix this issue, please check it.\n\nRight you are, fix pushed.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 16 Apr 2020 14:47:00 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: \"cache reference leak\" issue happened when using sepgsql module" } ]
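The bug pattern behind this leak is generic: a loop pins a counted cache reference, and an early continue path skips the unpin. A small model of the leak and the fix (the SearchSysCache/ReleaseSysCache stand-ins below are invented for illustration):

```python
class Tuple:
    def __init__(self, dropped):
        self.dropped = dropped   # models pg_attribute.attisdropped
        self.refcount = 0

def search_sys_cache(t):
    t.refcount += 1              # stand-in for SearchSysCache: pins the tuple
    return t

def release_sys_cache(t):
    t.refcount -= 1              # stand-in for ReleaseSysCache: unpins it

def fixup_whole_row_references(tuples):
    for t in tuples:
        tup = search_sys_cache(t)
        if tup.dropped:          # the attisdropped early-exit path
            release_sys_cache(tup)   # the fix: release before continue
            continue
        # ... use the tuple ...
        release_sys_cache(tup)   # normal path always released it

tuples = [Tuple(False), Tuple(True), Tuple(False)]
fixup_whole_row_references(tuples)
print([t.refcount for t in tuples])  # [0, 0, 0] -> no reference leak warning
```

Without the release before `continue`, the dropped-column tuple would end the scan with a nonzero pin count — exactly the "cache reference leak ... has count 1" warnings reported above.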
[ { "msg_contents": "Hi,\n\nI'm starting a new thread for this, because the recent discussion of\nproblems with old_snapshot_threshold[1] touched on a lot of separate\nissues, and I think it will be too confusing if we discuss all of them\non one thread. Attached are three patches.\n\n0001 makes oldSnapshotControl \"extern\" rather than \"static\" and\nexposes the struct definition via a header.\n\n0002 adds a contrib module called old_snapshot which makes it possible\nto examine the time->XID mapping via SQL. As Andres said, the comments\nare not really adequate in the existing code, and the code itself is\nbuggy, so it was a little hard to be sure that I was understanding the\nintended meaning of the different fields correctly. However, I gave it\na shot.\n\n0003 attempts to fix bugs in MaintainOldSnapshotTimeMapping() so that\nit produces a sensible mapping. I encountered and tried to fix two\nissues here:\n\nFirst, as previously discussed, the branch that advances the mapping\nshould not categorically do \"oldSnapshotControl->head_timestamp = ts;\"\nassuming that the head_timestamp is supposed to be the timestamp for\nthe oldest bucket rather than the newest one. Rather, there are three\ncases: (1) resetting the mapping resets head_timestamp, (2) extending\nthe mapping by an entry without dropping an entry leaves\nhead_timestamp alone, and (3) overwriting the previous head with a new\nentry advances head_timestamp by 1 minute.\n\nSecond, the calculation of the number of entries by which the mapping\nshould advance is incorrect. It thinks that it should advance by the\nnumber of minutes between the current head_timestamp and the incoming\ntimestamp. That would be correct if head_timestamp were the most\nrecent entry in the mapping, but it's actually the oldest. As a\nresult, without this fix, every time we move into a new minute, we\nadvance the mapping much further than we actually should. 
Instead of\nadvancing by 1, we advance by the number of entries that already exist\nin the mapping - which means we now have entries that correspond to\ntimes which are in the future, and don't advance the mapping again\nuntil those future timestamps are in the past.\n\nWith these fixes, I seem to get reasonably sensible mappings, at least\nin light testing. I tried running this in one window with \\watch 10:\n\nselect *, age(newest_xmin), clock_timestamp() from\npg_old_snapshot_time_mapping();\n\nAnd in another window I ran:\n\npgbench -T 300 -R 10\n\nAnd the age does in fact advance by ~600 transactions per minute.\n\nI'm not proposing to commit anything here right now. These patches\nhaven't had enough testing for that, and their interaction with other\nbugs in the feature needs to be considered before we do anything.\nHowever, I thought it might be useful to put them out for review and\ncomment, and I also thought that having the contrib module from 0002\nmight permit other people to do some better testing of this feature\nand these fixes.\n\nThanks,\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n[1] http://postgr.es/m/20200401064008.qob7bfnnbu4w5cw4@alap3.anarazel.de", "msg_date": "Thu, 16 Apr 2020 12:41:55 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "fixing old_snapshot_threshold's time->xid mapping" }, { "msg_contents": "Hi,\n\nOn 2020-04-16 12:41:55 -0400, Robert Haas wrote:\n> I'm starting a new thread for this, because the recent discussion of\n> problems with old_snapshot_threshold[1] touched on a lot of separate\n> issues, and I think it will be too confusing if we discuss all of them\n> on one thread. Attached are three patches.\n\nCool.\n\n\n> 0003 attempts to fix bugs in MaintainOldSnapshotTimeMapping() so that\n> it produces a sensible mapping. 
I encountered and tried to fix two\n> issues here:\n> \n> First, as previously discussed, the branch that advances the mapping\n> should not categorically do \"oldSnapshotControl->head_timestamp = ts;\"\n> assuming that the head_timestamp is supposed to be the timestamp for\n> the oldest bucket rather than the newest one. Rather, there are three\n> cases: (1) resetting the mapping resets head_timestamp, (2) extending\n> the mapping by an entry without dropping an entry leaves\n> head_timestamp alone, and (3) overwriting the previous head with a new\n> entry advances head_timestamp by 1 minute.\n\n> Second, the calculation of the number of entries by which the mapping\n> should advance is incorrect. It thinks that it should advance by the\n> number of minutes between the current head_timestamp and the incoming\n> timestamp. That would be correct if head_timestamp were the most\n> recent entry in the mapping, but it's actually the oldest. As a\n> result, without this fix, every time we move into a new minute, we\n> advance the mapping much further than we actually should. Instead of\n> advancing by 1, we advance by the number of entries that already exist\n> in the mapping - which means we now have entries that correspond to\n> times which are in the future, and don't advance the mapping again\n> until those future timestamps are in the past.\n> \n> With these fixes, I seem to get reasonably sensible mappings, at least\n> in light testing. I tried running this in one window with \\watch 10:\n> \n> select *, age(newest_xmin), clock_timestamp() from\n> pg_old_snapshot_time_mapping();\n> \n> And in another window I ran:\n> \n> pgbench -T 300 -R 10\n> \n> And the age does in fact advance by ~600 transactions per minute.\n\nI still think we need a way to test this without waiting for hours to\nhit various edge cases. You argued against a fixed binning of\nold_snapshot_threshold/100 arguing its too coarse. How about a 1000 or\nso? 
For 60 days, the current max for old_snapshot_threshold, that'd be a\ngranularity of 01:26:24, which seems fine. The best way I can think of\nthat'd keep current GUC values sensible is to change\nold_snapshot_threshold to be float. Ugly, but ...?\n\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Thu, 16 Apr 2020 10:14:41 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: fixing old_snapshot_threshold's time->xid mapping" }, { "msg_contents": "On Thu, Apr 16, 2020 at 1:14 PM Andres Freund <andres@anarazel.de> wrote:\n> I still think we need a way to test this without waiting for hours to\n> hit various edge cases. You argued against a fixed binning of\n> old_snapshot_threshold/100 arguing its too coarse. How about a 1000 or\n> so? For 60 days, the current max for old_snapshot_threshold, that'd be a\n> granularity of 01:26:24, which seems fine. The best way I can think of\n> that'd keep current GUC values sensible is to change\n> old_snapshot_threshold to be float. Ugly, but ...?\n\nYeah, 1000 would be a lot better. However, if we switch to a fixed\nnumber of bins, it's going to be a lot more code churn. What did you\nthink of my suggestion of making head_timestamp artificially move\nbackward to simulate the passage of time?\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Thu, 16 Apr 2020 13:34:39 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: fixing old_snapshot_threshold's time->xid mapping" }, { "msg_contents": "Hi,\n\nOn 2020-04-16 13:34:39 -0400, Robert Haas wrote:\n> On Thu, Apr 16, 2020 at 1:14 PM Andres Freund <andres@anarazel.de> wrote:\n> > I still think we need a way to test this without waiting for hours to\n> > hit various edge cases. You argued against a fixed binning of\n> > old_snapshot_threshold/100 arguing its too coarse. How about a 1000 or\n> > so? 
For 60 days, the current max for old_snapshot_threshold, that'd be a\n> > granularity of 01:26:24, which seems fine. The best way I can think of\n> > that'd keep current GUC values sensible is to change\n> > old_snapshot_threshold to be float. Ugly, but ...?\n> \n> Yeah, 1000 would be a lot better. However, if we switch to a fixed\n> number of bins, it's going to be a lot more code churn.\n\nGiven the number of things that need to be addressed around the feature,\nI am not too concerned about that.\n\n\n> What did you think of my suggestion of making head_timestamp\n> artificially move backward to simulate the passage of time?\n\nI don't think it allows to exercise the various cases well enough. We\nneed to be able to test this feature both interactively as well as in a\nscripted manner. Edge cases like wrapping around in the time mapping imo\ncan not easily be tested by moving the head timestamp back.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Thu, 16 Apr 2020 10:46:01 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: fixing old_snapshot_threshold's time->xid mapping" }, { "msg_contents": "On Fri, Apr 17, 2020 at 5:46 AM Andres Freund <andres@anarazel.de> wrote:\n> On 2020-04-16 13:34:39 -0400, Robert Haas wrote:\n> > On Thu, Apr 16, 2020 at 1:14 PM Andres Freund <andres@anarazel.de> wrote:\n> > > I still think we need a way to test this without waiting for hours to\n> > > hit various edge cases. You argued against a fixed binning of\n> > > old_snapshot_threshold/100 arguing its too coarse. How about a 1000 or\n> > > so? For 60 days, the current max for old_snapshot_threshold, that'd be a\n> > > granularity of 01:26:24, which seems fine. The best way I can think of\n> > > that'd keep current GUC values sensible is to change\n> > > old_snapshot_threshold to be float. Ugly, but ...?\n> >\n> > Yeah, 1000 would be a lot better. 
However, if we switch to a fixed\n> > number of bins, it's going to be a lot more code churn.\n>\n> Given the number of things that need to be addressed around the feature,\n> I am not too concerned about that.\n>\n>\n> > What did you think of my suggestion of making head_timestamp\n> > artificially move backward to simulate the passage of time?\n>\n> I don't think it allows to exercise the various cases well enough. We\n> need to be able to test this feature both interactively as well as in a\n> scripted manner. Edge cases like wrapping around in the time mapping imo\n> can not easily be tested by moving the head timestamp back.\n\nWhat about a contrib function that lets you clobber\noldSnapshotControl->current_timestamp? It looks like all times in\nthis system come ultimately from GetSnapshotCurrentTimestamp(), which\nuses that variable to make sure that time never goes backwards.\nPerhaps you could abuse that, like so, from test scripts:\n\npostgres=# select * from pg_old_snapshot_time_mapping();\n array_offset | end_timestamp | newest_xmin\n--------------+------------------------+-------------\n 0 | 3000-01-01 13:00:00+13 | 490\n(1 row)\n\npostgres=# select pg_clobber_current_snapshot_timestamp('3000-01-01 00:01:00Z');\n pg_clobber_current_snapshot_timestamp\n---------------------------------------\n\n(1 row)\n\npostgres=# select * from pg_old_snapshot_time_mapping();\n array_offset | end_timestamp | newest_xmin\n--------------+------------------------+-------------\n 0 | 3000-01-01 13:01:00+13 | 490\n 1 | 3000-01-01 13:02:00+13 | 490\n(2 rows)", "msg_date": "Fri, 17 Apr 2020 14:12:44 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: fixing old_snapshot_threshold's time->xid mapping" }, { "msg_contents": "On Fri, Apr 17, 2020 at 2:12 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> What about a contrib function that lets you clobber\n> oldSnapshotControl->current_timestamp? 
It looks like all times in\n> this system come ultimately from GetSnapshotCurrentTimestamp(), which\n> uses that variable to make sure that time never goes backwards.\n\nHere's a draft TAP test that uses that technique successfully, as a\nPOC. It should probably be extended to cover more cases, but I\nthought I'd check what people thought of the concept first before\ngoing further. I didn't see a way to do overlapping transactions with\nPostgresNode.pm, so I invented one (please excuse the bad perl); am I\nmissing something? Maybe it'd be better to do 002 with an isolation\ntest instead, but I suppose 001 can't be in an isolation test, since\nit needs to connect to multiple databases, and it seemed better to do\nthem both the same way. It's also not entirely clear to me that\nisolation tests can expect a database to be fresh and then mess with\ndangerous internal state, whereas TAP tests set up and tear down a\ncluster each time.\n\nI think I found another bug in MaintainOldSnapshotTimeMapping(): if\nyou make time jump by more than old_snapshot_threshold in one go, then\nthe map gets cleared and then no early pruning or snapshot-too-old\nerrors happen. That's why in 002_too_old.pl it currently advances\ntime by 10 minutes twice, instead of 20 minutes once. To be\ncontinued.", "msg_date": "Sat, 18 Apr 2020 18:16:48 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: fixing old_snapshot_threshold's time->xid mapping" }, { "msg_contents": "On Sat, Apr 18, 2020 at 11:47 AM Thomas Munro <thomas.munro@gmail.com> wrote:\n>\n> On Fri, Apr 17, 2020 at 2:12 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> > What about a contrib function that lets you clobber\n> > oldSnapshotControl->current_timestamp? 
It looks like all times in\n> > this system come ultimately from GetSnapshotCurrentTimestamp(), which\n> > uses that variable to make sure that time never goes backwards.\n>\n> Here's a draft TAP test that uses that technique successfully, as a\n> POC. It should probably be extended to cover more cases, but I\n> thought I'd check what people thought of the concept first before\n> going further. I didn't see a way to do overlapping transactions with\n> PostgresNode.pm, so I invented one (please excuse the bad perl); am I\n> missing something? Maybe it'd be better to do 002 with an isolation\n> test instead, but I suppose 001 can't be in an isolation test, since\n> it needs to connect to multiple databases, and it seemed better to do\n> them both the same way. It's also not entirely clear to me that\n> isolation tests can expect a database to be fresh and then mess with\n> dangerous internal state, whereas TAP tests set up and tear down a\n> cluster each time.\n>\n> I think I found another bug in MaintainOldSnapshotTimeMapping(): if\n> you make time jump by more than old_snapshot_threshold in one go, then\n> the map gets cleared and then no early pruning or snapshot-too-old\n> errors happen. That's why in 002_too_old.pl it currently advances\n> time by 10 minutes twice, instead of 20 minutes once. To be\n> continued.\n\nIMHO that doesn't seems to be a problem. Because even if we jump more\nthan old_snapshot_threshold in one go we don't clean complete map\nright. 
The latest snapshot timestamp will become the head_timestamp.\nSo in TransactionIdLimitedForOldSnapshots if (current_ts -\nold_snapshot_threshold) is still >= head_timestamp then we can still do\nearly pruning.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Sat, 18 Apr 2020 14:57:35 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": false, "msg_subject": "Re: fixing old_snapshot_threshold's time->xid mapping" }, { "msg_contents": "Hi,\n\nOn 2020-04-17 14:12:44 +1200, Thomas Munro wrote:\n> What about a contrib function that lets you clobber\n> oldSnapshotControl->current_timestamp? It looks like all times in\n> this system come ultimately from GetSnapshotCurrentTimestamp(), which\n> uses that variable to make sure that time never goes backwards.\n\nIt'd be better than the current test situation, and probably would be\ngood to have as part of testing anyway (since it'd allow us to make the\ntests not take long / not be racy on slow machines). But I still don't think\nit really allows to test the feature in a natural way. It makes it\neasier to test for known edge cases / problems, but not really discover\nunknown ones. For that I think we need more granular bins.\n\n- Andres\n\n\n", "msg_date": "Sat, 18 Apr 2020 13:17:15 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: fixing old_snapshot_threshold's time->xid mapping" }, { "msg_contents": "On Thu, Apr 16, 2020 at 10:12 PM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> Hi,\n>\n> I'm starting a new thread for this, because the recent discussion of\n> problems with old_snapshot_threshold[1] touched on a lot of separate\n> issues, and I think it will be too confusing if we discuss all of them\n> on one thread. 
Attached are three patches.\n>\n> 0001 makes oldSnapshotControl \"extern\" rather than \"static\" and\n> exposes the struct definition via a header.\n>\n> 0002 adds a contrib module called old_snapshot which makes it possible\n> to examine the time->XID mapping via SQL. As Andres said, the comments\n> are not really adequate in the existing code, and the code itself is\n> buggy, so it was a little hard to be sure that I was understanding the\n> intended meaning of the different fields correctly. However, I gave it\n> a shot.\n>\n> 0003 attempts to fix bugs in MaintainOldSnapshotTimeMapping() so that\n> it produces a sensible mapping. I encountered and tried to fix two\n> issues here:\n>\n> First, as previously discussed, the branch that advances the mapping\n> should not categorically do \"oldSnapshotControl->head_timestamp = ts;\"\n> assuming that the head_timestamp is supposed to be the timestamp for\n> the oldest bucket rather than the newest one. Rather, there are three\n> cases: (1) resetting the mapping resets head_timestamp, (2) extending\n> the mapping by an entry without dropping an entry leaves\n> head_timestamp alone, and (3) overwriting the previous head with a new\n> entry advances head_timestamp by 1 minute.\n>\n> Second, the calculation of the number of entries by which the mapping\n> should advance is incorrect. It thinks that it should advance by the\n> number of minutes between the current head_timestamp and the incoming\n> timestamp. That would be correct if head_timestamp were the most\n> recent entry in the mapping, but it's actually the oldest. As a\n> result, without this fix, every time we move into a new minute, we\n> advance the mapping much further than we actually should. 
Instead of\n> advancing by 1, we advance by the number of entries that already exist\n> in the mapping - which means we now have entries that correspond to\n> times which are in the future, and don't advance the mapping again\n> until those future timestamps are in the past.\n>\n> With these fixes, I seem to get reasonably sensible mappings, at least\n> in light testing. I tried running this in one window with \\watch 10:\n>\n> select *, age(newest_xmin), clock_timestamp() from\n> pg_old_snapshot_time_mapping();\n>\n> And in another window I ran:\n>\n> pgbench -T 300 -R 10\n>\n> And the age does in fact advance by ~600 transactions per minute.\n\nI have started reviewing these patches. I think, the fixes looks right to me.\n\n+ LWLockAcquire(OldSnapshotTimeMapLock, LW_SHARED);\n+ mapping->head_offset = oldSnapshotControl->head_offset;\n+ mapping->head_timestamp = oldSnapshotControl->head_timestamp;\n+ mapping->count_used = oldSnapshotControl->count_used;\n+ for (int i = 0; i < OLD_SNAPSHOT_TIME_MAP_ENTRIES; ++i)\n+ mapping->xid_by_minute[i] = oldSnapshotControl->xid_by_minute[i];\n+ LWLockRelease(OldSnapshotTimeMapLock);\n\nI think memcpy would be a better choice instead of looping it for all\nthe entries, since we are doing this under a lock?\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 20 Apr 2020 09:40:28 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": false, "msg_subject": "Re: fixing old_snapshot_threshold's time->xid mapping" }, { "msg_contents": "On Sat, Apr 18, 2020 at 9:27 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> On Sat, Apr 18, 2020 at 11:47 AM Thomas Munro <thomas.munro@gmail.com> wrote:\n> > I think I found another bug in MaintainOldSnapshotTimeMapping(): if\n> > you make time jump by more than old_snapshot_threshold in one go, then\n> > the map gets cleared and then no early pruning or snapshot-too-old\n> > errors happen. 
That's why in 002_too_old.pl it currently advances\n> > time by 10 minutes twice, instead of 20 minutes once. To be\n> > continued.\n>\n> IMHO that doesn't seems to be a problem. Because even if we jump more\n> than old_snapshot_threshold in one go we don't clean complete map\n> right. The latest snapshot timestamp will become the headtimestamp.\n> So in TransactionIdLimitedForOldSnapshots if (current_ts -\n> old_snapshot_threshold) is still >= head_timestap then we can still do\n> early pruning.\n\nRight, thanks. I got confused about that, and misdiagnosed something\nI was seeing.\n\nHere's a new version:\n\n0004: Instead of writing a new kind of TAP test to demonstrate\nsnapshot-too-old errors, I adjusted the existing isolation tests to\nuse the same absolute time control technique. Previously I had\ninvented a way to do isolation tester-like stuff in TAP tests, which\nmight be interesting but strange new perl is not necessary for this.\n\n0005: Truncates the time map when the CLOG is truncated. Its test is\nnow under src/test/module/snapshot_too_old/t/001_truncate.sql.\n\nThese apply on top of Robert's patches, but the only dependency is on\nhis patch 0001 \"Expose oldSnapshotControl.\", because now I have stuff\nin src/test/module/snapshot_too_old/test_sto.c that wants to mess with\nthat object too.\n\nIs this an improvement? I realise that there is still nothing to\nactually verify that early pruning has actually happened. 
I haven't\nthought of a good way to do that yet (stats, page inspection, ...).", "msg_date": "Mon, 20 Apr 2020 17:54:18 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: fixing old_snapshot_threshold's time->xid mapping" }, { "msg_contents": "On Mon, Apr 20, 2020 at 11:24 AM Thomas Munro <thomas.munro@gmail.com> wrote:\n>\n> On Sat, Apr 18, 2020 at 9:27 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> > On Sat, Apr 18, 2020 at 11:47 AM Thomas Munro <thomas.munro@gmail.com> wrote:\n> > > I think I found another bug in MaintainOldSnapshotTimeMapping(): if\n> > > you make time jump by more than old_snapshot_threshold in one go, then\n> > > the map gets cleared and then no early pruning or snapshot-too-old\n> > > errors happen. That's why in 002_too_old.pl it currently advances\n> > > time by 10 minutes twice, instead of 20 minutes once. To be\n> > > continued.\n> >\n> > IMHO that doesn't seems to be a problem. Because even if we jump more\n> > than old_snapshot_threshold in one go we don't clean complete map\n> > right. The latest snapshot timestamp will become the headtimestamp.\n> > So in TransactionIdLimitedForOldSnapshots if (current_ts -\n> > old_snapshot_threshold) is still >= head_timestap then we can still do\n> > early pruning.\n>\n> Right, thanks. I got confused about that, and misdiagnosed something\n> I was seeing.\n>\n> Here's a new version:\n>\n> 0004: Instead of writing a new kind of TAP test to demonstrate\n> snapshot-too-old errors, I adjusted the existing isolation tests to\n> use the same absolute time control technique. Previously I had\n> invented a way to do isolation tester-like stuff in TAP tests, which\n> might be interesting but strange new perl is not necessary for this.\n>\n> 0005: Truncates the time map when the CLOG is truncated. 
Its test is\n> now under src/test/module/snapshot_too_old/t/001_truncate.sql.\n>\n> These apply on top of Robert's patches, but the only dependency is on\n> his patch 0001 \"Expose oldSnapshotControl.\", because now I have stuff\n> in src/test/module/snapshot_too_old/test_sto.c that wants to mess with\n> that object too.\n>\n> Is this an improvement? I realise that there is still nothing to\n> actually verify that early pruning has actually happened. I haven't\n> thought of a good way to do that yet (stats, page inspection, ...).\n\nCould we test the early pruning using xid-burn patch? Basically, in\nxid_by_minute we have some xids with the current epoch. Now, we burns\nmore than 2b xid and then if we try to vacuum we might hit the case of\nearly pruning no. Do you wnated to this case or you had some other\ncase in mind which you wnated to test?\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 20 Apr 2020 12:04:44 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": false, "msg_subject": "Re: fixing old_snapshot_threshold's time->xid mapping" }, { "msg_contents": "On Mon, Apr 20, 2020 at 6:35 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> On Mon, Apr 20, 2020 at 11:24 AM Thomas Munro <thomas.munro@gmail.com> wrote:\n> >\n> > On Sat, Apr 18, 2020 at 9:27 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> > > On Sat, Apr 18, 2020 at 11:47 AM Thomas Munro <thomas.munro@gmail.com> wrote:\n> > Is this an improvement? I realise that there is still nothing to\n> > actually verify that early pruning has actually happened. I haven't\n> > thought of a good way to do that yet (stats, page inspection, ...).\n>\n> Could we test the early pruning using xid-burn patch? Basically, in\n> xid_by_minute we have some xids with the current epoch. Now, we burns\n> more than 2b xid and then if we try to vacuum we might hit the case of\n> early pruning no. 
Did you want to test this case, or did you have some other\ncase in mind that you wanted to test?\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 20 Apr 2020 12:04:44 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": false, "msg_subject": "Re: fixing old_snapshot_threshold's time->xid mapping" }, { "msg_contents": "On Mon, Apr 20, 2020 at 6:35 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> On Mon, Apr 20, 2020 at 11:24 AM Thomas Munro <thomas.munro@gmail.com> wrote:\n> >\n> > On Sat, Apr 18, 2020 at 9:27 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> > > On Sat, Apr 18, 2020 at 11:47 AM Thomas Munro <thomas.munro@gmail.com> wrote:\n> > Is this an improvement? I realise that there is still nothing to\n> > actually verify that early pruning has actually happened. I haven't\n> > thought of a good way to do that yet (stats, page inspection, ...).\n>\n> Could we test the early pruning using xid-burn patch? Basically, in\n> xid_by_minute we have some xids with the current epoch. Now, we burns\n> more than 2b xid and then if we try to vacuum we might hit the case of\n> early pruning no. 
An idea I just had: maybe\n> sto_using_select.spec should check the visibility map (somehow). For\n> example, the sto_using_select.spec (the version in the patch I just\n> posted) just checks that after time 00:11, the old snapshot gets a\n> snapshot-too-old error. Perhaps we could add a VACUUM before that,\n> and then check that the page has become all visible, meaning that the\n> dead tuple our snapshot could see has now been removed.\n\nOkay, got your point. Can we try to implement some test functions\nthat can just call visibilitymap_get_status function internally? I\nagree that we will have to pass the correct block number but that we\ncan find using TID. Or for testing, we can create a very small\nrelation that just has 1 block?\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 20 Apr 2020 13:31:55 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": false, "msg_subject": "Re: fixing old_snapshot_threshold's time->xid mapping" }, { "msg_contents": "On Mon, Apr 20, 2020 at 12:10 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> I have started reviewing these patches. 
I think, the fixes looks right to me.\n>\n> + LWLockAcquire(OldSnapshotTimeMapLock, LW_SHARED);\n> + mapping->head_offset = oldSnapshotControl->head_offset;\n> + mapping->head_timestamp = oldSnapshotControl->head_timestamp;\n> + mapping->count_used = oldSnapshotControl->count_used;\n> + for (int i = 0; i < OLD_SNAPSHOT_TIME_MAP_ENTRIES; ++i)\n> + mapping->xid_by_minute[i] = oldSnapshotControl->xid_by_minute[i];\n> + LWLockRelease(OldSnapshotTimeMapLock);\n>\n> I think memcpy would be a better choice instead of looping it for all\n> the entries, since we are doing this under a lock?\n\nWhen I did it that way, it complained about \"const\" and I couldn't\nimmediately figure out how to fix it.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Mon, 20 Apr 2020 14:01:07 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: fixing old_snapshot_threshold's time->xid mapping" }, { "msg_contents": "On Mon, Apr 20, 2020 at 8:02 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> On Mon, Apr 20, 2020 at 12:29 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> > I mean I want to verify that VACUUM or heap prune actually removed a\n> > tuple that was visible to an old snapshot. An idea I just had: maybe\n> > sto_using_select.spec should check the visibility map (somehow). For\n> > example, the sto_using_select.spec (the version in the patch I just\n> > posted) just checks that after time 00:11, the old snapshot gets a\n> > snapshot-too-old error. Perhaps we could add a VACUUM before that,\n> > and then check that the page has become all visible, meaning that the\n> > dead tuple our snapshot could see has now been removed.\n>\n> Okay, got your point. Can we try to implement some test functions\n> that can just call visibilitymap_get_status function internally? I\n> agree that we will have to pass the correct block number but that we\n> can find using TID. 
Or for testing, we can create a very small\n> relation that just has 1 block?\n\nI think it's enough to check SELECT EVERY(all_visible) FROM\npg_visibility_map('sto1'::regclass). I realised that\nsrc/test/module/snapshot_too_old is allowed to install\ncontrib/pg_visibility with EXTRA_INSTALL, so here's a new version to\ntry that idea. It extends sto_using_select.spec to VACUUM and check\nthe vis map at key times. That allows us to check that early pruning\nreally happens once the snapshot becomes too old. There are other\nways you could check that but this seems quite \"light touch\" compared\nto something based on page inspection.\n\nI also changed src/test/module/snapshot_too_old/t/001_truncate.pl back\nto using Robert's contrib/old_snapshot extension to know the size of\nthe time/xid map, allowing an introspection function to be dropped\nfrom test_sto.c.\n\nAs before, these two apply on top of Robert's patches (or at least his\n0001 and 0002).", "msg_date": "Tue, 21 Apr 2020 14:05:18 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: fixing old_snapshot_threshold's time->xid mapping" }, { "msg_contents": "On Mon, Apr 20, 2020 at 11:31 PM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> On Mon, Apr 20, 2020 at 12:10 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> > I have started reviewing these patches. 
I think, the fixes looks right to me.\n> >\n> > + LWLockAcquire(OldSnapshotTimeMapLock, LW_SHARED);\n> > + mapping->head_offset = oldSnapshotControl->head_offset;\n> > + mapping->head_timestamp = oldSnapshotControl->head_timestamp;\n> > + mapping->count_used = oldSnapshotControl->count_used;\n> > + for (int i = 0; i < OLD_SNAPSHOT_TIME_MAP_ENTRIES; ++i)\n> > + mapping->xid_by_minute[i] = oldSnapshotControl->xid_by_minute[i];\n> > + LWLockRelease(OldSnapshotTimeMapLock);\n> >\n> > I think memcpy would be a better choice instead of looping it for all\n> > the entries, since we are doing this under a lock?\n>\n> When I did it that way, it complained about \"const\" and I couldn't\n> immediately figure out how to fix it.\n\nI think we can typecast to (const void *). After below change, I did\nnot get the warning.\n\ndiff --git a/contrib/old_snapshot/time_mapping.c\nb/contrib/old_snapshot/time_mapping.c\nindex 37e0055..cc53bdd 100644\n--- a/contrib/old_snapshot/time_mapping.c\n+++ b/contrib/old_snapshot/time_mapping.c\n@@ -94,8 +94,9 @@ GetOldSnapshotTimeMapping(void)\n mapping->head_offset = oldSnapshotControl->head_offset;\n mapping->head_timestamp = oldSnapshotControl->head_timestamp;\n mapping->count_used = oldSnapshotControl->count_used;\n- for (int i = 0; i < OLD_SNAPSHOT_TIME_MAP_ENTRIES; ++i)\n- mapping->xid_by_minute[i] =\noldSnapshotControl->xid_by_minute[i];\n+ memcpy(mapping->xid_by_minute,\n+ (const void *) oldSnapshotControl->xid_by_minute,\n+ sizeof(TransactionId) * OLD_SNAPSHOT_TIME_MAP_ENTRIES);\n LWLockRelease(OldSnapshotTimeMapLock);\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 21 Apr 2020 09:54:29 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": false, "msg_subject": "Re: fixing old_snapshot_threshold's time->xid mapping" }, { "msg_contents": "On Tue, Apr 21, 2020 at 2:05 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> As before, these two apply on top of Robert's 
patches (or at least his\n> 0001 and 0002).\n\nWhile trying to figure out if Robert's 0003 patch was correct, I added\nyet another patch to this stack to test it. 0006 does basic xid map\nmaintenance that exercises the cases 0003 fixes, and I think it\ndemonstrates that they now work correctly. Also some minor perl\nimprovements to 0005. I'll attach 0001-0004 again but they are\nunchanged.\n\nSince confusion about head vs tail seems to have been at the root of\nthe bugs addressed by 0003, I wonder if we should also rename\nhead_{timestamp,offset} to oldest_{timestamp,offset}.", "msg_date": "Tue, 21 Apr 2020 22:14:01 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: fixing old_snapshot_threshold's time->xid mapping" }, { "msg_contents": "On Tue, Apr 21, 2020 at 3:44 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n>\n> On Tue, Apr 21, 2020 at 2:05 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> > As before, these two apply on top of Robert's patches (or at least his\n> > 0001 and 0002).\n>\n> While trying to figure out if Robert's 0003 patch was correct, I added\n> yet another patch to this stack to test it. 0006 does basic xid map\n> maintenance that exercises the cases 0003 fixes, and I think it\n> demonstrates that they now work correctly.\n\n+1, I think we should also add a way to test the case, where we\nadvance the timestamp by multiple slots. 
I see that you have such a case,\ne.g.\n+# test adding minutes while the map is not full\n+set_time('3000-01-01 02:01:00Z');\n+is(summarize_mapping(), \"2|02:00:00|02:01:00\");\n+set_time('3000-01-01 02:05:00Z');\n+is(summarize_mapping(), \"6|02:00:00|02:05:00\");\n+set_time('3000-01-01 02:19:00Z');\n+is(summarize_mapping(), \"20|02:00:00|02:19:00\");\n\nBut I think we should try to extend it to test that we have put the\nnew xid only in those slots where we're supposed to, and not in other\nslots.\n\n\n--\nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 21 Apr 2020 16:52:44 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": false, "msg_subject": "Re: fixing old_snapshot_threshold's time->xid mapping" }, { "msg_contents": "On Tue, Apr 21, 2020 at 4:52 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>\n> On Tue, Apr 21, 2020 at 3:44 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> >\n> > On Tue, Apr 21, 2020 at 2:05 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> > > As before, these two apply on top of Robert's patches (or at least his\n> > > 0001 and 0002).\n> >\n> > While trying to figure out if Robert's 0003 patch was correct, I added\n> > yet another patch to this stack to test it. 0006 does basic xid map\n> > maintenance that exercises the cases 0003 fixes, and I think it\n> > demonstrates that they now work correctly.\n>\n> +1, I think we should also add a way to test the case, where we\n> advance the timestamp by multiple slots. 
I see that you have such a case,\n> e.g.\n> +# test adding minutes while the map is not full\n> +set_time('3000-01-01 02:01:00Z');\n> +is(summarize_mapping(), \"2|02:00:00|02:01:00\");\n> +set_time('3000-01-01 02:05:00Z');\n> +is(summarize_mapping(), \"6|02:00:00|02:05:00\");\n> +set_time('3000-01-01 02:19:00Z');\n> +is(summarize_mapping(), \"20|02:00:00|02:19:00\");\n>\n> But I think we should try to extend it to test that we have put the\n> new xid only in those slots where we're supposed to, and not in other\n> slots.\n\nI feel that we should. We should probably fix this check as well, because\nif ts > update_ts then it will go to the else part and finally end up in\nthe last slot anyway, so I think we can use this case as a fast exit too.\n\ndiff --git a/src/backend/utils/time/snapmgr.c b/src/backend/utils/time/snapmgr.c\nindex 93a0c04..644d9b1 100644\n--- a/src/backend/utils/time/snapmgr.c\n+++ b/src/backend/utils/time/snapmgr.c\n@@ -1831,7 +1831,7 @@\nTransactionIdLimitedForOldSnapshots(TransactionId recentXmin,\n\n         if (!same_ts_as_threshold)\n         {\n-            if (ts == update_ts)\n+            if (ts >= update_ts)\n             {\n                 xlimit = latest_xmin;\n                 if (NormalTransactionIdFollows(xlimit,\nrecentXmin))\n\nThis patch can be applied on top of the other v5 patches.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Wed, 22 Apr 2020 11:09:20 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": false, "msg_subject": "Re: fixing old_snapshot_threshold's time->xid mapping" }, { "msg_contents": "On Wed, Apr 22, 2020 at 5:39 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> - if (ts == update_ts)\n> + if (ts >= update_ts)\n\nHi Dilip, I didn't follow this bit -- could you explain?\n\nHere's a rebase. In the 0004 patch I chose to leave behind some\nunnecessary braces to avoid reindenting a bunch of code after removing\nan if branch, just for ease of review, but I'd probably remove those\nin a committed version. 
I'm going to add this stuff to the next CF so\nwe don't lose track of it.", "msg_date": "Fri, 14 Aug 2020 12:52:29 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: fixing old_snapshot_threshold's time->xid mapping" }, { "msg_contents": "On Fri, Aug 14, 2020 at 12:52 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> Here's a rebase.\n\nAnd another, since I was too slow and v6 is already in conflict...\nsorry for the high frequency patches.", "msg_date": "Fri, 14 Aug 2020 13:04:03 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: fixing old_snapshot_threshold's time->xid mapping" }, { "msg_contents": "On Fri, Aug 14, 2020 at 1:04 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> On Fri, Aug 14, 2020 at 12:52 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> > Here's a rebase.\n>\n> And another, since I was too slow and v6 is already in conflict...\n> sorry for the high frequency patches.\n\nAnd ... now that this has a commitfest entry, cfbot told me about a\nsmall problem in a makefile. Third time lucky?", "msg_date": "Sat, 15 Aug 2020 10:09:15 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: fixing old_snapshot_threshold's time->xid mapping" }, { "msg_contents": "On Sat, Aug 15, 2020 at 10:09:15AM +1200, Thomas Munro wrote:\n> And ... now that this has a commitfest entry, cfbot told me about a\n> small problem in a makefile. Third time lucky?\n\nStill lucky since then, and the CF bot does not complain. So... The\nmeat of the patch is in 0003 which is fixing an actual bug. Robert,\nThomas, anything specific you are waiting for here? As this is a bug\nfix, perhaps it would be better to just move on with some portions of\nthe set?\n\nKevin, I really think that you should chime in here. 
This is\noriginally your feature.\n--\nMichael", "msg_date": "Thu, 17 Sep 2020 14:47:40 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: fixing old_snapshot_threshold's time->xid mapping" }, { "msg_contents": "On Thu, Sep 17, 2020 at 1:47 AM Michael Paquier <michael@paquier.xyz> wrote:\n> On Sat, Aug 15, 2020 at 10:09:15AM +1200, Thomas Munro wrote:\n> > And ... now that this has a commitfest entry, cfbot told me about a\n> > small problem in a makefile. Third time lucky?\n>\n> Still lucky since then, and the CF bot does not complain. So... The\n> meat of the patch is in 0003 which is fixing an actual bug. Robert,\n> Thomas, anything specific you are waiting for here? As this is a bug\n> fix, perhaps it would be better to just move on with some portions of\n> the set?\n\nYeah, I plan to push forward with 0001 through 0003 soon, but 0001\nneeds to be revised with a PGDLLIMPORT marking, I think, and 0002\nneeds documentation. Not sure whether there's going to be adequate\nsupport for back-patching given that it's adding a new contrib module\nfor observability and not just fixing a bug, so my tentative plan is\nto just push into master. 
If there is a great clamor for back-patching\nthen I can, but I'm not very excited about pushing the bug fix into\nthe back-branches without the observability stuff, because then if\nsomebody claims that it's not working properly, it'll be almost\nimpossible to understand why.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Thu, 17 Sep 2020 10:40:09 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: fixing old_snapshot_threshold's time->xid mapping" }, { "msg_contents": "On Thu, Sep 17, 2020 at 10:40 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> Yeah, I plan to push forward with 0001 through 0003 soon, but 0001\n> needs to be revised with a PGDLLIMPORT marking, I think, and 0002\n> needs documentation.\n\nSo here's an updated version of those three, with proposed commit\nmessages, a PGDLLIMPORT for 0001, and docs for 0002.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company", "msg_date": "Fri, 18 Sep 2020 20:19:39 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: fixing old_snapshot_threshold's time->xid mapping" }, { "msg_contents": "On Sat, Sep 19, 2020 at 12:19 PM Robert Haas <robertmhaas@gmail.com> wrote:\n> On Thu, Sep 17, 2020 at 10:40 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> > Yeah, I plan to push forward with 0001 through 0003 soon, but 0001\n> > needs to be revised with a PGDLLIMPORT marking, I think, and 0002\n> > needs documentation.\n>\n> So here's an updated version of those three, with proposed commit\n> messages, a PGDLLIMPORT for 0001, and docs for 0002.\n\nLGTM.\n\n\n", "msg_date": "Thu, 24 Sep 2020 13:15:50 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: fixing old_snapshot_threshold's time->xid mapping" }, { "msg_contents": "The following review has been posted through 
the commitfest application:\nmake installcheck-world: tested, passed\nImplements feature: not tested\nSpec compliant: not tested\nDocumentation: not tested\n\nPatch looks good to me.", "msg_date": "Thu, 24 Sep 2020 13:14:28 +0000", "msg_from": "Hamid Akhtar <hamid.akhtar@gmail.com>", "msg_from_op": false, "msg_subject": "Re: fixing old_snapshot_threshold's time->xid mapping" }, { "msg_contents": "On Wed, Sep 23, 2020 at 9:16 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> LGTM.\n\nCommitted.\n\nThomas, with respect to your part of this patch set, I wonder if we\ncan make the functions that you're using to write tests safe enough\nthat we could add them to contrib/old_snapshot and let users run them\nif they want. As you have them, they are hedged around with vague and\nscary warnings, but is that really justified? And if so, can it be\nfixed? It would be nicer not to end up with two loadable modules here,\nand maybe the right sorts of functions could even have some practical\nuse.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Thu, 24 Sep 2020 15:46:14 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: fixing old_snapshot_threshold's time->xid mapping" }, { "msg_contents": "On Thu, Sep 24, 2020 at 03:46:14PM -0400, Robert Haas wrote:\n> Committed.\n\nCool, thanks.\n\n> Thomas, with respect to your part of this patch set, I wonder if we\n> can make the functions that you're using to write tests safe enough\n> that we could add them to contrib/old_snapshot and let users run them\n> if they want. As you have them, they are hedged around with vague and\n> scary warnings, but is that really justified? And if so, can it be\n> fixed? 
It would be nicer not to end up with two loadable modules here,\n> and maybe the right sorts of functions could even have some practical\n> use.\n\nI have switched this item as waiting on author in the CF app then, as\nwe are not completely done yet.\n--\nMichael", "msg_date": "Fri, 25 Sep 2020 11:00:10 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: fixing old_snapshot_threshold's time->xid mapping" }, { "msg_contents": "On Fri, Sep 25, 2020 at 2:00 PM Michael Paquier <michael@paquier.xyz> wrote:\n> On Thu, Sep 24, 2020 at 03:46:14PM -0400, Robert Haas wrote:\n> > Committed.\n>\n> Cool, thanks.\n\n+1\n\n> > Thomas, with respect to your part of this patch set, I wonder if we\n> > can make the functions that you're using to write tests safe enough\n> > that we could add them to contrib/old_snapshot and let users run them\n> > if they want. As you have them, they are hedged around with vague and\n> > scary warnings, but is that really justified? And if so, can it be\n> > fixed? It would be nicer not to end up with two loadable modules here,\n> > and maybe the right sorts of functions could even have some practical\n> > use.\n\nYeah, you may be right. I am thinking about that. In the meantime,\nhere is a rebase. A quick recap of these remaining patches:\n\n0001 replaces the current \"magic test mode\" that didn't really test\nanything with a new test mode that verifies pruning and STO behaviour.\n0002 fixes a separate bug that Andres reported: the STO XID map\nsuffers from wraparound-itis.\n0003 adds a simple smoke test for Robert's commit 55b7e2f4. Before\nthat fix landed, it failed.\n\n> I have switched this item as waiting on author in the CF app then, as\n> we are not completely done yet.\n\nThanks. 
For the record, I think there is still one more complaint\nfrom Andres that remains unaddressed even once these are in the tree:\nthere are thought to be some more places that lack\nTestForOldSnapshot() calls.", "msg_date": "Tue, 6 Oct 2020 18:32:57 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: fixing old_snapshot_threshold's time->xid mapping" }, { "msg_contents": "On 06.10.2020 08:32, Thomas Munro wrote:\n> On Fri, Sep 25, 2020 at 2:00 PM Michael Paquier <michael@paquier.xyz> wrote:\n>> On Thu, Sep 24, 2020 at 03:46:14PM -0400, Robert Haas wrote:\n>>\n>>> Thomas, with respect to your part of this patch set, I wonder if we\n>>> can make the functions that you're using to write tests safe enough\n>>> that we could add them to contrib/old_snapshot and let users run them\n>>> if they want. As you have them, they are hedged around with vague and\n>>> scary warnings, but is that really justified? And if so, can it be\n>>> fixed? It would be nicer not to end up with two loadable modules here,\n>>> and maybe the right sorts of functions could even have some practical\n>>> use.\n> Yeah, you may be right. I am thinking about that. In the meantime,\n> here is a rebase. A quick recap of these remaining patches:\n>\n> 0001 replaces the current \"magic test mode\" that didn't really test\n> anything with a new test mode that verifies pruning and STO behaviour.\n> 0002 fixes a separate bug that Andres reported: the STO XID map\n> suffers from wraparound-itis.\n> 0003 adds a simple smoke test for Robert's commit 55b7e2f4. Before\n> that fix landed, it failed.\n>\n>> I have switched this item as waiting on author in the CF app then, as\n>> we are not completely done yet.\n> Thanks. 
For the record, I think there is still one more complaint\n> from Andres that remains unaddressed even once these are in the tree:\n> there are thought to be some more places that lack\n> TestForOldSnapshot() calls.\n\nStatus update for a commitfest entry.\n\nThis entry is \"Waiting on author\" and the thread was inactive for a \nwhile.  As far as I see, part of the fixes is already committed. Is \nthere anything left to work on or this patch set needs review now?\n\n-- \nAnastasia Lubennikova\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company\n\n\n\n", "msg_date": "Thu, 26 Nov 2020 13:03:56 +0300", "msg_from": "Anastasia Lubennikova <a.lubennikova@postgrespro.ru>", "msg_from_op": false, "msg_subject": "Re: fixing old_snapshot_threshold's time->xid mapping" } ]
[ { "msg_contents": "Hi,\n\nAvoiding some calls and set vars, when it is not necessary.\n\nbest regards,\nRanier Vilela", "msg_date": "Thu, 16 Apr 2020 19:59:41 -0300", "msg_from": "Ranier Vilela <ranier.vf@gmail.com>", "msg_from_op": true, "msg_subject": "[PATCH] Tiny optimization on nbtinsert.c" } ]
[ { "msg_contents": "Hi,\n\nWhen multiplying variables, the overflow will take place anyway, and only\nthen will the meaningless product be explicitly promoted to type int64.\nIt is one of the operands that should have been cast instead to avoid the\noverflow.\n\nregards,\nRanier Vilela", "msg_date": "Thu, 16 Apr 2020 20:54:32 -0300", "msg_from": "Ranier Vilela <ranier.vf@gmail.com>", "msg_from_op": true, "msg_subject": "[PATCH] Fix possible overflow on tuplesort.c" }, { "msg_contents": "On 2020-Apr-16, Ranier Vilela wrote:\n\n> When multiplying variables, the overflow will take place anyway, and only\n> then will the meaningless product be explicitly promoted to type int64.\n> It is one of the operands that should have been cast instead to avoid the\n> overflow.\n>\n> - if (state->availMem < (int64) ((newmemtupsize - memtupsize) * sizeof(SortTuple)))\n> + if (state->availMem < ((int64) (newmemtupsize - memtupsize) * sizeof(SortTuple)))\n\nDoesn't sizeof() return a 64-bit wide value already?\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Thu, 23 Apr 2020 15:43:14 -0400", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Fix possible overflow on tuplesort.c" }, { "msg_contents": "Alvaro Herrera <alvherre@2ndquadrant.com> writes:\n>> When multiplying variables, the overflow will take place anyway, and only\n>> then will the meaningless product be explicitly promoted to type int64.\n>> It is one of the operands that should have been cast instead to avoid the\n>> overflow.\n>> \n>> - if (state->availMem < (int64) ((newmemtupsize - memtupsize) * sizeof(SortTuple)))\n>> + if (state->availMem < ((int64) (newmemtupsize - memtupsize) * sizeof(SortTuple)))\n\n> Doesn't sizeof() return a 64-bit wide value already?\n\nNot on 32-bit machines. 
However, on a 32-bit machine the clamp just\nabove here would prevent overflow anyway. In general, said clamp\nensures that the value computed here is less than MaxAllocHugeSize,\nso computing it in size_t width is enough. So in fact an overflow is\nimpossible here, but it requires looking at more than this one line of\ncode to see it. I would expect a static analyzer to understand it though.\n\nI think the actual point of this cast is to ensure that the comparison to\navailMem is done in signed not unsigned arithmetic --- which is critical\nbecause availMem might be negative. The proposed change would indeed\nbreak that, since multiplying a signed value by size_t is presumably going\nto produce an unsigned value. We could use two casts, but I don't see the\npoint.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 23 Apr 2020 15:57:39 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Fix possible overflow on tuplesort.c" }, { "msg_contents": "Em qui., 23 de abr. de 2020 às 16:43, Alvaro Herrera <\nalvherre@2ndquadrant.com> escreveu:\n\n> On 2020-Apr-16, Ranier Vilela wrote:\n>\n> > When multiplying variables, the overflow will take place anyway, and only\n> > then will the meaningless product be explicitly promoted to type int64.\n> > It is one of the operands that should have been cast instead to avoid the\n> > overflow.\n> >\n> > - if (state->availMem < (int64) ((newmemtupsize - memtupsize) *\n> sizeof(SortTuple)))\n> > + if (state->availMem < ((int64) (newmemtupsize - memtupsize) *\n> sizeof(SortTuple)))\n>\n> Doesn't sizeof() return a 64-bit wide value already?\n>\nSizeof return size_t.\nBoth versions are constant expressions of type std::size_t\n<https://en.cppreference.com/w/cpp/types/size_t>.\n\nregards,\nRanier Vilela\n\nEm qui., 23 de abr. 
de 2020 às 16:43, Alvaro Herrera <alvherre@2ndquadrant.com> escreveu:On 2020-Apr-16, Ranier Vilela wrote:\n\n> When multiplying variables, the overflow will take place anyway, and only\n> then will the meaningless product be explicitly promoted to type int64.\n> It is one of the operands that should have been cast instead to avoid the\n> overflow.\n>\n> -   if (state->availMem < (int64) ((newmemtupsize - memtupsize) * sizeof(SortTuple)))\n> +   if (state->availMem < ((int64) (newmemtupsize - memtupsize) * sizeof(SortTuple)))\n\nDoesn't sizeof() return a 64-bit wide value already?Sizeof return size_t.\nBoth versions are constant expressions of type std::size_t. \n\n\n regards,Ranier Vilela", "msg_date": "Thu, 23 Apr 2020 17:03:47 -0300", "msg_from": "Ranier Vilela <ranier.vf@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Fix possible overflow on tuplesort.c" } ]
[ { "msg_contents": "Hi, \n\nThe document(high-availability.sgml) says that there are only two ways \nto exit standby mode.\n\n 26.2.2. Standby Server Operation\n Standby mode is exited and the server switches to normal operation when \npg_ctl promote is run or a trigger file is found (promote_trigger_file).\n\nBut there is another way, by calling pg_promote function.\nI think we need to document it, doesn't it?\n\nI attached a patch. Please review and let me know your thoughts.\n\nRegards,\nMasahiro Ikeda", "msg_date": "Fri, 17 Apr 2020 13:11:31 +0900", "msg_from": "ikedamsh <ikedamsh@oss.nttdata.com>", "msg_from_op": true, "msg_subject": "It is not documented that pg_promote can exit standby mode" }, { "msg_contents": "\n\nOn 2020/04/17 13:11, ikedamsh wrote:\n> Hi,\n> \n> The document(high-availability.sgml) says that there are only two ways to exit standby mode.\n> \n>  26.2.2. Standby Server Operation\n>  Standby mode is exited and the server switches to normal operation when pg_ctl promote is run or a trigger file is found (promote_trigger_file).\n> \n> But there is another way, by calling pg_promote function.\n> I think we need to document it, doesn't it?\n> \n> I attached a patch. Please review and let me know your thoughts.\n\nThanks for the report and the patch! It looks good to me.\nBarring any objection, I will commit this patch.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Fri, 17 Apr 2020 13:40:02 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: It is not documented that pg_promote can exit standby mode" }, { "msg_contents": "On Fri, Apr 17, 2020 at 01:40:02PM +0900, Fujii Masao wrote:\n> Thanks for the report and the patch! 
It looks good to me.\n> Barring any objection, I will commit this patch.\n\n+1.\n--\nMichael", "msg_date": "Fri, 17 Apr 2020 13:54:39 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: It is not documented that pg_promote can exit standby mode" }, { "msg_contents": "On Fri, 2020-04-17 at 13:54 +0900, Michael Paquier wrote:\n> On Fri, Apr 17, 2020 at 01:40:02PM +0900, Fujii Masao wrote:\n> > Thanks for the report and the patch! It looks good to me.\n> > Barring any objection, I will commit this patch.\n> \n> +1.\n\n+1. That was my omission in the original patch.\n\nYours,\nLaurenz Albe\n\n\n\n", "msg_date": "Fri, 17 Apr 2020 08:51:15 +0200", "msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>", "msg_from_op": false, "msg_subject": "Re: It is not documented that pg_promote can exit standby mode" }, { "msg_contents": "Fujii Masao <masao.fujii@oss.nttdata.com> writes:\n> Thanks for the report and the patch! It looks good to me.\n> Barring any objection, I will commit this patch.\n\nIt might be worth writing \"<function>pg_promote()</function> is called\"\n(adding parentheses) to make it clearer that a function is being\nreferred to. No objection otherwise.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 17 Apr 2020 13:46:37 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: It is not documented that pg_promote can exit standby mode" }, { "msg_contents": "On 2020/04/18 2:46, Tom Lane wrote:\n> Fujii Masao <masao.fujii@oss.nttdata.com> writes:\n>> Thanks for the report and the patch! It looks good to me.\n>> Barring any objection, I will commit this patch.\n> \n> It might be worth writing \"<function>pg_promote()</function> is called\"\n> (adding parentheses) to make it clearer that a function is being\n> referred to. No objection otherwise.\n\nYes. Also Masahiro-san reported me, off-list, that there are other places\nwhere pg_promote is mentioned without parentheses. 
I think it's better to\nadd parentheses there. Attached is the updated version of the patch.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION", "msg_date": "Mon, 20 Apr 2020 20:38:36 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: It is not documented that pg_promote can exit standby mode" }, { "msg_contents": "\n\nOn 2020/04/20 20:38, Fujii Masao wrote:\n> \n> \n> On 2020/04/18 2:46, Tom Lane wrote:\n>> Fujii Masao <masao.fujii@oss.nttdata.com> writes:\n>>> Thanks for the report and the patch! It looks good to me.\n>>> Barring any objection, I will commit this patch.\n>>\n>> It might be worth writing \"<function>pg_promote()</function> is called\"\n>> (adding parentheses) to make it clearer that a function is being\n>> referred to.  No objection otherwise.\n> \n> Yes. Also Masahiro-san reported me, off-list, that there are other places\n> where pg_promote is mentioned without parentheses. I think it's better to\n> add parentheses there. Attached is the updated version of the patch.\n\nPushed. Thanks!\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Tue, 21 Apr 2020 14:07:09 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: It is not documented that pg_promote can exit standby mode" }, { "msg_contents": "Hi,\n\nThere is the comment which related function name is not same.\nI attached the patch to fix it. 
Please review.\n\nRegards,\n\n-- \nMasahiro Ikeda\nNTT DATA CORPORATION", "msg_date": "Tue, 07 Jul 2020 11:50:10 +0900", "msg_from": "Masahiro Ikeda <ikedamsh@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "change a function name in a comment correctly" }, { "msg_contents": "\n\nOn 2020/07/07 11:50, Masahiro Ikeda wrote:\n> Hi,\n> \n> There is the comment which related function name is not same.\n> I attached the patch to fix it. Please review.\n\nThanks for the report and patch! LGTM.\nI will commit this later.\n\nRegards,\n\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Tue, 7 Jul 2020 12:00:30 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: change a function name in a comment correctly" }, { "msg_contents": ">> There is the comment which related function name is not same.\n>> I attached the patch to fix it. Please review.\n> \n> Thanks for the report and patch! LGTM.\n> I will commit this later.\n\nThanks for checking.\n\nRegards,\n-- \nMasahiro Ikeda\nNTT DATA CORPORATION\n\n\n", "msg_date": "Wed, 08 Jul 2020 08:12:45 +0900", "msg_from": "Masahiro Ikeda <ikedamsh@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: change a function name in a comment correctly" }, { "msg_contents": "\n\nOn 2020/07/08 8:12, Masahiro Ikeda wrote:\n>>> There is the comment which related function name is not same.\n>>> I attached the patch to fix it. Please review.\n>>\n>> Thanks for the report and patch! LGTM.\n>> I will commit this later.\n\nPushed. Thanks!\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Wed, 8 Jul 2020 11:01:46 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: change a function name in a comment correctly" } ]
[ { "msg_contents": "Hello.\n\nRecently a cache reference leak was reported then fixed [1].\n\nI happened to notice a similar possible leakage in\nremoveEtObjInitPriv. I haven't found a way to reach the code, but can\nbe forcibly caused by tweaking the condition.\n\nPlease find the attached.\n\nregards.\n\n[1] https://www.postgresql.org/message-id/BYAPR08MB5606D1453D7F50E2AF4D2FD29AD80@BYAPR08MB5606.namprd08.prod.outlook.com", "msg_date": "Fri, 17 Apr 2020 15:18:31 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": true, "msg_subject": "Possible cache reference leak by removeExtObjInitPriv" }, { "msg_contents": "Kyotaro Horiguchi <horikyota.ntt@gmail.com> writes:\n> Recently a cache reference leak was reported then fixed [1].\n> I happened to notice a similar possible leakage in\n> removeEtObjInitPriv. I haven't found a way to reach the code, but can\n> be forcibly caused by tweaking the condition.\n> Please find the attached.\n\nUgh. recordExtObjInitPriv has the same problem.\n\nI wonder whether there is any way to teach Coverity, or some other\nstatic analyzer, to look for code paths that leak cache refcounts.\nIt seems isomorphic to detecting memory leaks, which Coverity is\nreasonably good at.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 17 Apr 2020 13:07:15 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Possible cache reference leak by removeExtObjInitPriv" }, { "msg_contents": "At Fri, 17 Apr 2020 13:07:15 -0400, Tom Lane <tgl@sss.pgh.pa.us> wrote in \n> Kyotaro Horiguchi <horikyota.ntt@gmail.com> writes:\n> > Recently a cache reference leak was reported then fixed [1].\n> > I happened to notice a similar possible leakage in\n> > removeEtObjInitPriv. I haven't found a way to reach the code, but can\n> > be forcibly caused by tweaking the condition.\n> > Please find the attached.\n> \n> Ugh. 
recordExtObjInitPriv has the same problem.\n\nThanks for commit it.\n\n> I wonder whether there is any way to teach Coverity, or some other\n> static analyzer, to look for code paths that leak cache refcounts.\n> It seems isomorphic to detecting memory leaks, which Coverity is\n> reasonably good at.\n\nIndeed.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Mon, 20 Apr 2020 17:28:29 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Possible cache reference leak by removeExtObjInitPriv" }, { "msg_contents": "Hi,\nstrncpy, it is not a safe function and has the risk of corrupting memory.\nOn ecpg lib, two sources, make use of strncpy risk, this patch tries to fix.\n\n1. Make room for the last null-characte;\n2. Copies Maximum number of characters - 1.\n\nper Coverity.\n\nregards,\nRanier Vilela", "msg_date": "Wed, 22 Apr 2020 19:48:07 -0300", "msg_from": "Ranier Vilela <ranier.vf@gmail.com>", "msg_from_op": false, "msg_subject": "[PATCH] Fix buffer not null terminated on (ecpg lib)" }, { "msg_contents": "Hello.\n\nAt Wed, 22 Apr 2020 19:48:07 -0300, Ranier Vilela <ranier.vf@gmail.com> wrote in \n> Hi,\n> strncpy, it is not a safe function and has the risk of corrupting memory.\n> On ecpg lib, two sources, make use of strncpy risk, this patch tries to fix.\n> \n> 1. Make room for the last null-characte;\n> 2. Copies Maximum number of characters - 1.\n> \n> per Coverity.\n\n-\tstrncpy(sqlca->sqlstate, sqlstate, sizeof(sqlca->sqlstate));\n+\tsqlca->sqlstate[sizeof(sqlca->sqlstate) - 1] = '\\0';\n+\tstrncpy(sqlca->sqlstate, sqlstate, sizeof(sqlca->sqlstate) - 1);\n\nDid you look at the definition and usages of the struct member?\nsqlstate is a char[5], which is to be filled with 5-letter SQLSTATE\ncode not terminated by NUL, which can be shorter if NUL is found\nanywhere (I'm not sure there's actually a case of a shorter state\ncode). 
If you put NUL to the 5th element of the array, you break the\ncontent. The existing code looks perfect to me.\n\n-\tstrncpy(sqlca->sqlerrm.sqlerrmc, message, sizeof(sqlca->sqlerrm.sqlerrmc));\n-\tsqlca->sqlerrm.sqlerrmc[sizeof(sqlca->sqlerrm.sqlerrmc) - 1] = 0;\n+\tsqlca->sqlerrm.sqlerrmc[sizeof(sqlca->sqlerrm.sqlerrmc) - 1] = '\\0';\n+\tstrncpy(sqlca->sqlerrm.sqlerrmc, message, sizeof(sqlca->sqlerrm.sqlerrmc) - 1);\n\nThe existing strncpy then terminating by NUL works fine. I don't think\nthere's any point in doing the reverse way. Actually\nsizeof(sqlca->sqlerrm.sqlerrmc) - 1 is enough for the length but the\nexisting code is not necessarily a bug.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Thu, 23 Apr 2020 11:27:25 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Fix buffer not null terminated on (ecpg lib)" }, { "msg_contents": "Em qua., 22 de abr. de 2020 às 23:27, Kyotaro Horiguchi <\nhorikyota.ntt@gmail.com> escreveu:\n\n> Hello.\n>\n> At Wed, 22 Apr 2020 19:48:07 -0300, Ranier Vilela <ranier.vf@gmail.com>\n> wrote in\n> > Hi,\n> > strncpy, it is not a safe function and has the risk of corrupting memory.\n> > On ecpg lib, two sources, make use of strncpy risk, this patch tries to\n> fix.\n> >\n> > 1. Make room for the last null-characte;\n> > 2. Copies Maximum number of characters - 1.\n> >\n> > per Coverity.\n>\n> - strncpy(sqlca->sqlstate, sqlstate, sizeof(sqlca->sqlstate));\n> + sqlca->sqlstate[sizeof(sqlca->sqlstate) - 1] = '\\0';\n> + strncpy(sqlca->sqlstate, sqlstate, sizeof(sqlca->sqlstate) - 1);\n>\n> Did you look at the definition and usages of the struct member?\n> sqlstate is a char[5], which is to be filled with 5-letter SQLSTATE\n> code not terminated by NUL, which can be shorter if NUL is found\n> anywhere (I'm not sure there's actually a case of a shorter state\n> code). 
If you put NUL to the 5th element of the array, you break the\n> content. The existing code looks perfect to me.\n>\nSorry, you are right.\n\n>\n> - strncpy(sqlca->sqlerrm.sqlerrmc, message,\n> sizeof(sqlca->sqlerrm.sqlerrmc));\n> - sqlca->sqlerrm.sqlerrmc[sizeof(sqlca->sqlerrm.sqlerrmc) - 1] = 0;\n> + sqlca->sqlerrm.sqlerrmc[sizeof(sqlca->sqlerrm.sqlerrmc) - 1] =\n> '\\0';\n> + strncpy(sqlca->sqlerrm.sqlerrmc, message,\n> sizeof(sqlca->sqlerrm.sqlerrmc) - 1);\n>\n> The existing strncpy then terminating by NUL works fine. I don't think\n> there's any point in doing the reverse way. Actually\n> sizeof(sqlca->sqlerrm.sqlerrmc) - 1 is enough for the length but the\n> existing code is not necessarily a bug.\n>\nWithout understanding then, why Coveriy claims bug here.\n\nregards,\nRanier Vilela\n\nEm qua., 22 de abr. de 2020 às 23:27, Kyotaro Horiguchi <horikyota.ntt@gmail.com> escreveu:Hello.\n\nAt Wed, 22 Apr 2020 19:48:07 -0300, Ranier Vilela <ranier.vf@gmail.com> wrote in \n> Hi,\n> strncpy, it is not a safe function and has the risk of corrupting memory.\n> On ecpg lib, two sources, make use of strncpy risk, this patch tries to fix.\n> \n> 1. Make room for the last null-characte;\n> 2. Copies Maximum number of characters - 1.\n> \n> per Coverity.\n\n-       strncpy(sqlca->sqlstate, sqlstate, sizeof(sqlca->sqlstate));\n+       sqlca->sqlstate[sizeof(sqlca->sqlstate) - 1] = '\\0';\n+       strncpy(sqlca->sqlstate, sqlstate, sizeof(sqlca->sqlstate) - 1);\n\nDid you look at the definition and usages of the struct member?\nsqlstate is a char[5], which is to be filled with 5-letter SQLSTATE\ncode not terminated by NUL, which can be shorter if NUL is found\nanywhere (I'm not sure there's actually a case of a shorter state\ncode). If you put NUL to the 5th element of the array, you break the\ncontent.  The existing code looks perfect to me.Sorry, you are right. 
\n\n-       strncpy(sqlca->sqlerrm.sqlerrmc, message, sizeof(sqlca->sqlerrm.sqlerrmc));\n-       sqlca->sqlerrm.sqlerrmc[sizeof(sqlca->sqlerrm.sqlerrmc) - 1] = 0;\n+       sqlca->sqlerrm.sqlerrmc[sizeof(sqlca->sqlerrm.sqlerrmc) - 1] = '\\0';\n+       strncpy(sqlca->sqlerrm.sqlerrmc, message, sizeof(sqlca->sqlerrm.sqlerrmc) - 1);\n\nThe existing strncpy then terminating by NUL works fine. I don't think\nthere's any point in doing the reverse way.  Actually\nsizeof(sqlca->sqlerrm.sqlerrmc) - 1 is enough for the length but the\nexisting code is not necessarily a bug.Without understanding then, why Coveriy claims bug here.regards,Ranier Vilela", "msg_date": "Thu, 23 Apr 2020 01:21:21 -0300", "msg_from": "Ranier Vilela <ranier.vf@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Fix buffer not null terminated on (ecpg lib)" }, { "msg_contents": "At Thu, 23 Apr 2020 01:21:21 -0300, Ranier Vilela <ranier.vf@gmail.com> wrote in \n> Em qua., 22 de abr. de 2020 às 23:27, Kyotaro Horiguchi <\n> horikyota.ntt@gmail.com> escreveu:\n> >\n> > - strncpy(sqlca->sqlerrm.sqlerrmc, message,\n> > sizeof(sqlca->sqlerrm.sqlerrmc));\n> > - sqlca->sqlerrm.sqlerrmc[sizeof(sqlca->sqlerrm.sqlerrmc) - 1] = 0;\n> > + sqlca->sqlerrm.sqlerrmc[sizeof(sqlca->sqlerrm.sqlerrmc) - 1] =\n> > '\\0';\n> > + strncpy(sqlca->sqlerrm.sqlerrmc, message,\n> > sizeof(sqlca->sqlerrm.sqlerrmc) - 1);\n> >\n> > The existing strncpy then terminating by NUL works fine. I don't think\n> > there's any point in doing the reverse way. Actually\n> > sizeof(sqlca->sqlerrm.sqlerrmc) - 1 is enough for the length but the\n> > existing code is not necessarily a bug.\n> >\n> Without understanding then, why Coveriy claims bug here.\n\nWell, handling non-terminated strings with str* functions are a sign\nof bug in most cases. Coverity is very useful but false positives are\nannoying. 
I wonder what if we attach Coverity annotations to such\ncodes.\n\nBy the way, do you have some ideas of how to let coverity detect\nleakage of resources other than memory? We found several cases of\ncache reference leakage that should be statically detected easily.\n\nhttps://www.postgresql.org/message-id/10513.1587143235@sss.pgh.pa.us\n> I wonder whether there is any way to teach Coverity, or some other\n> static analyzer, to look for code paths that leak cache refcounts.\n> It seems isomorphic to detecting memory leaks, which Coverity is\n> reasonably good at.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Thu, 23 Apr 2020 14:36:15 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Fix buffer not null terminated on (ecpg lib)" }, { "msg_contents": "Hi,\n\nOn 2020-04-23 14:36:15 +0900, Kyotaro Horiguchi wrote:\n> At Thu, 23 Apr 2020 01:21:21 -0300, Ranier Vilela <ranier.vf@gmail.com> wrote in \n> > Em qua., 22 de abr. de 2020 às 23:27, Kyotaro Horiguchi <\n> > horikyota.ntt@gmail.com> escreveu:\n> > >\n> > > - strncpy(sqlca->sqlerrm.sqlerrmc, message,\n> > > sizeof(sqlca->sqlerrm.sqlerrmc));\n> > > - sqlca->sqlerrm.sqlerrmc[sizeof(sqlca->sqlerrm.sqlerrmc) - 1] = 0;\n> > > + sqlca->sqlerrm.sqlerrmc[sizeof(sqlca->sqlerrm.sqlerrmc) - 1] =\n> > > '\\0';\n> > > + strncpy(sqlca->sqlerrm.sqlerrmc, message,\n> > > sizeof(sqlca->sqlerrm.sqlerrmc) - 1);\n> > >\n> > > The existing strncpy then terminating by NUL works fine. I don't think\n> > > there's any point in doing the reverse way. Actually\n> > > sizeof(sqlca->sqlerrm.sqlerrmc) - 1 is enough for the length but the\n> > > existing code is not necessarily a bug.\n> > >\n> > Without understanding then, why Coveriy claims bug here.\n> \n> Well, handling non-terminated strings with str* functions are a sign\n> of bug in most cases. Coverity is very useful but false positives are\n> annoying. 
I wonder what if we attach Coverity annotations to such\n> codes.\n\nIt might be worth doing something about this, for other reasons. We have\ndisabled -Wstringop-truncation in 716585235b1. But I've enabled it in my\ndebug build, because I find it useful. The only warning we're getting\nin non-optimized builds is\n\n/home/andres/src/postgresql/src/interfaces/ecpg/ecpglib/misc.c: In function ‘ECPGset_var’:\n/home/andres/src/postgresql/src/interfaces/ecpg/ecpglib/misc.c:565:17: warning: ‘strncpy’ output truncated before terminating nul copying 5 bytes from a string of the same length [-Wstringop-truncation]\n 565 | strncpy(sqlca->sqlstate, \"YE001\", sizeof(sqlca->sqlstate));\n | ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n\nOne way we could address this is to use the 'nonstring' attribute gcc\nhas introduced, signalling that sqlca_t->sqlstate isn't zero\nterminated. That removes the above warning.\n\nhttps://gcc.gnu.org/onlinedocs/gcc/Common-Variable-Attributes.html#Common-Variable-Attributes\n\n\"The nonstring variable attribute specifies that an object or member declaration with type array of char, signed char, or unsigned char, or pointer to such a type is intended to store character arrays that do not necessarily contain a terminating NUL. This is useful in detecting uses of such arrays or pointers with functions that expect NUL-terminated strings, and to avoid warnings when such an array or pointer is used as an argument to a bounded string manipulation function such as strncpy. For example, without the attribute, GCC will issue a warning for the strncpy call below because it may truncate the copy without appending the terminating NUL character. Using the attribute makes it possible to suppress the warning. However, when the array is declared with the attribute the call to strlen is diagnosed because when the array doesn’t contain a NUL-terminated string the call is undefined. 
To copy, compare, of search non-string character arrays use the memcpy, memcmp, memchr, and other functions that operate on arrays of bytes. In addition, calling strnlen and strndup with such arrays is safe provided a suitable bound is specified, and not diagnosed. \"\n\nI've not looked at how much work it'd be to make a recent-ish gcc not to\nproduce lots of false positives in optimized builds.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Fri, 11 Jun 2021 15:49:07 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Fix buffer not null terminated on (ecpg lib)" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> It might be worth doing something about this, for other reasons. We have\n> disabled -Wstringop-truncation in 716585235b1. But I've enabled it in my\n> debug build, because I find it useful.\n\nITYM e71658523 ? I can't find that hash in my repo. Anyway, I agree\nthat disabling that was a bit of a stopgap hack. This 'nonstring'\nattribute seems like it would help for ECPG's usage, at least.\n\n> I've not looked at how much work it'd be to make a recent-ish gcc not to\n> produce lots of false positives in optimized builds.\n\nThe discussion that led up to e71658523 seemed to conclude that the\nonly reasonable way to suppress the majority of those warnings was\nto get rid of the fixed-length MAXPGPATH buffers we use everywhere.\nNow that we have psprintf(), that might be more workable than before,\nbut the effort-to-reward ratio still doesn't seem promising.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 11 Jun 2021 19:08:57 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Fix buffer not null terminated on (ecpg lib)" }, { "msg_contents": "Hi,\n\nOn 2021-06-11 19:08:57 -0400, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > It might be worth doing something about this, for other reasons. 
We have\n> > disabled -Wstringop-truncation in 716585235b1. But I've enabled it in my\n> > debug build, because I find it useful.\n> \n> ITYM e71658523 ? I can't find that hash in my repo.\n\nOops, yes.\n\n\n> Anyway, I agree that disabling that was a bit of a stopgap hack. This\n> 'nonstring' attribute seems like it would help for ECPG's usage, at\n> least.\n> \n> > I've not looked at how much work it'd be to make a recent-ish gcc not to\n> > produce lots of false positives in optimized builds.\n> \n> The discussion that led up to e71658523 seemed to conclude that the\n> only reasonable way to suppress the majority of those warnings was\n> to get rid of the fixed-length MAXPGPATH buffers we use everywhere.\n> Now that we have psprintf(), that might be more workable than before,\n> but the effort-to-reward ratio still doesn't seem promising.\n\nHm - the MAXPGPATH stuff is about -Wno-format-truncation though, right?\n\nI now tried building with optimizations and -Wstringop-truncation, and\nwhile it does result in a higher number of warnings, those are all in\necpg and fixed with one __attribute__((nonstring)).\n\nnonstring is supported since gcc 8, which also brought the warnings that\ne71658523 is concerned about. Which makes me think that we should be\nable to get away without a configure test. The one complication is that\nthe relevant ecpg code doesn't include c.h. 
But I think we can just do\nsomething like:\n\ndiff --git i/src/interfaces/ecpg/include/sqlca.h w/src/interfaces/ecpg/include/sqlca.h\nindex c5f107dd33c..d909f5ba2de 100644\n--- i/src/interfaces/ecpg/include/sqlca.h\n+++ w/src/interfaces/ecpg/include/sqlca.h\n@@ -50,7 +50,11 @@ struct sqlca_t\n /* 6: empty */\n /* 7: empty */\n \n- char sqlstate[5];\n+ char sqlstate[5]\n+#if defined(__has_attribute) && __has_attribute(nonstring)\n+ __attribute__((nonstring))\n+#endif\n+ ;\n };\n \n struct sqlca_t *ECPGget_sqlca(void);\n\nNot pretty, but I don't immediately see a really better solution?\n\nShould we also include a pg_attribute_nonstring definition in c.h?\nProbably not, given that we right now don't have another user?\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Fri, 11 Jun 2021 19:36:19 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Fix buffer not null terminated on (ecpg lib)" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2021-06-11 19:08:57 -0400, Tom Lane wrote:\n>> Anyway, I agree that disabling that was a bit of a stopgap hack. This\n>> 'nonstring' attribute seems like it would help for ECPG's usage, at\n>> least.\n\n> nonstring is supported since gcc 8, which also brought the warnings that\n> e71658523 is concerned about. Which makes me think that we should be\n> able to get away without a configure test. The one complication is that\n> the relevant ecpg code doesn't include c.h.\n\nUgh. And we *can't* include that there.\n\n> But I think we can just do something like:\n\n> - char sqlstate[5];\n> + char sqlstate[5]\n> +#if defined(__has_attribute) && __has_attribute(nonstring)\n> + __attribute__((nonstring))\n> +#endif\n> + ;\n> };\n\nHmm. Worth a try, anyway.\n\n> Should we also include a pg_attribute_nonstring definition in c.h?\n> Probably not, given that we right now don't have another user?\n\nYeah, no point till there's another use-case. 
(I'm not sure\nthere ever will be, so I'm not excited about adding more\ninfrastructure than we have to.)\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 11 Jun 2021 23:40:21 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Fix buffer not null terminated on (ecpg lib)" }, { "msg_contents": "Em sex., 11 de jun. de 2021 às 19:49, Andres Freund <andres@anarazel.de>\nescreveu:\n\n> Hi,\n>\n> On 2020-04-23 14:36:15 +0900, Kyotaro Horiguchi wrote:\n> > At Thu, 23 Apr 2020 01:21:21 -0300, Ranier Vilela <ranier.vf@gmail.com>\n> wrote in\n> > > Em qua., 22 de abr. de 2020 às 23:27, Kyotaro Horiguchi <\n> > > horikyota.ntt@gmail.com> escreveu:\n> > > >\n> > > > - strncpy(sqlca->sqlerrm.sqlerrmc, message,\n> > > > sizeof(sqlca->sqlerrm.sqlerrmc));\n> > > > - sqlca->sqlerrm.sqlerrmc[sizeof(sqlca->sqlerrm.sqlerrmc) - 1]\n> = 0;\n> > > > + sqlca->sqlerrm.sqlerrmc[sizeof(sqlca->sqlerrm.sqlerrmc) - 1]\n> =\n> > > > '\\0';\n> > > > + strncpy(sqlca->sqlerrm.sqlerrmc, message,\n> > > > sizeof(sqlca->sqlerrm.sqlerrmc) - 1);\n> > > >\n> > > > The existing strncpy then terminating by NUL works fine. I don't\n> think\n> > > > there's any point in doing the reverse way. Actually\n> > > > sizeof(sqlca->sqlerrm.sqlerrmc) - 1 is enough for the length but the\n> > > > existing code is not necessarily a bug.\n> > > >\n> > > Without understanding then, why Coveriy claims bug here.\n> >\n> > Well, handling non-terminated strings with str* functions are a sign\n> > of bug in most cases. Coverity is very useful but false positives are\n> > annoying. I wonder what if we attach Coverity annotations to such\n> > codes.\n>\n> It might be worth doing something about this, for other reasons. We have\n> disabled -Wstringop-truncation in 716585235b1. But I've enabled it in my\n> debug build, because I find it useful. 
The only warning we're getting\n> in non-optimized builds is\n>\n> /home/andres/src/postgresql/src/interfaces/ecpg/ecpglib/misc.c: In\n> function ‘ECPGset_var’:\n> /home/andres/src/postgresql/src/interfaces/ecpg/ecpglib/misc.c:565:17:\n> warning: ‘strncpy’ output truncated before terminating nul copying 5 bytes\n> from a string of the same length [-Wstringop-truncation]\n> 565 | strncpy(sqlca->sqlstate, \"YE001\",\n> sizeof(sqlca->sqlstate));\n> |\n> ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n>\nmemcpy would not suffer from it?\n\nregards,\nRanier Vilela", "msg_date": "Tue, 15 Jun 2021 07:40:46 -0300", "msg_from": "Ranier Vilela <ranier.vf@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Fix buffer not null terminated on (ecpg lib)" },
{ "msg_contents": "Hi,\n\nOn 2021-06-15 07:40:46 -0300, Ranier Vilela wrote:\n> memcpy would not suffer from it?\n\nIt'd not be correct for short sqlstates - you'd read beyond the end of\nthe source buffer. There are cases of it in the ecpg code.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Tue, 15 Jun 2021 10:28:58 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Fix buffer not null terminated on (ecpg lib)" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2021-06-15 07:40:46 -0300, Ranier Vilela wrote:\n>> memcpy would not suffer from it?\n\n> It'd not be correct for short sqlstates - you'd read beyond the end of\n> the source buffer. There are cases of it in the ecpg code.\n\nWhat's a \"short SQLSTATE\"? 
They're all five characters by definition.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 15 Jun 2021 13:53:08 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Fix buffer not null terminated on (ecpg lib)" }, { "msg_contents": "Hi,\n\nOn 2021-06-15 13:53:08 -0400, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > On 2021-06-15 07:40:46 -0300, Ranier Vilela wrote:\n> >> memcpy would not suffer from it?\n> \n> > It'd not be correct for short sqlstates - you'd read beyond the end of\n> > the source buffer. There are cases of it in the ecpg code.\n> \n> What's a \"short SQLSTATE\"? They're all five characters by definition.\n\nI thought there were places that just dealt with \"00\" etc. And there are - but\nit's just comparisons.\n\nI still don't fully feel comfortable just using memcpy() though, given that\nthe sqlstates originate remotely / from libpq, making it hard to rely on the\nfact that the buffer \"ought to\" always be at least 5 bytes long? As far as I\ncan tell there's no enforcement of PQresultErrorField(..., PG_DIAG_SQLSTATE)\nbeing that long.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Tue, 15 Jun 2021 11:48:19 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Fix buffer not null terminated on (ecpg lib)" }, { "msg_contents": "Em ter., 15 de jun. de 2021 às 15:48, Andres Freund <andres@anarazel.de>\nescreveu:\n\n> Hi,\n>\n> On 2021-06-15 13:53:08 -0400, Tom Lane wrote:\n> > Andres Freund <andres@anarazel.de> writes:\n> > > On 2021-06-15 07:40:46 -0300, Ranier Vilela wrote:\n> > >> memcpy would not suffer from it?\n> >\n> > > It'd not be correct for short sqlstates - you'd read beyond the end of\n> > > the source buffer. There are cases of it in the ecpg code.\n> >\n> > What's a \"short SQLSTATE\"? They're all five characters by definition.\n>\n> I thought there were places that just dealt with \"00\" etc. 
And there are -\n> but\n> it's just comparisons.\n>\n> I still don't fully feel comfortable just using memcpy() though, given that\n> the sqlstates originate remotely / from libpq, making it hard to rely on\n> the\n> fact that the buffer \"ought to\" always be at least 5 bytes long? As far as\n> I\n> can tell there's no enforcement of PQresultErrorField(...,\n> PG_DIAG_SQLSTATE)\n> being that long.\n>\nAnd replacing with snprintf, what do you guys think?\n\n n = snprintf(sqlca->sqlstate, sizeof(sqlca->sqlstate), \"%s\", sqlstate);\n Assert(n >= 0 && n < sizeof(sqlca->sqlstate));\n\nregards,\nRanier Vilela", "msg_date": "Tue, 15 Jun 2021 16:45:07 -0300", "msg_from": "Ranier Vilela <ranier.vf@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Fix buffer not null terminated on (ecpg lib)" } ]
[ { "msg_contents": "Hi all\n\nI build postgers with VS in windows, and the following message output\n\n“ Unable to determine Visual Studio version: The nmake version could not be determined.”\n\nI investigated the VSObjectFactory.pm, and found the match string “if ($output =~ /(\\d+)\\.(\\d+)\\.\\d+(\\.\\d+)?$/m)”\n\nIt works fine when no characters after version number, but if there are characters after the version number, it can not match the VS version.\n\nFor example , VS in Chinese , nmake /? output “ 14.00.24210.0 版”\n\n\nMay be we can remove the ‘$’ ($output =~ /(\\d+)\\.(\\d+)\\.\\d+(\\.\\d+)?$/m)” => ($output =~ /(\\d+)\\.(\\d+)\\.\\d+(\\.\\d+)?/m)”\n\n\nBest regards\n\n\n\n\n\n\n", "msg_date": "Fri, 17 Apr 2020 09:18:57 +0000", "msg_from": "\"Lin, Cuiping\" <lincuiping@cn.fujitsu.com>", "msg_from_op": true, "msg_subject": "Build errors in VS" }, { "msg_contents": "\nOn 4/17/20 5:18 AM, Lin, Cuiping wrote:\n> Hi all\n>\n> I build postgers with VS in windows, and the following message output\n>\n> “ Unable to determine Visual Studio version: The nmake version could not be determined.”\n>\n> I investigated the VSObjectFactory.pm, and found the match string “if ($output =~ /(\\d+)\\.(\\d+)\\.\\d+(\\.\\d+)?$/m)”\n>\n> It works fine when no characters after version number, but if there are characters after the version number, it can not match the VS version.\n>\n> For example , VS in Chinese , nmake /? output “ 14.00.24210.0 版”\n\n\nHmm, odd, but I guess we need to cater for it.\n\n\n>\n>\n> May be we can remove the ‘$’ ($output =~ /(\\d+)\\.(\\d+)\\.\\d+(\\.\\d+)?$/m)” => ($output =~ /(\\d+)\\.(\\d+)\\.\\d+(\\.\\d+)?/m)”\n>\n>\n\nThat will probably be ok. 
If we do that we should remove the 'm'\nqualifier on the regex too, it would serve no purpose any more.\n\n\ncheers\n\n\nandrew\n\n\n-- \nAndrew Dunstan https://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n\n", "msg_date": "Fri, 17 Apr 2020 09:56:02 -0400", "msg_from": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Build errors in VS" }, { "msg_contents": "\nOn 4/17/20 9:56 AM, Andrew Dunstan wrote:\n> On 4/17/20 5:18 AM, Lin, Cuiping wrote:\n>> Hi all\n>>\n>> I build postgers with VS in windows, and the following message output\n>>\n>> “ Unable to determine Visual Studio version: The nmake version could not be determined.”\n>>\n>> I investigated the VSObjectFactory.pm, and found the match string “if ($output =~ /(\\d+)\\.(\\d+)\\.\\d+(\\.\\d+)?$/m)”\n>>\n>> It works fine when no characters after version number, but if there are characters after the version number, it can not match the VS version.\n>>\n>> For example , VS in Chinese , nmake /? output “ 14.00.24210.0 版”\n>\n> Hmm, odd, but I guess we need to cater for it.\n>\n>\n>>\n>> May be we can remove the ‘$’ ($output =~ /(\\d+)\\.(\\d+)\\.\\d+(\\.\\d+)?$/m)” => ($output =~ /(\\d+)\\.(\\d+)\\.\\d+(\\.\\d+)?/m)”\n>>\n>>\n> That will probably be ok. If we do that we should remove the 'm'\n> qualifier on the regex too, it would serve no purpose any more.\n>\n>\n\nDone\n\n\ncheers\n\n\nandrew\n\n-- \nAndrew Dunstan https://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n\n", "msg_date": "Fri, 17 Apr 2020 15:00:15 -0400", "msg_from": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Build errors in VS" } ]
[ { "msg_contents": "I alluded to this in [0], but it's better discussed in its own thread.\n\nI think the check that makes pgstattuple_approx reject TOAST tables is a \nmistake. They have visibility and free space map, and it works just \nfine if the check is removed.\n\nAttached is a patch to fix this and add some tests related to how \npgstattuple and pg_visibility accept TOAST tables for inspection.\n\n\n[0]: \nhttps://www.postgresql.org/message-id/dc35a398-37d0-75ce-07ea-1dd71d98f8ec@2ndquadrant.com\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Fri, 17 Apr 2020 13:01:46 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "pgstattuple: Have pgstattuple_approx accept TOAST tables" }, { "msg_contents": "On Fri, 2020-04-17 at 13:01 +0200, Peter Eisentraut wrote:\n> I alluded to this in [0], but it's better discussed in its own thread.\n> \n> I think the check that makes pgstattuple_approx reject TOAST tables is a \n> mistake. 
They have visibility and free space map, and it works just \n> fine if the check is removed.\n> \n> Attached is a patch to fix this and add some tests related to how \n> pgstattuple and pg_visibility accept TOAST tables for inspection.\n> \n> \n> [0]: \n> https://www.postgresql.org/message-id/dc35a398-37d0-75ce-07ea-1dd71d98f8ec@2ndquadrant.com\n\nI gave the patch a spin, and it passes regression tests and didn't\ncause any problems when I played with it.\n\nNo upgrade or dump considerations, of course.\n\nThis is a clear improvement.\n\nI'll mark the patch as \"ready for committer\".\n\nYours,\nLaurenz Albe\n\n\n\n", "msg_date": "Wed, 17 Jun 2020 13:39:30 +0200", "msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>", "msg_from_op": false, "msg_subject": "Re: pgstattuple: Have pgstattuple_approx accept TOAST tables" }, { "msg_contents": "On 2020-06-17 13:39, Laurenz Albe wrote:\n> On Fri, 2020-04-17 at 13:01 +0200, Peter Eisentraut wrote:\n>> I alluded to this in [0], but it's better discussed in its own thread.\n>>\n>> I think the check that makes pgstattuple_approx reject TOAST tables is a\n>> mistake. 
They have visibility and free space map, and it works just\n>> fine if the check is removed.\n>>\n>> Attached is a patch to fix this and add some tests related to how\n>> pgstattuple and pg_visibility accept TOAST tables for inspection.\n>>\n>>\n>> [0]:\n>> https://www.postgresql.org/message-id/dc35a398-37d0-75ce-07ea-1dd71d98f8ec@2ndquadrant.com\n> \n> I gave the patch a spin, and it passes regression tests and didn't\n> cause any problems when I played with it.\n> \n> No upgrade or dump considerations, of course.\n> \n> This is a clear improvement.\n> \n> I'll mark the patch as \"ready for committer\".\n\ncommitted, thanks\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Tue, 30 Jun 2020 01:14:31 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: pgstattuple: Have pgstattuple_approx accept TOAST tables" } ]
[ { "msg_contents": "Hi, hackers!\n\n\nI found a problem with selectivity estimation for NULL-returning operators.\nmatchingsel() is not ready to use as a restriction selectivity estimator for\noperators like our jsonpath operators @? and @@, because it calls operator\nfunction on values obtained from pg_statistic through plain FunctionCall2Coll()\nwhich does not accept NULL results (see mcv_selectivity() etc.).\n\n=# CREATE TABLE test AS SELECT '{}'::jsonb js FROM generate_series(1, 1000);\n=# ANALYZE test;\n=# SELECT * FROM test WHERE js @@ '$ == 1';\nERROR: function 4011 returned NULL\n\n\nI'm not sure what we should to fix: operators or matchingsel(). So, attached\ntwo possible independent fixes:\n\n1. Return FALSE instead of NULL in jsonpath operators. The corresponding\nfunctions jsonb_path_exists() and jsonb_path_match() still return NULL in\nerror cases.\n\n2. Fix NULL operator results in selectivity estimation functions.\nIntroduced BoolFunctionCall2Coll() for replacing NULL with FALSE, that is used\nfor calling non-comparison operators (I'm not sure that comparison can return\nNULLs). Maybe it is worth add a whole set of functions to fmgr.c for replacing\nNULL results with the specified default Datum value.\n\n\nIf the selectivity estimation code will be left unchanged, then I think it\nshould be noted in documentation that matchingsel() is not applicable to\nNULL-returning operators (there is already a similar note about hash-joinable\noperators).\n\n\nBut if we will fix NULL handling, I think it would be worth to fix it everywhere\nin the selectivity estimation code. 
Without this, completely wrong results can\nbe get not only for NULL values, but also for NULL operator results:\n\n=# EXPLAIN SELECT * FROM test WHERE NOT js @@ '$ == 1'; -- 0 rows returned\n QUERY PLAN\n--------------------------------------------------------\n Seq Scan on test (cost=0.00..17.50 rows=1000 width=5)\n Filter: (NOT (js @@ '($ == 1)'::jsonpath))\n(2 rows)\n\n\n-- \nNikita Glukhov\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company", "msg_date": "Fri, 17 Apr 2020 18:50:53 +0300", "msg_from": "Nikita Glukhov <n.gluhov@postgrespro.ru>", "msg_from_op": true, "msg_subject": "matchingsel() and NULL-returning operators" }, { "msg_contents": "Nikita Glukhov <n.gluhov@postgrespro.ru> writes:\n> I found a problem with selectivity estimation for NULL-returning operators.\n> matchingsel() is not ready to use as a restriction selectivity estimator for\n> operators like our jsonpath operators @? and @@, because it calls operator\n> function on values obtained from pg_statistic through plain FunctionCall2Coll()\n> which does not accept NULL results (see mcv_selectivity() etc.).\n\nAh, good point.\n\n> I'm not sure what we should to fix: operators or matchingsel().\n\nSeems reasonable to let matchingsel support such cases.\n\n> Introduced BoolFunctionCall2Coll() for replacing NULL with FALSE, that is used\n> for calling non-comparison operators (I'm not sure that comparison can return\n> NULLs).\n\nNormally what we do is just invoke the function directly without going\nthrough that layer. If you need to cope with NULL then the simplicity\nof notation of FunctionCallN is lost to you anyway. 
I don't think we\nparticularly need an additional API that's intermediate between those.\n\n> But if we will fix NULL handling, I think it would be worth to fix it\n> everywhere in the selectivity estimation code.\n\nI'm disinclined to move the goalposts so far for places where there have\nbeen no complaints; especially not post-feature-freeze.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 17 Apr 2020 12:01:43 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: matchingsel() and NULL-returning operators" } ]
[ { "msg_contents": "Hi\n\nI propose new function string_to_table. This function is significantly\nfaster (and simpler) variant of regexp_split_to_array function. There was\nsame process years ago when we implemented string_agg as faster variant of\narray_to_string(array_agg()). string_to_table is faster variant (and little\nbit more intuitive alternative of unnest(string_to_array()).\n\nstring_to_table is about 15% faster than unnest(string_to_array()) and\nabout 40% faster than regexp_split_to_array.\n\nInitial patch is attached\n\nNotes, comments?\n\nRegards\n\nPavel", "msg_date": "Fri, 17 Apr 2020 19:47:15 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "proposal - function string_to_table" }, { "msg_contents": "On Fri, Apr 17, 2020 at 07:47:15PM +0200, Pavel Stehule wrote:\n> I propose new function string_to_table. This function is significantly\n\n+1\n\n> +/*\n> + * Add text to result set (table or array). Build a table when set is a expected or build\n> + * a array\n\nas expected (??)\n*an* array\n\n> +select string_to_table('abc', '', 'abc');\n> + string_to_table \n> +-----------------\n> + \n> +(1 row)\n\nMaybe you should \\pset null '(null)' for this\n\n-- \nJustin\n\n\n", "msg_date": "Fri, 17 Apr 2020 16:29:51 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: proposal - function string_to_table" }, { "msg_contents": "pá 17. 4. 2020 v 23:29 odesílatel Justin Pryzby <pryzby@telsasoft.com>\nnapsal:\n\n> On Fri, Apr 17, 2020 at 07:47:15PM +0200, Pavel Stehule wrote:\n> > I propose new function string_to_table. This function is significantly\n>\n> +1\n>\n> > +/*\n> > + * Add text to result set (table or array). 
Build a table when set is a\n> expected or build\n> > + * a array\n>\n> as expected (??)\n> *an* array\n>\n\nI tried to fix this comment\n\n\n>\n> > +select string_to_table('abc', '', 'abc');\n> > + string_to_table\n> > +-----------------\n> > +\n> > +(1 row)\n>\n> Maybe you should \\pset null '(null)' for this\n>\n\nchanging NULL output can break lot of existing tests, but I add second\ncolumn with info about null\n\n+select string_to_table('1,2,3,4,*,6', ',', '*'),\nstring_to_table('1,2,3,4,*,6', ',', '*') IS NULL;\n+ string_to_table | ?column?\n+-----------------+----------\n+ 1 | f\n+ 2 | f\n+ 3 | f\n+ 4 | f\n+ | t\n+ 6 | f\n+(6 rows)\n\nRegards\n\nPavel\n\n\n> --\n> Justin\n>", "msg_date": "Sat, 18 Apr 2020 05:45:22 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: proposal - function string_to_table" }, { "msg_contents": "+{ oid => '2228', descr => 'split delimited text',\r\n+ proname => 'string_to_table', prorows => '1000', proretset => 't',\r\n+ prorettype => 'text', proargtypes => 'text text',\r\n+ prosrc => 'text_to_table' },\r\n+{ oid => '2282', descr => 'split delimited text with null string',\r\n+ proname => 'string_to_table', prorows => '1000', proretset => 't',\r\n+ prorettype => 'text', proargtypes => 'text text text',\r\n+ prosrc => 'text_to_table_null' },\r\n\r\nI go through the patch, and everything looks good to me. 
But I do not know\r\nwhy it needs a 'text_to_table_null()', it's ok to put a 'text_to_table' there, I think.\r\n\r\n\r\n\r\n\r\nRegards,\r\nHighgo Software (Canada/China/Pakistan) \r\nURL : www.highgo.ca \r\nEMAIL: mailto:movead(dot)li(at)highgo(dot)ca\r\n", "msg_date": "Thu, 4 Jun 2020 17:49:25 +0800", "msg_from": "\"movead.li@highgo.ca\" <movead.li@highgo.ca>", "msg_from_op": false, "msg_subject": "Re: proposal - function string_to_table" }, { "msg_contents": "Hi\n\nčt 4. 6. 2020 v 11:49 odesílatel movead.li@highgo.ca <movead.li@highgo.ca>\nnapsal:\n\n> +{ oid => '2228', descr => 'split delimited text',\n> + proname => 'string_to_table', prorows => '1000', proretset => 't',\n> + prorettype => 'text', proargtypes => 'text text',\n> + prosrc => 'text_to_table' },\n> +{ oid => '2282', descr => 'split delimited text with null string',\n> + proname => 'string_to_table', prorows => '1000', proretset => 't',\n> + prorettype => 'text', proargtypes => 'text text text',\n> + prosrc => 'text_to_table_null' },\n>\n> I go through the patch, and everything looks good to me. 
But I do not know\n> why it needs a 'text_to_table_null()', it's ok to put a 'text_to_table'\n> there, I think.\n>\n\nIt is a convention in Postgres - every SQL unique signature has its own\nunique internal C function.\n\nI am sending a refreshed patch.\n\nRegards\n\nPavel\n\n\n\n\n>\n> ------------------------------\n> Regards,\n> Highgo Software (Canada/China/Pakistan)\n> URL : www.highgo.ca\n> EMAIL: mailto:movead(dot)li(at)highgo(dot)ca\n>", "msg_date": "Fri, 5 Jun 2020 13:55:58 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: proposal - function string_to_table" }, { "msg_contents": "pá 5. 6. 2020 v 13:55 odesílatel Pavel Stehule <pavel.stehule@gmail.com>\nnapsal:\n\n> Hi\n>\n> čt 4. 6. 2020 v 11:49 odesílatel movead.li@highgo.ca <movead.li@highgo.ca>\n> napsal:\n>\n>> +{ oid => '2228', descr => 'split delimited text',\n>> + proname => 'string_to_table', prorows => '1000', proretset => 't',\n>> + prorettype => 'text', proargtypes => 'text text',\n>> + prosrc => 'text_to_table' },\n>> +{ oid => '2282', descr => 'split delimited text with null string',\n>> + proname => 'string_to_table', prorows => '1000', proretset => 't',\n>> + prorettype => 'text', proargtypes => 'text text text',\n>> + prosrc => 'text_to_table_null' },\n>>\n>> I go through the patch, and everything looks good to me. 
But I do not know\n>> why it needs a 'text_to_table_null()', it's ok to put a 'text_to_table'\n>> there, I think.\n>>\n>\n> It is a convention in Postgres - every SQL unique signature has its own\n> unique internal C function.\n>\n> I am sending a refreshed patch.\n>\n\nrebase\n\nRegards\n\nPavel\n\n\n> Regards\n>\n> Pavel\n>\n>\n>\n>\n>>\n>> ------------------------------\n>> Regards,\n>> Highgo Software (Canada/China/Pakistan)\n>> URL : www.highgo.ca\n>> EMAIL: mailto:movead(dot)li(at)highgo(dot)ca\n>>\n>", "msg_date": "Sun, 5 Jul 2020 13:30:29 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: proposal - function string_to_table" }, { "msg_contents": "ne 5. 7. 2020 v 13:30 odesílatel Pavel Stehule <pavel.stehule@gmail.com>\nnapsal:\n\n>\n>\n> pá 5. 6. 2020 v 13:55 odesílatel Pavel Stehule <pavel.stehule@gmail.com>\n> napsal:\n>\n>> Hi\n>>\n>> čt 4. 6. 2020 v 11:49 odesílatel movead.li@highgo.ca <movead.li@highgo.ca>\n>> napsal:\n>>\n>>> +{ oid => '2228', descr => 'split delimited text',\n>>> + proname => 'string_to_table', prorows => '1000', proretset => 't',\n>>> + prorettype => 'text', proargtypes => 'text text',\n>>> + prosrc => 'text_to_table' },\n>>> +{ oid => '2282', descr => 'split delimited text with null string',\n>>> + proname => 'string_to_table', prorows => '1000', proretset => 't',\n>>> + prorettype => 'text', proargtypes => 'text text text',\n>>> + prosrc => 'text_to_table_null' },\n>>>\n>>> I go through the patch, and everything looks good to me. 
But I do not\n>>> know\n>>> why it needs a 'text_to_table_null()', it's ok to put a 'text_to_table'\n>>> there, I think.\n>>>\n>>\n>> It is a convention in Postgres - every SQL unique signature has its own\n>> unique internal C function.\n>>\n>> I am sending a refreshed patch.\n>>\n>\n> rebase\n>\n\ntwo fresh fix\n\na) remove garbage from patch that breaks doc\n\nb) these functions should not be strict - be consistent with\nstring_to_array functions\n\nRegards\n\nPavel\n\n\n>\n> Regards\n>\n> Pavel\n>\n>\n>> Regards\n>>\n>> Pavel\n>>\n>>\n>>\n>>\n>>>\n>>> ------------------------------\n>>> Regards,\n>>> Highgo Software (Canada/China/Pakistan)\n>>> URL : www.highgo.ca\n>>> EMAIL: mailto:movead(dot)li(at)highgo(dot)ca\n>>>\n>>", "msg_date": "Mon, 6 Jul 2020 07:05:37 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: proposal - function string_to_table" }, { "msg_contents": "Hi.\n\nI have been looking at the patch: string_to_table-20200706-2.patch\n\nBelow are some review comments for your consideration.\n\n====\n\nCOMMENT func.sgml (style)\n\n+ <para>\n+ splits string into table using supplied delimiter and\n+ optional null string.\n+ </para>\n\nThe format style of the short description is inconsistent with the\nother functions.\ne.g. Should start with Capital letter.\ne.g. Should tag the parameter names properly\n\nSomething like:\n<para>\nSplits <parameter>string</parameter> into a table\nusing supplied <parameter>delimiter</parameter>\nand optional null string <parameter>nullstr</parameter>.\n</para>\n\n====\n\nCOMMENT func.sgml (what does nullstr do)\n\nThe description does not sufficiently describe the purpose/behaviour\nof the nullstr.\n\ne.g. Firstly I thought that it meant if 2 consecutive delimiters were\nencountered it would substitute this string as the row value. 
But it\nis doing the opposite of what I guessed - if the extracted row value\nis the same as nullstr then a NULL row is inserted instead.\n\n====\n\nCOMMENT func.sgml (wrong sample output)\n\n+<programlisting>xx\n+yy,\n+zz</programlisting>\n\nThis output is incorrect for the sample given. There is no \"yy,\" in\nthe output because there is a 'yy' nullstr substitution.\n\nShould be:\n---\nxx\nNULL\nzz\n---\n\n====\n\nCOMMENT func.sgml (related to regexp_split_to_table)\n\nBecause this new function is similar to the existing\nregexp_split_to_table, perhaps they should cross-reference each other\nso a reader of this documentation is made aware of the alternative\nfunction?\n\n====\n\nCOMMENT (test cases)\n\nIt is impossible to tell difference in the output between empty\nstrings and nulls currently, so maybe you can change all the tests to\nhave a form like below so they can be validated properly:\n\n# select v, v IS NULL as \"is null\" from\nstring_to_table('a,b,*,c,d,',',','*') g(v);\n v | is null\n---+---------\n a | f\n b | f\n | t\n c | f\n d | f\n | f\n(6 rows)\n\nor maybe like this is even easier:\n\n# select quote_nullable(string_to_table('a|*||c|d|','|','*'));\n quote_nullable\n----------------\n 'a'\n NULL\n ''\n 'c'\n 'd'\n ''\n(6 rows)\n\nSomething similar was already proposed before [1] but that never got\nput into the test code.\n[1] https://www.postgresql.org/message-id/CAFj8pRDSzDYmaS06dfMXBfbr8x%2B3xjDJxA5kbL3h8%2BeOGoRUcA%40mail.gmail.com\n\n====\n\nCOMMENT (test cases)\n\nThere are multiple combinations of the parameters to this function and\nMANY different results depending on different values they can take, so\nthe existing tests only cover a small sample.\n\nI have attached a lot more test scenarios that you may want to include\nfor better test coverage. 
Everything seemed to work as expected.\n\nPSA test-cases.pdf\n\n====\n\nCOMMENT (accum_result)\n\n+ Datum values[1];\n+ bool nulls[1];\n+\n+ values[0] = PointerGetDatum(result_text);\n+ nulls[0] = is_null;\n\nWhy not use variables instead of arrays with only 1 element?\n\n====\n\nCOMMENT (text_to_array_internal)\n\n+ if (!tstate.astate)\n+ PG_RETURN_ARRAYTYPE_P(construct_empty_array(TEXTOID));\n\nMaybe the condition is more readable when expressed as:\nif (tstate.astate == NULL)\n\n====\n\nKind Regards,\nPeter Smith.\nFujitsu Australia", "msg_date": "Thu, 20 Aug 2020 12:06:42 +1000", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": false, "msg_subject": "Re: proposal - function string_to_table" }, { "msg_contents": "Hi\n\nčt 20. 8. 2020 v 4:07 odesílatel Peter Smith <smithpb2250@gmail.com> napsal:\n\n> Hi.\n>\n> I have been looking at the patch: string_to_table-20200706-2.patch\n>\n> Below are some review comments for your consideration.\n>\n> ====\n>\n> COMMENT func.sgml (style)\n>\n> + <para>\n> + splits string into table using supplied delimiter and\n> + optional null string.\n> + </para>\n>\n> The format style of the short description is inconsistent with the\n> other functions.\n> e.g. Should start with Capital letter.\n> e.g. Should tag the parameter names properly\n>\n> Something like:\n> <para>\n> Splits <parameter>string</parameter> into a table\n> using supplied <parameter>delimiter</parameter>\n> and optional null string <parameter>nullstr</parameter>.\n> </para>\n>\n>\ndone\n\n\n> ====\n>\n> COMMENT func.sgml (what does nullstr do)\n>\n> The description does not sufficiently describe the purpose/behaviour\n> of the nullstr.\n>\n> e.g. Firstly I thought that it meant if 2 consecutive delimiters were\n> encountered it would substitute this string as the row value. 
But it\n> is doing the opposite of what I guessed - if the extracted row value\n> is the same as nullstr then a NULL row is inserted instead.\n>\n>\ndone\n\n\n> ====\n>\n> COMMENT func.sgml (wrong sample output)\n>\n> +<programlisting>xx\n> +yy,\n> +zz</programlisting>\n>\n> This output is incorrect for the sample given. There is no \"yy,\" in\n> the output because there is a 'yy' nullstr substitution.\n>\n> Should be:\n> ---\n> xx\n> NULL\n> zz\n> ---\n>\n\nfixed\n\n\n> ====\n>\n> COMMENT func.sgml (related to regexp_split_to_table)\n>\n> Because this new function is similar to the existing\n> regexp_split_to_table, perhaps they should cross-reference each other\n> so a reader of this documentation is made aware of the alternative\n> function?\n>\n\nI wrote new sentence with ref\n\n\n>\n> ====\n>\n> COMMENT (test cases)\n>\n> It is impossible to tell difference in the output between empty\n> strings and nulls currently, so maybe you can change all the tests to\n> have a form like below so they can be validated properly:\n>\n> # select v, v IS NULL as \"is null\" from\n> string_to_table('a,b,*,c,d,',',','*') g(v);\n> v | is null\n> ---+---------\n> a | f\n> b | f\n> | t\n> c | f\n> d | f\n> | f\n> (6 rows)\n>\n> or maybe like this is even easier:\n>\n> # select quote_nullable(string_to_table('a|*||c|d|','|','*'));\n> quote_nullable\n> ----------------\n> 'a'\n> NULL\n> ''\n> 'c'\n> 'd'\n> ''\n> (6 rows)\n>\n\nI prefer the first variant, it is clean. 
It is good idea, done\n\n\n> Something similar was already proposed before [1] but that never got\n> put into the test code.\n> [1]\n> https://www.postgresql.org/message-id/CAFj8pRDSzDYmaS06dfMXBfbr8x%2B3xjDJxA5kbL3h8%2BeOGoRUcA%40mail.gmail.com\n>\n> ====\n>\n> COMMENT (test cases)\n>\n> There are multiple combinations of the parameters to this function and\n> MANY different results depending on different values they can take, so\n> the existing tests only cover a small sample.\n>\n> I have attached a lot more test scenarios that you may want to include\n> for better test coverage. Everything seemed to work as expected.\n>\n\nok, merged\n\n\n> PSA test-cases.pdf\n>\n> ====\n>\n> COMMENT (accum_result)\n>\n> + Datum values[1];\n> + bool nulls[1];\n> +\n> + values[0] = PointerGetDatum(result_text);\n> + nulls[0] = is_null;\n>\n> Why not use variables instead of arrays with only 1 element?\n>\n\nTechnically it is equivalent, but I think so using one element array is\nmore correct, because function heap_form_tuple expects an array. Sure in C\nlanguage there is no difference between pointer to value or pointer to\narray, but minimally the name of the argument \"values\" implies so argument\nis an array.\n\nThis pattern is used more times in Postgres. 
You can find a fragments where\nalthough we know so array has only one field, still we works with array\n\nmisc.c\nhash.c\nexecTuples.c\n\nbut I can this code simplify little bit - I can use function\ntuplestore_putvalues(tupstore, tupdesc, values, nulls);\n\nI see, so this code can be reduced more, and I don't need local variables,\nbut I prefer to be consistent with other parts, and I feel better if I pass\nan array where the array is expected.\n\nThis is not extra important, and I can it change, just I think this variant\nis cleaner little bit\n\n\n\n> ====\n>\n> COMMENT (text_to_array_internal)\n>\n> + if (!tstate.astate)\n> + PG_RETURN_ARRAYTYPE_P(construct_empty_array(TEXTOID));\n>\n> Maybe the condition is more readable when expressed as:\n> if (tstate.astate == NULL)\n>\n>\ndone\n\n\nnew patch attached\n\nThank you for precious review\n\nRegards\n\nPavel\n\n====\n>\n> Kind Regards,\n> Peter Smith.\n> Fujitsu Australia\n>", "msg_date": "Thu, 20 Aug 2020 21:21:10 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: proposal - function string_to_table" }, { "msg_contents": "On Fri, Aug 21, 2020 at 5:21 AM Pavel Stehule <pavel.stehule@gmail.com> wrote:\n\n> new patch attached\n\nThanks for taking some of my previous review comments.\n\nI have re-checked the string_to_table_20200820.patch.\n\nBelow are some remaining questions/comments:\n\n====\n\nCOMMENT (help text)\n\n+ Splits the <parameter>string</parameter> at occurrences\n+ of <parameter>delimiter</parameter> and forms the remaining data\n+ into a <type>text</type> tavke.\n\nWhat did you mean by \"remaining\" in that description?\nIt gets a bit strange thinking about remaining NULLs, or remaining\nempty strings.\n\nWhy not just say \"... 
and forms the data into a <type>text</type> table.\"\n\n---\n\n+ Splits the <parameter>string</parameter> at occurrences\n+ of <parameter>delimiter</parameter> and forms the remaining data\n+ into a <type>text</type> tavke.\n\nTypo: \"tavke.\" -> \"table.\"\n\n====\n\nCOMMENT (help text reference to regexp_split_to_table)\n\n+ input <parameter>string</parameter> can be done by function\n+ <function>regexp_split_to_table</function> (see <xref\nlinkend=\"functions-posix-regexp\"/>).\n+ </para>\n\nIn the previous review I suggested adding a reference to the\nregexp_split_to_table function.\nA hyperlink would be a bonus, but maybe it is not possible.\n\nThe hyperlink added in the latest patch is to page for POSIX Regular\nExpressions, which doesn't seem appropriate.\n\n====\n\nQUESTION (test cases)\n\nThanks for merging lots of my additional test cases!\n\nActually, the previous PDF I sent was 2 pages long but you only merged\nthe tests of page 1.\nI wondered was it accidental to omit all those 2nd page tests?\n\n====\n\nQUESTION (function name?)\n\nI noticed that ALL current string functions that use delimiters have\nthe word \"split\" in their name.\n\ne.g.\n* regexp_split_to_array\n* regexp_split_to_table\n* split_part\n\nBut \"string_to_table\" is not following this pattern.\n\nMaybe a different choice of function name would be more consistent\nwith what is already there?\ne.g. split_to_table, string_split_to_table, etc.\n\n====\n\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n", "msg_date": "Fri, 21 Aug 2020 17:43:54 +1000", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": false, "msg_subject": "Re: proposal - function string_to_table" }, { "msg_contents": "pá 21. 8. 
2020 v 9:44 odesílatel Peter Smith <smithpb2250@gmail.com> napsal:\n\n> On Fri, Aug 21, 2020 at 5:21 AM Pavel Stehule <pavel.stehule@gmail.com>\n> wrote:\n>\n> > new patch attached\n>\n> Thanks for taking some of my previous review comments.\n>\n> I have re-checked the string_to_table_20200820.patch.\n>\n> Below are some remaining questions/comments:\n>\n> ====\n>\n> COMMENT (help text)\n>\n> + Splits the <parameter>string</parameter> at occurrences\n> + of <parameter>delimiter</parameter> and forms the remaining data\n> + into a <type>text</type> tavke.\n>\n> What did you mean by \"remaining\" in that description?\n> It gets a bit strange thinking about remaining NULLs, or remaining\n> empty strings.\n>\n> Why not just say \"... and forms the data into a <type>text</type> table.\"\n>\n> ---\n>\n> + Splits the <parameter>string</parameter> at occurrences\n> + of <parameter>delimiter</parameter> and forms the remaining data\n> + into a <type>text</type> tavke.\n>\n> Typo: \"tavke.\" -> \"table.\"\n>\n\nThis text is taken from doc for string_to_array\n\n\n> ====\n>\n> COMMENT (help text reference to regexp_split_to_table)\n>\n> + input <parameter>string</parameter> can be done by function\n> + <function>regexp_split_to_table</function> (see <xref\n> linkend=\"functions-posix-regexp\"/>).\n> + </para>\n>\n> In the previous review I suggested adding a reference to the\n> regexp_split_to_table function.\n> A hyperlink would be a bonus, but maybe it is not possible.\n>\n> The hyperlink added in the latest patch is to page for POSIX Regular\n> Expressions, which doesn't seem appropriate.\n>\n\nok I remove it\n\n>\n> ====\n>\n> QUESTION (test cases)\n>\n> Thanks for merging lots of my additional test cases!\n>\n> Actually, the previous PDF I sent was 2 pages long but you only merged\n> the tests of page 1.\n> I wondered was it accidental to omit all those 2nd page tests?\n>\n\nI'll check it\n\n>\n> ====\n>\n> QUESTION (function name?)\n>\n> I noticed that ALL current 
string functions that use delimiters have\n> the word \"split\" in their name.\n>\n> e.g.\n> * regexp_split_to_array\n> * regexp_split_to_table\n> * split_part\n>\n> But \"string_to_table\" is not following this pattern.\n>\n> Maybe a different choice of function name would be more consistent\n> with what is already there?\n> e.g. split_to_table, string_split_to_table, etc.\n>\n\nI don't agree. This function is twin (with almost identical behaviour) for\n\"string_to_array\" function, so I think so the name is correct.\n\n\n> ====\n>\n> Kind Regards,\n> Peter Smith.\n> Fujitsu Australia\n>\n", "msg_date": "Fri, 21 Aug 2020 11:08:45 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: proposal - function string_to_table" }, { "msg_contents": "pá 21. 8. 
2020 v 11:08 odesílatel Pavel Stehule <pavel.stehule@gmail.com>\nnapsal:\n\n>\n>\n> pá 21. 8. 2020 v 9:44 odesílatel Peter Smith <smithpb2250@gmail.com>\n> napsal:\n>\n>> On Fri, Aug 21, 2020 at 5:21 AM Pavel Stehule <pavel.stehule@gmail.com>\n>> wrote:\n>>\n>> > new patch attached\n>>\n>> Thanks for taking some of my previous review comments.\n>>\n>> I have re-checked the string_to_table_20200820.patch.\n>>\n>> Below are some remaining questions/comments:\n>>\n>> ====\n>>\n>> COMMENT (help text)\n>>\n>> + Splits the <parameter>string</parameter> at occurrences\n>> + of <parameter>delimiter</parameter> and forms the remaining data\n>> + into a <type>text</type> tavke.\n>>\n>> What did you mean by \"remaining\" in that description?\n>> It gets a bit strange thinking about remaining NULLs, or remaining\n>> empty strings.\n>>\n>> Why not just say \"... and forms the data into a <type>text</type> table.\"\n>>\n>> ---\n>>\n>> + Splits the <parameter>string</parameter> at occurrences\n>> + of <parameter>delimiter</parameter> and forms the remaining data\n>> + into a <type>text</type> tavke.\n>>\n>> Typo: \"tavke.\" -> \"table.\"\n>>\n>\n> This text is taken from doc for string_to_array\n>\n\nI fixed typo. 
I hope and expect so doc will be finalized by native\nspeakers.\n\n\n>\n>\n>> ====\n>>\n>> COMMENT (help text reference to regexp_split_to_table)\n>>\n>> + input <parameter>string</parameter> can be done by function\n>> + <function>regexp_split_to_table</function> (see <xref\n>> linkend=\"functions-posix-regexp\"/>).\n>> + </para>\n>>\n>> In the previous review I suggested adding a reference to the\n>> regexp_split_to_table function.\n>> A hyperlink would be a bonus, but maybe it is not possible.\n>>\n>> The hyperlink added in the latest patch is to page for POSIX Regular\n>> Expressions, which doesn't seem appropriate.\n>>\n>\n> ok I remove it\n>\n>>\n>> ====\n>>\n>> QUESTION (test cases)\n>>\n>> Thanks for merging lots of my additional test cases!\n>>\n>> Actually, the previous PDF I sent was 2 pages long but you only merged\n>> the tests of page 1.\n>> I wondered was it accidental to omit all those 2nd page tests?\n>>\n>\n> I'll check it\n>\n\nI forgot it - now it is merged. Maybe it is over dimensioned for one\nfunction, but it is (at the end) a test of string_to_array function too.\n\n\n>\n>> ====\n>>\n>> QUESTION (function name?)\n>>\n>> I noticed that ALL current string functions that use delimiters have\n>> the word \"split\" in their name.\n>>\n>> e.g.\n>> * regexp_split_to_array\n>> * regexp_split_to_table\n>> * split_part\n>>\n>> But \"string_to_table\" is not following this pattern.\n>>\n>> Maybe a different choice of function name would be more consistent\n>> with what is already there?\n>> e.g. split_to_table, string_split_to_table, etc.\n>>\n>\n> I don't agree. 
This function is twin (with almost identical behaviour) for\n> \"string_to_array\" function, so I think so the name is correct.\n>\n\nUnfortunately - there is not consistency in naming already, But I think so\nstring_to_table is a better name, because this function is almost identical\nwith string_to_array.\n\nRegards\n\nPavel\n\n\n>\n>> ====\n>>\n>> Kind Regards,\n>> Peter Smith.\n>> Fujitsu Australia\n>>\n>", "msg_date": "Fri, 21 Aug 2020 11:45:08 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: proposal - function string_to_table" }, { "msg_contents": "I have re-checked the string_to_table_20200821.patch.\n\nBelow is one remaining problem.\n\n====\n\nCOMMENT (help text)\n\n+ Splits the <parameter>string</parameter> at occurrences\n+ of <parameter>delimiter</parameter> and forms the remaining data\n+ into a table with one <type>text</type> type column.\n+ If <parameter>delimiter</parameter> is <literal>NULL</literal>,\n+ each character in the <parameter>string</parameter> will become a\n+ separate element in the array.\n\nSeems like here is a cut/paste error from the string_to_array help text.\n\n\"separate element in the array\" should say \"separate row of the table\"\n\n====\n\n>>> Maybe a different choice of function name would be more consistent\n>>> with what is already there?\n>>> e.g. split_to_table, string_split_to_table, etc.\n>>\n>> I don't agree. This function is twin (with almost identical behaviour) for \"string_to_array\" function, so I think so the name is correct.\n\nOK\n\n====\n\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n", "msg_date": "Mon, 24 Aug 2020 12:18:33 +1000", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": false, "msg_subject": "Re: proposal - function string_to_table" }, { "msg_contents": "po 24. 8. 
2020 v 4:19 odesílatel Peter Smith <smithpb2250@gmail.com> napsal:\n\n> I have re-checked the string_to_table_20200821.patch.\n>\n> Below is one remaining problem.\n>\n> ====\n>\n> COMMENT (help text)\n>\n> + Splits the <parameter>string</parameter> at occurrences\n> + of <parameter>delimiter</parameter> and forms the remaining data\n> + into a table with one <type>text</type> type column.\n> + If <parameter>delimiter</parameter> is <literal>NULL</literal>,\n> + each character in the <parameter>string</parameter> will become a\n> + separate element in the array.\n>\n> Seems like here is a cut/paste error from the string_to_array help text.\n>\n> \"separate element in the array\" should say \"separate row of the table\"\n>\n\nfixed\n\n\n> ====\n>\n> >>> Maybe a different choice of function name would be more consistent\n> >>> with what is already there?\n> >>> e.g. split_to_table, string_split_to_table, etc.\n> >>\n> >> I don't agree. This function is twin (with almost identical behaviour)\n> for \"string_to_array\" function, so I think so the name is correct.\n>\n> OK\n>\n> ====\n>\n\nplease, check attached patch\n\nRegards\n\nPavel\n\n\n> Kind Regards,\n> Peter Smith.\n> Fujitsu Australia\n>", "msg_date": "Mon, 24 Aug 2020 18:33:34 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: proposal - function string_to_table" }, { "msg_contents": "Hi.\n\nI have re-checked the string_to_table_20200824.patch.\n\n====\n\nOn Tue, Aug 25, 2020 at 2:34 AM Pavel Stehule <pavel.stehule@gmail.com> wrote:\n\n>> COMMENT (help text)\n>>\n>> + Splits the <parameter>string</parameter> at occurrences\n>> + of <parameter>delimiter</parameter> and forms the remaining data\n>> + into a table with one <type>text</type> type column.\n>> + If <parameter>delimiter</parameter> is <literal>NULL</literal>,\n>> + each character in the <parameter>string</parameter> will become a\n>> + separate element in the array.\n>>\n>> Seems like here is a 
cut/paste error from the string_to_array help text.\n>>\n>> \"separate element in the array\" should say \"separate row of the table\"\n>\n>\n> fixed\n>\n\nNo. You wrote \"separate row of table\". Should say \"separate row of the table\".\n\n====\n\nQUESTION (pg_proc.dat)\n\nI noticed the oids of the functions are modified in this latest patch.\nThey seem 1000's away from the next nearest oid.\nI was curious about the reason for those particular numbers (8432, 8433)?\n\n====\n\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n", "msg_date": "Tue, 25 Aug 2020 09:19:02 +1000", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": false, "msg_subject": "Re: proposal - function string_to_table" }, { "msg_contents": "út 25. 8. 2020 v 1:19 odesílatel Peter Smith <smithpb2250@gmail.com> napsal:\n\n> Hi.\n>\n> I have re-checked the string_to_table_20200824.patch.\n>\n> ====\n>\n> On Tue, Aug 25, 2020 at 2:34 AM Pavel Stehule <pavel.stehule@gmail.com>\n> wrote:\n>\n> >> COMMENT (help text)\n> >>\n> >> + Splits the <parameter>string</parameter> at occurrences\n> >> + of <parameter>delimiter</parameter> and forms the remaining\n> data\n> >> + into a table with one <type>text</type> type column.\n> >> + If <parameter>delimiter</parameter> is <literal>NULL</literal>,\n> >> + each character in the <parameter>string</parameter> will\n> become a\n> >> + separate element in the array.\n> >>\n> >> Seems like here is a cut/paste error from the string_to_array help text.\n> >>\n> >> \"separate element in the array\" should say \"separate row of the table\"\n> >\n> >\n> > fixed\n> >\n>\n> No. You wrote \"separate row of table\". 
Should say \"separate row of the\n> table\".\n>\n\nshould be fixed now\n\n\n> ====\n>\n> QUESTION (pg_proc.dat)\n>\n> I noticed the oids of the functions are modified in this latest patch.\n> They seem 1000's away from the next nearest oid.\n> I was curious about the reason for those particular numbers (8432, 8433)?\n>\n\nWhen you run ./unused_oids script, then you get this message\n\n[pavel@nemesis catalog]$ ./unused_oids\n4 - 9\n560 - 583\n786 - 789\n811 - 816\n1136 - 1137\n2121\n2137\n2228\n3435\n3585\n4035\n4142\n4179 - 4180\n4198 - 4199\n4225 - 4301\n4388 - 4401\n4450 - 4451\n4532 - 4565\n4572 - 4999\n5097 - 5999\n6015 - 6099\n6105\n6107 - 6109\n6116\n6122 - 8431\n8434 - 8455\n8457 - 9999\nPatches should use a more-or-less consecutive range of OIDs.\nBest practice is to start with a random choice in the range 8000-9999.\nSuggested random unused OID: 8973 (1027 consecutive OID(s) available\nstarting here)\n\nFor me, this is simple protection against oid collision under development,\nand I expect so commiters does oid' space defragmentation.\n\nRegards\n\nPavel\n\n\n> ====\n>\n> Kind Regards,\n> Peter Smith.\n> Fujitsu Australia\n>", "msg_date": "Tue, 25 Aug 2020 08:57:27 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: proposal - function string_to_table" }, { "msg_contents": "On Tue, Aug 25, 2020 at 4:58 PM Pavel Stehule <pavel.stehule@gmail.com> wrote:\n> When you run ./unused_oids script, then you get this message\n>\n> [pavel@nemesis catalog]$ ./unused_oids\n<snip>\n> Patches should use a more-or-less consecutive range of OIDs.\n> Best practice is to start with a random choice in the range 8000-9999.\n> Suggested random unused OID: 8973 (1027 consecutive OID(s) available starting here)\n>\n> For me, this is simple protection against oid collision under development, and I expect so commiters does oid' space defragmentation.\n\nI have not used that tool before. 
Thanks for teaching me!\n\n===\n\nI have re-checked the string_to_table_20200825.patch.\n\nEverything looks good to me now, so I am marking this as \"ready for committer\".\n\n\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n", "msg_date": "Tue, 25 Aug 2020 19:18:26 +1000", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": false, "msg_subject": "Re: proposal - function string_to_table" }, { "msg_contents": "út 25. 8. 2020 v 11:19 odesílatel Peter Smith <smithpb2250@gmail.com>\nnapsal:\n\n> On Tue, Aug 25, 2020 at 4:58 PM Pavel Stehule <pavel.stehule@gmail.com>\n> wrote:\n> > When you run ./unused_oids script, then you get this message\n> >\n> > [pavel@nemesis catalog]$ ./unused_oids\n> <snip>\n> > Patches should use a more-or-less consecutive range of OIDs.\n> > Best practice is to start with a random choice in the range 8000-9999.\n> > Suggested random unused OID: 8973 (1027 consecutive OID(s) available\n> starting here)\n> >\n> > For me, this is simple protection against oid collision under\n> development, and I expect so commiters does oid' space defragmentation.\n>\n> I have not used that tool before. Thanks for teaching me!\n>\n\n:)\n\n\n> ===\n>\n> I have re-checked the string_to_table_20200825.patch.\n>\n> Everything looks good to me now, so I am marking this as \"ready for\n> committer\".\n>\n\nThank you very much :)\n\nRegard\n\nPavel\n\n>\n>\n> Kind Regards,\n> Peter Smith.\n> Fujitsu Australia\n>\n", "msg_date": "Tue, 25 Aug 2020 11:22:44 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: proposal - function string_to_table" }, { "msg_contents": "Pavel Stehule <pavel.stehule@gmail.com> writes:\n> [ string_to_table-20200825.patch ]\n\nI reviewed this, whacked it around a little, and pushed it.\n\nPossibly the most controversial thing I did was to move the existing\ndocumentation entry for string_to_array() into the string-functions\ntable. 
I did not like it one bit that the patch was documenting\nstring_to_table() far away from string_to_array(), and on reflection\nI concluded that you'd picked the right place and the issue here is\nthat string_to_array() was in the wrong place.\n\nAlso, I pared the proposed regression tests a great deal, ending up\nwith something that matches the existing tests for string_to_array().\nThe proposed tests seemed mighty duplicative, and they even contained\nsyntax errors, so I didn't believe that they were carefully considered.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 02 Sep 2020 18:30:12 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: proposal - function string_to_table" }, { "msg_contents": "On Thu, Sep 3, 2020 at 8:30 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> The proposed tests seemed mighty duplicative, and they even contained\n> syntax errors, so I didn't believe that they were carefully considered.\n\nCan you please share examples of what syntax errors were in those\nprevious tests?\n\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n", "msg_date": "Thu, 3 Sep 2020 11:52:57 +1000", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": false, "msg_subject": "Re: proposal - function string_to_table" }, { "msg_contents": "Peter Smith <smithpb2250@gmail.com> writes:\n> On Thu, Sep 3, 2020 at 8:30 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> The proposed tests seemed mighty duplicative, and they even contained\n>> syntax errors, so I didn't believe that they were carefully considered.\n\n> Can you please share examples of what syntax errors were in those\n> previous tests?\n\nAt about line 415 of string_to_table-20200825.patch:\n\n+select v, v is null as \"is null\" from string_to_table('1,2,3,4,,6', ',') g(v) g(v);\n+ERROR: syntax error at or near \"g\"\n+LINE 1: ...\"is null\" from string_to_table('1,2,3,4,,6', ',') g(v) g(v);\n+ ^\n\nWithout the duplicate \"g(v)\", this is identical to the 
preceding test\ncase.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 02 Sep 2020 21:59:50 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: proposal - function string_to_table" }, { "msg_contents": "čt 3. 9. 2020 v 0:30 odesílatel Tom Lane <tgl@sss.pgh.pa.us> napsal:\n\n> Pavel Stehule <pavel.stehule@gmail.com> writes:\n> > [ string_to_table-20200825.patch ]\n>\n> I reviewed this, whacked it around a little, and pushed it.\n>\n> Possibly the most controversial thing I did was to move the existing\n> documentation entry for string_to_array() into the string-functions\n> table. I did not like it one bit that the patch was documenting\n> string_to_table() far away from string_to_array(), and on reflection\n> I concluded that you'd picked the right place and the issue here is\n> that string_to_array() was in the wrong place.\n>\n> Also, I pared the proposed regression tests a great deal, ending up\n> with something that matches the existing tests for string_to_array().\n> The proposed tests seemed mighty duplicative, and they even contained\n> syntax errors, so I didn't believe that they were carefully considered.\n>\n\nThank you\n\nPavel\n\n\n> regards, tom lane\n>\n
", "msg_date": "Thu, 3 Sep 2020 05:01:25 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: proposal - function string_to_table" } ]
[ { "msg_contents": "Our documentation explains many details about commands, tools, \nparameters in detail and with high accuracy. Nevertheless my impression \nis that we neglect the 'big picture': why certain processes exist and \nwhat their relation to each other is, summary of strategies, \nvisualization of key situations, ... . People with mature knowledge \ndon't miss this information because they know all about it. But for \nbeginners such explanations would be a great help. In the time before \nGSoD 2019 we had similar discussions.\n\nI plan to extend over time the part 'Tutorial' by an additional chapter \nwith an overview about key design decisions and basic features. The \ntypical audience should consist of persons with limited pre-knowledge in \ndatabase systems and some interest in PostgreSQL. In the attachment you \nfind a patch for the first sub-chapter. Subsequent sub-chapters should \nbe: MVCC, transactions, VACUUM, backup, replication, ... - mostly with \nthe focus on the PostgreSQL implementation and not on generic topics \nlike b-trees.\n\nThere is a predecessor of this patch: \nhttps://www.postgresql.org/message-id/974e09b8-edf5-f38f-2fb5-a5875782ffc9%40purtz.de \n. In the meanwhile its glossary-part is separated and commited. The new \npatch contains two elements: textual descriptions and 4 figures. My \nopinion concerning figures is set out in detail in the previous patch.\n\nKind regards, Jürgen Purtz", "msg_date": "Fri, 17 Apr 2020 19:56:08 +0200", "msg_from": "=?UTF-8?Q?J=c3=bcrgen_Purtz?= <juergen@purtz.de>", "msg_from_op": true, "msg_subject": "Additional Chapter for Tutorial" }, { "msg_contents": ">\n> I plan to extend over time the part 'Tutorial' by an additional chapter\n> with an overview about key design decisions and basic features. The\n> typical audience should consist of persons with limited pre-knowledge in\n> database systems and some interest in PostgreSQL. In the attachment you\n> find a patch for the first sub-chapter. 
Subsequent sub-chapters should\n> be: MVCC, transactions, VACUUM, backup, replication, ... - mostly with\n> the focus on the PostgreSQL implementation and not on generic topics\n> like b-trees.\n>\n\n+1", "msg_date": "Fri, 17 Apr 2020 14:18:17 -0400", "msg_from": "Corey Huinker <corey.huinker@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Additional Chapter for Tutorial" }, { "msg_contents": "On 2020-04-17 19:56, Jürgen Purtz wrote:\n> Our documentation explains many details about commands, tools,\n> parameters in detail and with high accuracy. Nevertheless my\n> impression is that we neglect the 'big picture': why certain processes\n\n> [0001-architecture.patch]\n\nVery good stuff, and useful. I think.\n\nI mean that but nevertheless here is a lot of comment :)\n\n(I didn't fully compile as docs, just read the 'text' from the patch \nfile)\n\n\nCollabortion\nCollaboration\n\ndrop 'resulting'\n\n\nHe acts in close cooperation with the\nIt acts in close cooperation with the\n\nHe loads the configuration files, allocates the\nIt loads the configuration files, allocates the\n\nprocess</firstterm>. He checks the authorization, starts a\nprocess</firstterm>. it checks the authorization, starts a\n\nand instructs the client application to connect to him. All further\nand instructs the client application to connect to it. 
All further\n\nby him.\nby it.\n\nIn an first attempt\nIn a first attempt\n\nmuch huger than memory, it's likely that\nmuch larger than memory, it's likely that\n\nRAM is performed in units of complete pages while retaining\nRAM is performed in units of complete pages, retaining\n\nSooner or later it is necessary to overwrite old RAM\nSooner or later it becomes necessary to overwrite old RAM\n\ntransfered\ntransferred\n (multiple times)\n\nwho runs\nwhich runs\n\nHe writes\nit writes\n\nThis is the primarily duty of the\nThis is primarily the duty of the\n or possibly:\nThis is the primary duty of the\n\nhe starts periodically\nit starts periodically\n\nspeeds up a possibly occurring recovery.\ncan speed up recovery.\n\nwriten\nwritten\n\ncollects counter about accesses\ncollects counters about accesses\n\nand others. He stores the obtained information in system\nand more. It stores the obtained information in system\n\nsudirectories consists\nsubdirectories consist <-- plural, no -s\n\nthere are information\nthere is information\n\nand contains the ID of the\nand contains the ID (pid) of the\n\n( IMHO, it is conventional (and therefore easier to read) to have 'e.g.' \nfollowed by a comma, and not by a semi-colon, although obviously that's \nnot really wrong either. )\n\n\nThanks,\n\nErik Rijkers\n\n\n\n\n", "msg_date": "Fri, 17 Apr 2020 20:40:32 +0200", "msg_from": "Erik Rijkers <er@xs4all.nl>", "msg_from_op": false, "msg_subject": "Re: Additional Chapter for Tutorial" }, { "msg_contents": "On 17.04.20 20:40, Erik Rijkers wrote:\n> Very good stuff, and useful. I think.\n>\n> I mean that but nevertheless here is a lot of comment :)\n>\n> (I didn't fully compile as docs, just read the 'text' from the patch \n> file)\n\nThanks. 
Added nearly all of the suggestions.\n\n\n--\n\nJürgen Purtz", "msg_date": "Mon, 20 Apr 2020 10:30:20 +0200", "msg_from": "=?UTF-8?Q?J=c3=bcrgen_Purtz?= <juergen@purtz.de>", "msg_from_op": true, "msg_subject": "Re: Additional Chapter for Tutorial" }, { "msg_contents": "On 20.04.20 10:30, Jürgen Purtz wrote:\n> On 17.04.20 20:40, Erik Rijkers wrote:\n>> Very good stuff, and useful. I think.\n>>\n>> I mean that but nevertheless here is a lot of comment :)\n>>\n>> (I didn't fully compile as docs, just read the 'text' from the patch \n>> file)\n>\n> Thanks. Added nearly all of the suggestions.\n>\n>\nWhat is new? Added two sub-chapters 'mvcc' and 'vacuum' plus graphics. \nMade some modifications in previous sub-chapters and in existing titles. \nAdded some glossary entries.\n\n--\n\nJürgen Purtz", "msg_date": "Wed, 29 Apr 2020 16:13:46 +0200", "msg_from": "=?UTF-8?Q?J=c3=bcrgen_Purtz?= <juergen@purtz.de>", "msg_from_op": true, "msg_subject": "Re: Additional Chapter for Tutorial" }, { "msg_contents": "On 2020-04-29 16:13, Jürgen Purtz wrote:\n> On 20.04.20 10:30, Jürgen Purtz wrote:\n>> On 17.04.20 20:40, Erik Rijkers wrote:\n>>> Very good stuff, and useful. I think.\n>>> \n>>> I mean that but nevertheless here is a lot of comment :)\n>>> \n>>> (I didn't fully compile as docs, just read the 'text' from the patch \n>>> file)\n>> \n>> Thanks. Added nearly all of the suggestions.\n>> \n>> \n> What is new? Added two sub-chapters 'mvcc' and 'vacuum' plus graphics.\n> Made some modifications in previous sub-chapters and in existing\n> titles. Added some glossary entries.\n\n> [0003-architecture.patch]\n\nHi Jürgen,\n\n\nHere are again some suggested changes, up to line 600 (of the patch - \nthat is around start of the new NVCC paragraph)\n\nI may have repeated some thing you have already rejected (it was too \nmuch work to go back and check). I am not a native speaker of english.\n\nOne general remark: in my humble opinion, you write too many capitalized \nwords. 
It's not really a problem but overall it's becomes bit too much. \n But I have not marked these. perhaps some future iteration.\n\nI'll probably read through the latter part of the patch later (probably \ntomorrow).\n\nThanks,\n\nErik Rijkers\n\n\n\nthey merely send requests to the server side and receives\nthey merely send requests to the server side and receive\n\nis a group of tightly coupled other server side processes plus a\nis a group of tightly coupled other server-side processes plus a\n\nClient requests (SELECT, UPDATE, ...) usually leads to the\nClient requests (SELECT, UPDATE, ...) usually lead to the\n\nBecause files are much larger than memory, it's likely that\nBecause files are often larger than memory, it's likely that\n\nRAM is performed in units of complete pages, retaining their size and \nlayout.\nRAM is performed in units of complete pages.\n\nReading file pages is notedly slower than reading\nReading file pages is slower than reading\n\nof the <firstterm>Backend processes</firstterm> has done the job those \npages are available for all other\nof the <firstterm>Backend processes</firstterm> has read pages into \nmemory those pages are available for all other\n\nthey must be transferred back to disk. This is a two-step process.\nthey must be written back to disk. This is a two-step process.\n\nBecause of the sequential nature of this writing, it is much\nBecause of this writing is sequential, it is much\n\nin an independent process. Nevertheless all\nin an independent process. Nevertheless, all\n\nhuge I/O activities can block other processes significantly,\nI/O activities can block other processes,\n\nit starts periodically and acts only for a short period.\nit starts periodically and is active only for a short period.\n\nduty. As its name suggests, he has to create\nduty. 
As its name suggests, it has to create\n\nIn consequence, after a <firstterm>Checkpoint</firstterm>\nAfter a <firstterm>Checkpoint</firstterm>,\n\nIn correlation with data changes,\nAs a result of data changes,\n\ntext lines about serious and non-serious events which can happen\ntext lines about serious and less serious events which can happen\n\ndatabase contains many <glossterm \nlinkend=\"glossary-schema\">schema</glossterm>,\ndatabase contains many <glossterm \nlinkend=\"glossary-schema\">schemas</glossterm>,\n\nbelongs to a certain <firstterm>schema</firstterm>, they cannot\nbelongs to a single <firstterm>schema</firstterm>, they cannot\n\nA <firstterm>Cluster</firstterm> is the outer frame for a\nA <firstterm>Cluster</firstterm> is the outer container for a\n\n<literal>postgres</literal> as a copy of\n<literal>postgres</literal> is generated as a copy of\n\nrole of <literal>template0</literal> as the origin\nrole of <literal>template0</literal> as the pristine origin\n\nare different objects and absolutely independent from each\nare different objects and independent from each\n\ncomplete <firstterm>cluster</firstterm>, independent from\n<firstterm>cluster</firstterm>, independent from\n\nanywhere in the file system. In many cases, the environment\nsomewhere in the file system. 
In many cases, the environment\n\nsome files, all of which are necessary to store long lasting\nsome files, all of which are necessary to store long-lasting\n\n<firstterm>tablespaces</firstterm> itself.\n<firstterm>tablespaces</firstterm> themselves.\n\n<firstterm>Postgres</firstterm> (respectively \n<firstterm>Postmaster</firstterm>) process.\n<firstterm>Postgres</firstterm> process (also known as \n<firstterm>Postmaster</firstterm>).\n\n<title>MVCC</title>\n<title>MVCC - Multiversion Concurrency Control</title>\n\nThe dabase must take a sensible decision to prevent the application\nThe database must take a sensible decision to prevent the application\n\n# this sentence I just don't understand - can you please elucidate?\nThe database must take a sensible decision to prevent the application\nfrom promising delivery of the single article to both clients.\n\n\n\n\n\n\n\n\n\n\n\n\n\n", "msg_date": "Wed, 29 Apr 2020 17:35:22 +0200", "msg_from": "Erik Rijkers <er@xs4all.nl>", "msg_from_op": false, "msg_subject": "Re: Additional Chapter for Tutorial - (review first half of 0003)" }, { "msg_contents": "On 2020-04-29 16:13, Jürgen Purtz wrote:\n> On 20.04.20 10:30, Jürgen Purtz wrote:\n>> On 17.04.20 20:40, Erik Rijkers wrote:\n>>> Very good stuff, and useful. I think.\n>>>\n>>> I mean that but nevertheless here is a lot of comment :)\n>>>\n>>> (I didn't fully compile as docs, just read the 'text' from the patch\n>>> file)\n>>\n>> Thanks. Added nearly all of the suggestions.\n>>\n>>\n> What is new? Added two sub-chapters 'mvcc' and 'vacuum' plus graphics.\n> Made some modifications in previous sub-chapters and in existing titles.\n> Added some glossary entries.\n\nI don't see this really as belonging into the tutorial. The tutorial \nshould be hands-on, how do you get started, how do you get some results.\n\nYour material is more of an overview of the whole system. 
What's a new \nuser supposed to do with that?\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Wed, 29 Apr 2020 21:12:25 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Additional Chapter for Tutorial" }, { "msg_contents": "On 29.04.20 21:12, Peter Eisentraut wrote:\n>\n> I don't see this really as belonging into the tutorial.  The tutorial \n> should be hands-on, how do you get started, how do you get some results.\n>\nYes, the tutorial should be a short overview and give instructions how \nto start. IMO the first 4 sub-chapters fulfill this expectation. Indeed, \nthe fifth (VACUUM) is extensive and offers many details.\n\nDuring the inspection of the existing documentation I recognized that \nthere are many details about VACUUM, AUTOVACUUM, all of their parameters \nas well as their behavior. But the information is spread across many \npages: Automatic Vacuuming, Client Connection Defaults, Routine \nVacuuming, Resource Consumption, VACUUM. Even for a person with some \npre-knowledge it is hard to get an overview how this fits together and \nwhy things are solved in exactly this way. In the end we have very good \ndescriptions of all details but I miss the 'big picture'. Therefore I \nsummarized central aspects and tried to give an answer to the question \n'why is it done in this way?'. I do not dispute that the current version \nof the page is not adequate for beginners. But at some place we should \nhave such a summary about vacuuming and freezing.\n\nHow to proceed?\n\n- Remove the page and add a short paragraph to the MVCC page instead.\n\n- Cut down the page to a tiny portion.\n\n- Divide it into two parts: a) a short introduction and b) the rest \nafter a statement like 'The following offers more details and parameters \nthat are more interesting for an experienced user than for a beginner. 
\nYou can easily skip it.'\n\n\n> Your material is more of an overview of the whole system.  What's a \n> new user supposed to do with that?\n\nWhen I dive into a new subject, I'm more interested in its architecture \nthan in its details. We shall offer an overview about the major PG \ncomponents and strategies to beginners.\n\n\n--\n\nJürgen Purtz\n\n\n\n\n", "msg_date": "Thu, 30 Apr 2020 14:31:10 +0200", "msg_from": "=?UTF-8?Q?J=c3=bcrgen_Purtz?= <juergen@purtz.de>", "msg_from_op": true, "msg_subject": "Re: Additional Chapter for Tutorial" }, { "msg_contents": "On 30.04.20 14:31, Jürgen Purtz wrote:\n> On 29.04.20 21:12, Peter Eisentraut wrote:\n>>\n>> I don't see this really as belonging into the tutorial.  The tutorial \n>> should be hands-on, how do you get started, how do you get some results.\n>>\n> Yes, the tutorial should be a short overview and give instructions how \n> to start. IMO the first 4 sub-chapters fulfill this expectation. \n> Indeed, the fifth (VACUUM) is extensive and offers many details.\n>\n> During the inspection of the existing documentation I recognized that \n> there are many details about VACUUM, AUTOVACUUM, all of their \n> parameters as well as their behavior. But the information is spread \n> across many pages: Automatic Vacuuming, Client Connection Defaults, \n> Routine Vacuuming, Resource Consumption, VACUUM. Even for a person \n> with some pre-knowledge it is hard to get an overview how this fits \n> together and why things are solved in exactly this way. In the end we \n> have very good descriptions of all details but I miss the 'big \n> picture'. Therefore I summarized central aspects and tried to give an \n> answer to the question 'why is it done in this way?'. I do not dispute \n> that the current version of the page is not adequate for beginners. 
\n> But at some place we should have such a summary about vacuuming and \n> freezing.\n>\n> How to proceed?\n>\n> - Remove the page and add a short paragraph to the MVCC page instead.\n>\n> - Cut down the page to a tiny portion.\n>\n> - Divide it into two parts: a) a short introduction and b) the rest \n> after a statement like 'The following offers more details and \n> parameters that are more interesting for an experienced user than for \n> a beginner. You can easily skip it.'\n>\n>\n>> Your material is more of an overview of the whole system.  What's a \n>> new user supposed to do with that?\n>\n> When I dive into a new subject, I'm more interested in its \n> architecture than in its details. We shall offer an overview about the \n> major PG components and strategies to beginners.\n>\n>\nIn comparison with to previous patch this one contains:\n\n- Position and title changed to reflect its intention and importance.\n\n- A <note> delimits VACUUM basics from details. This is done because I \ncannot find another suitable place for such a summarizing description.\n\n- Three additional sub-chapters.\n\n--\n\nJürgen Purtz", "msg_date": "Tue, 2 Jun 2020 17:01:31 +0200", "msg_from": "=?UTF-8?Q?J=c3=bcrgen_Purtz?= <juergen@purtz.de>", "msg_from_op": true, "msg_subject": "Re: Additional Chapter for Tutorial" }, { "msg_contents": "> On 2 Jun 2020, at 17:01, Jürgen Purtz <juergen@purtz.de> wrote:\n\n> In comparison with to previous patch this one contains:\n> \n> - Position and title changed to reflect its intention and importance.\n> \n> - A <note> delimits VACUUM basics from details. 
This is done because I cannot find another suitable place for such a summarizing description.\n> \n> - Three additional sub-chapters.\n\nThis patch no longer applies, due to conflicts in start.sgml, can you please\nsubmit a rebased version?\n\ncheers ./daniel\n\n", "msg_date": "Sun, 12 Jul 2020 22:45:28 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: Additional Chapter for Tutorial" }, { "msg_contents": "\nOn 12.07.20 22:45, Daniel Gustafsson wrote:\n> This patch no longer applies, due to conflicts in start.sgml, can you please\n> submit a rebased version?\n\nok. but I need some days.  juergen\n\n\n\n\n", "msg_date": "Mon, 13 Jul 2020 08:15:20 +0200", "msg_from": "=?UTF-8?Q?J=c3=bcrgen_Purtz?= <juergen@purtz.de>", "msg_from_op": true, "msg_subject": "Re: Additional Chapter for Tutorial" }, { "msg_contents": "Which version is this application for?\n\nI tried for v12 and v13 Beta, both failed.\n\nRegards,\nNaresh G\n\nOn Mon, Jul 13, 2020 at 11:45 AM Jürgen Purtz <juergen@purtz.de> wrote:\n\n>\n> On 12.07.20 22:45, Daniel Gustafsson wrote:\n> > This patch no longer applies, due to conflicts in start.sgml, can you\n> please\n> > submit a rebased version?\n>\n> ok. but I need some days. juergen\n>\n>\n>\n>\n>
", "msg_date": "Mon, 13 Jul 2020 17:50:51 +0530", "msg_from": "Naresh gandi <naresh5310@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Additional Chapter for Tutorial" }, { "msg_contents": "> On 13 Jul 2020, at 14:20, Naresh gandi <naresh5310@gmail.com> wrote:\n\n(please avoid top-posting)\n\n> Which version is this application for?\n> \n> I tried for v12 and v13 Beta, both failed.\n\nUnless being a bugfix, all patches are only considered against the main\ndevelopment branch in Git. As this is new material, it would be for v14.\n\ncheers ./daniel\n\n\n", "msg_date": "Mon, 13 Jul 2020 14:24:43 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: Additional Chapter for Tutorial" }, { "msg_contents": "On 12.07.20 22:45, Daniel Gustafsson wrote:\n>\n> This patch no longer applies, due to conflicts in start.sgml, can you please\n> submit a rebased version?\n>\n> cheers ./daniel\n>\nNew version attached.\n\n--\n\nJürgen Purtz", "msg_date": "Fri, 17 Jul 2020 11:32:50 +0200", "msg_from": "=?UTF-8?Q?J=c3=bcrgen_Purtz?= <juergen@purtz.de>", "msg_from_op": true, "msg_subject": "Re: Additional Chapter for Tutorial" }, { "msg_contents": "On 2020-07-17 11:32, Jürgen Purtz wrote:\n> On 12.07.20 22:45, Daniel Gustafsson wrote:\n>> \n>> This patch no longer applies, due to conflicts in start.sgml, can you \n>> please\n>> submit a rebased version?\n>> \n>> cheers ./daniel\n>> \n> New version attached.\n> \n> [0005-architecture.patch]\n\nHi,\n\nI went through the architecture.sgml file once, and accumulated the \nattached edits.\n\nThere are still far too many Unneeded Capitals On Words for my taste but \nI have not changed many of those. We could use some more opinions on \nthat, I suppose.  
(if it becomes too silent maybe include the \npgsql-hackers again?)\n\nThanks,\n\n\nErik Rijkers\n\n\n\n\n\n\n\n> --\n> \n> Jürgen Purtz", "msg_date": "Sat, 18 Jul 2020 19:17:50 +0200", "msg_from": "Erik Rijkers <er@xs4all.nl>", "msg_from_op": false, "msg_subject": "Re: Additional Chapter for Tutorial" }, { "msg_contents": "On 18.07.20 19:17, Erik Rijkers wrote:\n>\n> Hi,\n>\n> I went through the architecture.sgml file once, and accumulated the \n> attached edits.\n>\n> There are still far too many Unneeded Capitals On Words for my taste \n> but I have not changed many of those. We could use some more opinions \n> on that, I suppose. (if it becomes too silent maybe include the \n> pgsql-hackers again?)\n>\n> Thanks,\n>\n>\n> Erik Rijkers\n\nThe attached patch contains:\n\n- integration of Erik's suggestions\n\n- coordination of terms in text, graphic and glossary\n\n- some changes in upper-case usage\n\n- fewer usage of <firstterm> with two exceptions: The first chapter 4.1 \nemphasize all important terms to help beginners in their learning \nprocess; chapter 4.5. emphasize the term 'autovacuum' to straighten the \nfact that - despite its similarities - the tool autovacuum is something \nelse than the SQL command vacuum.\n\n\n--\n\nJürgen Purtz", "msg_date": "Tue, 21 Jul 2020 13:51:07 +0200", "msg_from": "=?UTF-8?Q?J=c3=bcrgen_Purtz?= <juergen@purtz.de>", "msg_from_op": true, "msg_subject": "Re: Additional Chapter for Tutorial" }, { "msg_contents": "Hi all,\n\n\nI want to import XML file into PG database table.\n\nI've find functions to get the XML content of a cell after imported an XML file with the pg_get_file function.\n\nBut, I want to explode the XML content to colums. 
How can I do this ?\n\n\nPG 10 under Ubuntu 18\n\n_________________________________\n\nCordialement, Pascal CROZET\n\nDBA - Qualis Consulting\n\n• 300 Route Nationale 6 – 69760 LIMONEST\n\n_________________________________", "msg_date": "Tue, 21 Jul 2020 23:49:08 +0000", "msg_from": "PASCAL CROZET <pascal.crozet@qualis-consulting.com>", "msg_from_op": false, "msg_subject": "RE: Additional Chapter for Tutorial" }, { "msg_contents": "Again, I don't see how this belongs into the tutorial. It is mostly \nadvanced low-level information that is irrelevant for someone starting \nup, it is not hands-on, so quite unlike the rest of the tutorial, and \nfor the most part the information just duplicates what is already \nexplained elsewhere.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Tue, 1 Sep 2020 23:30:11 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Additional Chapter for Tutorial" }, { "msg_contents": "On 01.09.20 23:30, Peter Eisentraut wrote:\n> It is mostly advanced low-level information that is irrelevant for \n> someone starting up,\n\nThat applies only to the VACUUM chapter. VACUUM and AUTOVACUUM are \ncontrolled by a lot of parameters. Therefor the current documentation \nconcerning the two mechanism spreads the description across different \npages (20.4, 25.1, VACUUM command). 
Because of the structure of our \ndocumentation that's ok. But we should have a summary page somewhere - \nnot necessarily in the tutorial.\n\n> the most part the information just duplicates what is already \n> explained elsewhere.\n\nThat is the nature of a tutorial respectively a summary.\n\n--\n\nJ. Purtz\n\n\n\n\n", "msg_date": "Wed, 2 Sep 2020 09:04:38 +0200", "msg_from": "=?UTF-8?Q?J=c3=bcrgen_Purtz?= <juergen@purtz.de>", "msg_from_op": true, "msg_subject": "Re: Additional Chapter for Tutorial" }, { "msg_contents": "On 2020-09-02 09:04, Jürgen Purtz wrote:\n> On 01.09.20 23:30, Peter Eisentraut wrote:\n>> It is mostly advanced low-level information that is irrelevant for\n>> someone starting up,\n> That applies only to the VACUUM chapter. VACUUM and AUTOVACUUM are\n> controlled by a lot of parameters. Therefor the current documentation\n> concerning the two mechanism spreads the description across different\n> pages (20.4, 25.1, VACUUM command). Because of the structure of our\n> documentation that's ok. But we should have a summary page somewhere -\n> not necessarily in the tutorial.\n\nThere is probably room for improvement, but the section numbers you \nmention are not about VACUUM, AFAICT, so I can't really comment on what \nyou have in mind.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Thu, 10 Sep 2020 18:26:59 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Additional Chapter for Tutorial" }, { "msg_contents": "On 10.09.20 18:26, Peter Eisentraut wrote:\n> On 2020-09-02 09:04, Jürgen Purtz wrote:\n>> On 01.09.20 23:30, Peter Eisentraut wrote:\n>>> It is mostly advanced low-level information that is irrelevant for\n>>> someone starting up,\n>> That applies only to the VACUUM chapter. VACUUM and AUTOVACUUM are\n>> controlled by a lot of parameters. 
Therefor the current documentation\n>> concerning the two mechanism spreads the description across different\n>> pages (20.4, 25.1, VACUUM command). Because of the structure of our\n>> documentation that's ok. But we should have a summary page somewhere -\n>> not necessarily in the tutorial.\n>\n> There is probably room for improvement, but the section numbers you \n> mention are not about VACUUM, AFAICT, so I can't really comment on \n> what you have in mind.\n>\nBecause of the additional chapter for the 'tutorial' on my local \ncomputer, the numbers increased for me. The regular chapter numbers are \n19.4 and 24.1. Sorry for the confusion. In detail:\n\n19.4: parameters to configure the server, especially five parameters \n'vacuum_cost_xxx'.\n\n19.10: parameters to configure autovacuum.\n\n19.11: parameters to configure client connections, especially five \nparameters 'vacuum_xxx' concerning their freeze-behavior.\n\n24.1: explains the general necessity of (auto)vacuum and their strategies.\n\nThe page about the SQL command VACUUM explains the different options \n(FULL, FREEZE, ..) and their meaning.\n\n--\n\nJürgen Purtz\n\n\n\n\n", "msg_date": "Fri, 11 Sep 2020 09:49:23 +0200", "msg_from": "=?UTF-8?Q?J=c3=bcrgen_Purtz?= <juergen@purtz.de>", "msg_from_op": true, "msg_subject": "Re: Additional Chapter for Tutorial" }, { "msg_contents": "On Wed, Sep 2, 2020 at 12:04 AM Jürgen Purtz <juergen@purtz.de> wrote:\n\n> On 01.09.20 23:30, Peter Eisentraut wrote:\n> > It is mostly advanced low-level information that is irrelevant for\n> > someone starting up,\n>\n> That applies only to the VACUUM chapter. VACUUM and AUTOVACUUM are\n> controlled by a lot of parameters. Therefor the current documentation\n> concerning the two mechanism spreads the description across different\n> pages (20.4, 25.1, VACUUM command). Because of the structure of our\n> documentation that's ok. 
But we should have a summary page somewhere -\n> not necessarily in the tutorial.\n>\n> > the most part the information just duplicates what is already\n> > explained elsewhere.\n>\n> That is the nature of a tutorial respectively a summary.\n>\n>\nI've begun looking at this and have included quite a few html comments\nwithin the patch. However, the two main items that I have found so far are:\n\nOne, I agree with Peter that this seems misplaced in Tutorial. I would\ncreate a new Internals Chapter and place this material there, or maybe\nconsider a sub-chapter under \"Overview of PostgreSQL Internals\". If this\nis deemed to be of a more primary importance than the content in the\nInternals section I would recommend placing it in Reference. I feel it\ndoes fit there and given the general importance of that section readers\nwill be inclined to click into it and skim over its content.\n\nTwo, I find the amount of detail being provided here to be on the too-much\nside. A bit more judicious use of links into the appropriate detail\nchapters seems warranted.\n\nI took a pretty heavy hand to the original section though aside from the\nscope comment it can probably be considered a bit weighted toward style\npreferences. Though I did note/rewrite a couple of things that seemed\nfactually incorrect - and seemingly not done intentionally in the interest\nof simplification. 
Specifically the client connection process and, I\nthink, the relationship between the checkpointer and background writer.\n\nI do like the idea and the general flow of the material so far - though I\nhaven't really looked at the overall structure yet, just started reading\nand editing from the top of the new file.\n\nI've attached the original 0007 patch and my diff against it applied to\nHEAD.\n\nTook a quick peek at the image (at the end) and while I will need a second\npass over this section regardless I figured I'd provide this subset of\nfeedback now in order to move things along a bit.\n\nDavid J.", "msg_date": "Wed, 21 Oct 2020 13:33:54 -0700", "msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Additional Chapter for Tutorial" }, { "msg_contents": "On 21.10.20 22:33, David G. Johnston wrote:\n> I've begun looking at this and have included quite a few html comments \n> within the patch.  However, the two main items that I have found so \n> far are:\n>\n> One, I agree with Peter that this seems misplaced in Tutorial.  I \n> would create a new Internals Chapter and place this material there, or \n> maybe consider a sub-chapter under \"Overview of PostgreSQL \n> Internals\".  If this is deemed to be of a more primary importance than \n> the content in the Internals section I would recommend placing it in \n> Reference.  I feel it does fit there and given the general importance \n> of that section readers will be inclined to click into it and skim \n> over its content.\n\nI like the idea of dividing the material into two different chapters. \nThe existing part \"I. Tutorial\" contains the first concrete steps: \ninstallation, creating database and database objects, using SQL basic \nand advanced features. Its typical audience consists of persons doing \ntheir first steps with PG. The new material is aimed at persons \ninterested in implementation aspects of PG. Therefore, the part \"VII. 
\nInternals\" seems to be the natural place to integrate it, something like \n\"Architecture and Implementation Aspects\" or \"Architecture and \nImplementation Cornerstones\".\n\nCreating such a chapter in \"VII. Internals\" will increase the existing \nchapter numbers 50 - 71, which may lead to some confusion. On the other \nhand the content can possibly be applied to all supported PG versions at \nthe same time, which will lead to a consistent behavior. Extending one \nof the existing chapters won't work because all of them handle their own \ntopic, eg.: \"50. Overview of PostgreSQL Internals\" (misleading title?) \nfocuses on the handling of SQL statements from parsing to execution.\n\nWhat are your thoughts?\n\n--\n\nJ. Purtz\n\n\n\n\n\n\n\nOn 21.10.20 22:33, David G. Johnston\n wrote:\n\n\nI've begun\n looking at this and have included quite a few html comments\n within the patch.  However, the two main items that I have found\n so far are:\n\n\nOne, I agree with\n Peter that this seems misplaced in Tutorial.  I would create a\n new Internals Chapter and place this material there, or maybe\n consider a sub-chapter under \"Overview of PostgreSQL\n Internals\".  If this is deemed to be of a more primary\n importance than the content in the Internals section I would\n recommend placing it in Reference.  I feel it does fit there and\n given the general importance of that section readers will be\n inclined to click into it and skim over its content.\n\nI like the idea of dividing the material into two different\n chapters. The existing part \"I. Tutorial\" contains the first\n concrete steps: installation, creating database and database\n objects, using SQL basic and advanced features. Its typical\n audience consists of persons doing their first steps with PG. The\n new material is aimed at persons interested in implementation\n aspects of PG. Therefore, the part \"VII. 
Internals\" seems to be\n the natural place to integrate it, something like \"Architecture\n and Implementation Aspects\" or \"Architecture and Implementation\n Cornerstones\".\nCreating such a chapter in \"VII. Internals\" will increase the\n existing chapter numbers 50 - 71, which may lead to some\n confusion. On the other hand the content can possibly be applied\n to all supported PG versions at the same time, which will lead to\n a consistent behavior. Extending one of the existing chapters\n won't work because all of them handle their own topic, eg.: \"50.\n Overview of PostgreSQL Internals\" (misleading title?) focuses on\n the handling of SQL statements from parsing to execution. \n\n What are your thoughts?\n--\nJ. Purtz", "msg_date": "Fri, 23 Oct 2020 15:58:46 +0200", "msg_from": "=?UTF-8?Q?J=c3=bcrgen_Purtz?= <juergen@purtz.de>", "msg_from_op": true, "msg_subject": "Re: Additional Chapter for Tutorial" }, { "msg_contents": "On Fri, Oct 23, 2020 at 6:58 AM Jürgen Purtz <juergen@purtz.de> wrote:\n\n> Creating such a chapter in \"VII. Internals\" will increase the existing\n> chapter numbers 50 - 71, which may lead to some confusion. On the other\n> hand the content can possibly be applied to all supported PG versions at\n> the same time, which will lead to a consistent behavior. Extending one of\n> the existing chapters won't work because all of them handle their own\n> topic, eg.: \"50. Overview of PostgreSQL Internals\" (misleading title?)\n> focuses on the handling of SQL statements from parsing to execution.\n>\n> What are your thoughts?\n>\nv14 has already added a new chapter, installation from binaries. It was\nnot back-patched. To my knowledge no one brought up these points - numbers\nchanging or back-patching the new material. I don't see that this\nenhancement needs to be treated any differently.\n\nDavid J.\n\nOn Fri, Oct 23, 2020 at 6:58 AM Jürgen Purtz <juergen@purtz.de> wrote:\nCreating such a chapter in \"VII. 
Internals\" will increase the\n existing chapter numbers 50 - 71, which may lead to some\n confusion. On the other hand the content can possibly be applied\n to all supported PG versions at the same time, which will lead to\n a consistent behavior. Extending one of the existing chapters\n won't work because all of them handle their own topic, eg.: \"50.\n Overview of PostgreSQL Internals\" (misleading title?) focuses on\n the handling of SQL statements from parsing to execution. \n\n What are your thoughts?v14 has already added a new chapter, installation from binaries.  It was not back-patched.  To my knowledge no one brought up these points - numbers changing or back-patching the new material.  I don't see that this enhancement needs to be treated any differently.David J.", "msg_date": "Fri, 23 Oct 2020 09:15:14 -0700", "msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Additional Chapter for Tutorial" }, { "msg_contents": "On 21.10.20 22:33, David G. Johnston wrote:\n> I've begun looking at this and have included quite a few html comments \n> within the patch.  However, the two main items that I have found so \n> far are:\n>\n> One, I agree with Peter that this seems misplaced in Tutorial.  I \n> would create a new Internals Chapter and place this material there, or \n> maybe consider a sub-chapter under \"Overview of PostgreSQL \n> Internals\".  If this is deemed to be of a more primary importance than \n> the content in the Internals section I would recommend placing it in \n> Reference.  I feel it does fit there and given the general importance \n> of that section readers will be inclined to click into it and skim \n> over its content.\n>\n> Two, I find the amount of detail being provided here to be on the \n> too-much side.  
A bit more judicious use of links into the appropriate \n> detail chapters seems warranted.\n>\n> I took a pretty heavy hand to the original section though aside from \n> the scope comment it can probably be considered a bit weighted toward \n> style preferences.  Though I did note/rewrite a couple of things that \n> seemed factually incorrect - and seemingly not done intentionally in \n> the interest of simplification.  Specifically the client connection \n> process and, I think, the relationship between the checkpointer and \n> background writer.\n>\n> I do like the idea and the general flow of the material so far - \n> though I haven't really looked at the overall structure yet, just \n> started reading and editing from the top of the new file.\n>\n> I've attached the original 0007 patch and my diff against it applied \n> to HEAD.\n>\n> Took a quick peek at the image (at the end) and while I will need a \n> second pass over this section regardless I figured I'd provide this \n> subset of feedback now in order to move things along a bit.\n>\n> David J.\n\nThe attached patch is an intermediate, mostly formal step. It includes:\n\n- Moving the chapter to \"Part VII. Internals\".\n\n- Changing the title of the current chapter \"Chapter 50. Overview of \nPostgreSQL Internals\" to \"Overview of Query Handling\" because the old \ntitle is too generic. This chapter is focused on the handling of queries.\n\n- Integration of David's smaller suggestions. For the more important \nsuggestions I need some days.\n\nThe patch is intended to give every interested person an overall \nimpression of the chapter within its new position. Because it has moved \nfrom part 'Tutorial' to 'Internals' the text should be very accurate \nconcerning technical issues - like all the other chapters in this part. \nA tutorial chapter has a more superficial nature.\n\n--\n\nJ. 
Purtz", "msg_date": "Mon, 26 Oct 2020 14:33:35 +0100", "msg_from": "=?UTF-8?Q?J=c3=bcrgen_Purtz?= <juergen@purtz.de>", "msg_from_op": true, "msg_subject": "Re: Additional Chapter for Tutorial" }, { "msg_contents": "Removing -docs as moderation won’t let me cross-post.\n\nOn Monday, October 26, 2020, David G. Johnston <david.g.johnston@gmail.com>\nwrote:\n\n> On Monday, October 26, 2020, Jürgen Purtz <juergen@purtz.de> wrote:\n>\n>> On 21.10.20 22:33, David G. Johnston wrote:\n>>\n>>\n>> Two, I find the amount of detail being provided here to be on the\n>> too-much side. A bit more judicious use of links into the appropriate\n>> detail chapters seems warranted.\n>>\n>> The patch is intended to give every interested person an overall\n>> impression of the chapter within its new position. Because it has moved\n>> from part 'Tutorial' to 'Internals' the text should be very accurate\n>> concerning technical issues - like all the other chapters in this part. A\n>> tutorial chapter has a more superficial nature.\n>>\n> Haven’t reviewed the patches yet but...\n>\n> I still think that my comment applies even with the move to internals.\n> The value here is putting together a coherent narrative and making deeper\n> implementation details accessible. If those details are already covered\n> elsewhere in the documentation (not source code) links should be given\n> serious consideration.\n>\n> David J.\n>\n>\n", "msg_date": "Mon, 26 Oct 2020 07:53:43 -0700", "msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Additional Chapter for Tutorial" }, { "msg_contents": "On 26.10.20 15:53, David G. Johnston wrote:\n> Removing -docs as moderation won’t let me cross-post.\n>\n> On Monday, October 26, 2020, David G. Johnston \n> <david.g.johnston@gmail.com <mailto:david.g.johnston@gmail.com>> wrote:\n>\n>     On Monday, October 26, 2020, Jürgen Purtz <juergen@purtz.de\n>     <mailto:juergen@purtz.de>> wrote:\n>\n>         On 21.10.20 22:33, David G. Johnston wrote:\n>\n>         Two, I find the amount of detail being provided here to be on\n>         the too-much side.  A bit more judicious use of links into the\n>         appropriate detail chapters seems warranted.\n>\n>     The patch is intended to give every interested person an overall\n>     impression of the chapter within its new position. Because it has\n>     moved from part 'Tutorial' to 'Internals' the text should be very\n>     accurate concerning technical issues - like all the other chapters\n>     in this part. A tutorial chapter has a more superficial nature.\n>\n> Haven’t reviewed the patches yet but...\n>\n> I still think that my comment applies even with the move to\n> internals.  The value here is putting together a coherent narrative\n> and making deeper implementation details accessible.  
If\n> those details are already covered elsewhere in the documentation\n> (not source code) links should be given\n> serious consideration.\n>\n> David J.\n>\nPlease find the new patch in the attachment after integrating David's \nsuggestions: a) versus the last patch and b) versus master.\n\nNotably it contains\n\n * nearly all of his suggestions (see sgml file for comments 'DGJ')\n * reduction of <firstterm>. This was a hangover from the\n pre-glossary-times. I tried to emphasize standard terms. This is no\n longer necessary because nowadays they are clearly defined in the\n glossary.\n\n--\n\nJ. Purtz", "msg_date": "Fri, 30 Oct 2020 11:57:04 +0100", "msg_from": "=?UTF-8?Q?J=c3=bcrgen_Purtz?= <juergen@purtz.de>", "msg_from_op": true, "msg_subject": "Re: Additional Chapter for Tutorial" }, { "msg_contents": "On 2020-10-30 11:57, Jürgen Purtz wrote:\n> On 26.10.20 15:53, David G. Johnston wrote:\n>> Removing -docs as moderation won’t let me cross-post.\n>> \n\nHi,\n\nI applied 0009-architecture-vs-master.patch to head\nand went through architecture.sgml (only that file),\nthen produced the attached .diff\n\n\nAnd I wrote down some separate items:\n\n1.\n'Two Phase Locking' and 'TPL' should be, I think,\n'Two-Phase Commit'. Please someone confirm.\n(no changes made)\n\n2.\nTo compare xid to sequence because they similarly 'count up' seems a bad \nidea.\n(I don't think it's always true in the case of sequences)\n(no changes made)\n\n3.\n'accesses' seems a somewhat strange word; most of the time just 'access' \nmay be better. Not sure - native speaker wanted. (no changes made)\n\n4.\n'heap', in postgres, often (always?) means files. But more generally, \nthe meaning is more associated with memory. Therefore it would be good \nI think to explicitly use 'heap file' at least in the beginning once to \nmake clear that heap implies 'safely written away to disk'. 
Again, I'm \nnot quite sure if my understanding is correct - I have made no changes \nin this regard.\n\n\n\nErik Rijkers", "msg_date": "Fri, 30 Oct 2020 17:45:00 +0100", "msg_from": "Erik Rijkers <er@xs4all.nl>", "msg_from_op": false, "msg_subject": "Re: Additional Chapter for Tutorial" }, { "msg_contents": "On Fri, Oct 30, 2020 at 05:45:00PM +0100, Erik Rijkers wrote:\n> On 2020-10-30 11:57, Jürgen Purtz wrote:\n> > On 26.10.20 15:53, David G. Johnston wrote:\n> > > Removing -docs as moderation won’t let me cross-post.\n> > > \n> \n> Hi,\n> \n> I applied 0009-architecture-vs-master.patch to head\n> and went through architecture.sgml (only that file),\n> then produced the attached .diff\n\nNow I applied 0009 as well as Erik's changes and made some more of my own :)\n\nI'm including all patches so CFBOT is happy.\n\n> 3.\n> 'accesses' seems a somewhat strange word most of the time just 'access' may\n> be better. Not sure - native speaker wanted. (no changes made)\n\nYou're right, and I included that part.\n\n-- \nJustin", "msg_date": "Sat, 31 Oct 2020 16:34:44 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: Additional Chapter for Tutorial" }, { "msg_contents": "On 30.10.20 17:45, Erik Rijkers wrote:\n> Hi,\n>\n> I applied 0009-architecture-vs-master.patch to head\n> and went through architecture.sgml (only that file),\n> then produced the attached .diff\n>\n>\n> And I wrote down some separate items:\n>\n> 1.\n> 'Two Phase Locking' and 'TPL' should be, I think,\n> 'Two-Phase Commit'. Please someone confirm.\n> (no changes made)\n>\n> 2.\n> To compare xid to sequence because they similarly 'count up' seems a \n> bad idea.\n> (I don't think it's always true in the case of sequences)\n> (no changes made)\n>\n> 3.\n> 'accesses' seems a somewhat strange word most of the time just \n> 'access' may be better.  Not sure - native speaker wanted. (no changes \n> made)\n>\n> 4.\n> 'heap', in postgres, means often (always?) 
files. But more generally, \n> the meaning is more associated with memory.  Therefore it would be \n> good I think to explicitly use 'heap file' at least in the beginning \n> once to make clear that heap implies 'safely written away to disk'.  \n> Again, I'm not quite sure if my understanding is correct - I have made \n> no changes in this regard.\n>\n>\n>\n> Erik Rijkers\n\nAll suggestions so far are summarized in the attached patch with the \nfollowing exceptions:\n\n- 'Two Phase Locking' is the intended term.\n\n- Not adopted:\n\n      Second, the transfer of dirty buffers from Shared Memory to\n      files must take place. This is the primary task of the\n-    Background Writer process. Because I/O activities can block\n+    Checkpointer process. Because I/O activities can block\n      other processes, it starts periodically and\n\nPartly adopted:\n\n-    the data in the old version of the row does not change! ...\n\n-    before. Nothing is thrown away so far! Only <literal>xmax</literal> ...\n\n--\n\nJ. Purtz", "msg_date": "Sun, 1 Nov 2020 16:38:43 +0100", "msg_from": "=?UTF-8?Q?J=c3=bcrgen_Purtz?= <juergen@purtz.de>", "msg_from_op": true, "msg_subject": "Re: Additional Chapter for Tutorial" }, { "msg_contents": "On 2020-11-01 16:38, Jürgen Purtz wrote:\n> On 30.10.20 17:45, Erik Rijkers wrote:\n>> \n>> And I wrote down some separate items:\n>> \n>> 1.\n>> 'Two Phase Locking' and 'TPL' should be, I think,\n>> 'Two-Phase Commit'. Please someone confirm.\n>> (no changes made)\n>> \n>> Erik Rijkers\n> \n> All suggestions so far are summarized in the attached patch with the\n> following exceptions:\n> \n> - 'Two Phase Locking' is the intended term.\n\nOK, so what is 'Two Phase Locking'? The term is not explained, and not \nused anywhere else in the manual. You propose to introduce it here, in \nthe tutorial. 
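The Checkpointer and Background Writer processes mentioned in the patch hunk above coexist on every running instance and can be told apart directly, which is why the text has to attribute the flushing of dirty buffers carefully. A hedged sketch, assuming PostgreSQL 10 or later where `pg_stat_activity` reports a `backend_type` column:

```sql
-- List the auxiliary processes of the instance; each appears once.
SELECT pid, backend_type
FROM pg_stat_activity
WHERE backend_type IN ('checkpointer', 'background writer',
                       'walwriter', 'autovacuum launcher');
```

Seeing both the checkpointer and the background writer side by side makes it clear that one does not subsume the other.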
I don't know what it means, and I am not really a \nbeginner.\n\n'Two Phase Locking' should be explained somewhere, and how it relates \n(or not) to Two-Phase Commit (2PC), don't you agree?\n\n\nErik Rijkers\n\n\n", "msg_date": "Mon, 02 Nov 2020 07:15:28 +0100", "msg_from": "Erik Rijkers <er@xs4all.nl>", "msg_from_op": false, "msg_subject": "Re: Additional Chapter for Tutorial" }, { "msg_contents": "On 02.11.20 07:15, Erik Rijkers wrote:\n> On 2020-11-01 16:38, Jürgen Purtz wrote:\n>> On 30.10.20 17:45, Erik Rijkers wrote:\n>>>\n>>> And I wrote down some separate items:\n>>>\n>>> 1.\n>>> 'Two Phase Locking' and 'TPL' should be, I think,\n>>> 'Two-Phase Commit'. Please someone confirm.\n>>> (no changes made)\n>>>\n>>> Erik Rijkers\n>>\n>> All suggestions so far are summarized in the attached patch with the\n>> following exceptions:\n>>\n>> - 'Two Phase Locking' is the intended term.\n>\n> OK, so what is 'Two Phase Locking'?  The term is not explained, and \n> not used anywhere else in the manual.  You propose to introduce it \n> here, in the tutorial.  I don't know what it means, and I am not \n> really a beginner.\n>\n> 'Two Phase Locking' should be explained somewhere, and how it relates \n> (or not) to Two-Phase Commit (2PC), don't you agree?\n>\n>\n> Erik Rijkers\n>\n>\nIt may be possible to explain OCC and 2PL in two or three sentences \nwithin the glossary. But I think we should not try to explain such \ngeneral strategies. They are not specific to PG and are not even \nimplemented. Instead, if the paragraph is too detailed, we can use a \nmore general formulation without explicitly naming locking strategies.\n\nOLD:\n\n     A first approach to implement protections against concurrent\n     access to the same data may be the locking of critical\n     rows. 
Two such techniques are:\n     <emphasis>Optimistic Concurrency Control</emphasis> (OCC)\n     and <emphasis>Two Phase Locking</emphasis> (2PL).\n     <productname>PostgreSQL</productname> implements a third, more\n     sophisticated technique: <firstterm>Multiversion Concurrency\n     Control</firstterm> (MVCC). The crucial advantage of MVCC ...\n\nProposal:\n\n     A first approach to implement protections against concurrent\n     access to the same data may be the locking of critical\n     rows.\n     <productname>PostgreSQL</productname> implements a more\n     sophisticated technique which avoids any locking: \n<firstterm>Multiversion Concurrency\n     Control</firstterm> (MVCC). The crucial advantage of MVCC ...\n\nAny thoughts or other suggestions?\n\n--\n\nJ. Purtz\n\n\n\n\n", "msg_date": "Mon, 2 Nov 2020 09:26:27 +0100", "msg_from": "=?UTF-8?Q?J=c3=bcrgen_Purtz?= <juergen@purtz.de>", "msg_from_op": true, "msg_subject": "Re: Additional Chapter for Tutorial" }, { "msg_contents": "On 2020-11-02 09:26, Jürgen Purtz wrote:\n\n> OLD:\n> \n>     A first approach to implement protections against concurrent\n>     access to the same data may be the locking of critical\n>     rows. Two such techniques are:\n>     <emphasis>Optimistic Concurrency Control</emphasis> (OCC)\n>     and <emphasis>Two Phase Locking</emphasis> (2PL).\n>     <productname>PostgreSQL</productname> implements a third, more\n>     sophisticated technique: <firstterm>Multiversion Concurrency\n>     Control</firstterm> (MVCC). The crucial advantage of MVCC ...\n> \n> Proposal:\n> \n>     A first approach to implement protections against concurrent\n>     access to the same data may be the locking of critical\n>     rows.\n>     <productname>PostgreSQL</productname> implements a more\n>     sophisticated technique which avoids any locking:\n> <firstterm>Multiversion Concurrency\n>     Control</firstterm> (MVCC). 
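The MVCC behavior the proposed paragraph alludes to can be observed directly through the `xmin` and `xmax` system columns, which may help when deciding how much detail the text needs. A hedged sketch (the table name `mvcc_demo` is made up for the example):

```sql
CREATE TABLE mvcc_demo (id int PRIMARY KEY, v int);
INSERT INTO mvcc_demo VALUES (1, 10);

-- xmin holds the xid of the transaction that created this row
-- version; xmax is 0 while no later transaction has superseded it.
SELECT xmin, xmax, id, v FROM mvcc_demo;

UPDATE mvcc_demo SET v = 20 WHERE id = 1;

-- A new row version with a new xmin is now the visible one; the old
-- version remains on disk, xmax set, until VACUUM reclaims it.
SELECT xmin, xmax, id, v FROM mvcc_demo;
```

Readers of an Internals chapter can run this themselves, which argues for showing the mechanism rather than naming alternative locking strategies.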
The crucial advantage of MVCC ...\n> \n> Any thoughts or other suggestions?\n> \n\nYes, just leave it out. Much better, as far as I'm concerned.\n\nErik\n\n\n\n\n", "msg_date": "Mon, 02 Nov 2020 09:44:54 +0100", "msg_from": "Erik Rijkers <er@xs4all.nl>", "msg_from_op": false, "msg_subject": "Re: Additional Chapter for Tutorial" }, { "msg_contents": "On 02.11.20 09:44, Erik Rijkers wrote:\n> On 2020-11-02 09:26, Jürgen Purtz wrote:\n>\n>> OLD:\n>>\n>>     A first approach to implement protections against concurrent\n>>     access to the same data may be the locking of critical\n>>     rows. Two such techniques are:\n>>     <emphasis>Optimistic Concurrency Control</emphasis> (OCC)\n>>     and <emphasis>Two Phase Locking</emphasis> (2PL).\n>>     <productname>PostgreSQL</productname> implements a third, more\n>>     sophisticated technique: <firstterm>Multiversion Concurrency\n>>     Control</firstterm> (MVCC). The crucial advantage of MVCC ...\n>>\n>> Proposal:\n>>\n>>     A first approach to implement protections against concurrent\n>>     access to the same data may be the locking of critical\n>>     rows.\n>>     <productname>PostgreSQL</productname> implements a more\n>>     sophisticated technique which avoids any locking:\n>> <firstterm>Multiversion Concurrency\n>>     Control</firstterm> (MVCC). The crucial advantage of MVCC ...\n>>\n>> Any thoughts or other suggestions?\n>>\n>\n> Yes, just leave it out. Much better, as far as I'm concerned.\n>\n> Erik\n>\n>\nBecause there have been no more comments in the last days I created a \nconsolidated patch. It contains Erik's suggestion and some tweaks for \nthe text size within graphics.\n\n--\n\nJ. 
Purtz", "msg_date": "Sat, 7 Nov 2020 13:24:53 +0100", "msg_from": "=?UTF-8?Q?J=c3=bcrgen_Purtz?= <juergen@purtz.de>", "msg_from_op": true, "msg_subject": "Re: Additional Chapter for Tutorial" }, { "msg_contents": "On 2020-11-07 13:24, Jürgen Purtz wrote:\n>> \n> Because there have been no more comments in the last days I created a\n> consolidated patch. It contains Erik's suggestion and some tweaks for\n> the text size within graphics.\n> \n> [0011-architecture.patch]\n\nHi,\n\nI went through architecture.sgml once more; some proposed changes \nattached.\n\nAnd in some .svg files I noticed 'jungest' which should be 'youngest', I \nsuppose.\nI did not change them but below is filelist of grep -l 'jung'.\n\n./doc/src/sgml/images/freeze-ink.svg\n./doc/src/sgml/images/freeze-ink-svgo.svg\n./doc/src/sgml/images/freeze-raw.svg\n./doc/src/sgml/images/wraparound-ink.svg\n./doc/src/sgml/images/wraparound-ink-svgo.svg\n./doc/src/sgml/images/wraparound-raw.svg\n\n\nThanks,\n\nErik", "msg_date": "Sat, 07 Nov 2020 20:15:07 +0100", "msg_from": "Erik Rijkers <er@xs4all.nl>", "msg_from_op": false, "msg_subject": "Re: Additional Chapter for Tutorial" }, { "msg_contents": "On 07.11.20 20:15, Erik Rijkers wrote:\n> On 2020-11-07 13:24, Jürgen Purtz wrote:\n>>>\n>> Because there have been no more comments in the last days I created a\n>> consolidated patch. 
It contains Erik's suggestion and some tweaks for\n>> the text size within graphics.\n>>\n>> [0011-architecture.patch]\n>\n> Hi,\n>\n> I went through architecture.sgml once more; some proposed changes \n> attached.\n>\n> And in some .svg files I noticed 'jungest' which should be 'youngest', \n> I suppose.\n> I did not change them but below is filelist of  grep -l 'jung'.\n>\n> ./doc/src/sgml/images/freeze-ink.svg\n> ./doc/src/sgml/images/freeze-ink-svgo.svg\n> ./doc/src/sgml/images/freeze-raw.svg\n> ./doc/src/sgml/images/wraparound-ink.svg\n> ./doc/src/sgml/images/wraparound-ink-svgo.svg\n> ./doc/src/sgml/images/wraparound-raw.svg\n>\n>\n> Thanks,\n>\n> Erik\n>\n>\nGood catches. Everything applied.\n\n--\n\nJ. Purtz", "msg_date": "Sun, 8 Nov 2020 16:55:56 +0100", "msg_from": "=?UTF-8?Q?J=c3=bcrgen_Purtz?= <juergen@purtz.de>", "msg_from_op": true, "msg_subject": "Re: Additional Chapter for Tutorial" }, { "msg_contents": "On Sun, Nov 8, 2020 at 8:56 AM Jürgen Purtz <juergen@purtz.de> wrote:\n\n>\n> Good catches. Everything applied.\n>\n\nReviewed the first three sections.\n\ntemplate0 - I would remove the schema portions of this and simply note this\nas being a pristine recovery database in the diagram.\n\nI would drop the word \"more\" and just say \"system schemas\". I would drop\npg_toast from the list of system schema and focus on the three user-facing\nones.\n\nInstead of \"my_schema\" (optional) I would do \"my_schema\" (example)\n\nServer Graphic\n#3 Global SQL Objects: Objects which are shared among all databases within\na cluster.\n#6 Client applications are prohibited from connecting to template0\n#1 If by you we mean \"the client\" saying that you work \"in the cluster\ndata\" doesn't really help. 
I would emphasize the point that the client\nsees an endpoint the Postmaster publishes as a port or socket file and that\nplus the database name defines the endpoint the client connects to (meld\nwith #5)\n\nIn lieu of some of the existing detail provided about structure I would add\ninformation about configuration and search_path at this level.\n\nI like the object type enumeration - I would suggest grouping them by type\nin a manner consistent with the documentation and making each one a link to\nits \"primary\" section - the SQL Command reference if all else fails.\n\nThe \"i\" in internal in 51.3 (the image) needs capitalization.\n\nYou correctly add both Extension and Collation as database-level objects\nbut they are not mentioned anywhere else. They do belong here and need to\nbe tied in properly in the text.\n\nThe whole thing needs a good pass focused on capitalization. Both for\ntypos and to decide when various primary concepts like Instance should be\ncapitalized and when not.\n\n51.4 - When you look at the diagram seeing /pg/data/base looks really cool,\nbut when reading the prose where both the \"pg\" and the \"base\" are omitted\nand all you get are repeated references to \"data\", the directory name\nchoice becomes an issue IMO. I suggest (and changed the attached) to name\nthe actual root directory \"pgdata\". 
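The directory layout under review here can be confirmed from SQL alone, which may be worth pointing readers at; a hedged sketch (paths and OIDs differ per installation):

```sql
-- Root of the cluster's on-disk layout (the "pgdata" directory).
SHOW data_directory;

-- base/<database OID> is the per-database directory inside the
-- default tablespace; non-default tablespaces live under pg_tblspc.
SELECT oid, datname FROM pg_database ORDER BY oid;

-- Path of a table's heap file, relative to the data directory.
SELECT pg_relation_filepath('pg_class');
```

Cross-checking the diagram against these three queries keeps the prose and the picture consistent.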
You should change the /pg/ directory\nname to something like \".../tutorial_project/\".\n\nSince you aren't following alphabetical order anyway I would place\npg_tblspc after globals since tablespaces are globals and thus proximity\nlinks them here - and pointing out that pg_tblspc holds the data makes\nstating that global doesn't contain tablespace data unnecessary.\n\nMaybe point out somewhere that the \"base/databaseOID\" directory represents\nthe default tablespace for each database, which isn't \"global\", only the\nnon-default tablespaces are considered globals (or just get rid of the\nmention of \"non-default tablespace\" for now).\n\nDavid J.", "msg_date": "Mon, 9 Nov 2020 16:14:57 -0700", "msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Additional Chapter for Tutorial" }, { "msg_contents": "On Sun, Nov 8, 2020 at 8:56 AM Jürgen Purtz <juergen@purtz.de> wrote:\n\n> Good catches. Everything applied.\n>\n\nMVCC Section\n\nThe first paragraph and example in the MVCC section is a good example but\nseems misplaced - its relationship to MVCC generally is tenuous, rather I\nwould expect a discussion of the serializable isolation mode to follow.\n\nI'm not sure how much detail this section wants to get into given the\ncoverage of concurrency elsewhere in the documentation. \"Not much\" would\nbe my baseline.\n\nI would suggest spelling out what \"OLTP\" stands for and ideally pointing\nthe user to the glossary for the term.\n\nTending more toward a style gripe but the amount of leader phrases and\nredundancy are at a level that I am noticing them when I read this but do\nnot have the same impression having read large portions of documentation.\nIn particular:\n\n\"When we speak about transaction IDs, you need to know that xids are like\nsequences.\"\n\n\"But keep in mind that xids are independent of any time measurement — in\nmilliseconds or otherwise. 
If you dive deeper into PostgreSQL, you will\nrecognize parameters with names such as 'xxx_age'. Despite their names,\nthese '_age' parameters do not specify a period of time but represent a\ncertain number of transactions, e.g., 100 million.\"\n\nCould just be: xids are sequences and age computations involving them\nmeasure a transaction count as opposed to a time interval.\n\nThen I would consider adding a bit more detail/context here.\n\nxids are 32bit sequences, with a reserved value to handle wrap-around.\nThere are 4 billion values in the sequence but wrap-around handling must\noccur every 2 billion transactions. Age computations involving xids measure\na transaction count as opposed to a time interval.\n\nI would move the mentioning of \"vacuum\" to the main paragraph about delete\nand not solely as a \"keep in mind\" note.\n\nThe part before the diagram seems like it should be much shorter, concise,\nand provide links to the excellent documentation. The part after the\nimage, and the image itself, are good material, though possibly should be\nin a main administration chapter instead of an internals chapter.\n\nThe first bullet of \"keep in mind\" is both wordy and wrong - in particular\n\"as xids grow old row versions get out of scope over time\" doesn't make\nsense (or rather it only does in the context of wrap-around, not normal\nvisibility). Having the only mention of bloat be here is also not ideal,\nit too should be weaved into the main narrative. The \"keep in mind\"\nsection here should be a recap of already covered material in a succinct\nform, nothing should be new to someone who just read the entire section.\n\nI don't think that usage of exclamation marks (!) is warranted here, though\nemphasis on the key phrase wouldn't hurt.\n\nVacuum Section\n\navoid -> prevent (continued growth)\n\nAutovacuum is enabled by default. 
The whole note needs commas.\n\nI'd try to get rid of \"at arbitrary point in time\"\n\n\"Instance.\" we've already described where instances are previously (\"on the\nserver\")\n\nThe other sections - these seem misplaced for the tutorial, update the main\ndocumentation if this information is wholly missing or lacking. The MVCC\nchapter can incorporate overview information as it is a strict consequence\nof that implementation.\n\nStatistics belong elsewhere - the tutorial should not use poor command\nimplementation choices as a guide for user education.\n\nIn short, this whole section should not exist and its content moved to more\nappropriate areas (mainly MVCC). Vacuum is a tool that one must use but\nthe narrative should be about the system generally.\n\nDavid J.", "msg_date": "Tue, 10 Nov 2020 14:58:26 -0700", "msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Additional Chapter for Tutorial" }, { "msg_contents": "On 10.11.20 00:14, David G. Johnston wrote:\n> Reviewed the first three sections.\n>\n> template0 - I would remove the schema portions of this and simply note \n> this as being a pristine recovery database in the diagram.\nok\n>\n> I would drop the word \"more\" and just say \"system schemas\".  I would \n> drop pg_toast from the list of system schema and focus on the three \n> user-facing ones.\nok\n>\n> Instead of \"my_schema\" (optional) I would do \"my_schema\" (example)\nThe terms 'optional' and 'default' are used at various places with their \nliteral meaning. We shall not change them.\n>\n> Server Graphic\n> #3 Global SQL Objects: Objects which are shared among all databases \n> within a cluster.\n> #6 Client applications are prohibited from connecting to template0\nok\n> #1 If by you we mean \"the client\" saying that you work \"in the cluster \n> data\" doesn't really help. 
I would emphasize the point that the \n> client sees an endpoint the Postmaster publishes as a port or socket \n> file and that plus the database name defines the endpoint the client \n> connects to (meld with #5)\nok, with some changes.\n>\n> In lieu of some of the existing detail provided about structure I \n> would add information about configuration and search_path at this level.\nSearch path appended. But IMO configuration questions are out of scope \nof this sub-chapter.\n>\n> I like the object type enumeration - I would suggest grouping them by \n> type in a manner consistent with the documentation and making each one \n> a link to its \"primary\" section - the SQL Command reference if all \n> else fails.\nok. But I don't know how to group them in a better way.\n>\n> The \"i\" in internal in 51.3 (the image) needs capitalization).\nok\n>\n> You correctly add both Extension and Collation as database-level \n> objects but they are not mentioned anywhere else.  They do belong here \n> and need to be tied in properly in the text.\nHave some courage for the gap, it's an introductory chapter.\n>\n> The whole thing needs a good pass focused on capitalization.  Both for \n> typos and to decide when various primary concepts like Instance should \n> be capitalized and when not.\n'Instance' and 'Cluster' are now uppercase because of their importance, \neverything else lowercase for better reading.\n>\n> 51.4 - When you look at the diagram seeing /pg/data/base looks really \n> cool, but when reading the prose where both the \"pg\" and the \"base\" \n> are omitted and all you get are repeated references to \"data\", the \n> directory name choice becomes an issue IMO.  I suggest (and changed \n> the attached) to name the actual root directory \"pgdata\".  You should \n> change the /pg/ directory name to something like \".../tutorial_project/\".\n\nThe graphic shall reflect the default behavior of PG. 
Without the \nparameter -D, initdb creates the new cluster in the directory where \nPGDATA points to. This is in many cases |/var/lib/pgsql/data|. Therefore \n'data' and its subdirectory 'base' are not my invention but reflects the \ndefault situation.\n\n(Diving a little deeper into this issue I noticed that there is a \nparameter 'cluster_name' in the config file. But it does not change the \nname of the cluster's root directory, it only changes the names of the \nrunning processes. Choosing 'instance_name' instead of 'cluster_name' as \nthe parameter's name would be a better choice imo - but that is not what \nwe are speaking about in the context of the new chapter).\n\nI changed the very first directory in the graphic to visualize the \nstandard behavior; I reverted your recommendation to use 'pgdata' \ninstead of 'data' in the text part.\n\n> Since you aren't following alphabetical order anyway I would place \n> pg_tblspc after globals since tablespaces are globals and thus \n> proximity links them here - and pointing out that pg_tblspc holds the \n> data makes stating that global doesn't contain tablespace data \n> unnecessary.\nok\n>\n> Maybe point out somewhere the the \"base/databaseOID\" directory \n> represents the default tablespace for each database, which isn't \n> \"global\", only the non-default tablespaces are considered globals (or \n> just get rid of the mentioned on \"non-default tablespace\" for now).\n\nok\n\nmore:\n\n1) some changes concerning the nature of connections (52.2: logical \nperspective). IMO accessing multiple databases within one connection is \nnot a question of configuring, you have to take more actions. But I'm \nnot sure we should mention this at all.\n\n2) you propose to cancel or trim down the paragraphs behind figure 51.2. \n(cluster, database, schema). I believe that a textual description of \nthis hierarchy is essential for the understanding of the system. 
Because \nit isn't described explicitly at a different place, it should remain.\n\n--- snipp -------- from other e-mail ----\n\n> MVCC Section\n>\n> The first paragraph and example in the MVCC section is a good example \n> but seems misplaced - its relationship to MVCC generally is tenuous, \n> rather I would expect a discussion of the serializable isolation mode \n> to follow.\n>\n> I'm not sure how much detail this section wants to get into given the \n> coverage of concurrency elsewhere in the documentation.  \"Not much\" \n> would be my baseline.\nThe paragraph focuses on the fact that new row versions are generated \ninstead of locking something. Explaining serializable isolation modes \nis imo very complicated and out of the scope of this subchapter. If we \nwant to give an overview - in addition to the existing documentation - it \nshould be a separate subchapter.\n>\n> I would suggest spelling out what \"OLTP\" stands for and ideally \n> pointing the user to the glossary for the term.\nok, but not added to glossary. The given explanation \"... with a massive \nnumber of concurrent write actions\" should be sufficient.\n>\n> Tending more toward a style gripe but the amount of leader phrases and \n> redundancy are at a level that I am noticing them when I read this but \n> do not have the same impression having read large portions of \n> documentation. In particular:\nBecause I'm not a native English speaker, orthographic and style hints \nare always welcome.\n>\n> \"When we speak about transaction IDs, you need to know that xids are \n> like sequences.\"\n>\n> \"But keep in mind that xids are independent of any time measurement — \n> in milliseconds or otherwise. If you dive deeper into PostgreSQL, you \n> will recognize parameters with names such as 'xxx_age'. 
Despite their \n> names, these '_age' parameters do not specify a period of time but \n> represent a certain number of transactions, e.g., 100 million.\"\n>\n> Could just be:  xids are sequences and age computations involving them \n> measure a transaction count as opposed to a time interval.\nok\n>\n> Then I would consider adding a bit more detail/context here.\n>\n> xids are 32bit sequences, with a reserved value to handle \n> wrap-around.  There are 4 billion values in the sequence but \n> wrap-around handling must occur every 2 billion transactions. Age \n> computations involving xids measure a transaction count as opposed to \n> a time interval.\n>\n> I would move the mentioning of \"vacuum\" to the main paragraph about \n> delete and not solely as a \"keep in mind\" note.\nThe mentioning here at the foot of the page is a crossover to the next \nsubchapter.\n>\n> The part before the diagram seems like it should be much shorter, \n> concise, and provide links to the excellent documentation.  The part \n> after the image, and the image itself, are good material, though \n> possibly should be in a main administration chapter instead of an \n> internals chapter.\n\nvacuum: The problem - and one reason for the existence of this \nsubchapter - is that vacuum's documentation is scattered across many pages:\n\n19.4: parameters to configure the server, especially five parameters \n'vacuum_cost_xxx'.\n\n19.10: parameters to configure autovacuum.\n\n19.11: parameters to configure client connections, especially five \nparameters 'vacuum_xxx' concerning their freeze-behavior.\n\n24.1: explains the general necessity of (auto)vacuum and their strategies.\n\nThe page about the SQL command VACUUM explains the different options \n(FULL, FREEZE, ..) and their meaning.\n\nBecause of the structure of our documentation as well as the complexity \nof the issue that's ok. 
The existing documentation describes every \nparameter very well, but I'm missing a page where the 'big picture' of \nvacuum is explained (not necessarily here). It shall show the \nrelationship between the huge number of parameters and an explanation \n*why* they exist. As long as we don't have such a page within the vacuum \ndocumentation the proposed subchapter fills the gap. (The provided \ngraphics can be included multiple times without generating redundancies \n- here and at arbitrary other places.)\n\n>\n> The first bullet of \"keep in mind\" is both wordy and wrong - in \n> particular \"as xids grow old row versions get out of scope over time\" \n> doesn't make sense (or rather it only does in the context of \n> wrap-around, not normal visibility).  Having the only mention of bloat \n> be here is also not ideal, it too should be weaved into the main \n> narrative.  The \"keep in mind\" section here should be a recap of \n> already covered material in a succinct form, nothing should be new to \n> someone who just read the entire section.\nok.\n>\n> I don't think that usage of exclamation marks (!) is warranted here, \n> though emphasis on the key phrase wouldn't hurt.\nok\n>\n> Vacuum Section\n>\n> avoid -> prevent (continued growth)\nok\n>\n> Autovacuum is enabled by default.  The whole note needs commas.\nok\n>\n> I'd try to get rid of \"at arbitrary point in time\"\nok\n>\n> \"Instance.\" we've already described where instances are previously \n> (\"on the server\")\nok\n>\n> The other sections - these seem misplaced for the tutorial, update the \n> main documentation if this information is wholly missing or lacking.  
\n> The MVCC chapter can incorporate overview information as it is a \n> strict consequence of that implementation.\n>\n> Statistics belong elsewhere - the tutorial should not use poor command \n> implementation choices as a guide for user education.\n>\n> In short, this whole section should not exist and its content moved to \n> more appropriate areas (mainly MVCC).  Vacuum is a tool that one must \n> use but the narrative should be about the system generally.\n>\n>\nconcerning vacuum section: see my comments above\n\nconcerning 'the other sections' (transactions, reliability, backup \n(plus: someone should add 'replication', I'm not familiar with this \nissue)): The intention of the chapter is to give a *summary* about PG's \nessential architecture and about central implementation aspects. This \nimplies that the chapters does not present any new information. They \nshall only show (or repeat) essential things in their context and \nexplain *why* they are used. In this sense the three chapters may be \nreasonable. Concerning this, I like to hear some comments from other people.\n\n\nAttachments:\n\n0013-architecture.patch: complete patch vs. master\n\n0013-architecture.sgml.diff: changes in file architecture.sgml since 0012\n\n0013-images.diff: changes in files *-raw.svg since 0012\n\n-- \n\nJ. Purtz", "msg_date": "Sun, 15 Nov 2020 19:45:35 +0100", "msg_from": "=?UTF-8?Q?J=c3=bcrgen_Purtz?= <juergen@purtz.de>", "msg_from_op": true, "msg_subject": "Re: Additional Chapter for Tutorial" }, { "msg_contents": "On 2020-11-15 19:45, Jürgen Purtz wrote:\n>> \n\n(smallish) Changes to arch-dev.sgml\n\nErik", "msg_date": "Fri, 20 Nov 2020 22:52:32 +0100", "msg_from": "Erik Rijkers <er@xs4all.nl>", "msg_from_op": false, "msg_subject": "Re: Additional Chapter for Tutorial - arch-dev.sgml" }, { "msg_contents": "On 20/11/2020 23:52, Erik Rijkers wrote:\n> (smallish) Changes to arch-dev.sgml\n\nThis looks good to me. 
One little complaint:\n\n> @@ -125,7 +122,7 @@\n> use a <firstterm>supervisor process</firstterm> (also\n> <firstterm>master process</firstterm>) that spawns a new\n> server process every time a connection is requested. This supervisor\n> - process is called <literal>postgres</literal> and listens at a\n> + process is called <literal>postgres</literal> (formerly 'postmaster') and listens at a\n> specified TCP/IP port for incoming connections. Whenever a request\n> for a connection is detected the <literal>postgres</literal>\n> process spawns a new server process. The server tasks\n\nI believe we still call it the postmaster process. We renamed the binary \na long time ago (commit 5266f221a2), and the above text was changed as \npart of that commit. I think that was a mistake, and this should say simply:\n\n... This supervisor process is called <literal>postmaster</literal> and ...\n\nlike it did before we renamed the binary.\n\nBarring objections, I'll commit this with that change (as attached).\n\n- Heikki", "msg_date": "Mon, 18 Jan 2021 16:13:22 +0200", "msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>", "msg_from_op": false, "msg_subject": "Re: Additional Chapter for Tutorial - arch-dev.sgml" }, { "msg_contents": "On 18.01.21 15:13, Heikki Linnakangas wrote:\n> On 20/11/2020 23:52, Erik Rijkers wrote:\n>> (smallish) Changes to arch-dev.sgml\n>\n> This looks good to me. One little complaint:\n>\n>> @@ -125,7 +122,7 @@\n>>      use a <firstterm>supervisor process</firstterm> (also\n>>      <firstterm>master process</firstterm>) that spawns a new\n>>      server process every time a connection is requested. This \n>> supervisor\n>> -    process is called <literal>postgres</literal> and listens at a\n>> +    process is called <literal>postgres</literal> (formerly \n>> 'postmaster') and listens at a\n>>      specified TCP/IP port for incoming connections. 
Whenever a request\n>      for a connection is detected the <literal>postgres</literal>\n>      process spawns a new server process. The server tasks\n>\n> I believe we still call it the postmaster process. We renamed the \n> binary a long time ago (commit 5266f221a2), and the above text was \n> changed as part of that commit. I think that was a mistake, and this \n> should say simply:\n>\n> ... This supervisor process is called <literal>postmaster</literal> \n> and ...\n>\n> like it did before we renamed the binary.\n>\n> Barring objections, I'll commit this with that change (as attached).\n>\n> - Heikki\n\nI fear that the patch 'Additional chapter for Tutorial' grows beyond \nmanageable limits. It runs since nearly one year, the size of 228 KB is \nvery huge, many people have made significant contributions. But a commit \nseems to be in far distance. Having said that, I'm pleased with Heikki's \nproposal to split changes in the existing file 'arch-dev.sgml' from the \nrest of the patch and commit them separately.\n\nBut I have some concerns with the chapter '51.2. How Connections Are \nEstablished'. It uses central terms like 'client process', 'server \nprocess','supervisor process', 'master process', 'server tasks', \n'backend (server)', 'frontend (client)', 'server', 'client'. Some month \nago, we have cleared his terminology in the new chapter 'glossary'. As \nlong as it leads to readable text, we shall use the glossary-terms \ninstead of the current ones. And we shall include some links to the \nglossary.\n\nI propose to start a new thread which contains only changes to \n'arch-dev.sgml'. In pgsql-hackers or in pgsql-docs list? Initialized by \nHeikki or by me?\n\n--\n\nJürgen Purtz", "msg_date": "Tue, 19 Jan 2021 07:37:25 +0100", "msg_from": "=?UTF-8?Q?J=c3=bcrgen_Purtz?= <juergen@purtz.de>", "msg_from_op": true, "msg_subject": "Re: Additional Chapter for Tutorial - arch-dev.sgml" }, { "msg_contents": "On 18.01.21 15:13, Heikki Linnakangas wrote:\n> On 20/11/2020 23:52, Erik Rijkers wrote:\n>> (smallish) Changes to arch-dev.sgml\n>\n> This looks good to me. One little complaint:\n>\n>> @@ -125,7 +122,7 @@\n>>      use a <firstterm>supervisor process</firstterm> (also\n>>      <firstterm>master process</firstterm>) that spawns a new\n>>      server process every time a connection is requested. This \n>> supervisor\n>> -    process is called <literal>postgres</literal> and listens at a\n>> +    process is called <literal>postgres</literal> (formerly \n>> 'postmaster') and listens at a\n>>      specified TCP/IP port for incoming connections. Whenever a request\n>>      for a connection is detected the <literal>postgres</literal>\n>>      process spawns a new server process. The server tasks\n>\n> I believe we still call it the postmaster process. We renamed the \n> binary a long time ago (commit 5266f221a2), and the above text was \n> changed as part of that commit. I think that was a mistake, and this \n> should say simply:\n>\n> ... 
This supervisor process is called <literal>postmaster</literal> \n> and ...\n>\n> like it did before we renamed the binary.\n>\n> Barring objections, I'll commit this with that change (as attached).\n>\n> - Heikki\n\nSome additional changes in 51.2:\n\n  - smaller number of different terms\n\n  - aligning with Glossary\n\n  - active voice instead of passive voice\n\n  - commas\n\n---\n\nJ. Purtz", "msg_date": "Thu, 21 Jan 2021 13:38:26 +0100", "msg_from": "=?UTF-8?Q?J=c3=bcrgen_Purtz?= <juergen@purtz.de>", "msg_from_op": true, "msg_subject": "Re: Additional Chapter for Tutorial - arch-dev.sgml" }, { "msg_contents": "On 21/01/2021 14:38, Jürgen Purtz wrote:\n> This supervisor process is called <glossterm\n> linkend=\"glossary-postmaster\">postmaster</glossterm> and listens at\n> a specified TCP/IP port for incoming connections. Whenever he\n> detects a request for a connection, he spawns a new backend process.\n\nIt sounds weird to refer to a process with \"he\". I left out this hunk, \nand the other with similar changes.\n\nCommitted the rest, thanks!.\n\n- Heikki\n\n\n", "msg_date": "Fri, 22 Jan 2021 11:15:40 +0200", "msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>", "msg_from_op": false, "msg_subject": "Re: Additional Chapter for Tutorial - arch-dev.sgml" }, { "msg_contents": "On 1/22/21 4:15 AM, Heikki Linnakangas wrote:\n> On 21/01/2021 14:38, Jürgen Purtz wrote:\n>> This supervisor process is called <glossterm\n>> linkend=\"glossary-postmaster\">postmaster</glossterm> and listens at\n>> a specified TCP/IP port for incoming connections. Whenever he\n>> detects a request for a connection, he spawns a new backend process.\n> \n> It sounds weird to refer to a process with \"he\". I left out this hunk, \n> and the other with similar changes.\n> \n> Committed the rest, thanks!.\n\nSo it looks like this was committed. 
Is there anything left to do?\n\nIf not, we should close the CF entry.\n\nRegards,\n-- \n-David\ndavid@pgmasters.net\n\n\n", "msg_date": "Thu, 25 Mar 2021 09:00:11 -0400", "msg_from": "David Steele <david@pgmasters.net>", "msg_from_op": false, "msg_subject": "Re: Additional Chapter for Tutorial - arch-dev.sgml" }, { "msg_contents": "On 2021-Mar-25, David Steele wrote:\n\n> On 1/22/21 4:15 AM, Heikki Linnakangas wrote:\n> > On 21/01/2021 14:38, Jürgen Purtz wrote:\n> > > This supervisor process is called <glossterm\n> > > linkend=\"glossary-postmaster\">postmaster</glossterm> and listens at\n> > > a specified TCP/IP port for incoming connections. Whenever he\n> > > detects a request for a connection, he spawns a new backend process.\n> > \n> > It sounds weird to refer to a process with \"he\". I left out this hunk,\n> > and the other with similar changes.\n> > \n> > Committed the rest, thanks!.\n> \n> So it looks like this was committed. Is there anything left to do?\n\nYes, there is. AFAICS Heikki committed a small wordsmithing patch --\nnot the large patch with the additional chapter.\n\n-- \nÁlvaro Herrera 39°49'30\"S 73°17'W\n\"Ed is the standard text editor.\"\n http://groups.google.com/group/alt.religion.emacs/msg/8d94ddab6a9b0ad3\n\n\n", "msg_date": "Sat, 3 Apr 2021 10:39:55 -0300", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: Additional Chapter for Tutorial - arch-dev.sgml" }, { "msg_contents": "On 03.04.21 15:39, Alvaro Herrera wrote:\n> Yes, there is. AFAICS Heikki committed a small wordsmithing patch --\n> not the large patch with the additional chapter.\n\nWhat can I do to move the matter forward?\n\n--\n\nJ. 
Purtz\n\n\n\n\n", "msg_date": "Sat, 3 Apr 2021 19:43:48 +0200", "msg_from": "=?UTF-8?Q?J=c3=bcrgen_Purtz?= <juergen@purtz.de>", "msg_from_op": true, "msg_subject": "Re: Additional Chapter for Tutorial - arch-dev.sgml" }, { "msg_contents": "On 2021-Apr-03, Jürgen Purtz wrote:\n\n> On 03.04.21 15:39, Alvaro Herrera wrote:\n> > Yes, there is. AFAICS Heikki committed a small wordsmithing patch --\n> > not the large patch with the additional chapter.\n> \n> What can I do to move the matter forward?\n\nPlease post a version that applies to the current sources. If the\nlatest version posted does, please state so.\n\n-- \nÁlvaro Herrera 39°49'30\"S 73°17'W", "msg_date": "Sat, 3 Apr 2021 16:01:10 -0300", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: Additional Chapter for Tutorial - arch-dev.sgml" }, { "msg_contents": "On 03.04.21 21:01, Alvaro Herrera wrote:\n> On 2021-Apr-03, Jürgen Purtz wrote:\n>\n>> On 03.04.21 15:39, Alvaro Herrera wrote:\n>>> Yes, there is. AFAICS Heikki committed a small wordsmithing patch --\n>>> not the large patch with the additional chapter.\n>> What can I do to move the matter forward?\n> Please post a version that applies to the current sources. If the\n> latest version posted does, please state so.\n>\nThe small patch 'arch-dev.sgml.20210121.diff' contains only some \nclearing up concerning the used terminology and its alignments with the \nglossary. 
The patch was rejected by Heikki.\n\nThe latest version of the huge patch '0013-architecture.patch' is valid \nand doesn't contain merge conflicts.\n\n--\nJürgen Purtz\n\n\n\n", "msg_date": "Sun, 4 Apr 2021 11:07:55 +0200", "msg_from": "=?UTF-8?Q?J=c3=bcrgen_Purtz?= <juergen@purtz.de>", "msg_from_op": true, "msg_subject": "Re: Additional Chapter for Tutorial - arch-dev.sgml" }, { "msg_contents": "On 2021-Apr-04, Jürgen Purtz wrote:\n\n> The small patch 'arch-dev.sgml.20210121.diff' contains only some clearing up\n> concerning the used terminology and its alignments with the glossary. The\n> patch was rejected by Heikki.\n\nThis comment is not helpful, because it's not obvious where I would find\nthat patch. Also, you say \"the patch was rejected by Heikki\" but\nupthread he said he committed it. His comment was that he left out some\nparagraphs because of a style issue. Did you re-post that patch after\nfixing the style issues? If you did, I couldn't find it.\n\n\n> The latest version of the huge patch '0013-architecture.patch' is valid and\n> doesn't contain merge conflicts.\n\nYeah, OK, but I have to dive deep in the thread to find it. Please post\nit again. When you have a patch series, please post it as a whole every\ntime -- that makes it easier for a committer to review it.\n\nYou seem to be making your life hard by not using git to assist you. Do\nyou know you can have several commits in a branch of your own, rebase it\nto latest master, merge master to it, rebase on top of master, commit\nfixups, \"rebase -i\" and change commit ordering to remove unnecessary\nfixup commits, and so on? Such techniques are extremely helpful when\ndealing with a patch series. When you want to post a new version to the\nlist, you can just do \"git format-patch -v14 origin/master\" to produce a\nset of patch files. You don't need to manually give names to your patch\nfiles, or come up with a versioning scheme. 
Just increment the argument\nto -v by +1 each time you (or somebody else) posts a new version of the\npatch series.\n\n-- \nÁlvaro Herrera Valdivia, Chile\n\n\n", "msg_date": "Sun, 4 Apr 2021 13:02:48 -0400", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: Additional Chapter for Tutorial - arch-dev.sgml" }, { "msg_contents": "On 04.04.21 19:02, Alvaro Herrera wrote:\n> On 2021-Apr-04, Jürgen Purtz wrote:\n>\n>> The small patch 'arch-dev.sgml.20210121.diff' contains only some clearing up\n>> concerning the used terminology and its alignments with the glossary. The\n>> patch was rejected by Heikki.\n> This comment is not helpful, because it's not obvious where would I find\n> that patch. Also, you say \"the patch was rejected by Heikki\" but\n> upthread he said he committed it. His comment was that he left out some\n> paragraphs because of a style issue. Did you re-post that patch after\n> fixing the style issues? If you did, I couldn't find it.\n>\n>\n>> The latest version of the huge patch '0013-architecture.patch' is valid and\n>> doesn't contain merge conflicts.\n> Yeah, OK, but I have to dive deep in the thread to find it. Please post\n> it again. When you have a patch series, please post it as a whole every\n> time -- that makes it easier for a committer to review it.\n>\n> You seem to be making your life hard by not using git to assist you. Do\n> you know you can have several commits in a branch of your own, rebase it\n> to latest master, merge master to it, rebase on top of master, commit\n> fixups, \"rebase -i\" and change commit ordering to remove unnecessary\n> fixup commits, and so on? Such techniques are extremely helpful when\n> dealing with a patch series. When you want to post a new version to the\n> list, you can just do \"git format-patch -v14 origin/master\" to produce a\n> set of patch files. You don't need to manually give names to your patch\n> files, or come up with a versioning scheme. 
Just increment the argument\n> to -v by +1 each time you (or somebody else) posts a new version of the\n> patch series.\n>\nThe thread contains a sequence of files '0001_architecture.patch' to \n'0013_architecture.patch' (with gaps in the numbering) created by me and \nother authors over the last 12 month. This is what I call the 'huge \npatch'. Indeed, the files are created more or less manually without the \nformat-patch option. I welcome the reference to rebase and format-patch \nand I'm considering to use it in the future.\n\nIn addition to this chain Erik introduced in November within the same \nthread some changes to the chapter \"Overview of Query Handling\", which \nsubsequently was expanded by Heikki and me with the sequence of \n'arch-dev.sgml.xxxxx.diff' files. This is what I call the 'small patch'. \nIt's independent from the 'huge patch'. That 'small patch' is partly \ncommitted by Heikki. In case that a committer gives the uncommitted part \na second chance, I append a patch. Because I'm not a native English \nspeaker, every improvement in the linguistic is highly welcome.\n\n--\n\nJürgen Purtz", "msg_date": "Mon, 5 Apr 2021 15:18:44 +0200", "msg_from": "=?UTF-8?Q?J=c3=bcrgen_Purtz?= <juergen@purtz.de>", "msg_from_op": true, "msg_subject": "Re: Additional Chapter for Tutorial - arch-dev.sgml" }, { "msg_contents": "On 2021-Apr-05, Jürgen Purtz wrote:\n\n> In addition to this chain Erik introduced in November within the same thread\n> some changes to the chapter \"Overview of Query Handling\", which subsequently\n> was expanded by Heikki and me with the sequence of\n> 'arch-dev.sgml.xxxxx.diff' files. This is what I call the 'small patch'.\n> It's independent from the 'huge patch'. That 'small patch' is partly\n> committed by Heikki. In case that a committer gives the uncommitted part a\n> second chance, I append a patch. 
Because I'm not a native English speaker,\n> every improvement in the linguistic is highly welcome.\n\nPushed this one with cosmetic adjustments.\n\n-- \nÁlvaro Herrera Valdivia, Chile\n\"Cuando mañana llegue pelearemos segun lo que mañana exija\" (Mowgli)\n\n\n", "msg_date": "Mon, 5 Apr 2021 11:48:14 -0400", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: Additional Chapter for Tutorial - arch-dev.sgml" }, { "msg_contents": "Hi Jürgen,\n\nWhat's going to happen with this work?\n\nIf you intend to have it eventually committed, I think it will be \nnecessary to make the patches smaller, and bring them into the \ncommitfest app, so that others can follow progress.\n\nI for one, cannot see/remember/understand what has been done, or even \nwhether you intend to continue with it.\n\nThanks,\n\nErik\n\n\n", "msg_date": "Thu, 20 May 2021 23:02:30 +0200", "msg_from": "Erik Rijkers <er@xs4all.nl>", "msg_from_op": false, "msg_subject": "Re: Additional Chapter for Tutorial" }, { "msg_contents": "> Hi Jürgen,\n>\n> What's going to happen with this work?\n>\n> If you intend to have it eventually committed, I think it will be \n> necessary to make the patches smaller, and bring them into the \n> commitfest app, so that others can follow progress.\n>\n> I for one, cannot see/remember/understand what has been done, or even \n> whether you intend to continue with it.\n>\n> Thanks,\n>\n> Erik\n>\n>\nPeter changed the status to 'Returned with feedback' at the end of the \nlast commit fest. 
I'm not absolutely sure, but my understanding is that \nthe patch is rejected.\n\n--\n\nJürgen Purtz\n\n\n\n\n", "msg_date": "Fri, 21 May 2021 08:47:15 +0200", "msg_from": "=?UTF-8?Q?J=c3=bcrgen_Purtz?= <juergen@purtz.de>", "msg_from_op": true, "msg_subject": "Re: Additional Chapter for Tutorial" }, { "msg_contents": "On Fri, 2021-05-21 at 08:47 +0200, Jürgen Purtz wrote:\n> Peter changed the status to 'Returned with feedback' at the end of the \n> last commit fest. I'm not absolutely sure, but my understanding is that \n> the patch is rejected.\n\nThere is a different status for that.\n\n\"Returned with feedback\" means: there was review, and further work by\nthe author is needed, or we need more discussion if we want that or not\nor how it should be, but there hasn't been a lot of feedback from the author\nlately, so it seems that just moving it on to the next commitfest is not\nthe right thing to do.\n\nYou are welcome to re-submit the patch if you address the feedback.\n\nYours,\nLaurenz Albe\n\n\n\n", "msg_date": "Fri, 21 May 2021 10:52:42 +0200", "msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>", "msg_from_op": false, "msg_subject": "Re: Additional Chapter for Tutorial" } ]
[ { "msg_contents": "We choose a split point in nbtsplitloc.c primarily based on evenly\ndividing space among left and right halves of the split, while giving\nsecondary consideration to suffix truncation (there is other logic\nthat kicks in when there are many duplicates, which isn't related to\nwhat I want to talk about). See my commit fab25024 from Postgres 12\nfor background information.\n\nThe larger the split interval, the more unbalanced page splits are\nallowed to be, which is a cost that we may be willing to pay to\ntruncate more suffix attributes in the new high key. Split interval\nrepresents a trade-off between two competing considerations (a\npotential cost versus a potential benefit). Unfortunately, commit\nfab25024 defined \"split interval\" based on logic that makes the\nassumption that tuples on the same page are more or less of a uniform\nsize. That assumption was questionable when suffix truncation went in,\nbut now that deduplication exists the assumption seems quite risky. I\nam concerned that suffix truncation will be far more aggressive than\nappropriate when the delta-optimal split point is near large posting\nlist tuples, that are size outliers on the page. The logic in\nnbtsplitloc.c might accept a much more unbalanced page split than is\ntruly reasonable because the average width of tuples on the page isn't\nso large, even though the tuples around where we want to split the\npage are very large.\n\nFortunately it's pretty easy to nail this down. We can determine the\ndefault strategy split interval based on the cost that we actually\ncare about (not a fuzzy proxy of that cost): a leftfree and rightfree\nspace tolerance from the space optimal candidate split point,\nexpressed in bytes (actually, expressed as a proportion of the total\nspace that is used for data items on the page, which is converted into\nbytes to form a tolerance). 
That's the way it's done in the attached\npatch.\n\nI plan to commit this patch next week, barring any objections.\n\n-- \nPeter Geoghegan", "msg_date": "Fri, 17 Apr 2020 12:11:52 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": true, "msg_subject": "Split interval (used by nbtree suffix truncation) and posting list\n tuples" } ]
[ { "msg_contents": "pq_putmessage() is a macro which calls a function that is normally\nsocket_putmessage(), which returns either 0 on success or EOF in the\ncase of failure. Most callers ignore the return value, sometimes with\nan explicit cast to void, and other times without such a cast. As far\nas I can see, the only case where we actually do anything significant\nwith the return value is in basebackup.c, where two of the calls to\npq_putmessage() do this:\n\n if (pq_putmessage('d', buf, cnt))\n ereport(ERROR,\n (errmsg(\"base\nbackup could not send data, aborting backup\")));\n\nThe other six calls in that file do not have a similar protection, and\nthere does not seem to be any explanation of why those two places are\nspecial as compared either with other places in the same file or with\nother parts of PostgreSQL, so I'm a little bit confused as to what's\ngoing on here. Why do we return 0 or EOF only to ignore it, instead of\nthrowing ERROR like we do in most places?\n\nOne problem is that we might get into error recursion trouble: we\ndon't want to try to send an ErrorResponse, fail, and then respond by\ngenerating another ErrorResponse, which will again fail, leading to\nblowing out the error stack. It's also worth considering that the\nerror might be occurring halfway through sending some other protocol\nmessage, in which case we can't start a new protocol message without\nfinishing the previous one. But the point is that in a case like this,\nwe've lost the client connection anyway. It seems like what we ought\nto be doing is what ProcessInterrupts does in this situation:\n\n /* don't send to client, we already know the\nconnection to be dead. */\n whereToSendOutput = DestNone;\n ereport(FATAL,\n (errcode(ERRCODE_CONNECTION_FAILURE),\n errmsg(\"connection to client lost\")));\n\nWhy don't we just make internal_flush() do that directly in the event\nof a failure, instead of setting flags that make it happen later and\nthen returning an error indicator? 
Can it be unsafe to throw an error\nhere? Do we have places that are calling pq_putmessage() inside\ncritical sections or something?\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Fri, 17 Apr 2020 15:49:10 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "return value from pq_putmessage() is widely ignored" }, { "msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> pq_putmessage() is a macro which calls a function that is normally\n> socket_putmessage(), which returns either 0 on success or EOF in the\n> case of failure. Most callers ignore the return value, sometimes with\n> an explicit cast to void, and other times without such a cast. As far\n> as I can see, the only case where we actually do anything significant\n> with the return value is in basebackup.c, where two of the calls to\n> pq_putmessage() do this:\n> if (pq_putmessage('d', buf, cnt))\n> ereport(ERROR,\n> (errmsg(\"base\n> backup could not send data, aborting backup\")));\n\nA preliminary survey says that basebackup.c is wrong here, and it\nshould be ignoring the return value just like the rest of the world.\npqformat.c is of the opinion that pqcomm.c is taking care of it:\n\n (void) pq_putmessage(buf->cursor, buf->data, buf->len);\n /* no need to complain about any failure, since pqcomm.c already did */\n\nand in fact that appears to be the case. 
As far as I can see, the\nonly place that's doing anything appropriate with the result is\nsocket_putmessage_noblock:\n\n res = pq_putmessage(msgtype, s, len);\n Assert(res == 0); /* should not fail when the message fits in\n * buffer */\n\nPerhaps the value of that Assert is not worth the amount of\nconfusion generated by having a result value, and we should\njust drop it and change pq_putmessage to return void.\n\n> One problem is that we might get into error recursion trouble: we\n> don't want to try to send an ErrorResponse, fail, and then respond by\n> generating another ErrorResponse, which will again fail, leading to\n> blowing out the error stack.\n\nYup. This is why it's dealt with internally to pqcomm.c.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 17 Apr 2020 16:20:42 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: return value from pq_putmessage() is widely ignored" } ]
[ { "msg_contents": "I've recently been thinking about some optimizations to scalar array\nop expression evaluation [1], and Tom mentioned that I might want to\nconsider looking into previous efforts into caching stable\nsubexpressions as a component of that (particularly if I wanted it to\nbe useful for more than constant arrays).\n\nI read through the many threads over the years on this topic, and I\nthought it'd be worth sending a summary email -- both as a record of\nthe current state of things (for either my or someone else's reference\nin getting this effort going again) and possibly as a way to generate\ninterest in the subject.\n\nThe general idea is that non-volatile expressions in a query ought to\nbe able to be calculated once (or once per param change) and reused\neach time through the scan loop. A somewhat related idea (but I don't\nbelieve to be necessary for a version 1) would be to subsequently\nreduce multiple usages of the same expression within a query to a\nsingle evaluation [again, possibly per param change].\n\nThis could potentially speed up a wide range of queries, potentially\nbenefiting everything from (executing repeatedly for each tuple) a\ncomparison to an expensive function to detoasting to casting each\nmember of a subquery to internal preprocessing of a value to allow\nan optimization in expression evaluation.\n\nThe first thread on $SUBJECT I'm aware of (courtesy of Tom Lane) was\nin the 2011-2012 timeframe: [2] \"[WIP] Caching for stable expressions\nwith constant arguments v2\". This message had a patch attached, but\nhad no replies. Following on the heels of that (and by the same\nauthor) we have [3] \"Caching for stable expressions with constant\narguments v6\". 
There was some decent discussion here, but ultimately\nthe author was unable to continue working on it.\n\nIn 2017 we have [4] \"WIP Patch: Precalculate stable functions\" which\nnoted the value for full text search expressions like `WHERE\nbody_tsvector @@ to_tsquery('postgres');`. After a suggestion by Tom\nto look at the aforementioned thread from 2012, this patch re-emerged\nin [5] \"WIP Patch: Precalculate stable functions, infrastructure v1\".\nFrom what I can tell this effort advanced the state of this project\nfairly significantly, and moved to implementing the caching as a\nPARAM_EXEC param after suggestions from Tom and Andres. This thread\nalso died out, however, but is probably a pretty good starting point\nfor future work and discussion.\n\nIn 2017 we also have a note in [6] that this effort might also be\nuseful in \"Re: Inlining functions with 'expensive' parameters\"\n(specifically for PostGIS in this case). Essentially, if we inline\nfunction calls, then we have to worry about cost because we might\nexecute it more than once, but that can be fixed by being able to use\none evaluation to back multiple usages in the query.\n\nIn early 2019 Tom mentioned in [7] that this infrastructure would also\nlikely resolve a performance issue Tomas Vondra had noted in \"Re:\noverhead due to casting extra parameters with aggregates (over and\nover)\". 
Essentially a subquery returning a large number of numeric\nvalues was being implicitly casted (repeatedly) in the main query.\nAdding an explicit cast in the subquery resolved the issue, but seemed\nlike a pretty significant (and perceptually unnecessary) gotcha.\n\nI'm hoping collating this all in one place is helpful; at the very\nleast it will be helpful to me as a reference should I find the time\nto push this forward some more.\n\nJames\n\n[1]: https://www.postgresql.org/message-id/flat/CAAaqYe-UQBba7sScrucDOyHb7cDoNbWf_rcLrOWeD4ikP3_qTQ%40mail.gmail.com\n[2]: https://www.postgresql.org/message-id/flat/CABRT9RBdRFS8sQNsJHxZOhC0tJe1x2jnomiz%3DFOhFkS07yRwQA%40mail.gmail.com\n[3]: https://www.postgresql.org/message-id/flat/CABRT9RA-RomVS-yzQ2wUtZ=m-eV61LcbrL1P1J3jydPStTfc6Q@mail.gmail.com\n[4]: https://www.postgresql.org/message-id/flat/ba261b9fc25dea4069d8ba9a8fcadf35%40postgrespro.ru\n[5]: https://www.postgresql.org/message-id/flat/da87bb6a014e029176a04f6e50033cfb%40postgrespro.ru\n[6]: https://www.postgresql.org/message-id/flat/6480.1510861492%40sss.pgh.pa.us#c296736e96a3ea7a61dc1dd88f1891bc\n[7]: https://www.postgresql.org/message-id/flat/10046.1569257616%40sss.pgh.pa.us#569f0f9f20be8212201b1df6cdb22ee0\n\n\n", "msg_date": "Fri, 17 Apr 2020 20:43:04 -0400", "msg_from": "James Coleman <jtc331@gmail.com>", "msg_from_op": true, "msg_subject": "Summary: State of Caching Stable Subexpressions" } ]
[ { "msg_contents": "Hello PostgreSQL-development,\n\nOracle has an implementation:\n\nselect id, amount, sum(DISTINCT amount) over () as total\n from xx;\n\n\nhttps://dbfiddle.uk/?rdbms=oracle_18&fiddle=8eeb60183ec9576ddb4b2c9f2874d09f\n\n\nWhy is this not possible in PG?\nhttps://dbfiddle.uk/?rdbms=postgres_12&fiddle=97c05203af4c927ff9f206e164752767\n\n\nWhy do window-specific functions not allow DISTINCT to be used within the function argument list?\nWhich problems exist?\n\n\n-- \nBest regards,\nEugen Konkov\n\n\n\n", "msg_date": "Sat, 18 Apr 2020 14:46:55 +0300", "msg_from": "Eugen Konkov <kes-kes@yandex.ru>", "msg_from_op": true, "msg_subject": "Implementation DISTINCT for window aggregate function: SUM" }, { "msg_contents": "On Sat, 18 Apr 2020 at 23:47, Eugen Konkov <kes-kes@yandex.ru> wrote:\n> select id, amount, sum(DISTINCT amount) over () as total\n> from xx;\n\n> Why is this not possible in PG?\n\nMainly because nobody has committed anything to implement it yet.\n\n> Why do window-specific functions not allow DISTINCT to be used within the function argument list?\n> Which problems exist?\n\nThere are some details in [1] which you might be interested in.\n\nDavid\n\n[1] https://www.postgresql.org/message-id/flat/CAN1Pwonf4waD%2BPWkEFK8ANLua8fPjZ4DmV%2BhixO62%2BLiR8gwaA%40mail.gmail.com\n\n\n", "msg_date": "Sun, 19 Apr 2020 15:07:38 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Implementation DISTINCT for window aggregate function: SUM" } ]